[ { "msg_contents": "A production instance crashed like so.\n\n< 2023-07-20 05:41:36.557 MDT >LOG: server process (PID 106001) was terminated by signal 11: Segmentation fault\n\n[168417.649549] postmaster[106001]: segfault at 12 ip 00000000008f1d40 sp 00007ffe885502e0 error 4 in postgres[400000+7dc000]\n\nCore was generated by `postgres: main main 127.0.0.1(51936) SELECT '.\n\n(gdb) bt\n#0 CachedPlanIsSimplyValid (plansource=0x2491398, plan=0x0, owner=0x130a198) at plancache.c:1441\n#1 0x00007fcab2e93d15 in exec_eval_simple_expr (rettypmod=0x7ffe88550374, rettype=0x7ffe88550370, isNull=0x7ffe8855036f, result=<synthetic pointer>, expr=0x2489dc8, estate=0x7ffe885507f0) at pl_exec.c:6067\n#2 exec_eval_expr (estate=0x7ffe885507f0, expr=0x2489dc8, isNull=0x7ffe8855036f, rettype=0x7ffe88550370, rettypmod=0x7ffe88550374) at pl_exec.c:5696\n#3 0x00007fcab2e97d42 in exec_assign_expr (estate=estate@entry=0x7ffe885507f0, target=0x16250f8, expr=0x2489dc8) at pl_exec.c:5028\n#4 0x00007fcab2e98675 in exec_stmt_assign (stmt=0x2489d80, stmt=0x2489d80, estate=0x7ffe885507f0) at pl_exec.c:2155\n#5 exec_stmts (estate=estate@entry=0x7ffe885507f0, stmts=0x2489ad8) at pl_exec.c:2019\n#6 0x00007fcab2e9b672 in exec_stmt_block (estate=estate@entry=0x7ffe885507f0, block=block@entry=0x248af48) at pl_exec.c:1942\n#7 0x00007fcab2e9b6cb in exec_toplevel_block (estate=estate@entry=0x7ffe885507f0, block=0x248af48) at pl_exec.c:1633\n#8 0x00007fcab2e9bf84 in plpgsql_exec_function (func=func@entry=0x1035388, fcinfo=fcinfo@entry=0x40a76b8, simple_eval_estate=simple_eval_estate@entry=0x0, simple_eval_resowner=simple_eval_resowner@entry=0x0,\n procedure_resowner=procedure_resowner@entry=0x0, atomic=atomic@entry=true) at pl_exec.c:622\n#9 0x00007fcab2ea7454 in plpgsql_call_handler (fcinfo=0x40a76b8) at pl_handler.c:277\n#10 0x0000000000658307 in ExecAggPlainTransByRef (setno=<optimized out>, aggcontext=<optimized out>, pergroup=0x41a3358, pertrans=<optimized out>, aggstate=<optimized out>) at 
execExprInterp.c:4379\n#11 ExecInterpExpr (state=0x40a7870, econtext=0x1843a58, isnull=<optimized out>) at execExprInterp.c:1770\n#12 0x0000000000670282 in ExecEvalExprSwitchContext (isNull=0x7ffe88550b47, econtext=<optimized out>, state=<optimized out>) at ../../../src/include/executor/executor.h:341\n#13 advance_aggregates (aggstate=<optimized out>, aggstate=<optimized out>) at nodeAgg.c:824\n#14 agg_fill_hash_table (aggstate=0x1843630) at nodeAgg.c:2544\n#15 ExecAgg (pstate=0x1843630) at nodeAgg.c:2154\n#16 0x0000000000662c58 in ExecProcNodeInstr (node=0x1843630) at execProcnode.c:480\n#17 0x000000000067c7f3 in ExecProcNode (node=0x1843630) at ../../../src/include/executor/executor.h:259\n#18 ExecHashJoinImpl (parallel=false, pstate=0x1843390) at nodeHashjoin.c:268\n#19 ExecHashJoin (pstate=0x1843390) at nodeHashjoin.c:621\n#20 0x0000000000662c58 in ExecProcNodeInstr (node=0x1843390) at execProcnode.c:480\n#21 0x000000000067c7f3 in ExecProcNode (node=0x1843390) at ../../../src/include/executor/executor.h:259\n#22 ExecHashJoinImpl (parallel=false, pstate=0x18430f0) at nodeHashjoin.c:268\n#23 ExecHashJoin (pstate=0x18430f0) at nodeHashjoin.c:621\n#24 0x0000000000662c58 in ExecProcNodeInstr (node=0x18430f0) at execProcnode.c:480\n#25 0x000000000068de76 in ExecProcNode (node=0x18430f0) at ../../../src/include/executor/executor.h:259\n#26 ExecSort (pstate=0x1842ee0) at nodeSort.c:149\n#27 0x0000000000662c58 in ExecProcNodeInstr (node=0x1842ee0) at execProcnode.c:480\n#28 0x000000000066ced9 in ExecProcNode (node=0x1842ee0) at ../../../src/include/executor/executor.h:259\n#29 fetch_input_tuple (aggstate=aggstate@entry=0x1842908) at nodeAgg.c:563\n#30 0x00000000006701b2 in agg_retrieve_direct (aggstate=0x1842908) at nodeAgg.c:2346\n#31 ExecAgg (pstate=0x1842908) at nodeAgg.c:2161\n#32 0x0000000000662c58 in ExecProcNodeInstr (node=0x1842908) at execProcnode.c:480\n#33 0x000000000065be12 in ExecProcNode (node=0x1842908) at ../../../src/include/executor/executor.h:259\n#34 
ExecutePlan (execute_once=<optimized out>, dest=0x5fb39f8, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1842908, estate=0x1842298)\n at execMain.c:1636\n#35 standard_ExecutorRun (queryDesc=0x2356ab8, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:363\n#36 0x00007fcab32bd67d in pgss_ExecutorRun (queryDesc=0x2356ab8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:1010\n#37 0x00007fcab30b7781 in explain_ExecutorRun (queryDesc=0x2356ab8, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at auto_explain.c:320\n#38 0x00000000007d010e in PortalRunSelect (portal=portal@entry=0xf79598, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x5fb39f8) at pquery.c:924\n#39 0x00000000007d15bf in PortalRun (portal=<optimized out>, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x5fb39f8, altdest=0x5fb39f8, qc=0x7ffe885512d0) at pquery.c:768\n#40 0x00000000007cd417 in exec_simple_query (\n query_string=0x15ee788 \"-- report: thresrept: None: E-UTRAN/eNB KPIs: ['2023-07-19 2023-07-20', '0 23', '4', '{\\\"report_type\\\": \\\"hourly\\\", \\\"busy_hour_granularity\\\": \\\"\\\", \\\"metric\\\": \\\"ue_context_abnorm_rel_enb_all_pct\\\"}']\\n--BEGIN SQ\"...) 
at postgres.c:1250\n#41 0x00000000007cda17 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4593\n#42 0x00000000004937f7 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4511\n#43 BackendStartup (port=0xf19cb0) at postmaster.c:4239\n#44 ServerLoop () at postmaster.c:1806\n#45 0x000000000074ba0d in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0xef1300) at postmaster.c:1478\n#46 0x0000000000494666 in main (argc=3, argv=0xef1300) at main.c:202\n\n(gdb) p *plansource\n$1 = {magic = 195726186, raw_parse_tree = 0x24914a8, query_string = 0x2491950 \"c := array_upper(x, 1)\", commandTag = CMDTAG_SELECT, param_types = 0x0, num_params = 0, parserSetup = 0x7fcab2e8efb0 <plpgsql_parser_setup>, \n parserSetupArg = 0x2489dc8, cursor_options = 0, fixed_result = false, resultDesc = 0x2491980, context = 0x2491280, query_list = 0x2491bb8, relationOids = 0x0, invalItems = 0x0, search_path = 0x2491f68, \n query_context = 0x2491aa0, rewriteRoleId = 16714, rewriteRowSecurity = true, dependsOnRLS = false, gplan = 0x0, is_oneshot = false, is_complete = true, is_saved = true, is_valid = true, generation = 30, node = {\n prev = 0x248d7b0, next = 0x24930a0}, generic_cost = 0.012500000000000001, total_custom_cost = 0, num_custom_plans = 0, num_generic_plans = 30}\n\nNote that the prior query seems to have timed out in the same / similar plpgsql\nstatement:\n\n65/138 E-UTRAN/eNB KPIs: e-UTRAN: hourly: _ON_AIR_FILTER_=Any Status: failed to run: ERROR: canceling statement due to statement timeout\nCONTEXT: PL/pgSQL assignment \"c := array_upper(x, 1)\"\nPL/pgSQL function array_add(anycompatiblearray,anycompatiblearray) line 12 at assignment\n\n65/138 E-UTRAN/eNB KPIs: e-UTRAN: hourly: _ON_AIR_FILTER_=Off-Air: failed to run: connection to server at \"database\" (127.0.0.1), port 5432 failed: FATAL: the database system is in recovery mode\n\nI haven't tried yet, but the crash is probably not consistently reproducible -\nit would've 
run the same report yesterday and the day before, etc.\n\nFor some reason, gdb doesn't show source code in \"list\". I think I've fixed\nthis before, but now cannot remember how.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 20 Jul 2023 10:42:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg15.3: dereference null *plan in CachedPlanIsSimplyValid/plpgsql" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> A production instance crashed like so.\n\nOh, interesting. Apparently we got past the \"Has cache invalidation fired\non this plan?\" check:\n\n if (!plansource->is_valid || plan != plansource->gplan || !plan->is_valid)\n return false;\n\nbecause \"plan\" and \"plansource->gplan\" were *both* null, which is\na case the code wasn't expecting. plansource.c seems to be paranoid\nabout gplan possibly being null everywhere else but here :-(\n\n> Note that the prior query seems to have timed out in the same / similar plpgsql\n> statement:\n\nHm. Perhaps that helps explain how we got into this state. It must\nbe some weird race condition, given the lack of prior reports.\nAnyway, adding a check for plan not being null seems like the\nobvious fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jul 2023 11:55:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15.3: dereference null *plan in CachedPlanIsSimplyValid/plpgsql" } ]
[ { "msg_contents": "Hi,\n\nRecently chipmunk has failed with the following errors at [1]:\n\nmake -C '../../..'\nDESTDIR='/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install install\n>'/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install/log/install.log\n2>&1\nmake -j1 checkprep\n>>'/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install/log/install.log\n2>&1\necho \"# +++ regress check in src/test/regress +++\" &&\nPATH=\"/home/pgbfarm/buildroot/HEAD/pgsql.build/tmp_install/home/pgbfarm/buildroot/HEAD/inst/bin:/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress:$PATH\"\nLD_LIBRARY_PATH=\"/home/pgbfarm/buildroot/HEAD/pgsql.build/tmp_install/home/pgbfarm/buildroot/HEAD/inst/lib\"\n ../../../src/test/regress/pg_regress --temp-instance=./tmp_check\n--inputdir=. --bindir=\n--temp-config=/tmp/buildfarm-BewC4d/bfextra.conf --no-locale\n--port=5678 --dlpath=. --max-concurrent-tests=20 --port=5678\n--schedule=./parallel_schedule\n# +++ regress check in src/test/regress +++\n# postmaster failed, examine\n\"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/log/postmaster.log\"\nfor the reason\nBail out!GNUmakefile:118: recipe for target 'check' failed\nmake: *** [check] Error 2\nlog files for step check:\n\npostmaster.log shows the following error:\n2023-07-20 08:21:00.343 GMT [27063] LOG: unrecognized configuration\nparameter \"force_parallel_mode\" in file\n\"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\nline 835\n2023-07-20 08:21:00.344 GMT [27063] FATAL: configuration file\n\"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\ncontains errors\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-07-20%2006%3A48%3A23\n\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 21 Jul 2023 09:10:54 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Buildfarm failures on chipmunk" }, { "msg_contents": 
"HI,\n\n> On Jul 21, 2023, at 11:40, vignesh C <vignesh21@gmail.com> wrote:\n> \n> Hi,\n> \n> Recently chipmunk has failed with the following errors at [1]:\n> \n> make -C '../../..'\n> DESTDIR='/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install install\n>> '/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install/log/install.log\n> 2>&1\n> make -j1 checkprep\n>>> '/home/pgbfarm/buildroot/HEAD/pgsql.build'/tmp_install/log/install.log\n> 2>&1\n> echo \"# +++ regress check in src/test/regress +++\" &&\n> PATH=\"/home/pgbfarm/buildroot/HEAD/pgsql.build/tmp_install/home/pgbfarm/buildroot/HEAD/inst/bin:/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress:$PATH\"\n> LD_LIBRARY_PATH=\"/home/pgbfarm/buildroot/HEAD/pgsql.build/tmp_install/home/pgbfarm/buildroot/HEAD/inst/lib\"\n> ../../../src/test/regress/pg_regress --temp-instance=./tmp_check\n> --inputdir=. --bindir=\n> --temp-config=/tmp/buildfarm-BewC4d/bfextra.conf --no-locale\n> --port=5678 --dlpath=. --max-concurrent-tests=20 --port=5678\n> --schedule=./parallel_schedule\n> # +++ regress check in src/test/regress +++\n> # postmaster failed, examine\n> \"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/log/postmaster.log\"\n> for the reason\n> Bail out!GNUmakefile:118: recipe for target 'check' failed\n> make: *** [check] Error 2\n> log files for step check:\n> \n> postmaster.log shows the following error:\n> 2023-07-20 08:21:00.343 GMT [27063] LOG: unrecognized configuration\n> parameter \"force_parallel_mode\" in file\n> \"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\n> line 835\n> 2023-07-20 08:21:00.344 GMT [27063] FATAL: configuration file\n> \"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\n> contains errors\n> \n> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-07-20%2006%3A48%3A23\n> \n> Thoughts?\n> \n> Regards,\n> Vignesh\n> \n> \n\n\nIt seems `force_parallel_mode` has been 
removed in 5352ca22e\n\nConfig should use `debug_parallel_query` instead.\n\nZhang Mingli\nhttps://www.hashdata.xyz", "msg_date": "Fri, 21 Jul 2023 12:13:35 +0800", "msg_from": "Mingli Zhang <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Buildfarm failures on chipmunk" }, { "msg_contents": "At Fri, 21 Jul 2023 09:10:54 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> postmaster.log shows the following error:\n> 2023-07-20 08:21:00.343 GMT [27063] LOG: unrecognized configuration\n> parameter \"force_parallel_mode\" in file\n> \"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\n> line 835\n> 2023-07-20 08:21:00.344 GMT [27063] FATAL: configuration file\n> \"/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/regress/tmp_check/data/postgresql.conf\"\n> contains errors\n> \n> Thoughts?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-07-20%2006%3A48%3A23\n\n> $Script_Config = {\n> ...\n> 'extra_config' => {\n> 'DEFAULT' => [\n> ...\n> 'HEAD' => [\n> 'force_parallel_mode = 
regress'\n> ]\n> },\n\nThe BF config needs a fix for HEAD.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Jul 2023 13:14:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Buildfarm failures on chipmunk" } ]
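Concretely, the buildfarm-side fix Horiguchi-san points at is a one-line change to the animal's extra_config entry for HEAD, swapping in the GUC's new name. An illustrative fragment (the surrounding build-farm.conf layout on chipmunk is elided here, as in the quote above):

```
'extra_config' => {
    ...
    'HEAD' => [
        # force_parallel_mode was removed by commit 5352ca22e;
        # on HEAD the setting is now spelled debug_parallel_query
        'debug_parallel_query = regress'
    ]
},
```

Older branches that still recognize force_parallel_mode can keep the old spelling, which is why only the 'HEAD' stanza needs updating.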
[ { "msg_contents": "Added my v1 patch to add REINDEX to event triggers.\n\nI originally built this against pg15 but rebased to master for the patch\nto hopefully make it easier for maintainers to merge. The rebase was\nautomatic so it should be easy to include into any recent version. I'd love\nfor this to land in pg16 if possible.\n\nI added regression tests and they are passing. I also formatted the code\nusing the project tools. Docs are included too.\n\nA few notes:\n1. I tried to not touch the function parameters too much and opted to\ncreate a convenience function that makes it easy to attach index or table\nOIDs to the current event trigger list.\n2. I made sure both concurrent and regular reindex of index, table,\ndatabase work as expected.\n3. The ddl end command will make available all indexes that were modified\nby the reindex command. This is a large list when you run \"reindex\ndatabase\" but works really well. I debated returning records of the table\nor DB that were reindexed but that doesn't really make sense. Returning\neach index fits my use case of building an audit record around the index\nlifecycle (hence the motivation for the patch).\n\nThanks for your time,\n\nGarrett Thornburg", "msg_date": "Thu, 20 Jul 2023 22:47:00 -0600", "msg_from": "Garrett Thornburg <film42@gmail.com>", "msg_from_op": true, "msg_subject": "PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Thu, Jul 20, 2023 at 10:47:00PM -0600, Garrett Thornburg wrote:\n> Added my v1 patch to add REINDEX to event triggers.\n>\n> I originally built this against pg15 but rebased to master for the patch\n> to hopefully make it easier for maintainers to merge. The rebase was\n> automatic so it should be easy to include into any recent version. I'd love\n> for this to land in pg16 if possible.\n\nThe development of Postgres 16 has finished last April and this\nversion is in a stabilization period. We are currently running the\nfirst commit fest of PostgreSQL 17, though. 
I would recommend to\nregister your patch here so as it is not lost:\nhttps://commitfest.postgresql.org/44/\n\n> I added regression tests and they are passing. I also formatted the code\n> using the project tools. Docs are included too.\n\nHmm. What are the use cases you have in mind where you need to know\nthe list of indexes listed by an event trigger like this? Just\nmonitoring to know which indexes have been rebuilt? We have a\ntable_rewrite which is equivalent to check if a relfilenode has\nchanged, so why not, I guess..\n\nAssuming that ddl_command_start is not supported is incorrect. For\nexample, with your patch applied, if I do the following setup:\ncreate event trigger regress_event_trigger on ddl_command_start\n execute procedure test_event_trigger();\ncreate function test_event_trigger() returns event_trigger as $$\n BEGIN\n RAISE NOTICE 'test_event_trigger: % %', tg_event, tg_tag;\n END\n $$ language plpgsql;\n\nThen a REINDEX gives me that:\nreindex table pg_class;\nNOTICE: 00000: test_event_trigger: ddl_command_start REINDEX\n\nThe first paragraphs of event-trigger.sgml describe the following:\n The <literal>ddl_command_start</literal> event occurs just before the\n execution of a <literal>CREATE</literal>, <literal>ALTER</literal>, <literal>DROP</literal>,\n <literal>SECURITY LABEL</literal>,\n <literal>COMMENT</literal>, <literal>GRANT</literal> or <literal>REVOKE</literal>\n[...]\n The <literal>ddl_command_end</literal> event occurs just after the execution of\n this same set of commands. To obtain more details on the <acronym>DDL</acronym>\n\nSo it would need a refresh to list REINDEX.\n\n case T_ReindexStmt:\n- lev = LOGSTMT_ALL; /* should this be DDL? 
*/\n+ lev = LOGSTMT_DDL;\n\nThis, unfortunately, may not be as right as it is simple because\nREINDEX is not a DDL as far as I know (it is not in the SQL\nstandard).\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 15:55:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Fri, Jul 21, 2023 at 12:47 PM Garrett Thornburg <film42@gmail.com> wrote:\n>\n> Added my v1 patch to add REINDEX to event triggers.\n>\n> I originally built this against pg15 but rebased to master for the patch to hopefully make it easier for maintainers to merge. The rebase was automatic so it should be easy to include into any recent version. I'd love for this to land in pg16 if possible.\n>\n> I added regression tests and they are passing. I also formatted the code using the project tools. Docs are included too.\n>\n> A few notes:\n> 1. I tried to not touch the function parameters too much and opted to create a convenience function that makes it easy to attach index or table OIDs to the current event trigger list.\n> 2. I made sure both concurrent and regular reindex of index, table, database work as expected.\n> 3. The ddl end command will make available all indexes that were modified by the reindex command. This is a large list when you run \"reindex database\" but works really well. I debated returning records of the table or DB that were reindexed but that doesn't really make sense. Returning each index fits my use case of building an audit record around the index lifecycle (hence the motivation for the patch).\n>\n> Thanks for your time,\n>\n> Garrett Thornburg\n\n\nmain/postgres/src/backend/tcop/utility.c\n535: /*\n536: * standard_ProcessUtility itself deals only with utility commands for\n537: * which we do not provide event trigger support. 
Commands that do have\n538: * such support are passed down to ProcessUtilitySlow, which contains the\n539: * necessary infrastructure for such triggers.\n540: *\n541: * This division is not just for performance: it's critical that the\n542: * event trigger code not be invoked when doing START TRANSACTION for\n543: * example, because we might need to refresh the event trigger cache,\n544: * which requires being in a valid transaction.\n545: */\n\nso T_ReindexStmt should only be in ProcessUtilitySlow, if you want\nto create an event trigger on reindex?\n\nregression tests work fine. I even play with partitions.\n\n\n", "msg_date": "Tue, 25 Jul 2023 16:34:47 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Tue, Jul 25, 2023 at 04:34:47PM +0800, jian he wrote:\n> so T_ReindexStmt should only be in ProcessUtilitySlow, if you want\n> to create an event trigger on reindex?\n> \n> regression tests work fine. I even play with partitions.\n\nIt would be an idea to have some regression tests for partitions,\nactually, so as some patterns around ReindexMultipleInternal() are\nchecked. We could have a REINDEX DATABASE in a TAP test with an event\ntrigger, as well, but I don't feel strongly about the need to do that\nmuch extra work in 090_reindexdb.pl or 091_reindexdb_all.pl if\npartitions cover the multi-table case.\n--\nMichael", "msg_date": "Wed, 26 Jul 2023 08:50:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Wed, Jul 26, 2023 at 7:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 25, 2023 at 04:34:47PM +0800, jian he wrote:\n> > so T_ReindexStmt should only be in ProcessUtilitySlow, if you want\n> > to create an event trigger on reindex?\n> >\n> > regression tests work fine. 
I even play with partitions.\n>\n> It would be an idea to have some regression tests for partitions,\n> actually, so as some patterns around ReindexMultipleInternal() are\n> checked. We could have a REINDEX DATABASE in a TAP test with an event\n> trigger, as well, but I don't feel strongly about the need to do that\n> much extra work in 090_reindexdb.pl or 091_reindexdb_all.pl if\n> partitions cover the multi-table case.\n> --\n> Michael\n\nquite verbose, copied from partition-info.sql. meet the expectation:\npartitioned index will do nothing, partition index will trigger event\ntrigger.\n------------------------------------------------\nDROP EVENT TRIGGER IF EXISTS end_reindex_command CASCADE;\nDROP EVENT TRIGGER IF EXISTS start_reindex_command CASCADE;\n\nBEGIN;\nCREATE OR REPLACE FUNCTION reindex_end_command()\nRETURNS event_trigger AS $$\nDECLARE\n obj record;\nBEGIN\n raise notice 'begin of reindex_end_command';\n\n FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()\n LOOP\n RAISE NOTICE\n 'obj.command_tag:% obj.object_type:% obj.schema_name:%\nobj.object_identity:%'\n ,obj.command_tag, obj.object_type,obj.schema_name,obj.object_identity;\n RAISE NOTICE 'ddl_end_command -- REINDEX: %', pg_get_indexdef(obj.objid);\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION start_reindex_command()\nRETURNS event_trigger AS $$\nDECLARE\n obj record;\nBEGIN\n FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()\n LOOP\n RAISE NOTICE\n 'obj.command_tag:% obj.object_type:% obj.schema_name:%\nobj.object_identity:%'\n , obj.command_tag, obj.object_type,obj.schema_name,obj.object_identity;\n RAISE NOTICE 'ddl_start_command -- REINDEX: %',\npg_get_indexdef(obj.objid);\n END LOOP;\n raise notice 'end of start_reindex_command';\nEND;\n$$ LANGUAGE plpgsql;\n\nBEGIN;\nCREATE EVENT TRIGGER end_reindex_command ON ddl_command_end\n WHEN TAG IN ('REINDEX') EXECUTE PROCEDURE reindex_end_command();\nCREATE EVENT TRIGGER start_reindex_command ON 
ddl_command_start\n WHEN TAG IN ('REINDEX') EXECUTE PROCEDURE start_reindex_command();\nCOMMIT;\n\n-- test Reindex Event Trigger\nBEGIN;\ndrop table if EXISTS ptif_test CASCADE;\nCREATE TABLE ptif_test (a int, b int) PARTITION BY range (a);\nCREATE TABLE ptif_test0 PARTITION OF ptif_test\n FOR VALUES FROM (minvalue) TO (0) PARTITION BY list (b);\nCREATE TABLE ptif_test01 PARTITION OF ptif_test0 FOR VALUES IN (1);\nCREATE TABLE ptif_test1 PARTITION OF ptif_test\n FOR VALUES FROM (0) TO (100) PARTITION BY list (b);\nCREATE TABLE ptif_test11 PARTITION OF ptif_test1 FOR VALUES IN (1);\n\nCREATE TABLE ptif_test2 PARTITION OF ptif_test\n FOR VALUES FROM (100) TO (200);\n-- This partitioned table should remain with no partitions.\nCREATE TABLE ptif_test3 PARTITION OF ptif_test\n FOR VALUES FROM (200) TO (maxvalue) PARTITION BY list (b);\n\n-- Test index partition tree\nCREATE INDEX ptif_test_index ON ONLY ptif_test (a);\n\nCREATE INDEX ptif_test0_index ON ONLY ptif_test0 (a);\nALTER INDEX ptif_test_index ATTACH PARTITION ptif_test0_index;\n\nCREATE INDEX ptif_test01_index ON ptif_test01 (a);\nALTER INDEX ptif_test0_index ATTACH PARTITION ptif_test01_index;\n\nCREATE INDEX ptif_test1_index ON ONLY ptif_test1 (a);\nALTER INDEX ptif_test_index ATTACH PARTITION ptif_test1_index;\n\nCREATE INDEX ptif_test11_index ON ptif_test11 (a);\nALTER INDEX ptif_test1_index ATTACH PARTITION ptif_test11_index;\n\nCREATE INDEX ptif_test2_index ON ptif_test2 (a);\nALTER INDEX ptif_test_index ATTACH PARTITION ptif_test2_index;\n\nCREATE INDEX ptif_test3_index ON ptif_test3 (a);\nALTER INDEX ptif_test_index ATTACH PARTITION ptif_test3_index;\nCOMMIT;\n\n--top level partitioned index. will recurse to each partition index.\nREINDEX INDEX CONCURRENTLY public.ptif_test_index;\n\n--ptif_test0 is partitioned table. it will index partition: ptif_test01_index\n-- event trigger will log ptif_test01_index\nREINDEX INDEX CONCURRENTLY public.ptif_test0_index;\n\n--ptif_test1_index is partitioned index. 
it will index partition:\nptif_test11_index\n-- event trigger will affect the partition index: ptif_test11_index\nREINDEX INDEX CONCURRENTLY public.ptif_test1_index;\n\n--ptif_test2 is a partition. event trigger will log ptif_test2_index\nREINDEX INDEX CONCURRENTLY public.ptif_test2_index;\n\n--no partitions. event trigger won't do anything.\nREINDEX INDEX CONCURRENTLY public.ptif_test3_index;\n\nreindex table ptif_test; --top level. will recurse to each partition index.\nreindex table ptif_test0; -- will direct to ptif_test01\nreindex table ptif_test01; -- will index its associated index\nreindex table ptif_test11; -- will index its associated index\nreindex table ptif_test2; -- will index its associated index\nreindex table ptif_test3; -- no partition, index won't do anything.\n\nDROP EVENT TRIGGER IF EXISTS end_reindex_command CASCADE;\nDROP EVENT TRIGGER IF EXISTS start_reindex_command CASCADE;\nDROP FUNCTION IF EXISTS reindex_start_command;\nDROP FUNCTION IF EXISTS reindex_end_command;\nDROP TABLE if EXISTS ptif_test CASCADE;\n-----------------------\n\n\n", "msg_date": "Wed, 26 Jul 2023 18:30:09 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "Thanks! I added the patch to the commitfest. TIL.\n\n> Hmm. What are the use cases you have in mind where you need to know the\nlist of indexes listed by an event trigger like this?\n\nDirect use case is general index health and rebuild. We had a recent\nupgrade go poorly (our fault) that left a bunch of indexes in a corrupted\nstate. We ran a mass rebuild a few times. Would have been great to be able\nto observe changes happening in scripts. That left us wishing there was a\nway to monitor all changes around indexes. 
Figured I'd add support since my\nfirst thread seemed somewhat supportive of the idea.\n\n> We have a table_rewrite which is equivalent to check if a relfilenode has\nchanged, so why not, I guess..\n\nYeah. I was actually debating if we need to add a new reindex event instead\nof building on top of the DDL start/end. You are right that the\ndocumentation does not include nonsql commands like reindex. Maybe I should\nconsider going that direction again?\n\n> Assuming that ddl_command_start is not supported is incorrect.\n\nAlso great callout. Using `SELECT * FROM pg_event_trigger_ddl_commands()`\nin the event does not yield anything. Bad assumption there. I'll add a test\nwith my next patch version.\n\nRe: LOGSTMT_ALL\n\nAgree here. I think this is more for reference than anything else so I'm\nfine to call it LOGSTMT_ALL, but that goes back to the previous question\naround ddl events vs a new event similar to table_rewrite.\n\nP.S. Sorry for the double post. Sent from the wrong email so the mailing\nlist didn't get the message.\n\nOn Tue, Jul 25, 2023 at 12:55 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Jul 20, 2023 at 10:47:00PM -0600, Garrett Thornburg wrote:\n> > Added my v1 patch to add REINDEX to event triggers.\n> >\n> > I originally built this against pg15 but rebased to master for the patch\n> > to hopefully make it easier for maintainers to merge. The rebase was\n> > automatic so it should be easy to include into any recent version. I'd\n> love\n> > for this to land in pg16 if possible.\n>\n> The development of Postgres 16 has finished last April and this\n> version is in a stabilization period. We are currently running the\n> first commit fest of PostgreSQL 17, though. I would recommend to\n> register your patch here so as it is not lost:\n> https://commitfest.postgresql.org/44/\n>\n> > I added regression tests and they are passing. I also formatted the code\n> > using the project tools. Docs are included too.\n>\n> Hmm. 
What are the use cases you have in mind where you need to know\n> the list of indexes listed by an event trigger like this? Just\n> monitoring to know which indexes have been rebuilt? We have a\n> table_rewrite which is equivalent to check if a relfilenode has\n> changed, so why not, I guess..\n>\n> Assuming that ddl_command_start is not supported is incorrect. For\n> example, with your patch applied, if I do the following setup:\n> create event trigger regress_event_trigger on ddl_command_start\n> execute procedure test_event_trigger();\n> create function test_event_trigger() returns event_trigger as $$\n> BEGIN\n> RAISE NOTICE 'test_event_trigger: % %', tg_event, tg_tag;\n> END\n> $$ language plpgsql;\n>\n> Then a REINDEX gives me that:\n> reindex table pg_class;\n> NOTICE: 00000: test_event_trigger: ddl_command_start REINDEX\n>\n> The first paragraphs of event-trigger.sgml describe the following:\n> The <literal>ddl_command_start</literal> event occurs just before the\n> execution of a <literal>CREATE</literal>, <literal>ALTER</literal>,\n> <literal>DROP</literal>,\n> <literal>SECURITY LABEL</literal>,\n> <literal>COMMENT</literal>, <literal>GRANT</literal> or\n> <literal>REVOKE</literal>\n> [...]\n> The <literal>ddl_command_end</literal> event occurs just after the\n> execution of\n> this same set of commands. To obtain more details on the\n> <acronym>DDL</acronym>\n>\n> So it would need a refresh to list REINDEX.\n>\n> case T_ReindexStmt:\n> - lev = LOGSTMT_ALL; /* should this be DDL? */\n> + lev = LOGSTMT_DDL;\n>\n> This, unfortunately, may not be as right as it is simple because\n> REINDEX is not a DDL as far as I know (it is not in the SQL\n> standard).\n> --\n> Michael\n>\n\nThanks! I added the patch to the commitfest. TIL.> Hmm. What are the use cases you have in mind where you need to know the list of indexes listed by an event trigger like this?Direct use case is general index health and rebuild. 
We had a recent upgrade go poorly (our fault) that left a bunch of\nindexes in a corrupted state. We ran a mass rebuild a few times. Would\nhave been great to be able to observe changes happening in scripts.\nThat left us wishing there was a way to monitor all changes around\nindexes.", "msg_date": "Wed, 26 Jul 2023 22:33:55 -0600", "msg_from": "Garrett Thornburg <film42@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "Thank you for this, jian he! I will include it in the next patch version.\n\nP.S. Sorry for the double post. Sent from the wrong email address so I'm\nresending so the mailing list gets the email. My apologies!\n\nOn Wed, Jul 26, 2023 at 4:30 AM jian he <jian.universality@gmail.com> wrote:\n\n> On Wed, Jul 26, 2023 at 7:51 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Tue, Jul 25, 2023 at 04:34:47PM +0800, jian he wrote:\n> > > so T_ReindexStmt should only be in ProcessUtilitySlow, if you want\n> > > to create an event trigger on reindex?\n> > >\n> > > regression tests work fine. I even play with partitions.\n> >\n> > It would be an idea to have some regression tests for partitions,\n> > actually, so as some patterns around ReindexMultipleInternal() are\n> > checked. We could have a REINDEX DATABASE in a TAP test with an event\n> > trigger, as well, but I don't feel strongly about the need to do that\n> > much extra work in 090_reindexdb.pl or 091_reindexdb_all.pl if\n> > partitions cover the multi-table case.\n> > --\n> > Michael\n>\n> quite verbose, copied from partition-info.sql. 
meet the expectation:\n> partitioned index will do nothing, partition index will trigger event\n> trigger.\n> ------------------------------------------------\n> DROP EVENT TRIGGER IF EXISTS end_reindex_command CASCADE;\n> DROP EVENT TRIGGER IF EXISTS start_reindex_command CASCADE;\n>\n> BEGIN;\n> CREATE OR REPLACE FUNCTION reindex_end_command()\n> RETURNS event_trigger AS $$\n> DECLARE\n> obj record;\n> BEGIN\n> raise notice 'begin of reindex_end_command';\n>\n> FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()\n> LOOP\n> RAISE NOTICE\n> 'obj.command_tag:% obj.object_type:% obj.schema_name:%\n> obj.object_identity:%'\n> ,obj.command_tag,\n> obj.object_type,obj.schema_name,obj.object_identity;\n> RAISE NOTICE 'ddl_end_command -- REINDEX: %',\n> pg_get_indexdef(obj.objid);\n> END LOOP;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> CREATE OR REPLACE FUNCTION start_reindex_command()\n> RETURNS event_trigger AS $$\n> DECLARE\n> obj record;\n> BEGIN\n> FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands()\n> LOOP\n> RAISE NOTICE\n> 'obj.command_tag:% obj.object_type:% obj.schema_name:%\n> obj.object_identity:%'\n> , obj.command_tag,\n> obj.object_type,obj.schema_name,obj.object_identity;\n> RAISE NOTICE 'ddl_start_command -- REINDEX: %',\n> pg_get_indexdef(obj.objid);\n> END LOOP;\n> raise notice 'end of start_reindex_command';\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> BEGIN;\n> CREATE EVENT TRIGGER end_reindex_command ON ddl_command_end\n> WHEN TAG IN ('REINDEX') EXECUTE PROCEDURE reindex_end_command();\n> CREATE EVENT TRIGGER start_reindex_command ON ddl_command_start\n> WHEN TAG IN ('REINDEX') EXECUTE PROCEDURE start_reindex_command();\n> COMMIT;\n>\n> -- test Reindex Event Trigger\n> BEGIN;\n> drop table if EXISTS ptif_test CASCADE;\n> CREATE TABLE ptif_test (a int, b int) PARTITION BY range (a);\n> CREATE TABLE ptif_test0 PARTITION OF ptif_test\n> FOR VALUES FROM (minvalue) TO (0) PARTITION BY list (b);\n> CREATE TABLE ptif_test01 PARTITION OF ptif_test0 FOR VALUES 
IN (1);\n> CREATE TABLE ptif_test1 PARTITION OF ptif_test\n> FOR VALUES FROM (0) TO (100) PARTITION BY list (b);\n> CREATE TABLE ptif_test11 PARTITION OF ptif_test1 FOR VALUES IN (1);\n>\n> CREATE TABLE ptif_test2 PARTITION OF ptif_test\n> FOR VALUES FROM (100) TO (200);\n> -- This partitioned table should remain with no partitions.\n> CREATE TABLE ptif_test3 PARTITION OF ptif_test\n> FOR VALUES FROM (200) TO (maxvalue) PARTITION BY list (b);\n>\n> -- Test index partition tree\n> CREATE INDEX ptif_test_index ON ONLY ptif_test (a);\n>\n> CREATE INDEX ptif_test0_index ON ONLY ptif_test0 (a);\n> ALTER INDEX ptif_test_index ATTACH PARTITION ptif_test0_index;\n>\n> CREATE INDEX ptif_test01_index ON ptif_test01 (a);\n> ALTER INDEX ptif_test0_index ATTACH PARTITION ptif_test01_index;\n>\n> CREATE INDEX ptif_test1_index ON ONLY ptif_test1 (a);\n> ALTER INDEX ptif_test_index ATTACH PARTITION ptif_test1_index;\n>\n> CREATE INDEX ptif_test11_index ON ptif_test11 (a);\n> ALTER INDEX ptif_test1_index ATTACH PARTITION ptif_test11_index;\n>\n> CREATE INDEX ptif_test2_index ON ptif_test2 (a);\n> ALTER INDEX ptif_test_index ATTACH PARTITION ptif_test2_index;\n>\n> CREATE INDEX ptif_test3_index ON ptif_test3 (a);\n> ALTER INDEX ptif_test_index ATTACH PARTITION ptif_test3_index;\n> COMMIT;\n>\n> --top level partitioned index. will recurse to each partition index.\n> REINDEX INDEX CONCURRENTLY public.ptif_test_index;\n>\n> --ptif_test0 is partitioned table. it will index partition:\n> ptif_test01_index\n> -- event trigger will log ptif_test01_index\n> REINDEX INDEX CONCURRENTLY public.ptif_test0_index;\n>\n> --ptif_test1_index is partitioned index. it will index partition:\n> ptif_test11_index\n> -- event trigger will effect on partion index:ptif_test11_index\n> REINDEX INDEX CONCURRENTLY public.ptif_test1_index;\n>\n> --ptif_test2 is a partition. event trigger will log ptif_test2_index\n> REINDEX INDEX CONCURRENTLY public.ptif_test2_index;\n>\n> --no partitions. 
event trigger won't do anything.\n> REINDEX INDEX CONCURRENTLY public.ptif_test3_index;\n>\n> reindex table ptif_test; --top level. will recurse to each partition\n> index.\n> reindex table ptif_test0; -- will direct to ptif_test01\n> reindex table ptif_test01; -- will index its associated index\n> reindex table ptif_test11; -- will index its associated index\n> reindex table ptif_test2; -- will index its associated index\n> reindex table ptif_test3; -- no partition, index won't do anything.\n>\n> DROP EVENT TRIGGER IF EXISTS end_reindex_command CASCADE;\n> DROP EVENT TRIGGER IF EXISTS start_reindex_command CASCADE;\n> DROP FUNCTION IF EXISTS reindex_start_command;\n> DROP FUNCTION IF EXISTS reindex_end_command;\n> DROP TABLE if EXISTS ptif_test CASCADE;\n> -----------------------\n>", "msg_date": "Wed, 26 Jul 2023 22:43:01 -0600", "msg_from": "Garrett Thornburg <film42@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "I added a v2 patch for adding REINDEX to event triggers. The following has\nchanged:\n\n1. I fixed the docs to note that ddl_command_start is supported for\nREINDEX. Thanks, Michael!\n2. I added Jian He's excellent partition table test and updated the\nexpectations to include that regression test.\n\nWhat is still up for debate:\n\nMichael pointed out that REINDEX probably isn't DDL because the docs only\nspecify commands in the standard like CREATE, ALTER, etc. It might be worth\ncreating a new event to trigger on (similar to what was done for\ntable_rewrite) to make a REINDEX trigger fit in a little nicer. Let me\nknow what you think here. Until then, I left the code as `lev =\nLOGSTMT_DDL` for `T_ReindexStmt`.\n\nThanks,\nGarrett\n\nOn Thu, Jul 20, 2023 at 10:47 PM Garrett Thornburg <film42@gmail.com> wrote:\n\n> Added my v1 patch to add REINDEX to event triggers.\n>\n> I originally built this against pg15 but rebased to master for the patch\n> to hopefully make it easier for maintainers to merge. 
The rebase was\n> automatic so it should be easy to include into any recent version. I'd love\n> for this to land in pg16 if possible.\n>\n> I added regression tests and they are passing. I also formatted the code\n> using the project tools. Docs are included too.\n>\n> A few notes:\n> 1. I tried to not touch the function parameters too much and opted to\n> create a convenience function that makes it easy to attach index or table\n> OIDs to the current event trigger list.\n> 2. I made sure both concurrent and regular reindex of index, table,\n> database work as expected.\n> 3. The ddl end command will make available all indexes that were modified\n> by the reindex command. This is a large list when you run \"reindex\n> database\" but works really well. I debated returning records of the table\n> or DB that were reindexed but that doesn't really make sense. Returning\n> each index fits my use case of building an audit record around the index\n> lifecycle (hence the motivation for the patch).\n>\n> Thanks for your time,\n>\n> Garrett Thornburg\n>", "msg_date": "Wed, 26 Jul 2023 22:43:01 -0600", "msg_from": "Garrett Thornburg <film42@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "Greetings\n\nOn 27.07.23 06:43, Garrett Thornburg wrote:\n> I added a v2 patch for adding REINDEX to event triggers. The following \n> has changed:\n>\n> 1. I fixed the docs to note that ddl_command_start is supported for \n> REINDEX. Thanks, Michael!\n> 2. I added Jian He's excellent partition table test and updated the \n> expectations to include that regression test.\n>\n> What is still up for debate:\n>\n> Michael pointed out that REINDEX probably isn't DDL because the docs \n> only specify commands in the standard like CREATE, ALTER, etc. It \n> might be worth creating a new event to trigger on (similar to what was \n> done for table_rewrite) to make a REINDEX trigger fit in a little \n> nicer. 
Let me know what you think here. Until then, I left the code as \n> `lev = LOGSTMT_DDL` for `T_ReindexStmt`.\n>\n> Thanks,\n> Garrett\n>\nThanks for the patch, Garrett!\n\nI was unable to apply it in 7ef5f5f \n<https://github.com/postgres/postgres/commit/7ef5f5fb324096c7822c922ad59fd7fdd76f57b1>\n\n$ git apply -v v2-0001-Add-REINDEX-tag-to-event-triggers.patch\nChecking patch doc/src/sgml/event-trigger.sgml...\nChecking patch src/backend/commands/indexcmds.c...\nerror: while searching for:\nstatic List *ChooseIndexColumnNames(List *indexElems);\nstatic void ReindexIndex(RangeVar *indexRelation, ReindexParams *params,\n                                                  bool isTopLevel);\nstatic void RangeVarCallbackForReindexIndex(const RangeVar *relation,\nOid relId, Oid oldRelId, void *arg);\nstatic Oid      ReindexTable(RangeVar *relation, ReindexParams *params,\n\nerror: patch failed: src/backend/commands/indexcmds.c:94\nerror: src/backend/commands/indexcmds.c: patch does not apply\nChecking patch src/backend/tcop/utility.c...\nChecking patch src/include/tcop/cmdtaglist.h...\nChecking patch src/test/regress/expected/event_trigger.out...\nHunk #1 succeeded at 556 (offset 2 lines).\nChecking patch src/test/regress/sql/event_trigger.sql...\n\nAm I missing something?\n\nJim", "msg_date": "Wed, 30 Aug 2023 14:38:52 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Wed, Aug 30, 2023 at 8:38 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> Greetings\n>\n>\n>\n> I was unable to apply it in 7ef5f5f\n>\n> $ git apply -v v2-0001-Add-REINDEX-tag-to-event-triggers.patch\n> Checking patch doc/src/sgml/event-trigger.sgml...\n> Checking patch src/backend/commands/indexcmds.c...\n> error: 
while searching for:\n> static List *ChooseIndexColumnNames(List *indexElems);\n> static void ReindexIndex(RangeVar *indexRelation, ReindexParams *params,\n> bool isTopLevel);\n> static void RangeVarCallbackForReindexIndex(const RangeVar *relation,\n> Oid relId, Oid oldRelId, void *arg);\n> static Oid ReindexTable(RangeVar *relation, ReindexParams *params,\n>\n> error: patch failed: src/backend/commands/indexcmds.c:94\n> error: src/backend/commands/indexcmds.c: patch does not apply\n> Checking patch src/backend/tcop/utility.c...\n> Checking patch src/include/tcop/cmdtaglist.h...\n> Checking patch src/test/regress/expected/event_trigger.out...\n> Hunk #1 succeeded at 556 (offset 2 lines).\n> Checking patch src/test/regress/sql/event_trigger.sql...\n>\n> Am I missing something?\n>\n> Jim\n\nbecause of the change made here:\nhttps://git.postgresql.org/cgit/postgresql.git/diff/src/backend/commands/indexcmds.c?id=11af63fb48d278b86aa948a5b57f136ef03c2bb7\n\nI manually slightly edited the patch content:\nfrom\n static List *ChooseIndexColumnNames(List *indexElems);\n static void ReindexIndex(RangeVar *indexRelation, ReindexParams *params,\n bool isTopLevel);\nto\nstatic List *ChooseIndexColumnNames(const List *indexElems);\nstatic void ReindexIndex(const RangeVar *indexRelation, const\nReindexParams *params,\nbool isTopLevel);\n\ncan you try the attached one.", "msg_date": "Fri, 1 Sep 2023 17:23:07 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On 01.09.23 11:23, jian he wrote:\n> because of the change made here:\n> https://git.postgresql.org/cgit/postgresql.git/diff/src/backend/commands/indexcmds.c?id=11af63fb48d278b86aa948a5b57f136ef03c2bb7\n>\n> I manually slightly edited the patch content:\n> from\n> static List *ChooseIndexColumnNames(List *indexElems);\n> static void ReindexIndex(RangeVar *indexRelation, ReindexParams *params,\n> bool isTopLevel);\n> 
to\n> static List *ChooseIndexColumnNames(const List *indexElems);\n> static void ReindexIndex(const RangeVar *indexRelation, const\n> ReindexParams *params,\n> bool isTopLevel);\n>\n> can you try the attached one.\n\nThe patch applies cleanly. However, now the compiler complains about a \nqualifier mismatch\n\nindexcmds.c:2707:33: warning: assignment discards ‘const’ qualifier from \npointer target type [-Wdiscarded-qualifiers]\n  2707 |         currentReindexStatement = stmt;\n\n\nPerhaps 'const ReindexStmt *currentReindexStatement'?\n\nTests also work just fine. I also tested it against unique and partial \nindexes, and it worked as expected.\n\nJim", "msg_date": "Fri, 1 Sep 2023 12:40:22 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Fri, Sep 1, 2023 at 6:40 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> On 01.09.23 11:23, jian he wrote:\n> > because of the change made here:\n> > https://git.postgresql.org/cgit/postgresql.git/diff/src/backend/commands/indexcmds.c?id=11af63fb48d278b86aa948a5b57f136ef03c2bb7\n> >\n> > I manually slightly edited the patch content:\n> > from\n> > static List *ChooseIndexColumnNames(List *indexElems);\n> > static void ReindexIndex(RangeVar *indexRelation, ReindexParams *params,\n> > bool isTopLevel);\n> > to\n> > static List *ChooseIndexColumnNames(const List *indexElems);\n> > static void ReindexIndex(const RangeVar *indexRelation, const\n> > ReindexParams *params,\n> > bool isTopLevel);\n> >\n> > can you try the attached one.\n>\n> The patch applies cleanly. However, now the compiler complains about a\n> qualifier mismatch\n>\n> indexcmds.c:2707:33: warning: assignment discards 'const' qualifier from\n> pointer target type [-Wdiscarded-qualifiers]\n> 2707 | currentReindexStatement = stmt;\n>\n>\n> Perhaps 'const ReindexStmt *currentReindexStatement'?\n>\n> Tests also work just fine. 
I also tested it against unique and partial\n> indexes, and it worked as expected.\n>\n> Jim\n>\n\nThanks for pointing this out!\nalso due to commit\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=11af63fb48d278b86aa948a5b57f136ef03c2bb7\nExecReindex function input arguments also changed. so we need to\nrebase this patch.\n\nchange to\ncurrentReindexStatement = unconstify(ReindexStmt*, stmt);\nseems to work for me. I tested it, no warning. But I am not 100% sure.\n\nanyway I refactored the patch, making it git applyable\nalso change from \"currentReindexStatement = stmt;\" to\n\"currentReindexStatement = unconstify(ReindexStmt*, stmt);\" due to\nExecReindex function changes.", "msg_date": "Sat, 2 Sep 2023 00:10:11 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On 01.09.23 18:10, jian he wrote:\n> Thanks for pointing this out! \nThanks for the fix!\n> also due to commit\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=11af63fb48d278b86aa948a5b57f136ef03c2bb7\n> ExecReindex function input arguments also changed. so we need to\n> rebase this patch.\n>\n> change to\n> currentReindexStatement = unconstify(ReindexStmt*, stmt);\n> seems to work for me. I tested it, no warning. But I am not 100% sure.\n>\n> anyway I refactored the patch, making it git applyable\n> also change from \"currentReindexStatement = stmt;\" to\n> \"currentReindexStatement = unconstify(ReindexStmt*, stmt);\" due to\n> ExecReindex function changes.\n\nLGTM. It applies and builds cleanly, all tests pass and documentation \nalso builds ok. The CFbot seems also much happier now :)\n\nI tried this \"small stress test\" to see if the code was leaking .. 
but \nit all looks ok to me:\n\nDO $$\nBEGIN\n   FOR i IN 1..10000 LOOP\n     REINDEX INDEX reindex_test.reindex_test1_idx1;\n     REINDEX TABLE reindex_test.reindex_tester1;\n   END LOOP;\nEND;\n$$;\n\n\nJim\n\n\n\n", "msg_date": "Mon, 4 Sep 2023 20:00:52 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Mon, Sep 04, 2023 at 08:00:52PM +0200, Jim Jones wrote:\n> LGTM. It applies and builds cleanly, all tests pass and documentation also\n> builds ok. The CFbot seems also much happier now :)\n\n+\t/*\n+\t * Open and lock the relation. ShareLock is sufficient since we only need\n+\t * to prevent schema and data changes in it. The lock level used here\n+\t * should match catalog's reindex_relation().\n+\t */\n+\trel = try_table_open(relid, ShareLock);\n\nI was eyeing at 0003, and this strikes me as incorrect. Sure, this\nmatches what reindex_relation() does, but you've missed that\nCONCURRENTLY takes a lighter ShareUpdateExclusiveLock, and ShareLock\nconflicts with it. See:\nhttps://www.postgresql.org/docs/devel/explicit-locking.html\n\nSo, doesn't this disrupt a concurrent REINDEX?\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 16:15:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Fri, Oct 27, 2023 at 3:15 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 04, 2023 at 08:00:52PM +0200, Jim Jones wrote:\n> > LGTM. It applies and builds cleanly, all tests pass and documentation also\n> > builds ok. The CFbot seems also much happier now :)\n>\n> + /*\n> + * Open and lock the relation. ShareLock is sufficient since we only need\n> + * to prevent schema and data changes in it. 
The lock level used here\n> + * should match catalog's reindex_relation().\n> + */\n> + rel = try_table_open(relid, ShareLock);\n>\n> I was eyeing at 0003, and this strikes me as incorrect. Sure, this\n> matches what reindex_relation() does, but you've missed that\n> CONCURRENTLY takes a lighter ShareUpdateExclusiveLock, and ShareLock\n> conflicts with it. See:\n> https://www.postgresql.org/docs/devel/explicit-locking.html\n>\n> So, doesn't this disrupt a concurrent REINDEX?\n> --\n> Michael\n\nReindexPartitions called ReindexMultipleInternal\nReindexRelationConcurrently add reindex_event_trigger_collect to cover\nit. (line 3869)\nReindexIndex has the function reindex_event_trigger_collect. (line 2853)\n\nreindex_event_trigger_collect_relation called in\nReindexMultipleInternal, ReindexTable (line 2979).\nBoth are \"under concurrent is false\" branches.\n\nSo it should be fine.\n\n\n", "msg_date": "Sat, 28 Oct 2023 12:15:22 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Sat, Oct 28, 2023 at 12:15:22PM +0800, jian he wrote:\n> reindex_event_trigger_collect_relation called in\n> ReindexMultipleInternal, ReindexTable (line 2979).\n> Both are \"under concurrent is false\" branches.\n> \n> So it should be fine.\n\nSorry for the delay in replying.\n\nIndeed, you're right. reindex_event_trigger_collect_relation() gets\nonly called in two paths for the non-concurrent cases just after\nreindex_relation(), which is not a safe location to call it because\nthis may be used in CLUSTER or TRUNCATE, and the intention of the\npatch is only to count for the objects in REINDEX.\n\nAnyway, I think that the current implementation is incorrect. 
The\nobject is collected after the reindex is done, which is right.\nHowever, an object may be reindexed but not be reported if it was\ndropped between the reindex and its end when collecting all the objects\nof a single relation. Perhaps it does not matter because the object\nis gone, but that still looks incorrect to me because we finish by not\nreporting everything we should. I think that we should put the\ncollection deeper into the stack, where we know that we are holding\nthe locks on the objects we are collecting. Another side effect is\nthat the patch seems to be missing toast table indexes, which are also\nreindexed for a REINDEX TABLE, for instance. That's not right.\n\nActually, could there be an argument for pushing the collection down\nto reindex_relation() instead to count for all the commands that?\nEven better, reindex_index() would be a better candidate because it is\nitself called within reindex_relation() for each individual index?\nIf a code path should not be covered for event triggers, we could use\na new REINDEXOPT_ to control that, optionally. In short, it looks\nlike we should try to reduce the number of paths calling\nreindex_event_trigger_collect(), while collect_relation() ought to be\nremoved.\n\nShould REINDEX TABLE CONCURRENTLY's coverage be extended so as two new\nindexes are reported instead of just one?\n\nREINDEX SCHEMA is a command that perhaps should be tested to collect\nall the indexes rebuilt through it?\n\nA side-effect of this patch is that it impacts ddl_command_start and\nddl_command_end with the addition of REINDEX. Mixing that with the\naddition of a new event is strange. I think that the addition of\nREINDEX to the existing events should be done first, as a separate\npatch. 
Then a second patch should deal with the collection and the\nsupport of the new event.\n\nI have also dug into the archives to note that control commands have\nbeen proposed in the past to be added as part of event triggers, and\nit happens that I've mentioned in a comment that this was a\nconcept perhaps contrary to what event triggers should do, as these\nare intended for DDLs:\nhttps://www.postgresql.org/message-id/CAB7nPqTqZ2-YcNzOQ5KVBUJYHQ4kDSd4Q55Mc-fBzM8GH0bV2Q@mail.gmail.com\n\nPhilosophically, I'm open on all that and having more commands\ndepending on the cases that are satisfied, FWIW, but let's organize\nthe changes.\n--\nMichael", "msg_date": "Mon, 20 Nov 2023 16:34:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Mon, Nov 20, 2023 at 3:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Oct 28, 2023 at 12:15:22PM +0800, jian he wrote:\n> > reindex_event_trigger_collect_relation called in\n> > ReindexMultipleInternal, ReindexTable (line 2979).\n> > Both are \"under concurrent is false\" branches.\n> >\n> > So it should be fine.\n>\n> Sorry for the delay in replying.\n>\n> Indeed, you're right. reindex_event_trigger_collect_relation() gets\n> only called in two paths for the non-concurrent cases just after\n> reindex_relation(), which is not a safe location to call it because\n> this may be used in CLUSTER or TRUNCATE, and the intention of the\n> patch is only to count for the objects in REINDEX.\n>\n> Anyway, I think that the current implementation is incorrect. The\n> object is collected after the reindex is done, which is right.\n> However, an object may be reindexed but not be reported if it was\n> dropped between the reindex and its end when collecting all the objects\n> of a single relation. 
Perhaps it does not matter because the object\n> is gone, but that still looks incorrect to me because we finish by not\n> reporting everything we should. I think that we should put the\n> collection deeper into the stack, where we know that we are holding\n> the locks on the objects we are collecting. Another side effect is\n> that the patch seems to be missing toast table indexes, which are also\n> reindexed for a REINDEX TABLE, for instance. That's not right.\n>\n> Actually, could there be an argument for pushing the collection down\n> to reindex_relation() instead to count for all the commands that?\n> Even better, reindex_index() would be a better candidate because it is\n> itself called within reindex_relation() for each individual indexes?\n> If a code path should not be covered for event triggers, we could use\n> a new REINDEXOPT_ to control that, optionally. In short, it looks\n> like we should try to reduce the number of paths calling\n> reindex_event_trigger_collect(), while collect_relation() ought to be\n> removed.\n>\n> Should REINDEX TABLE CONCURRENTLY's coverage be extended so as two new\n> indexes are reported instead of just one?\n>\n> REINDEX SCHEMA is a command that perhaps should be tested to collect\n> all the indexes rebuilt through it?\n>\n> A side-effect of this patch is that it impacts ddl_command_start and\n> ddl_command_end with the addition of REINDEX. Mixing that with the\n> addition of a new event is strange. I think that the addition of\n> REINDEX to the existing events should be done first, as a separate\n> patch. 
Then a second patch should deal with the collection and the\n> support of the new event.\n>\n> I have also dug into the archives to note that control commands have\n> been proposed in the past to be added as part of event triggers, and\n> it happens that I've mentioned in a comment that this was a\n> concept perhaps contrary to what event triggers should do, as these\n> are intended for DDLs:\n> https://www.postgresql.org/message-id/CAB7nPqTqZ2-YcNzOQ5KVBUJYHQ4kDSd4Q55Mc-fBzM8GH0bV2Q@mail.gmail.com\n>\n> Philosophically, I'm open on all that and having more commands\n> depending on the cases that are satisfied, FWIW, but let's organize\n> the changes.\n> --\n> Michael\n\n\nHi.\nTop level func is ExecReIndex. It will call {ReindexIndex,\nReindexTable, ReindexMultipleTables},\nthen, tracing deeper down, it will call {ReindexRelationConcurrently, reindex_index}.\n\nImitating the CREATE INDEX logic, we need to pass `const ReindexStmt *stmt`\nto all the following functions:\nReindexIndex\nReindexTable\nReindexMultipleTables\nReindexPartitions\nReindexMultipleInternal\nReindexRelationConcurrently\nreindex_relation\nreindex_index\n\nbecause the EventTriggerCollectSimpleCommand needs the `(ReindexStmt\n*) parsetree`.\nand we call EventTriggerCollectSimpleCommand at reindex_index or\nReindexRelationConcurrently.\n\nThe new test will cover the following cases:\nreindex (concurrently) schema;\nreindex (concurrently) partitioned index;\nreindex (concurrently) partitioned table;\n\nnot sure this is the right approach. 
But now the reindex event\ncollection is after we call index_open.\nsame static function (reindex_event_trigger_collect) in\nsrc/backend/commands/indexcmds.c and src/backend/catalog/index.c for\nnow.\ngiven that ProcessUtilitySlow supports the event triggers facility.\nso probably no need for another REINDEXOPT_ option?\n\nI also added a test for disabled event triggers.", "msg_date": "Fri, 24 Nov 2023 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Fri, Nov 24, 2023 at 08:00:00AM +0800, jian he wrote:\n> not sure this is the right approach. But now the reindex event\n> collection is after we call index_open.\n> same static function (reindex_event_trigger_collect) in\n> src/backend/commands/indexcmds.c and src/backend/catalog/index.c for\n> now.\n\nKnowing that there are two calls of reindex_relation() and four\ncallers of reindex_index(), passing the stmt looks acceptable to me to\nbe able to get more context from the reindex statement where a given\nindex has been rebuilt.\n\nAnyway, as a whole, I am still confused by the shape of the patch..\nThis relies on pg_event_trigger_ddl_commands(), that reports a list of\nDDL commands, but the use case you are proposing is to report a list\nof the *indexes* rebuilt. So, in its current shape, we'd report the\nequivalent of N DDL commands matching the N indexes rebuilt in a\nsingle command. Shouldn't we use a new event type and a dedicated SQL\nfunction to report the list of indexes newly rebuilt to get a full\nlist of the work that has happened?\n\n> given that ProcessUtilitySlow supports the event triggers facility.\n> so probably no need for another REINDEXOPT_ option?\n\nRight, perhaps we don't need that if we just supply the stmt. 
If\nthere is no requirement for it, let's avoid that, then.\n\n- ReindexMultipleTables(stmt->name, stmt->kind, &params);\n+ ReindexMultipleTables(stmt, stmt->name, stmt->kind, &params);\n\nIf we begin including the stmt, there is no need for these extra\narguments in the extra routines of indexcmds.c.\n\nAs far as I can see, this patch is doing too much as presented. Could\nyou split the patch into more pieces, please? Based on v4 you have\nsent, there are refactoring and basic piece parts like:\n- Patch to make event triggers ddl_command_start and ddl_command_end\nreact on ReindexStmt, so as a command is reported in\npg_event_trigger_ddl_commands().\n- Patch for GetCommandLogLevel() to switch ReindexStmt from\nLOGSTMT_ALL to LOGSTMT_DDL. True that this could be switched.\n- Patch to refactor the routines of indexcmds.c and index.c to use the\nReindexStmt as argument, as a preparation of the next patch...\n- Patch to add a new event type with a new SQL function to return a\nlist of the indexes rebuilt, with the ReindexStmt involved when the\nindex OID was trapped by the collection.\n\n1) and 2) are a minimal feature in themselves, which may be enough to\nsatisfy your original case, and where you'd know when a REINDEX has\nbeen run in event triggers. 3) and 4) are what you are trying to\nachieve, to get the list of the indexes rebuilt knowing the context of\na given command.\n--\nMichael", "msg_date": "Fri, 24 Nov 2023 11:43:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Fri, Nov 24, 2023 at 10:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> As far as I can see, this patch is doing too much as presented. Could\n> you split the patch into more pieces, please? 
Based on v4 you have\n> sent, there are refactoring and basic piece parts like:\n> - Patch to make event triggers ddl_command_start and ddl_command_end\n> react on ReindexStmt, so as a command is reported in\n> pg_event_trigger_ddl_commands().\n> - Patch for GetCommandLogLevel() to switch ReindexStmt from\n> LOGSTMT_ALL to LOGSTMT_DDL. True that this could be switched.\n> - Patch to refactor the routines of indexcmds.c and index.c to use the\n> ReindexStmt as argument, as a preparation of the next patch...\n> - Patch to add a new event type with a new SQL function to return a\n> list of the indexes rebuilt, with the ReindexStmt involved when the\n> index OID was trapped by the collection.\n>\n> 1) and 2) are a minimal feature in themselves, which may be enough to\n> satisfy your original case, and where you'd know when a REINDEX has\n> been run in event triggers. 3) and 4) are what you are trying to\n> achieve, to get the list of the indexes rebuilt knowing the context of\n> a given command.\n> --\n> Michael\n\nhi.\nv5-0001. changed the REINDEX command tag from event_trigger_ok: false\nto event_trigger_ok: true.\nReindexStmt moves from standard_ProcessUtility to ProcessUtilitySlow.\nBy default ProcessUtilitySlow will call trigger related functions.\nSo the event trigger facility can support reindex statements.\n\nv5-0002. In GetCommandLogLevel, change T_ReindexStmt from lev =\nLOGSTMT_ALL to lev = LOGSTMT_DDL. so log_statement (enum) >= 'ddl'\nwill make the reindex statement be logged.\n\nv5-0003. Refactor the following functions: {ReindexIndex,\nReindexTable, ReindexMultipleTables,\nReindexPartitions,ReindexMultipleInternal\n,ReindexRelationConcurrently, reindex_relation,reindex_index} by\nadding `const ReindexStmt *stmt` as their first argument.\nThis is for event trigger support of REINDEX. We need to pass both the\nnewly generated indexId and the ReindexStmt to\nEventTriggerCollectSimpleCommand right after the new index gets\nits lock. 
To do that, we have to refactor related functions.\n\nv5-0004. Event trigger support for the REINDEX command: implementation,\ndocumentation, regression tests, and a helper function that passes\nreindex info to EventTriggerCollectSimpleCommand.", "msg_date": "Mon, 27 Nov 2023 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Mon, Nov 27, 2023 at 11:00 AM jian he <jian.universality@gmail.com> wrote:\n>\n> On Fri, Nov 24, 2023 at 10:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n> hi.\n> v5-0001. changed the REINDEX command tag from event_trigger_ok: false\n> to event_trigger_ok: true.\n> ReindexStmt moves from standard_ProcessUtility to ProcessUtilitySlow.\n> By default ProcessUtilitySlow will call trigger related functions.\n> So the event trigger facility can support reindex statements.\n>\n> v5-0002. In GetCommandLogLevel, change T_ReindexStmt from lev =\n> LOGSTMT_ALL to lev = LOGSTMT_DDL. so log_statement (enum) >= 'ddl'\n> will make the reindex statement be logged.\n>\n> v5-0003. Refactor the following functions: {ReindexIndex,\n> ReindexTable, ReindexMultipleTables,\n> ReindexPartitions,ReindexMultipleInternal\n> ,ReindexRelationConcurrently, reindex_relation,reindex_index} by\n> adding `const ReindexStmt *stmt` as their first argument.\n> This is for event trigger support of REINDEX. We need to pass both the\n> newly generated indexId and the ReindexStmt to\n> EventTriggerCollectSimpleCommand right after the new index gets\n> its lock. To do that, we have to refactor related functions.\n>\n> v5-0004. Event trigger support for the REINDEX command: implementation,\n> documentation, regression tests, and a helper function that passes\n> reindex info to EventTriggerCollectSimpleCommand.\n\nI just started reviewing the patch. 
Some minor comments:\nIn patch 0001:\nIn standard_ProcessUtility(), since you are unconditionally calling\nProcessUtilitySlow() in case of T_ReindexStmt, you really don't need\nthe case statement for T_ReindexStmt just like all the other commands\nwhich have event trigger support. It will call ProcessUtilitySlow() as\ndefault.\n\nIn patch 0004:\nNo need to duplicate reindex_event_trigger_collect in indexcmds.c\nsince it is already present in index.c. Add it to index.h and make the\nfunction extern so that it can be accessed in both index.c and\nindexcmds.c\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Mon, 27 Nov 2023 21:58:20 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Mon, Nov 27, 2023 at 6:58 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> I just started reviewing the patch. Some minor comments:\n> In patch 0001:\n> In standard_ProcessUtility(), since you are unconditionally calling\n> ProcessUtilitySlow() in case of T_ReindexStmt, you really don't need\n> the case statement for T_ReindexStmt just like all the other commands\n> which have event trigger support. It will call ProcessUtilitySlow() as\n> default.\n>\n> In patch 0004:\n> No need to duplicate reindex_event_trigger_collect in indexcmds.c\n> since it is already present in index.c. 
Add it to index.h and make the\n> function extern so that it can be accessed in both index.c and\n> indexcmds.c\n>\n> regards,\n> Ajin Cherian\n> Fujitsu Australia\n\nThanks for reviewing it!\nThe above 2 suggestions are good, now the code is more neat.", "msg_date": "Tue, 28 Nov 2023 00:28:50 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Tue, Nov 28, 2023 at 12:28:50AM +0800, jian he wrote:\n> On Mon, Nov 27, 2023 at 6:58 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>> Ajin Cherian\n\nDammit. I've just noticed that I missed mentioning you in the commit\nlog as a reviewer. Sorry about that.\n\n> The above 2 suggestions are good, now the code is more neat.\n\nRight, the extra bits for the default case of\nstandard_ProcessUtility() were not necessary, because we don't need to\napply any object-level checks, as we just collect a list of\nindexes.\n\nAnyway, I've been working on the patch for the last few days, and\napplied it after tweaking a bit its style, code and comments.\n\nMainly, I did not see a need for reindex_event_trigger_collect() as\nwrapper for EventTriggerCollectSimpleCommand() as it is only used in\ntwo code paths across two files. I have come back to the regression\ntests, and trimmed them down to a minimum as event_trigger.sql cannot\nbe run in parallel so this eats into run time, concluding that knowing\nthe list of indexes rebuilt is also something that we track with\nrelfilenode change tracking in the other test suites (including the\nSCHEMA, toast and partition cases), so doing that again had limited\nimpact.\n\nOne thing that I have been unable to conclude is if we should change\nGetCommandLogLevel() for ReindexStmt. 
I agree that there's a good\nargument for it, but I am going to raise a different thread about that\nto raise awareness, and this does not prevent the feature to work\n(even if it feels inconsistent with pg_event_trigger_ddl_commands()).\n--\nMichael", "msg_date": "Mon, 4 Dec 2023 10:04:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "Hi Michael,\n\n04.12.2023 04:04, Michael Paquier wrote:\n> Anyway, I've been working on the patch for the last few days, and\n> applied it after tweaking a bit its style, code and comments.\n\nPlease look at the assertion failure triggered when REINDEX processed by an event trigger:\nCREATE FUNCTION etf() RETURNS EVENT_TRIGGER AS $$ BEGIN PERFORM 1; END $$ LANGUAGE plpgsql;\nCREATE EVENT TRIGGER et ON ddl_command_end EXECUTE FUNCTION etf();\nCREATE TABLE t (i int);\nREINDEX (CONCURRENTLY) TABLE t;\n\nCore was generated by `postgres: law regression [local] REINDEX                                      '.\nProgram terminated with signal SIGABRT, Aborted.\n\nwarning: Section `.reg-xstate/338911' in core file too small.\n#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:44\n44      ./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:44\n#1  __pthread_kill_internal (signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:78\n#2  __GI___pthread_kill (threadid=140462399108928, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3  0x00007fbff2c09476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#4  0x00007fbff2bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x000055d88b8bb8de in ExceptionalCondition (conditionName=0x55d88baac6c0 \"portal->portalSnapshot == NULL\", \nfileName=0x55d88baac2c4 \"pquery.c\", lineNumber=1796) at 
assert.c:66\n#6  0x000055d88b6d6186 in EnsurePortalSnapshotExists () at pquery.c:1796\n#7  0x000055d88b47d21a in _SPI_execute_plan (plan=0x55d88bfd2a80, options=0x7ffdc53b94c0, snapshot=0x0, \ncrosscheck_snapshot=0x0, fire_triggers=true) at spi.c:2579\n#8  0x000055d88b479f37 in SPI_execute_plan_with_paramlist (plan=0x55d88bfd2a80, params=0x0, read_only=false, tcount=0) \nat spi.c:749\n#9  0x00007fbfe6de57ed in exec_run_select (estate=0x7ffdc53b97c0, expr=0x55d88bfba7f8, maxtuples=0, portalP=0x0) at \npl_exec.c:5815\n#10 0x00007fbfe6dddce9 in exec_stmt_perform (estate=0x7ffdc53b97c0, stmt=0x55d88bfba950) at pl_exec.c:2171\n#11 0x00007fbfe6ddd87c in exec_stmts (estate=0x7ffdc53b97c0, stmts=0x55d88bfbad90) at pl_exec.c:2023\n#12 0x00007fbfe6ddd5f8 in exec_stmt_block (estate=0x7ffdc53b97c0, block=0x55d88bfbade0) at pl_exec.c:1942\n#13 0x00007fbfe6ddccff in exec_toplevel_block (estate=0x7ffdc53b97c0, block=0x55d88bfbade0) at pl_exec.c:1633\n#14 0x00007fbfe6ddbb6a in plpgsql_exec_event_trigger (func=0x55d88bf66840, trigdata=0x7ffdc53b9b00) at pl_exec.c:1198\n#15 0x00007fbfe6df74c7 in plpgsql_call_handler (fcinfo=0x7ffdc53b9aa0) at pl_handler.c:272\n#16 0x000055d88b34f592 in EventTriggerInvoke (fn_oid_list=0x55d88beffa40, trigdata=0x7ffdc53b9b00) at event_trigger.c:1087\n#17 0x000055d88b34edea in EventTriggerDDLCommandEnd (parsetree=0x55d88bed6000) at event_trigger.c:803\n#18 0x000055d88b6d9a41 in ProcessUtilitySlow (pstate=0x55d88beff930, pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0 \n\"REINDEX (CONCURRENTLY) TABLE t;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55d88bed6370,\n     qc=0x7ffdc53ba310) at utility.c:1937\n#19 0x000055d88b6d7ae9 in standard_ProcessUtility (pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0 \"REINDEX \n(CONCURRENTLY) TABLE t;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n     dest=0x55d88bed6370, qc=0x7ffdc53ba310) at utility.c:1074\n#20 0x000055d88b6d69ea in ProcessUtility 
(pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0 \"REINDEX (CONCURRENTLY) TABLE \nt;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55d88bed6370,\n     qc=0x7ffdc53ba310) at utility.c:530\n#21 0x000055d88b6d52b8 in PortalRunUtility (portal=0x55d88bf509f0, pstmt=0x55d88bed60b0, isTopLevel=true, \nsetHoldSnapshot=false, dest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:1158\n#22 0x000055d88b6d552f in PortalRunMulti (portal=0x55d88bf509f0, isTopLevel=true, setHoldSnapshot=false, \ndest=0x55d88bed6370, altdest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:1315\n#23 0x000055d88b6d4979 in PortalRun (portal=0x55d88bf509f0, count=9223372036854775807, isTopLevel=true, run_once=true, \ndest=0x55d88bed6370, altdest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:791\n#24 0x000055d88b6cd74a in exec_simple_query (query_string=0x55d88bed54f0 \"REINDEX (CONCURRENTLY) TABLE t;\") at \npostgres.c:1273\n#25 0x000055d88b6d26f6 in PostgresMain (dbname=0x55d88bf0c860 \"regression\", username=0x55d88bf0c848 \"law\") at \npostgres.c:4653\n#26 0x000055d88b5f2014 in BackendRun (port=0x55d88bf00f80) at postmaster.c:4425\n#27 0x000055d88b5f1636 in BackendStartup (port=0x55d88bf00f80) at postmaster.c:4104\n#28 0x000055d88b5edcf6 in ServerLoop () at postmaster.c:1772\n#29 0x000055d88b5ed5f3 in PostmasterMain (argc=3, argv=0x55d88becf700) at postmaster.c:1471\n#30 0x000055d88b49d2ca in main (argc=3, argv=0x55d88becf700) at main.c:198\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 4 Dec 2023 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Mon, Dec 4, 2023 at 11:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> Hi Michael,\n>\n> 04.12.2023 04:04, Michael Paquier wrote:\n> > Anyway, I've been working on the patch for the last few days, and\n> > applied it after tweaking a bit its style, code and 
comments.\n>\n> Please look at the assertion failure triggered when REINDEX processed by an event trigger:\n> CREATE FUNCTION etf() RETURNS EVENT_TRIGGER AS $$ BEGIN PERFORM 1; END $$ LANGUAGE plpgsql;\n> CREATE EVENT TRIGGER et ON ddl_command_end EXECUTE FUNCTION etf();\n> CREATE TABLE t (i int);\n> REINDEX (CONCURRENTLY) TABLE t;\n>\n> Core was generated by `postgres: law regression [local] REINDEX '.\n> Program terminated with signal SIGABRT, Aborted.\n>\n> warning: Section `.reg-xstate/338911' in core file too small.\n> #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:44\n> 44 ./nptl/pthread_kill.c: No such file or directory.\n> (gdb) bt\n> #0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:44\n> #1 __pthread_kill_internal (signo=6, threadid=140462399108928) at ./nptl/pthread_kill.c:78\n> #2 __GI___pthread_kill (threadid=140462399108928, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n> #3 0x00007fbff2c09476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n> #4 0x00007fbff2bef7f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x000055d88b8bb8de in ExceptionalCondition (conditionName=0x55d88baac6c0 \"portal->portalSnapshot == NULL\",\n> fileName=0x55d88baac2c4 \"pquery.c\", lineNumber=1796) at assert.c:66\n> #6 0x000055d88b6d6186 in EnsurePortalSnapshotExists () at pquery.c:1796\n> #7 0x000055d88b47d21a in _SPI_execute_plan (plan=0x55d88bfd2a80, options=0x7ffdc53b94c0, snapshot=0x0,\n> crosscheck_snapshot=0x0, fire_triggers=true) at spi.c:2579\n> #8 0x000055d88b479f37 in SPI_execute_plan_with_paramlist (plan=0x55d88bfd2a80, params=0x0, read_only=false, tcount=0)\n> at spi.c:749\n> #9 0x00007fbfe6de57ed in exec_run_select (estate=0x7ffdc53b97c0, expr=0x55d88bfba7f8, maxtuples=0, portalP=0x0) at\n> pl_exec.c:5815\n> #10 0x00007fbfe6dddce9 in exec_stmt_perform (estate=0x7ffdc53b97c0, stmt=0x55d88bfba950) at pl_exec.c:2171\n> #11 
0x00007fbfe6ddd87c in exec_stmts (estate=0x7ffdc53b97c0, stmts=0x55d88bfbad90) at pl_exec.c:2023\n> #12 0x00007fbfe6ddd5f8 in exec_stmt_block (estate=0x7ffdc53b97c0, block=0x55d88bfbade0) at pl_exec.c:1942\n> #13 0x00007fbfe6ddccff in exec_toplevel_block (estate=0x7ffdc53b97c0, block=0x55d88bfbade0) at pl_exec.c:1633\n> #14 0x00007fbfe6ddbb6a in plpgsql_exec_event_trigger (func=0x55d88bf66840, trigdata=0x7ffdc53b9b00) at pl_exec.c:1198\n> #15 0x00007fbfe6df74c7 in plpgsql_call_handler (fcinfo=0x7ffdc53b9aa0) at pl_handler.c:272\n> #16 0x000055d88b34f592 in EventTriggerInvoke (fn_oid_list=0x55d88beffa40, trigdata=0x7ffdc53b9b00) at event_trigger.c:1087\n> #17 0x000055d88b34edea in EventTriggerDDLCommandEnd (parsetree=0x55d88bed6000) at event_trigger.c:803\n> #18 0x000055d88b6d9a41 in ProcessUtilitySlow (pstate=0x55d88beff930, pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0\n> \"REINDEX (CONCURRENTLY) TABLE t;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55d88bed6370,\n> qc=0x7ffdc53ba310) at utility.c:1937\n> #19 0x000055d88b6d7ae9 in standard_ProcessUtility (pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0 \"REINDEX\n> (CONCURRENTLY) TABLE t;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0,\n> dest=0x55d88bed6370, qc=0x7ffdc53ba310) at utility.c:1074\n> #20 0x000055d88b6d69ea in ProcessUtility (pstmt=0x55d88bed60b0, queryString=0x55d88bed54f0 \"REINDEX (CONCURRENTLY) TABLE\n> t;\", readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55d88bed6370,\n> qc=0x7ffdc53ba310) at utility.c:530\n> #21 0x000055d88b6d52b8 in PortalRunUtility (portal=0x55d88bf509f0, pstmt=0x55d88bed60b0, isTopLevel=true,\n> setHoldSnapshot=false, dest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:1158\n> #22 0x000055d88b6d552f in PortalRunMulti (portal=0x55d88bf509f0, isTopLevel=true, setHoldSnapshot=false,\n> dest=0x55d88bed6370, altdest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:1315\n> #23 
0x000055d88b6d4979 in PortalRun (portal=0x55d88bf509f0, count=9223372036854775807, isTopLevel=true, run_once=true,\n> dest=0x55d88bed6370, altdest=0x55d88bed6370, qc=0x7ffdc53ba310) at pquery.c:791\n> #24 0x000055d88b6cd74a in exec_simple_query (query_string=0x55d88bed54f0 \"REINDEX (CONCURRENTLY) TABLE t;\") at\n> postgres.c:1273\n> #25 0x000055d88b6d26f6 in PostgresMain (dbname=0x55d88bf0c860 \"regression\", username=0x55d88bf0c848 \"law\") at\n> postgres.c:4653\n> #26 0x000055d88b5f2014 in BackendRun (port=0x55d88bf00f80) at postmaster.c:4425\n> #27 0x000055d88b5f1636 in BackendStartup (port=0x55d88bf00f80) at postmaster.c:4104\n> #28 0x000055d88b5edcf6 in ServerLoop () at postmaster.c:1772\n> #29 0x000055d88b5ed5f3 in PostmasterMain (argc=3, argv=0x55d88becf700) at postmaster.c:1471\n> #30 0x000055d88b49d2ca in main (argc=3, argv=0x55d88becf700) at main.c:198\n>\n> Best regards,\n> Alexander\n\nIn indexcmds.c, function ReindexRelationConcurrently:\nif (indexIds == NIL)\n{\nPopActiveSnapshot();\nreturn false;\n}\n\nSo there is no snapshot left, and then PERFORM 1 needs a snapshot.\n\n\n", "msg_date": "Tue, 5 Dec 2023 00:49:12 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Tue, Dec 05, 2023 at 12:49:12AM +0800, jian he wrote:\n> On Mon, Dec 4, 2023 at 11:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> Please look at the assertion failure triggered when REINDEX processed by an event trigger:\n>> CREATE FUNCTION etf() RETURNS EVENT_TRIGGER AS $$ BEGIN PERFORM 1; END $$ LANGUAGE plpgsql;\n>> CREATE EVENT TRIGGER et ON ddl_command_end EXECUTE FUNCTION etf();\n>> CREATE TABLE t (i int);\n>> REINDEX (CONCURRENTLY) TABLE t;\n> \n> In indexcmds.c, function ReindexRelationConcurrently:\n> if (indexIds == NIL)\n> {\n> PopActiveSnapshot();\n> return false;\n> }\n> \n> So there is no snapshot left, and then PERFORM 1 needs a snapshot.\n\nPopping a snapshot 
at this stage when there are no indexes has been a\ndecision taken by the original commit in 5dc92b844e68 because we had\nno need for it yet, but we may do now depending on the function\ntriggered. I have been looking at the whole stack and something like\nthe attached to make a pop conditional seems to be sufficient to\nsatisfy all the cases I've been able to come up with, including the\none reported here.\n\nAlexander, do you see any hole in that? Perhaps the snapshot push\nshould be more aggressive at the end of ReindexRelationConcurrently()\nas well (in the last transaction when rebuilds happen)?\n--\nMichael", "msg_date": "Tue, 5 Dec 2023 08:45:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "Hi Michael,\n\n05.12.2023 02:45, Michael Paquier wrote:\n> Popping a snapshot at this stage when there are no indexes has been a\n> decision taken by the original commit in 5dc92b844e68 because we had\n> no need for it yet, but we may do now depending on the function\n> triggered. I have been looking at the whole stack and something like\n> the attached to make a pop conditional seems to be sufficient to\n> satisfy all the cases I've been able to come up with, including the\n> one reported here.\n>\n> Alexander, do you see any hole in that? Perhaps the snapshot push\n> should be more aggressive at the end of ReindexRelationConcurrently()\n> as well (in the last transaction when rebuilds happen)?\n\nThank you for the fix proposed!\n\nI agree with it. I had worried a bit about ReindexRelationConcurrently()\nbecoming twofold for callers (it can leave the snapshot or pop it), but I\ncouldn't find a way to hide this twofoldness inside without adding more\ncomplexity. 
On the other hand, ReindexRelationConcurrently() now satisfies\nEnsurePortalSnapshotExists() in all cases.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 6 Dec 2023 10:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" }, { "msg_contents": "On Wed, Dec 06, 2023 at 10:00:01AM +0300, Alexander Lakhin wrote:\n> I agree with it. I had worried a bit about ReindexRelationConcurrently()\n> becoming twofold for callers (it can leave the snapshot or pop it), but I\n> couldn't find a way to hide this twofoldness inside without adding more\n> complexity. On the other hand, ReindexRelationConcurrently() now satisfies\n> EnsurePortalSnapshotExists() in all cases.\n\nThanks, applied that with a few more tests, covering a bit more than\nthe code path you've reported with a failure.\n\nI was wondering if this should be backpatched, actually, but could not\nmake a case for it as we've never needed a snapshot after a reindex\nuntil now, AFAIK.\n--\nMichael", "msg_date": "Thu, 7 Dec 2023 08:32:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add REINDEX tag to event triggers" } ]
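With REINDEX now reported to event triggers, as discussed and committed in the thread above, a ddl_command_end trigger can enumerate the rebuilt indexes through pg_event_trigger_ddl_commands(). The following is a minimal sketch of how the feature can be exercised, assuming a server version that includes the commit from this thread; the function and trigger names are illustrative only:

```sql
-- Log each index rebuilt by a REINDEX command.
CREATE FUNCTION log_reindex() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    -- pg_event_trigger_ddl_commands() returns one row per collected
    -- object; with this feature, that is one row per rebuilt index.
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
        WHERE command_tag = 'REINDEX'
    LOOP
        RAISE NOTICE 'reindexed % %', r.object_type, r.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_reindex_trg
    ON ddl_command_end
    WHEN TAG IN ('REINDEX')
    EXECUTE FUNCTION log_reindex();
```

With the trigger in place, a command such as `REINDEX TABLE t;` should emit one NOTICE per index rebuilt on the table; dropping the event trigger restores the previous behavior.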
[ { "msg_contents": "Hey hackers,\n\nI noticed there are some places calling table_open with\nRowExclusiveLock but table_close with NoLock, like in function\ntoast_save_datum.\n\nCan anybody explain the underlying logic, thanks in advance.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 21 Jul 2023 14:05:56 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "table_open/table_close with different lock mode" }, { "msg_contents": "On Fri, Jul 21, 2023 at 02:05:56PM +0800, Junwang Zhao wrote:\n> I noticed there are some places calling table_open with\n> RowExclusiveLock but table_close with NoLock, like in function\n> toast_save_datum.\n> \n> Can anybody explain the underlying logic, thanks in advance.\n\nThis rings a bell. This is a wanted behavior, see commit f99870d and\nits related thread:\nhttps://postgr.es/m/17268-d2fb426e0895abd4@postgresql.org\n\nThe tests added by this commit in src/test/isolation/ will show the\ndifference in terms of the way the toast values get handled with and\nwithout the change.\n--\nMichael", "msg_date": "Fri, 21 Jul 2023 15:26:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: table_open/table_close with different lock mode" }, { "msg_contents": "On Fri, Jul 21, 2023 at 2:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 21, 2023 at 02:05:56PM +0800, Junwang Zhao wrote:\n> > I noticed there are some places calling table_open with\n> > RowExclusiveLock but table_close with NoLock, like in function\n> > toast_save_datum.\n> >\n> > Can anybody explain the underlying logic, thanks in advance.\n>\n> This rings a bell. This is a wanted behavior, see commit f99870d and\n> its related thread:\n> https://postgr.es/m/17268-d2fb426e0895abd4@postgresql.org\n>\n\nI see this patch, so all the locks held by a transaction will be released\nat the commit phase, right? 
Can you show me where the logic is located?\n\n> The tests added by this commit in src/test/isolation/ will show the\n> difference in terms of the way the toast values get handled with and\n> without the change.\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 21 Jul 2023 14:38:34 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: table_open/table_close with different lock mode" }, { "msg_contents": "On Thu, Jul 20, 2023 at 11:38 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> On Fri, Jul 21, 2023 at 2:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 02:05:56PM +0800, Junwang Zhao wrote:\n> > > I noticed there are some places calling table_open with\n> > > RowExclusiveLock but table_close with NoLock, like in function\n> > > toast_save_datum.\n> > >\n> > > Can anybody explain the underlying logic, thanks in advance.\n> >\n> > This rings a bell. This is a wanted behavior, see commit f99870d and\n> > its related thread:\n> > https://postgr.es/m/17268-d2fb426e0895abd4@postgresql.org\n> >\n>\n> I see this patch, so all the locks held by a transaction will be released\n> at the commit phase, right? Can you show me where the logic is located?\n\nThe NoLock is simply a marker that tells the underlying machinery to\nnot bother releasing any locks. 
As a matter of fact, you can pass\nNoLock in *_open() calls, too, to indicate that you don't want any new\nlocks, perhaps because the transaction has already taken an\nappropriate lock on the object.\n\nAs for lock-releasing codepath at transaction end, see\nCommitTransaction() in xact.c, and specifically at the\nResourceOwnerRelease() calls in there.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 20 Jul 2023 23:56:49 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: table_open/table_close with different lock mode" }, { "msg_contents": "On Fri, Jul 21, 2023 at 2:57 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Thu, Jul 20, 2023 at 11:38 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > On Fri, Jul 21, 2023 at 2:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Fri, Jul 21, 2023 at 02:05:56PM +0800, Junwang Zhao wrote:\n> > > > I noticed there are some places calling table_open with\n> > > > RowExclusiveLock but table_close with NoLock, like in function\n> > > > toast_save_datum.\n> > > >\n> > > > Can anybody explain the underlying logic, thanks in advance.\n> > >\n> > > This rings a bell. This is a wanted behavior, see commit f99870d and\n> > > its related thread:\n> > > https://postgr.es/m/17268-d2fb426e0895abd4@postgresql.org\n> > >\n> >\n> > I see this patch, so all the locks held by a transaction will be released\n> > at the commit phase, right? Can you show me where the logic is located?\n>\n> The NoLock is simply a marker that tells the underlying machinery to\n> not bother releasing any locks. 
As a matter of fact, you can pass\n> NoLock in *_open() calls, too, to indicate that you don't want any new\n> locks, perhaps because the transaction has already taken an\n> appropriate lock on the object.\n>\n> As for lock-releasing codepath at transaction end, see\n> CommitTransaction() in xact.c, and specifically at the\n> ResourceOwnerRelease() calls in there.\n>\n\nGreat, thanks for the thorough explanation, will look into the code :)\n\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 21 Jul 2023 15:01:59 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: table_open/table_close with different lock mode" } ]
[ { "msg_contents": "This Assert failure can be reproduced with the query below.\n\ncreate table part_tbl (a integer) partition by range (a);\ncreate table part_tbl1 partition of part_tbl for values from (0) to (100);\nset enable_partitionwise_join to on;\n\nexplain (costs off)\nselect * from part_tbl t1\n left join part_tbl t2 on t1.a = t2.a\n left join part_tbl t3 on t2.a = t3.a;\nserver closed the connection unexpectedly\n\nThis should be an oversight in 9df8f903. It seems that the newly added\nfunction add_outer_joins_to_relids() does not cope well with child\njoins. The 'input_relids' for a child join is the relid sets of child\nrels while 'othersj->min_xxxhand' refers to relids of parent rels. So\nthere would be a problem when we add the relids of the pushed-down joins.\n\nInstead of fixing add_outer_joins_to_relids() to cope with child joins,\nI'm wondering if we can build join relids for a child join from its\nparent by adjust_child_relids, something like attached.\n\nThanks\nRichard", "msg_date": "Fri, 21 Jul 2023 18:02:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Assert failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> This should be an oversight in 9df8f903. It seems that the newly added\n> function add_outer_joins_to_relids() does not cope well with child\n> joins. The 'input_relids' for a child join is the relid sets of child\n> rels while 'othersj->min_xxxhand' refers to relids of parent rels. So\n> there would be a problem when we add the relids of the pushed-down joins.\n\nIndeed. 
Adding the OJ relid itself works fine, but we won't get the\nrequired matches when we scan the join_info_list.\n\n> Instead of fixing add_outer_joins_to_relids() to cope with child joins,\n> I'm wondering if we can build join relids for a child join from its\n> parent by adjust_child_relids, something like attached.\n\nThat looks like a good solid solution. Pushed with a bit of\neditorialization --- mostly, that I put the test case into\npartition_join.sql where there's already suitable test tables.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 21 Jul 2023 12:02:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assert failure on bms_equal(child_joinrel->relids,\n child_joinrelids)" }, { "msg_contents": "On Sat, Jul 22, 2023 at 12:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Instead of fixing add_outer_joins_to_relids() to cope with child joins,\n> > I'm wondering if we can build join relids for a child join from its\n> > parent by adjust_child_relids, something like attached.\n>\n> That looks like a good solid solution. Pushed with a bit of\n> editorialization --- mostly, that I put the test case into\n> partition_join.sql where there's already suitable test tables.\n\n\nThanks for pushing it!\n\nThanks\nRichard
", "msg_date": "Mon, 24 Jul 2023 10:47:00 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assert failure on bms_equal(child_joinrel->relids,\n child_joinrelids)" }, { "msg_contents": "Hi Tom, Richard,\n\nOn Mon, Jul 24, 2023 at 8:17 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> Thanks for pushing it!\n\nWith this fix, I saw a noticeable increase in the memory consumption\nof the planner. I was running experiments mentioned in [1]. The reason is\nthat the Bitmapsets created by bms_union() are not freed during planning, and\nwhen there are thousands of child joins involved, bitmapsets take up\na large amount of memory and there are a large number of bitmaps.\n\nAttached 0002 patch fixes the memory consumption by calculating\nappinfos only once and using them twice. The numbers look like below\n\nNumber of tables joined | without patch | with patch |\n------------------------------------------------------\n 2 | 40.8 MiB | 40.3 MiB |\n 3 | 151.6 MiB | 146.9 MiB |\n 4 | 463.9 MiB | 445.5 MiB |\n 5 | 1663.9 MiB | 1563.3 MiB |\n\nThe memory consumption is prominent at higher number of joins as that\nexponentially increases the number of child joins.\n\nAttached setup.sql and queries.sql and patch 0001 were used to measure\nmemory similar to [1].\n\nI don't think it's a problem with the patch itself. We should be able\nto use Bitmapset APIs similar to what the patch is doing. But there's a\nproblem with our Bitmapset implementation. It's not space efficient\nfor thousands of partitions. A quick calculation reveals this.\n\nIf the number of partitions is 1000, the matching partitions will\nusually be 1000, 2000, 3000 and so on. Thus the bitmapset representing\nthe relids will be {b 1000, 2000, 3000, ...}. 
To represent a single\n1000th digit current Bitmapset implementation will allocate 1000/8 =\n125 bytes of memory. A 5 way join will require 125 * 5 = 625 bytes of\nmemory. This is even true for lower relid numbers since they will be\n1000 bits away e.g. (1, 1001, 2001, 3001, ...). So 1000 such child\njoins require 625KiB memory. Doing this as many times as the number of\njoins we get 120 * 625 KiB = 75 MiB which is closer to the memory\ndifference we see above.\n\nEven if we allocate a list to hold 5 integers it will not take 625\nbytes. I think we need to improve our Bitmapset representation to be\nefficient in such cases. Of course whatever representation we choose\nhas to be space efficient for a small number of tables as well and\nshould gel well with our planner logic. So I guess some kind of\ndynamic representation which changes the underlying layout based on\nthe contents is required. I have looked up past hacker threads to see\nif this has been discussed previously.\n\nI don't think this is the thread to discuss it and also I am not\nplanning to work on it in the immediate future. But I thought I would\nmention it where I found it.\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 28 Jul 2023 15:16:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assert failure on bms_equal(child_joinrel->relids,\n child_joinrelids)" }, { "msg_contents": "On Fri, Jul 28, 2023 at 3:16 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Tom, Richard,\n>\n> On Mon, Jul 24, 2023 at 8:17 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> >\n> > Thanks for pushing it!\n>\n> With this fix, I saw a noticeable increase in the memory consumption\n> of planner. 
I was running experiments mentioned in [1] The reason is\n> the Bitmapset created by bms_union() are not freed during planning and\n> when there are thousands of child joins involved, bitmapsets takes up\n> a large memory and there any a large number of bitmaps.\n>\n> Attached 0002 patch fixes the memory consumption by calculating\n> appinfos only once and using them twice. The number look like below\n>\n> Number of tables joined | without patch | with patch |\n> ------------------------------------------------------\n> 2 | 40.8 MiB | 40.3 MiB |\n> 3 | 151.6 MiB | 146.9 MiB |\n> 4 | 463.9 MiB | 445.5 MiB |\n> 5 | 1663.9 MiB | 1563.3 MiB |\n>\n> The memory consumption is prominent at higher number of joins as that\n> exponentially increases the number of child joins.\n>\n> Attached setup.sql and queries.sql and patch 0001 were used to measure\n> memory similar to [1].\n>\n> I don't think it's a problem with the patch itself. We should be able\n> to use Bitmapset APIs similar to what patch is doing. But there's a\n> problem with our Bitmapset implementation. It's not space efficient\n> for thousands of partitions. A quick calculation reveals this.\n>\n> If the number of partitions is 1000, the matching partitions will\n> usually be 1000, 2000, 3000 and so on. Thus the bitmapset represnting\n> the relids will be {b 1000, 2000, 3000, ...}. To represent a single\n> 1000th digit current Bitmapset implementation will allocate 1000/8 =\n> 125 bytes of memory. A 5 way join will require 125 * 5 = 625 bytes of\n> memory. This is even true for lower relid numbers since they will be\n> 1000 bits away e.g. (1, 1001, 2001, 3001, ...). So 1000 such child\n> joins require 625KiB memory. Doing this as many times as the number of\n> joins we get 120 * 625 KiB = 75 MiB which is closer to the memory\n> difference we see above.\n>\n> Even if we allocate a list to hold 5 integers it will not take 625\n> bytes. 
I think we need to improve our Bitmapset representation to be\n> efficient in such cases. Of course whatever representation we choose\n> has to be space efficient for a small number of tables as well and\n> should gel well with our planner logic. So I guess some kind of\n> dynamic representation which changes the underlying layout based on\n> the contents is required. I have looked up past hacker threads to see\n> if this has been discussed previously.\n>\n> I don't think this is the thread to discuss it and also I am not\n> planning to work on it in the immediate future. But I thought I would\n> mention it where I found it.\n>\n> [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n>\n\nAdding this small patch to the commitfest in case somebody finds it\nworth fixing this specific memory consumption. With a new subject.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 8 Sep 2023 15:22:33 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Reuse child_relids in try_partitionwise_join was Re: Assert failure\n on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Fri, Sep 8, 2023 at 3:22 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Jul 28, 2023 at 3:16 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi Tom, Richard,\n> >\n> > On Mon, Jul 24, 2023 at 8:17 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> > >\n> > > Thanks for pushing it!\n> >\n> > With this fix, I saw a noticeable increase in the memory consumption\n> > of planner. 
I was running experiments mentioned in [1] The reason is\n> > the Bitmapset created by bms_union() are not freed during planning and\n> > when there are thousands of child joins involved, bitmapsets takes up\n> > a large memory and there any a large number of bitmaps.\n> >\n> > Attached 0002 patch fixes the memory consumption by calculating\n> > appinfos only once and using them twice. The number look like below\n> >\n> > Number of tables joined | without patch | with patch |\n> > ------------------------------------------------------\n> > 2 | 40.8 MiB | 40.3 MiB |\n> > 3 | 151.6 MiB | 146.9 MiB |\n> > 4 | 463.9 MiB | 445.5 MiB |\n> > 5 | 1663.9 MiB | 1563.3 MiB |\n> >\n> > The memory consumption is prominent at higher number of joins as that\n> > exponentially increases the number of child joins.\n> >\n> > Attached setup.sql and queries.sql and patch 0001 were used to measure\n> > memory similar to [1].\n> >\n> > I don't think it's a problem with the patch itself. We should be able\n> > to use Bitmapset APIs similar to what patch is doing. But there's a\n> > problem with our Bitmapset implementation. It's not space efficient\n> > for thousands of partitions. A quick calculation reveals this.\n> >\n> > If the number of partitions is 1000, the matching partitions will\n> > usually be 1000, 2000, 3000 and so on. Thus the bitmapset represnting\n> > the relids will be {b 1000, 2000, 3000, ...}. To represent a single\n> > 1000th digit current Bitmapset implementation will allocate 1000/8 =\n> > 125 bytes of memory. A 5 way join will require 125 * 5 = 625 bytes of\n> > memory. This is even true for lower relid numbers since they will be\n> > 1000 bits away e.g. (1, 1001, 2001, 3001, ...). So 1000 such child\n> > joins require 625KiB memory. Doing this as many times as the number of\n> > joins we get 120 * 625 KiB = 75 MiB which is closer to the memory\n> > difference we see above.\n> >\n> > Even if we allocate a list to hold 5 integers it will not take 625\n> > bytes. 
I think we need to improve our Bitmapset representation to be\n> > efficient in such cases. Of course whatever representation we choose\n> > has to be space efficient for a small number of tables as well and\n> > should gel well with our planner logic. So I guess some kind of\n> > dynamic representation which changes the underlying layout based on\n> > the contents is required. I have looked up past hacker threads to see\n> > if this has been discussed previously.\n> >\n> > I don't think this is the thread to discuss it and also I am not\n> > planning to work on it in the immediate future. But I thought I would\n> > mention it where I found it.\n> >\n> > [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n> >\n>\n> Adding this small patch to the commitfest in case somebody finds it\n> worth fixing this specific memory consumption. With a new subject.\n\nRebased patches.\n0001 - is same as the squashed version of patches at\nhttps://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n0002 is the actual fix described earlier\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 31 Oct 2023 11:06:48 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "explain.out in 0001 needed some adjustments. Without those CIbot shows\nfailures. Fixed in the attached patchset. 0001 is just for diagnosis,\nnot for actual commit. 
0002 which is the actual patch has no changes\nwrt to the previous version.\n\nOn Tue, Oct 31, 2023 at 11:06 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Sep 8, 2023 at 3:22 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Fri, Jul 28, 2023 at 3:16 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > Hi Tom, Richard,\n> > >\n> > > On Mon, Jul 24, 2023 at 8:17 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> > > >\n> > > > Thanks for pushing it!\n> > >\n> > > With this fix, I saw a noticeable increase in the memory consumption\n> > > of planner. I was running experiments mentioned in [1] The reason is\n> > > the Bitmapset created by bms_union() are not freed during planning and\n> > > when there are thousands of child joins involved, bitmapsets takes up\n> > > a large memory and there any a large number of bitmaps.\n> > >\n> > > Attached 0002 patch fixes the memory consumption by calculating\n> > > appinfos only once and using them twice. The number look like below\n> > >\n> > > Number of tables joined | without patch | with patch |\n> > > ------------------------------------------------------\n> > > 2 | 40.8 MiB | 40.3 MiB |\n> > > 3 | 151.6 MiB | 146.9 MiB |\n> > > 4 | 463.9 MiB | 445.5 MiB |\n> > > 5 | 1663.9 MiB | 1563.3 MiB |\n> > >\n> > > The memory consumption is prominent at higher number of joins as that\n> > > exponentially increases the number of child joins.\n> > >\n> > > Attached setup.sql and queries.sql and patch 0001 were used to measure\n> > > memory similar to [1].\n> > >\n> > > I don't think it's a problem with the patch itself. We should be able\n> > > to use Bitmapset APIs similar to what patch is doing. But there's a\n> > > problem with our Bitmapset implementation. It's not space efficient\n> > > for thousands of partitions. 
A quick calculation reveals this.\n> > >\n> > > If the number of partitions is 1000, the matching partitions will\n> > > usually be 1000, 2000, 3000 and so on. Thus the bitmapset represnting\n> > > the relids will be {b 1000, 2000, 3000, ...}. To represent a single\n> > > 1000th digit current Bitmapset implementation will allocate 1000/8 =\n> > > 125 bytes of memory. A 5 way join will require 125 * 5 = 625 bytes of\n> > > memory. This is even true for lower relid numbers since they will be\n> > > 1000 bits away e.g. (1, 1001, 2001, 3001, ...). So 1000 such child\n> > > joins require 625KiB memory. Doing this as many times as the number of\n> > > joins we get 120 * 625 KiB = 75 MiB which is closer to the memory\n> > > difference we see above.\n> > >\n> > > Even if we allocate a list to hold 5 integers it will not take 625\n> > > bytes. I think we need to improve our Bitmapset representation to be\n> > > efficient in such cases. Of course whatever representation we choose\n> > > has to be space efficient for a small number of tables as well and\n> > > should gel well with our planner logic. So I guess some kind of\n> > > dynamic representation which changes the underlying layout based on\n> > > the contents is required. I have looked up past hacker threads to see\n> > > if this has been discussed previously.\n> > >\n> > > I don't think this is the thread to discuss it and also I am not\n> > > planning to work on it in the immediate future. But I thought I would\n> > > mention it where I found it.\n> > >\n> > > [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n> > >\n> >\n> > Adding this small patch to the commitfest in case somebody finds it\n> > worth fixing this specific memory consumption. 
With a new subject.\n>\n> Rebased patches.\n> 0001 - is same as the squashed version of patches at\n> https://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n> 0002 is the actual fix described earlier\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 6 Nov 2023 14:48:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Mon, Nov 6, 2023 at 2:48 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> explain.out in 0001 needed some adjustments. Without those CIbot shows\n> failures. Fixed in the attached patchset. 0001 is just for diagnosis,\n> not for actual commit. 0002 which is the actual patch has no changes\n> wrt to the previous version.\n>\n\nFirst patch is no longer required. PFA rebased second patch.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 6 Feb 2024 18:37:20 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "Hello,\n\nI am looking through some outstanding CommitFest entries; I wonder if\nthis particular patch is already effectively fixed by 5278d0a2, which\nis both attributed to the original author as well as an extremely\nsimilar approach. 
Can this entry\n(https://commitfest.postgresql.org/48/4553/) be closed?\n\nBest,\n\nDavid\n\n\n", "msg_date": "Thu, 30 May 2024 13:49:07 -0700", "msg_from": "David Christensen <david+pg@pgguru.net>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "Hi David,\nThanks for looking into this.\n\nOn Fri, May 31, 2024 at 2:19 AM David Christensen <david+pg@pgguru.net>\nwrote:\n\n> Hello,\n>\n> I am looking through some outstanding CommitFest entries; I wonder if\n> this particular patch is already effectively fixed by 5278d0a2, which\n> is both attributed to the original author as well as an extremely\n> similar approach. Can this entry\n> (https://commitfest.postgresql.org/48/4553/) be closed?\n>\n\nThis is different. But it needs a rebase. PFA rebased patch. The revised\ncommit message explains the change better.\n\nHere are numbers revised after 5278d0a2. Since the code changes only affect\npartitionwise join code path, I am reporting only partition wise join\nnumbers. The first column reports the number of joining relations, each\nhaving 1000 partitions. 
Rest of the column names are self-explanatory.\n\nPlanning memory used:\n num_joins | master | patched | memory saved | memory saved\n-----------+---------+---------+--------------+----------------------------\n 2 | 31 MB | 30 MB | 525 kB | 1.68%\n 3 | 111 MB | 107 MB | 4588 kB | 4.03%\n 4 | 339 MB | 321 MB | 17 MB | 5.13%\n 5 | 1062 MB | 967 MB | 95 MB | 8.96%\n\nHere's planning time measurements.\n num_joins | master (ms) | patched (ms) | change in planning time (ms) |\nchange in planning time\n-----------+-------------+--------------+------------------------------+---------------------------------------\n 2 | 187.86 | 177.27 | 10.59 |\n 5.64%\n 3 | 721.81 | 758.80 | -36.99 |\n -5.12%\n 4 | 2239.87 | 2236.19 | 3.68 |\n 0.16%\n 5 | 6830.86 | 7027.76 | -196.90 |\n -2.88%\nThe patch calls find_appinfos_by_relids() only once (instead of twice),\ncalls bms_union() on child relids only once, allocates and deallocates\nappinfos array only once saving some CPU cycles but spends some CPU cycles\nin bms_free(). There's expected to be a net small saving in planning time.\nBut above numbers show planning time increases in some cases and decreases\nin some cases. Repeating the experiment multiple times does not show a\nstable increase or decrease. But the variations all being within noise\nlevels. We could conclude that the patch does not affect planning time\nadversely.\n\nThe scripts I used originally [1] need a revision too since EXPLAIN MEMORY\noutput has changed. PFA the scripts. setup.sql creates the required\nfunctions and tables. queries.sql EXPLAINs queries with different number of\njoining relations collecting planning time and memory numbers in a table\n(msmts). We need to change psql variable code to 'patched' or 'master' when\nusing master + patches or master branch respectively. 
Following queries are\nused to report above measurements from msmts.\n\nplanning memory: select p.num_joins, pg_size_pretty(m.average * 1024)\n"master", pg_size_pretty(p.average * 1024) as "patched",\npg_size_pretty((m.average - p.average) * 1024) "memory saved", ((m.average\n- p.average) * 100/m.average)::numeric(42, 2) "memory saved in percentage"\nfrom msmts p, msmts m where p.num_joins = m.num_joins and p.variant =\nm.variant and p.code = 'patched' and m.code = 'master' and p.measurement =\nm.measurement and p.measurement = 'planning memory' and p.variant = 'pwj'\norder by num_joins;\n\nplanning time: select p.num_joins, m.average "master (ms)", p.average\n"patched (ms)", m.average - p.average "change in planning time (ms)",\n((m.average - p.average) * 100/m.average)::numeric(42, 2) "change in planning time as\npercentage" from msmts p, msmts m where p.num_joins = m.num_joins and\np.variant = m.variant and p.code = 'patched' and m.code = 'master' and p.measurement =\nm.measurement and p.measurement = 'planning time' and p.variant = 'pwj'\norder by p.variant, num_joins;\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5snUW7pD2RdtaBa1T_TqJYaY6W_YPVjWDrgSf33i-0uqA%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 5 Jun 2024 13:17:56 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Wed, Jun 5, 2024 at 1:17 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi David,\n> Thanks for looking into this.\n>\n> On Fri, May 31, 2024 at 2:19 AM David Christensen <david+pg@pgguru.net>\n> wrote:\n>\n>> Hello,\n>>\n>> I am looking through some outstanding CommitFest entries; I wonder if\n>> this particular patch is already effectively fixed by 5278d0a2, which\n>> is both attributed to the original author as 
well as an extremely\n>> similar approach. Can this entry\n>> (https://commitfest.postgresql.org/48/4553/) be closed?\n>>\n>\n> This is different. But it needs a rebase. PFA rebased patch. The revised\n> commit message explains the change better.\n>\n> Here are numbers revised after 5278d0a2. Since the code changes only\n> affect partitionwise join code path, I am reporting only partition wise\n> join numbers. The first column reports the number of joining relations,\n> each having 1000 partitions. Rest of the column names are self-explanatory.\n>\n> Planning memory used:\n> num_joins | master | patched | memory saved | memory saved\n> -----------+---------+---------+--------------+----------------------------\n> 2 | 31 MB | 30 MB | 525 kB | 1.68%\n> 3 | 111 MB | 107 MB | 4588 kB | 4.03%\n> 4 | 339 MB | 321 MB | 17 MB | 5.13%\n> 5 | 1062 MB | 967 MB | 95 MB | 8.96%\n>\n> Here's planning time measurements.\n> num_joins | master (ms) | patched (ms) | change in planning time (ms) |\n> change in planning time\n>\n> -----------+-------------+--------------+------------------------------+---------------------------------------\n> 2 | 187.86 | 177.27 | 10.59 |\n> 5.64%\n> 3 | 721.81 | 758.80 | -36.99 |\n> -5.12%\n> 4 | 2239.87 | 2236.19 | 3.68 |\n> 0.16%\n> 5 | 6830.86 | 7027.76 | -196.90 |\n> -2.88%\n>\n\nHere are some numbers with lower number of partitions per relation\nFor 100 partitions\nPlanning memory:\n num_joins | master | patched | memory saved | memory saved in percentage\n-----------+---------+---------+---------------+----------------------------\n 2 | 2820 kB | 2812 kB | 8192.00 bytes | 0.28%\n 3 | 9335 kB | 9270 kB | 65 kB | 0.70%\n 4 | 27 MB | 27 MB | 247 kB | 0.90%\n 5 | 78 MB | 77 MB | 1101 kB | 1.37%\n\nPlanning time:\n num_joins | master (ms) | patched (ms) | change in planning time (ms) |\nchange in planning time as percentage\n-----------+-------------+--------------+------------------------------+---------------------------------------\n 2 | 3.33 | 
3.21 | 0.12 |\n 3.60%\n 3 | 11.40 | 11.17 | 0.23 |\n 2.02%\n 4 | 37.61 | 37.56 | 0.05 |\n 0.13%\n 5 | 124.39 | 126.08 | -1.69 |\n -1.36%\n\nFor 10 partitions\nPlanning memory:\n num_joins | master | patched | memory saved | memory saved in percentage\n-----------+---------+---------+---------------+----------------------------\n 2 | 310 kB | 310 kB | 0.00 bytes | 0.00\n 3 | 992 kB | 989 kB | 3072.00 bytes | 0.30\n 4 | 2891 kB | 2883 kB | 8192.00 bytes | 0.28\n 5 | 8317 kB | 8290 kB | 27 kB | 0.32\n\nPlanning time:\n num_joins | master (ms) | patched (ms) | change in planning time (ms) |\nchange in planning time as percentage\n-----------+-------------+--------------+------------------------------+---------------------------------------\n 2 | 0.42 | 0.37 | 0.05 |\n 11.90\n 3 | 1.08 | 1.09 | -0.01 |\n -0.93\n 4 | 3.56 | 3.32 | 0.24 |\n 6.74\n 5 | 8.86 | 8.78 | 0.08 |\n 0.90\n\nObservations:\n1. As expected the memory savings are smaller for a small number of\npartitions, but they are not negative. Thus the patch helps in case of a\nlarge number of partitions but does not adversely affect a small number of\npartitions\n2. Planning time changes are within noise. But usually I see that the\nplanning time varies a lot. Thus the numbers here just indicate that the\nplanning times are not adversely affected by the change. Theoretically, I\nwould expect the patch to improve planning time as explained in the earlier\nemail.\n3. 
As mentioned in the earlier email, the patch affects only the\npartitionwise join code path hence I am reporting numbers with\npartitionwise join enabled.\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 6 Jun 2024 10:31:06 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" },
{ "msg_contents": "On Wed, Jun 5, 2024 at 3:48 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Here's planning time measurements.\n> num_joins | master (ms) | patched (ms) | change in planning time (ms) | change in planning time\n> -----------+-------------+--------------+------------------------------+---------------------------------------\n> 2 | 187.86 | 177.27 | 10.59 | 5.64%\n> 3 | 721.81 | 758.80 | -36.99 | -5.12%\n> 4 | 2239.87 | 2236.19 | 3.68 | 0.16%\n> 5 | 6830.86 | 7027.76 | -196.90 | -2.88%\n\nI find these results concerning. Why should the planning time\nsometimes go up? And why should it go up for odd numbers of joinrels\nand down for even numbers of joinrels? 
I wonder if these results are\nreally correct, and if they are, I wonder what could account for such\nodd behavior\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 12:30:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Thu, Jun 6, 2024 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jun 5, 2024 at 3:48 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Here's planning time measurements.\n> > num_joins | master (ms) | patched (ms) | change in planning time (ms) |\n> change in planning time\n> >\n> -----------+-------------+--------------+------------------------------+---------------------------------------\n> > 2 | 187.86 | 177.27 | 10.59\n> | 5.64%\n> > 3 | 721.81 | 758.80 | -36.99\n> | -5.12%\n> > 4 | 2239.87 | 2236.19 | 3.68\n> | 0.16%\n> > 5 | 6830.86 | 7027.76 | -196.90\n> | -2.88%\n>\n> I find these results concerning. Why should the planning time\n> sometimes go up? And why should it go up for odd numbers of joinrels\n> and down for even numbers of joinrels? I wonder if these results are\n> really correct, and if they are, I wonder what could account for such\n> odd behavior\n>\n\nThis is just one instance of measurements. If I run the experiment multiple\ntimes the results and the patterns will vary. Usually I have found planning\ntime to vary within 5% for regular tables and within 9% for partitioned\ntables with a large number of partitions. Below are measurements with the\nexperiment repeated multiple times. For a given number of partitioned\ntables (each with 1000 partitions) being joined, planning time is measured\n10 consecutive times. For this set of 10 runs we calculate average and\nstandard deviation of planning time. Such 10 sets are sampled. 
This means\nplanning time is sampled 100 times in total with and without patch\nrespectively. Measurements with master and patched are reported in the\nattached excel sheet.\n\nObservations from spreadsheet\n1. Differences in planning time with master or patched version are within\nstandard deviation. The difference is both ways. Indicating that the patch\ndoesn't affect planning time adversely.\n2. If we just look at the 5-way join numbers, there are more +ve s in\ncolumn E and sometimes higher than standard deviation. Indicating that the\npatch improves planning time when there are higher number of partitioned\ntables being joined.\n3. If we just look at the 5-way join numbers, the patch seems to reduce the\nstandard deviation (column B vs column D) indicating that the planning time\nis more stable and predictable with patched version.\n3. There is no bias towards even or odd numbers of partitioned tables being\njoined.\n\nLooking at the memory savings numbers reported earlier, the patch improves\nmemory consumption of partitionwise join without adversely affecting\nplanning time. It doesn't affect the code paths which do not use\npartitionwise join. That's overall net positive impact for relatively less\ncode churn.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 10 Jun 2024 12:38:54 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Mon, Jun 10, 2024 at 3:09 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> This is just one instance of measurements. If I run the experiment multiple times the results and the patterns will vary. Usually I have found planning time to vary within 5% for regular tables and within 9% for partitioned tables with a large number of partitions. Below are measurements with the experiment repeated multiple times. 
For a given number of partitioned tables (each with 1000 partitions) being joined, planning time is measured 10 consecutive times. For this set of 10 runs we calculate average and standard deviation of planning time. Such 10 sets are sampled. This means planning time is sampled 100 times in total with and without patch respectively. Measurements with master and patched are reported in the attached excel sheet.\n\nWell, this is fine then I guess, but if the original results weren't\nstable enough for people to draw conclusions from, then it's better\nnot to post them, and instead do this work to get results that are\nstable before posting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Jun 2024 09:14:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Mon, Jun 10, 2024 at 8:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 10, 2024 at 3:09 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > This is just one instance of measurements. If I run the experiment multiple times the results and the patterns will vary. Usually I have found planning time to vary within 5% for regular tables and within 9% for partitioned tables with a large number of partitions. Below are measurements with the experiment repeated multiple times. For a given number of partitioned tables (each with 1000 partitions) being joined, planning time is measured 10 consecutive times. For this set of 10 runs we calculate average and standard deviation of planning time. Such 10 sets are sampled. This means planning time is sampled 100 times in total with and without patch respectively. 
Measurements with master and patched are reported in the attached excel sheet.\n>\n> Well, this is fine then I guess, but if the original results weren't\n> stable enough for people to draw conclusions from, then it's better\n> not to post them, and instead do this work to get results that are\n> stable before posting.\n\nJust doing a quick code review of the structure and the caller, I'd\nagree that this is properly hoisting the invariant, so don't see that\nit should contribute to any performance regressions. To the extent\nthat it's called multiple times I can see that it's an improvement,\nwith minimal code shuffling it seems like a sensible change (even in\nthe single-caller case).\n\nIn short +1 from me.\n\nDavid\n\n\n", "msg_date": "Tue, 11 Jun 2024 17:52:43 -0500", "msg_from": "David Christensen <david+pg@pgguru.net>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Wed, Jun 12, 2024 at 4:22 AM David Christensen <david+pg@pgguru.net>\nwrote:\n\n> On Mon, Jun 10, 2024 at 8:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jun 10, 2024 at 3:09 AM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > This is just one instance of measurements. If I run the experiment\n> multiple times the results and the patterns will vary. Usually I have found\n> planning time to vary within 5% for regular tables and within 9% for\n> partitioned tables with a large number of partitions. Below are\n> measurements with the experiment repeated multiple times. For a given\n> number of partitioned tables (each with 1000 partitions) being joined,\n> planning time is measured 10 consecutive times. For this set of 10 runs we\n> calculate average and standard deviation of planning time. Such 10 sets are\n> sampled. This means planning time is sampled 100 times in total with and\n> without patch respectively. 
Measurements with master and patched are\n> reported in the attached excel sheet.\n> >\n> > Well, this is fine then I guess, but if the original results weren't\n> > stable enough for people to draw conclusions from, then it's better\n> > not to post them, and instead do this work to get results that are\n> > stable before posting.\n>\n> Just doing a quick code review of the structure and the caller, I'd\n> agree that this is properly hoisting the invariant, so don't see that\n> it should contribute to any performance regressions. To the extent\n> that it's called multiple times I can see that it's an improvement,\n> with minimal code shuffling it seems like a sensible change (even in\n> the single-caller case).\n>\n> In short +1 from me.\n>\n\nHi David,\nDo you think it needs more review or we can change it to \"ready for\ncommitter\"?\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 1 Jul 2024 11:25:56 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" },
{ "msg_contents": "On Jul 1, 2024, at 12:56 AM, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Wed, Jun 12, 2024 at 4:22 AM David Christensen <david+pg@pgguru.net> wrote:On Mon, Jun 10, 2024 at 8:15 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 10, 2024 at 3:09 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > This is just one instance of measurements. If I run the experiment multiple times the results and the patterns will vary. Usually I have found planning time to vary within 5% for regular tables and within 9% for partitioned tables with a large number of partitions. Below are measurements with the experiment repeated multiple times. For a given number of partitioned tables (each with 1000 partitions) being joined, planning time is measured 10 consecutive times. 
For this set of 10 runs we calculate average and standard deviation of planning time. Such 10 sets are sampled. This means planning time is sampled 100 times in total with and without patch respectively. Measurements with master and patched are reported in the attached excel sheet.\n>\n> Well, this is fine then I guess, but if the original results weren't\n> stable enough for people to draw conclusions from, then it's better\n> not to post them, and instead do this work to get results that are\n> stable before posting.\n\nJust doing a quick code review of the structure and the caller, I'd\nagree that this is properly hoisting the invariant, so don't see that\nit should contribute to any performance regressions.  To the extent\nthat it's called multiple times I can see that it's an improvement,\nwith minimal code shuffling it seems like a sensible change (even in\nthe single-caller case).\n\nIn short +1 from me.Hi David,Do you think it needs more review or we can change it to \"ready for committer\"? I think it’s ready from my standpoint. David", "msg_date": "Mon, 1 Jul 2024 06:28:57 -0500", "msg_from": "David Christensen <david@pgguru.net>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Wed, Jun 5, 2024 at 3:48 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> This is different. But it needs a rebase. PFA rebased patch. The revised commit message explains the change better.\n\nI looked through this patch and I think it is in a good shape.\n\nSome minor comments:\n\n* I think it's better to declare 'child_relids' as type Relids\nrather than Bitmapset *. While they are the same thing, Bitmapset *\nisn't typically used in joinrels.c. 
And I doubt that it is necessary\nto explicitly initialize it to NULL.\n\n* This patch changes the signature of function build_child_join_rel().\nI recommend arranging the two newly added parameters as 'int\nnappinfos, AppendRelInfo **appinfos' to maintain consistency with\nother functions that include these parameters.\n\n* At the end of the for loop in try_partitionwise_join(), we attempt\nto free the variables used within the loop. I suggest freeing these\nvariables in the order or reverse order of their allocation, rather\nthan arbitrarily. Also, in non-partitionwise-join planning, we do not\nfree local variables in this manner. I understand that we do this for\npartitionwise-join because accumulating local variables can lead to\nsignificant memory usage, particularly with many partitions. I think\nit would be beneficial to add a comment in try_partitionwise_join()\nexplaining this behavior.\n\nThanks\nRichard\n\n\n", "msg_date": "Tue, 23 Jul 2024 16:35:44 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Tue, Jul 23, 2024 at 2:05 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Wed, Jun 5, 2024 at 3:48 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > This is different. But it needs a rebase. PFA rebased patch. The revised commit message explains the change better.\n>\n> I looked through this patch and I think it is in a good shape.\n>\n> Some minor comments:\n>\n> * I think it's better to declare 'child_relids' as type Relids\n> rather than Bitmapset *. While they are the same thing, Bitmapset *\n> isn't typically used in joinrels.c.\n\nAgreed. Sorry. 
Fixed now.\n\n> And I doubt that it is necessary\n> to explicitly initialize it to NULL.\n\nDone.\n\n>\n> * This patch changes the signature of function build_child_join_rel().\n> I recommend arranging the two newly added parameters as 'int\n> nappinfos, AppendRelInfo **appinfos' to maintain consistency with\n> other functions that include these parameters.\n\nYes. Done.\n\n>\n> * At the end of the for loop in try_partitionwise_join(), we attempt\n> to free the variables used within the loop. I suggest freeing these\n> variables in the order or reverse order of their allocation, rather\n> than arbitrarily.\n\nDone. Are you suggesting it for aesthetic purposes or there's\nsomething else (read, less defragmentation, performance gain etc.)? I\nam curious.\n\n> Also, in non-partitionwise-join planning, we do not\n> free local variables in this manner. I understand that we do this for\n> partitionwise-join because accumulating local variables can lead to\n> significant memory usage, particularly with many partitions. I think\n> it would be beneficial to add a comment in try_partitionwise_join()\n> explaining this behavior.\n\nSounds useful. Done.\n\nPFA patches:\n0001 - same as previous one\n0002 - addresses your review comments\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 24 Jul 2024 13:00:23 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Wed, Jul 24, 2024 at 3:30 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Done. Are you suggesting it for aesthetic purposes or there's\n> something else (read, less defragmentation, performance gain etc.)? 
I\n> am curious.\n\nI think it's just for readability.\n\n> PFA patches:\n> 0001 - same as previous one\n> 0002 - addresses your review comments\n\nI've worked a bit more on the comments and commit message, and I plan\nto push the attached soon, barring any objections or comments.\n\nThanks\nRichard", "msg_date": "Fri, 26 Jul 2024 17:44:33 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "On Fri, Jul 26, 2024 at 5:44 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> I've worked a bit more on the comments and commit message, and I plan\n> to push the attached soon, barring any objections or comments.\n\nPushed.\n\nThanks\nRichard\n\n\n", "msg_date": "Mon, 29 Jul 2024 11:26:09 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" }, { "msg_contents": "Thanks a lot Richard.\n\nOn Mon, Jul 29, 2024 at 8:56 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Fri, Jul 26, 2024 at 5:44 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> > I've worked a bit more on the comments and commit message, and I plan\n> > to push the attached soon, barring any objections or comments.\n>\n> Pushed.\n>\n> Thanks\n> Richard\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 29 Jul 2024 09:14:16 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reuse child_relids in try_partitionwise_join was Re: Assert\n failure on bms_equal(child_joinrel->relids, child_joinrelids)" } ]
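The sampling scheme Ashutosh describes in the thread above (10 sets of 10 consecutive planning-time runs per configuration, i.e. 100 samples in total, each set summarized by its average and standard deviation) can be sketched as follows. This is a hypothetical illustration, not code from the thread or from the attached spreadsheet: `measure_planning_time_ms` and its noise parameters are made-up stand-ins for actually timing `EXPLAIN` of the join query on a server.

```python
# Hypothetical sketch of the measurement methodology described in the thread:
# 10 sets of 10 consecutive runs, summarized per set by mean and stddev.
import random
import statistics

def measure_planning_time_ms():
    # Stand-in for timing the planner on the partitionwise-join query;
    # draws a noisy value around a nominal 187 ms (an assumed figure).
    return random.gauss(187.0, 9.0)

def sample_sets(num_sets=10, runs_per_set=10):
    """Return (mean, stddev) of planning time for each set of runs."""
    sets = []
    for _ in range(num_sets):
        runs = [measure_planning_time_ms() for _ in range(runs_per_set)]
        sets.append((statistics.mean(runs), statistics.stdev(runs)))
    return sets

random.seed(42)  # deterministic for the sketch
results = sample_sets()
assert len(results) == 10  # 10 sets, 100 samples total
```

Comparing the per-set standard deviation against the master-vs-patched delta, as done in the thread, is what justifies calling a difference "within noise".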
[ { "msg_contents": "I found that multiple membership grants added in v16 affects the \ninformation_schema.applicable_roles view.\n\nExamples on a master, but they works for v16 too.\n\nSetup multiple membership alice in bob:\n\npostgres@postgres(17.0)=# \\drg alice\n                List of role grants\n  Role name | Member of |   Options    | Grantor\n-----------+-----------+--------------+----------\n  alice     | bob       | INHERIT, SET | alice\n  alice     | bob       | INHERIT, SET | charlie\n  alice     | bob       | ADMIN        | postgres\n(3 rows)\n\nThe application_roles view shows duplicates:\n\npostgres@postgres(17.0)=# SELECT * FROM \ninformation_schema.applicable_roles WHERE grantee = 'alice';\n  grantee | role_name | is_grantable\n---------+-----------+--------------\n  alice   | bob       | NO\n  alice   | bob       | YES\n(2 rows)\n\nView definition:\n\npostgres@postgres(17.0)=# \\sv information_schema.applicable_roles\nCREATE OR REPLACE VIEW information_schema.applicable_roles AS\n  SELECT a.rolname::information_schema.sql_identifier AS grantee,\n     b.rolname::information_schema.sql_identifier AS role_name,\n         CASE\n             WHEN m.admin_option THEN 'YES'::text\n             ELSE 'NO'::text\n         END::information_schema.yes_or_no AS is_grantable\n    FROM ( SELECT pg_auth_members.member,\n             pg_auth_members.roleid,\n             pg_auth_members.admin_option\n            FROM pg_auth_members\n         UNION\n          SELECT pg_database.datdba,\n             pg_authid.oid,\n             false\n            FROM pg_database,\n             pg_authid\n           WHERE pg_database.datname = current_database() AND \npg_authid.rolname = 'pg_database_owner'::name) m\n      JOIN pg_authid a ON m.member = a.oid\n      JOIN pg_authid b ON m.roleid = b.oid\n   WHERE pg_has_role(a.oid, 'USAGE'::text);\n\n\nI think that only one row with admin option should be returned.\nThis can be achieved by adding group by + bool_or to the inner 
select \nfrom pg_auth_members.\n\nBEGIN;\nBEGIN\npostgres@postgres(17.0)=*# CREATE OR REPLACE VIEW \ninformation_schema.applicable_roles AS\nSELECT a.rolname::information_schema.sql_identifier AS grantee,\n     b.rolname::information_schema.sql_identifier AS role_name,\n         CASE\n             WHEN m.admin_option THEN 'YES'::text\n             ELSE 'NO'::text\n         END::information_schema.yes_or_no AS is_grantable\n    FROM ( SELECT pg_auth_members.member,\n             pg_auth_members.roleid,\n             bool_or(pg_auth_members.admin_option) AS admin_option\n            FROM pg_auth_members\n            GROUP BY 1, 2\n         UNION\n          SELECT pg_database.datdba,\n             pg_authid.oid,\n             false\n            FROM pg_database,\n             pg_authid\n           WHERE pg_database.datname = current_database() AND \npg_authid.rolname = 'pg_database_owner'::name) m\n      JOIN pg_authid a ON m.member = a.oid\n      JOIN pg_authid b ON m.roleid = b.oid\n   WHERE pg_has_role(a.oid, 'USAGE'::text);\nCREATE VIEW\npostgres@postgres(17.0)=*# SELECT * FROM \ninformation_schema.applicable_roles WHERE grantee = 'alice';\n  grantee | role_name | is_grantable\n---------+-----------+--------------\n  alice   | bob       | YES\n(1 row)\n\npostgres@postgres(17.0)=*# ROLLBACK;\nROLLBACK\n\nShould we add group by + bool_or to the applicable_roles view?\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Sun, 23 Jul 2023 21:28:46 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "multiple membership grants and information_schema.applicable_roles" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> The application_roles view shows duplicates:\n\n> postgres@postgres(17.0)=# SELECT * FROM \n> information_schema.applicable_roles WHERE grantee = 'alice';\n>  grantee | role_name | is_grantable\n> ---------+-----------+--------------\n>  alice   | 
bob       | NO\n>  alice   | bob       | YES\n> (2 rows)\n\nAFAICT this is also possible with the SQL standard's definition\nof this view, so I don't see a bug here:\n\nCREATE RECURSIVE VIEW APPLICABLE_ROLES ( GRANTEE, ROLE_NAME, IS_GRANTABLE ) AS\n ( ( SELECT GRANTEE, ROLE_NAME, IS_GRANTABLE\n FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS\n WHERE ( GRANTEE IN\n ( CURRENT_USER, 'PUBLIC' )\n OR\n GRANTEE IN\n ( SELECT ROLE_NAME\n FROM ENABLED_ROLES ) ) )\n UNION\n ( SELECT RAD.GRANTEE, RAD.ROLE_NAME, RAD.IS_GRANTABLE\n FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS RAD\n JOIN\n APPLICABLE_ROLES R\n ON\n RAD.GRANTEE = R.ROLE_NAME ) );\n\nThe UNION would remove rows only when they are duplicates across all\nthree columns.\n\nI do see what seems like a different issue: the standard appears to expect\nthat indirect role grants should also be shown (via the recursive CTE),\nand we are not doing that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 23 Jul 2023 16:03:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multiple membership grants and\n information_schema.applicable_roles" }, { "msg_contents": "On 23.07.2023 23:03, Tom Lane wrote:\n> CREATE RECURSIVE VIEW APPLICABLE_ROLES ( GRANTEE, ROLE_NAME, IS_GRANTABLE ) AS\n> ( ( SELECT GRANTEE, ROLE_NAME, IS_GRANTABLE\n> FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS\n> WHERE ( GRANTEE IN\n> ( CURRENT_USER, 'PUBLIC' )\n> OR\n> GRANTEE IN\n> ( SELECT ROLE_NAME\n> FROM ENABLED_ROLES ) ) )\n> UNION\n> ( SELECT RAD.GRANTEE, RAD.ROLE_NAME, RAD.IS_GRANTABLE\n> FROM DEFINITION_SCHEMA.ROLE_AUTHORIZATION_DESCRIPTORS RAD\n> JOIN\n> APPLICABLE_ROLES R\n> ON\n> RAD.GRANTEE = R.ROLE_NAME ) );\n>\n> The UNION would remove rows only when they are duplicates across all\n> three columns.\n\nHm, I think there is one more thing to check in the SQL standard.\nIs IS_GRANTABLE a key column for ROLE_AUTHORIZATION_DESCRIPTORS?\nIf not, duplicates is not possible. 
Right?\n\nCan't check now, since I don't have access to the SQL standard definition.\n\n> I do see what seems like a different issue: the standard appears to expect\n> that indirect role grants should also be shown (via the recursive CTE),\n> and we are not doing that.\n\nI noticed this, but the view has stayed unchanged for such a long time.\nI thought it was done intentionally.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Mon, 24 Jul 2023 09:42:10 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: multiple membership grants and\n information_schema.applicable_roles" }, { "msg_contents": "On 24.07.2023 09:42, Pavel Luzanov wrote:\n> Is IS_GRANTABLE a key column for ROLE_AUTHORIZATION_DESCRIPTORS?\n> If not, duplicates is not possible. Right?\n\nThe answer is: no.\nDuplicate (grantee, role_name) pairs are impossible only if a key is defined \non exactly these two columns.\nIf there is no such key, or the key contains another column (for example, grantor),\nthen the information_schema.applicable_roles view definition is correct \nin this respect.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Mon, 24 Jul 2023 12:32:28 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: multiple membership grants and\n information_schema.applicable_roles" }, { "msg_contents": "On 24.07.23 08:42, Pavel Luzanov wrote:\n>> I do see what seems like a different issue: the standard appears to \n>> expect\n>> that indirect role grants should also be shown (via the recursive CTE),\n>> and we are not doing that.\n> \n> I noticed this, but the view stays unchanged so long time.\n> I thought it was done intentionally.\n\nThe implementation of the information_schema.applicable_roles view \npredates both indirect role grants and recursive query support. 
So some \nupdates might be in order.\n\n\n\n", "msg_date": "Wed, 2 Aug 2023 11:32:43 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: multiple membership grants and\n information_schema.applicable_roles" } ]
[ { "msg_contents": "Hello!\n\nMy colleague Victoria Shepard reported that pgbench might crash\nduring initialization with some values of shared_buffers and\nmax_worker_processes in the configuration.\n\nAfter some research, I found that this happens when LimitAdditionalPins() returns exactly zero.\nIn the current master, this will happen e.g. if shared_buffers = 10MB and max_worker_processes = 40.\nThen the command \"pgbench --initialize postgres\" will lead to a crash.\nSee the backtrace attached.\n\nThere is a fix in the attached patch. Please take a look at it.\n\nWith the best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sun, 23 Jul 2023 23:21:47 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": true, "msg_subject": "[BUG] Crash on pgbench initialization." }, { "msg_contents": "On Sun, Jul 23, 2023 at 11:21:47PM +0300, Anton A. Melnikov wrote:\n> After some research, found this happens when the LimitAdditionalPins() returns exactly zero.\n> In the current master, this will happen e.g. 
if shared_buffers = 10MB and max_worker_processes = 40.\n> > Then the command \"pgbench --initialize postgres\" will lead to crash.\n> > See the backtrace attached.\n> > \n> > There is a fix in the patch applied. Please take a look on it.\n> \n> Nice catch, issue reproduced here so I am adding an open item for now.\n> (I have not looked at the patch, yet.)\n\nHmm, I see that all this code was added by 31966b151e6a, which makes\nthis Andres' item. I see Michael marked it as such in the open items\npage, but did not CC Andres, so I'm doing that here now.\n\nI don't know this code at all, but I hope that this can be solved with\njust Anton's proposed patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Mon, 24 Jul 2023 15:54:33 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [BUG] Crash on pgbench initialization." }, { "msg_contents": "Hi,\n\nOn 2023-07-24 15:54:33 +0200, Alvaro Herrera wrote:\n> On 2023-Jul-24, Michael Paquier wrote:\n> \n> > On Sun, Jul 23, 2023 at 11:21:47PM +0300, Anton A. Melnikov wrote:\n> > > After some research, found this happens when the LimitAdditionalPins() returns exactly zero.\n> > > In the current master, this will happen e.g. if shared_buffers = 10MB and max_worker_processes = 40.\n> > > Then the command \"pgbench --initialize postgres\" will lead to crash.\n> > > See the backtrace attached.\n> > > \n> > > There is a fix in the patch applied. Please take a look on it.\n> > \n> > Nice catch, issue reproduced here so I am adding an open item for now.\n> > (I have not looked at the patch, yet.)\n> \n> Hmm, I see that all this code was added by 31966b151e6a, which makes\n> this Andres' item. 
I see Michael marked it as such in the open items\n> page, but did not CC Andres, so I'm doing that here now.\n\nThanks - I had indeed not seen this. I can't really keep up with the list at\nall times...\n\n\n> I don't know this code at all, but I hope that this can be solved with\n> just Anton's proposed patch.\n\nYes, it's just that off-by-one. I need to check if there's a similar bug for\nlocal / temp table buffers though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jul 2023 09:42:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG] Crash on pgbench initialization." }, { "msg_contents": "On Mon, Jul 24, 2023 at 03:54:33PM +0200, Alvaro Herrera wrote:\n> I see Michael marked it as such in the open items\n> page, but did not CC Andres, so I'm doing that here now.\n\nIndeed, thanks!\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 07:46:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] Crash on pgbench initialization." }, { "msg_contents": "Hi,\n\nOn 2023-07-24 09:42:44 -0700, Andres Freund wrote:\n> > I don't know this code at all, but I hope that this can be solved with\n> > just Anton's proposed patch.\n> \n> Yes, it's just that off-by-one. I need to check if there's a similar bug for\n> local / temp table buffers though.\n\nDoesn't appear that way. We *do* fail if there's only 1 remaining buffer, but\nwe already did before my change (because we also need to pin the fsm). I don't\nthink that's an issue worth worrying about, if all-1 of local buffers are\npinned, you're going to have problems.\n\nThanks Anton / Victoria for the report and fix. Pushed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Jul 2023 20:24:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG] Crash on pgbench initialization." 
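The class of fix under discussion can be illustrated with a small, self-contained sketch. The function name, parameters, and arithmetic below are hypothetical illustrations of clamping a computed pin budget so it can never round down to zero; this is not the actual LimitAdditionalPins() code:

```c
#include <stdint.h>

/* Hypothetical illustration: a per-backend pin limit derived from shared
 * resources by integer division can round down to exactly zero; a caller
 * that must pin at least one buffer then crashes. The off-by-one-style fix
 * is to clamp the result to a minimum of 1. */
static uint32_t
limit_additional_pins_sketch(uint32_t shared_buffers, uint32_t max_backends)
{
	uint32_t	limit = shared_buffers / max_backends;	/* may yield 0 */

	if (limit == 0)
		limit = 1;				/* guarantee at least one pin is allowed */
	return limit;
}
```

The point is only the guard: if the division can yield exactly zero, a caller that needs at least one pin must still see a minimum of 1.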
}, { "msg_contents": "On 25.07.2023 06:24, Andres Freund wrote:\n> Thanks Anton / Victoria for the report and fix. Pushed.\n\nThanks!\nHave a nice day!\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 25 Jul 2023 11:29:49 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": true, "msg_subject": "Re: [BUG] Crash on pgbench initialization." } ]
[ { "msg_contents": "Hi.\nI am confused by bt_recheck_sibling_links.\n\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_INDEX_CORRUPTED),\n> errmsg(\"left link/right link pair in index \\\"%s\\\" not in agreement\",\n> RelationGetRelationName(state->rel)),\n> errdetail_internal(\"Block=%u left block=%u left link from block=%u.\",\n> state->targetblock, leftcurrent,\n> btpo_prev_from_target)));\n\nShould we put the above code into the {if (!state->readonly)} branch?\nI could understand it if it were put inside.\nRight now the whole function looks like:\n\nif (!state->readonly)\n{}\nereport{}.\n\n---------------------------------------------------------------------------------\nif (btpo_prev_from_target == leftcurrent)\n{\n/* Report split in left sibling, not target (or new target) */\nereport(DEBUG1,\n(errcode(ERRCODE_INTERNAL_ERROR),\nerrmsg_internal(\"harmless concurrent page split detected in index \\\"%s\\\"\",\nRelationGetRelationName(state->rel)),\nerrdetail_internal(\"Block=%u new right sibling=%u original right sibling=%u.\",\nleftcurrent, newtargetblock,\nstate->targetblock)));\nreturn;\n}\n\nWill (btpo_prev_from_target == leftcurrent) be true only in the concurrent-read case?\nIf so, then I am confused by the ereport content.\n\n\n", "msg_date": "Mon, 24 Jul 2023 22:06:34 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "bt_recheck_sibling_links contrib/amcheck/verify_nbtree.c" } ]
[ { "msg_contents": "Neon provides a quick start mechanism for psql using the following \nworkflow:\n\n\t$ psql -h pg.neon.tech\n\tNOTICE: Welcome to Neon!\n\tAuthenticate by visiting:\n\t https://console.neon.tech/psql_session/xxx\n\nUpon navigating to the link, the user selects their database to work \nwith. The psql process is then connected to a Postgres database at Neon.\n\npsql provides a way to reconnect to the database from within itself: \\c. \nIf, after connecting to a database such as the process previously \nmentioned, the user starts a reconnection to the database from within \npsql, the process will refuse to interrupt the reconnection attempt \nafter sending SIGINT to it. \n\n\ttristan=> \\c\n\tNOTICE: Welcome to Neon!\n\tAuthenticate by visiting:\n\t https://console.neon.tech/psql_session/yyy\n\t\n\t^C\n\t^C^C\n\t^C^C\n\nI am not really sure if this was a problem on Windows machines, but the \nattached patch should support it if it does. If this was never a problem \non Windows, it might be best to check for any regressions.\n\nThis was originally discussed in the a GitHub issue[0] at Neon.\n\n[0]: https://github.com/neondatabase/neon/issues/1449\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 24 Jul 2023 11:25:47 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon, Jul 24, 2023 at 9:26 AM Tristan Partin <tristan@neon.tech> wrote:\n\n> attached patch\n\n+ /*\n+ * Restore the default SIGINT behavior while within libpq.\nOtherwise, we\n+ * can never exit from polling for the database connection. Failure to\n+ * restore is non-fatal.\n+ */\n+ newact.sa_handler = SIG_DFL;\n+ rc = sigaction(SIGINT, &newact, &oldact);\n\n There's no action taken if rc != 0. 
It doesn't seem right to\ncontinue as if everything's fine when the handler registration fails.\nAt least a warning is warranted, so that the user reports such\nfailures to the community.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Mon, 24 Jul 2023 10:00:06 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon Jul 24, 2023 at 12:00 PM CDT, Gurjeet Singh wrote:\n> On Mon, Jul 24, 2023 at 9:26 AM Tristan Partin <tristan@neon.tech> wrote:\n>\n> > attached patch\n>\n> + /*\n> + * Restore the default SIGINT behavior while within libpq.\n> Otherwise, we\n> + * can never exit from polling for the database connection. Failure to\n> + * restore is non-fatal.\n> + */\n> + newact.sa_handler = SIG_DFL;\n> + rc = sigaction(SIGINT, &newact, &oldact);\n>\n> There's no action taken if rc != 0. It doesn't seem right to\n> continue as if everything's fine when the handler registration fails.\n> At least a warning is warranted, so that the user reports such\n> failures to the community.\n\nIf we fail to go back to the default handler, then we just get the \nbehavior we currently have. I am not sure logging a message like \"Failed \nto restore default SIGINT handler\" is that useful to the user. It isn't \nactionable, and at the end of the day isn't going to affect them for the \nmost part. They also aren't even aware that default handler was ever \noverridden in the first place. 
I'm more than happy to add a debug log if \nit is the blocker to getting this patch accepted however.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 24 Jul 2023 12:09:05 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "v2 is attached which fixes a grammatical issue in a comment.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 24 Jul 2023 12:09:49 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "v3 is attached which fixes up some code comments I added which I hadn't \nattached to the commit already, sigh.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 24 Jul 2023 12:11:27 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "\"Tristan Partin\" <tristan@neon.tech> writes:\n> v3 is attached which fixes up some code comments I added which I hadn't \n> attached to the commit already, sigh.\n\nI don't care for this patch at all. You're bypassing the pqsignal\nabstraction layer that the rest of psql goes through, and the behavior\nyou're implementing isn't very nice. People do not expect ^C to\nkill psql - it should just stop the \\c attempt and leave you as you\nwere.\n\nAdmittedly, getting PQconnectdbParams to return control on SIGINT\nisn't too practical. 
But you could probably replace that with a loop\naround PQconnectPoll and test for CancelRequested in the loop.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jul 2023 13:43:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon Jul 24, 2023 at 12:43 PM CDT, Tom Lane wrote:\n> \"Tristan Partin\" <tristan@neon.tech> writes:\n> > v3 is attached which fixes up some code comments I added which I hadn't \n> > attached to the commit already, sigh.\n>\n> I don't care for this patch at all. You're bypassing the pqsignal\n> abstraction layer that the rest of psql goes through, and the behavior\n> you're implementing isn't very nice. People do not expect ^C to\n> kill psql - it should just stop the \\c attempt and leave you as you\n> were.\n>\n> Admittedly, getting PQconnectdbParams to return control on SIGINT\n> isn't too practical. But you could probably replace that with a loop\n> around PQconnectPoll and test for CancelRequested in the loop.\n\nThat sounds like a much better solution. Attached you will find a v4 \nthat implements your suggestion. Please let me know if there is \nsomething that I missed. 
I can confirm that the patch works.\n\n\t$ ./build/src/bin/psql/psql -h pg.neon.tech\n\tNOTICE: Welcome to Neon!\n\tAuthenticate by visiting:\n\t https://console.neon.tech/psql_session/xxx\n\t\n\t\n\tNOTICE: Connecting to database.\n\tpsql (17devel, server 15.3)\n\tSSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)\n\tType \"help\" for help.\n\t\n\ttristan957=> \\c\n\tNOTICE: Welcome to Neon!\n\tAuthenticate by visiting:\n\t https://console.neon.tech/psql_session/yyy\n\t\n\t\n\t^Cconnection to server at \"pg.neon.tech\" (3.18.6.96), port 5432 failed: \n\tPrevious connection kept\n\ttristan957=> \n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 24 Jul 2023 14:57:58 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "Hi,\n\n> That sounds like a much better solution. Attached you will find a v4\n> that implements your suggestion. Please let me know if there is\n> something that I missed. 
I can confirm that the patch works.\n>\n> $ ./build/src/bin/psql/psql -h pg.neon.tech\n> NOTICE: Welcome to Neon!\n> Authenticate by visiting:\n> https://console.neon.tech/psql_session/xxx\n>\n>\n> NOTICE: Connecting to database.\n> psql (17devel, server 15.3)\n> SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)\n> Type \"help\" for help.\n>\n> tristan957=> \\c\n> NOTICE: Welcome to Neon!\n> Authenticate by visiting:\n> https://console.neon.tech/psql_session/yyy\n>\n>\n> ^Cconnection to server at \"pg.neon.tech\" (3.18.6.96), port 5432 failed:\n> Previous connection kept\n> tristan957=>\n\nI went through CFbot and found out that some tests are timing out.\nLinks:\nhttps://cirrus-ci.com/task/6735437444677632\nhttps://cirrus-ci.com/task/4536414189125632\nhttps://cirrus-ci.com/task/5046587584413696\nhttps://cirrus-ci.com/task/6172487491256320\nhttps://cirrus-ci.com/task/5609537537835008\n\nSome tests which are timing out are as follows:\n[00:48:49.243] Summary of Failures:\n[00:48:49.243]\n[00:48:49.243] 4/270 postgresql:regress / regress/regress TIMEOUT 1000.01s\n[00:48:49.243] 6/270 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\nTIMEOUT 1000.02s\n[00:48:49.243] 34/270 postgresql:recovery /\nrecovery/027_stream_regress TIMEOUT 1000.02s\n[00:48:49.243] 48/270 postgresql:plpgsql / plpgsql/regress TIMEOUT 1000.02s\n[00:48:49.243] 49/270 postgresql:plperl / plperl/regress TIMEOUT 1000.01s\n[00:48:49.243] 61/270 postgresql:dblink / dblink/regress TIMEOUT 1000.03s\n[00:48:49.243] 89/270 postgresql:postgres_fdw / postgres_fdw/regress\nTIMEOUT 1000.01s\n[00:48:49.243] 93/270 postgresql:test_decoding / test_decoding/regress\nTIMEOUT 1000.02s\n[00:48:49.243] 110/270 postgresql:test_extensions /\ntest_extensions/regress TIMEOUT 1000.03s\n[00:48:49.243] 123/270 postgresql:unsafe_tests / unsafe_tests/regress\nTIMEOUT 1000.02s\n[00:48:49.243] 152/270 postgresql:pg_dump / pg_dump/010_dump_connstr\nTIMEOUT 1000.03s\n\nJust want to make sure you 
are aware of the issue.\n\nThanks\nShlok Kumar Kyal\n\n\n", "msg_date": "Thu, 2 Nov 2023 14:33:40 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Thu Nov 2, 2023 at 4:03 AM CDT, Shlok Kyal wrote:\n> Hi,\n>\n> > That sounds like a much better solution. Attached you will find a v4\n> > that implements your suggestion. Please let me know if there is\n> > something that I missed. I can confirm that the patch works.\n> >\n> > $ ./build/src/bin/psql/psql -h pg.neon.tech\n> > NOTICE: Welcome to Neon!\n> > Authenticate by visiting:\n> > https://console.neon.tech/psql_session/xxx\n> >\n> >\n> > NOTICE: Connecting to database.\n> > psql (17devel, server 15.3)\n> > SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)\n> > Type \"help\" for help.\n> >\n> > tristan957=> \\c\n> > NOTICE: Welcome to Neon!\n> > Authenticate by visiting:\n> > https://console.neon.tech/psql_session/yyy\n> >\n> >\n> > ^Cconnection to server at \"pg.neon.tech\" (3.18.6.96), port 5432 failed:\n> > Previous connection kept\n> > tristan957=>\n>\n> I went through CFbot and found out that some tests are timing out.\n> Links:\n> https://cirrus-ci.com/task/6735437444677632\n> https://cirrus-ci.com/task/4536414189125632\n> https://cirrus-ci.com/task/5046587584413696\n> https://cirrus-ci.com/task/6172487491256320\n> https://cirrus-ci.com/task/5609537537835008\n>\n> Some tests which are timing out are as follows:\n> [00:48:49.243] Summary of Failures:\n> [00:48:49.243]\n> [00:48:49.243] 4/270 postgresql:regress / regress/regress TIMEOUT 1000.01s\n> [00:48:49.243] 6/270 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n> TIMEOUT 1000.02s\n> [00:48:49.243] 34/270 postgresql:recovery /\n> recovery/027_stream_regress TIMEOUT 1000.02s\n> [00:48:49.243] 48/270 postgresql:plpgsql / plpgsql/regress TIMEOUT 1000.02s\n> [00:48:49.243] 49/270 postgresql:plperl 
/ plperl/regress TIMEOUT 1000.01s\n> [00:48:49.243] 61/270 postgresql:dblink / dblink/regress TIMEOUT 1000.03s\n> [00:48:49.243] 89/270 postgresql:postgres_fdw / postgres_fdw/regress\n> TIMEOUT 1000.01s\n> [00:48:49.243] 93/270 postgresql:test_decoding / test_decoding/regress\n> TIMEOUT 1000.02s\n> [00:48:49.243] 110/270 postgresql:test_extensions /\n> test_extensions/regress TIMEOUT 1000.03s\n> [00:48:49.243] 123/270 postgresql:unsafe_tests / unsafe_tests/regress\n> TIMEOUT 1000.02s\n> [00:48:49.243] 152/270 postgresql:pg_dump / pg_dump/010_dump_connstr\n> TIMEOUT 1000.03s\n>\n> Just want to make sure you are aware of the issue.\n\nHi Shlok!\n\nThanks for taking a look. I will check these failures out to see if \nI can reproduce.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 06 Nov 2023 11:16:34 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On 06/11/2023 19:16, Tristan Partin wrote:\n>>> That sounds like a much better solution. Attached you will find a v4\n>>> that implements your suggestion. Please let me know if there is\n>>> something that I missed. I can confirm that the patch works.\n\nThis patch is missing a select(). It will busy loop until the connection \nis established or cancelled.\n\nShouldn't we also clear CancelRequested after we have cancelled the \nattempt? Otherwise, any subsequent attempts will immediately fail too.\n\nShould we use 'cancel_pressed' here rather than CancelRequested? To be \nhonest, I don't understand the difference, so that's a genuine question. 
\nThere was an attempt at unifying them in the past but it was reverted in \ncommit 5d43c3c54d.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 16 Nov 2023 15:33:52 +0100", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Thu Nov 16, 2023 at 8:33 AM CST, Heikki Linnakangas wrote:\n> On 06/11/2023 19:16, Tristan Partin wrote:\n> >>> That sounds like a much better solution. Attached you will find a v4\n> >>> that implements your suggestion. Please let me know if there is\n> >>> something that I missed. I can confirm that the patch works.\n>\n> This patch is missing a select(). It will busy loop until the connection \n> is established or cancelled.\n\nIf I add a wait (select, poll, etc.), then I can't control-C during the \nblocking call, so it doesn't really solve the problem. On Linux, we have \nsignalfds which seem like the perfect solution to this problem, \"wait on \nthe pq socket or SIGINT.\" But that doesn't translate well to other \noperating systems :(.\n\n> tristan957=> \\c\n> NOTICE: Welcome to Neon!\n> Authenticate by visiting:\n> https://console.neon.tech/psql_session/XXXX\n> \n> \n> ^CTerminated\n\nYou can see here that I can't terminate the command. Where you see \n\"Terminated\" is me running `kill $(pgrep psql)` in another terminal.\n\n> Shouldn't we also clear CancelRequested after we have cancelled the \n> attempt? Otherwise, any subsequent attempts will immediately fail too.\n\nAfter switching to cancel_pressed, I don't think so. 
See this bit of \ncode in the psql main loop:\n\n> /* main loop to get queries and execute them */\n> while (successResult == EXIT_SUCCESS)\n> {\n> \t/*\n> \t * Clean up after a previous Control-C\n> \t */\n> \tif (cancel_pressed)\n> \t{\n> \t\tif (!pset.cur_cmd_interactive)\n> \t\t{\n> \t\t\t/*\n> \t\t\t * You get here if you stopped a script with Ctrl-C.\n> \t\t\t */\n> \t\t\tsuccessResult = EXIT_USER;\n> \t\t\tbreak;\n> \t\t}\n> \n> \t\tcancel_pressed = false;\n> \t}\n\n> Should we use 'cancel_pressed' here rather than CancelRequested? To be \n> honest, I don't understand the difference, so that's a genuine question. \n> There was an attempt at unifying them in the past but it was reverted in \n> commit 5d43c3c54d.\n\nThe more I look at this, the more I don't understand... I need to look \nat this harder, but wanted to get this email out. Switched to \ncancel_pressed though. Thanks for pointing it out. Going to wait to send \na v2 while more discussion occurs.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 22 Nov 2023 11:29:42 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On 22/11/2023 19:29, Tristan Partin wrote:\n> On Thu Nov 16, 2023 at 8:33 AM CST, Heikki Linnakangas wrote:\n>> On 06/11/2023 19:16, Tristan Partin wrote:\n>>>>> That sounds like a much better solution. Attached you will find a v4\n>>>>> that implements your suggestion. Please let me know if there is\n>>>>> something that I missed. I can confirm that the patch works.\n>>\n>> This patch is missing a select(). It will busy loop until the connection\n>> is established or cancelled.\n> \n> If I add a wait (select, poll, etc.), then I can't control-C during the\n> blocking call, so it doesn't really solve the problem.\n\nHmm, they should return with EINTR on signal. At least on Linux; I'm not \nsure how portable that is. 
See signal(7) man page, section \"Interruption \nof system calls and library functions by signal handlers\". You could \nalso use a timeout like 5 s to ensure that you wake up and notice that \nthe signal was received eventually, even if it doesn't interrupt the \nblocking call.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 22 Nov 2023 23:00:03 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed Nov 22, 2023 at 3:00 PM CST, Heikki Linnakangas wrote:\n> On 22/11/2023 19:29, Tristan Partin wrote:\n> > On Thu Nov 16, 2023 at 8:33 AM CST, Heikki Linnakangas wrote:\n> >> On 06/11/2023 19:16, Tristan Partin wrote:\n> >>>>> That sounds like a much better solution. Attached you will find a v4\n> >>>>> that implements your suggestion. Please let me know if there is\n> >>>>> something that I missed. I can confirm that the patch works.\n> >>\n> >> This patch is missing a select(). It will busy loop until the connection\n> >> is established or cancelled.\n> > \n> > If I add a wait (select, poll, etc.), then I can't control-C during the\n> > blocking call, so it doesn't really solve the problem.\n>\n> Hmm, they should return with EINTR on signal. At least on Linux; I'm not \n> sure how portable that is. See signal(7) man page, section \"Interruption \n> of system calls and library functions by signal handlers\". You could \n> also use a timeout like 5 s to ensure that you wake up and notice that \n> the signal was received eventually, even if it doesn't interrupt the \n> blocking call.\n\nHa, you're right. I had this working yesterday, but convinced myself it \ndidn't. I had a do while loop wrapping the blocking call. Here is a v4, \nwhich seems to pass the tests that were pointed out to be failing \nearlier.\n\nNoticed that I copy-pasted pqSocketPoll() into the psql code. I think \nthis may be controversial. 
Not sure what the best solution to the issue \nis. I will pay attention to the buildfarm animals when they pick this \nup.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Wed, 22 Nov 2023 15:29:45 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On 22/11/2023 23:29, Tristan Partin wrote:\n> Ha, you're right. I had this working yesterday, but convinced myself it\n> didn't. I had a do while loop wrapping the blocking call. Here is a v4,\n> which seems to pass the tests that were pointed out to be failing\n> earlier.\n\nThanks! This suffers from a classic race condition:\n\n> +\t\t\tif (cancel_pressed)\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tsock = PQsocket(n_conn);\n> +\t\t\tif (sock == -1)\n> +\t\t\t\tbreak;\n> +\n> +\t\t\trc = pqSocketPoll(sock, for_read, !for_read, (time_t)-1);\n> +\t\t\tAssert(rc != 0); /* Timeout is impossible. */\n> +\t\t\tif (rc == -1)\n> +\t\t\t{\n> +\t\t\t\tsuccess = false;\n> +\t\t\t\tbreak;\n> +\t\t\t}\n\nIf a signal arrives between the \"if (cancel_pressed)\" check and \npqSocketPoll(), pgSocketPoll will \"miss\" the signal and block \nindefinitely. There are three solutions to this:\n\n1. Use a timeout, so that you wake up periodically to check for any \nmissed signals. Easy solution, the downsides are that you will not react \nas quickly if the signal is missed, and you waste some cycles by waking \nup unnecessarily.\n\n2. The Self-pipe trick: https://cr.yp.to/docs/selfpipe.html. We also use \nthat in src/backend/storage/ipc/latch.c. It's portable, but somewhat \ncomplicated.\n\n3. Use pselect() or ppoll(). They were created specifically to address \nthis problem. 
Not fully portable, I think it's missing on Windows at least.\n\nMaybe use pselect() if it's available, and select() with a timeout if \nit's not.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 23 Nov 2023 11:19:14 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Thu Nov 23, 2023 at 3:19 AM CST, Heikki Linnakangas wrote:\n> On 22/11/2023 23:29, Tristan Partin wrote:\n> > Ha, you're right. I had this working yesterday, but convinced myself it\n> > didn't. I had a do while loop wrapping the blocking call. Here is a v4,\n> > which seems to pass the tests that were pointed out to be failing\n> > earlier.\n>\n> Thanks! This suffers from a classic race condition:\n>\n> > +\t\t\tif (cancel_pressed)\n> > +\t\t\t\tbreak;\n> > +\n> > +\t\t\tsock = PQsocket(n_conn);\n> > +\t\t\tif (sock == -1)\n> > +\t\t\t\tbreak;\n> > +\n> > +\t\t\trc = pqSocketPoll(sock, for_read, !for_read, (time_t)-1);\n> > +\t\t\tAssert(rc != 0); /* Timeout is impossible. */\n> > +\t\t\tif (rc == -1)\n> > +\t\t\t{\n> > +\t\t\t\tsuccess = false;\n> > +\t\t\t\tbreak;\n> > +\t\t\t}\n>\n> If a signal arrives between the \"if (cancel_pressed)\" check and \n> pqSocketPoll(), pgSocketPoll will \"miss\" the signal and block \n> indefinitely. There are three solutions to this:\n>\n> 1. Use a timeout, so that you wake up periodically to check for any \n> missed signals. Easy solution, the downsides are that you will not react \n> as quickly if the signal is missed, and you waste some cycles by waking \n> up unnecessarily.\n>\n> 2. The Self-pipe trick: https://cr.yp.to/docs/selfpipe.html. We also use \n> that in src/backend/storage/ipc/latch.c. It's portable, but somewhat \n> complicated.\n>\n> 3. Use pselect() or ppoll(). They were created specifically to address \n> this problem. 
Not fully portable, I think it's missing on Windows at least.\n>\n> Maybe use pselect() if it's available, and select() with a timeout if \n> it's not.\n\nFirst I have ever heard of this race condition, and now I will commit it \nto durable storage :). I went ahead and did option #3 that you proposed. \nOn Windows, I have a 1 second timeout for the select/poll. That seemed \nlike a good balance of user experience and spinning the CPU. But I am \nopen to other values. I don't have a Windows VM, but maybe I should set \none up...\n\nI am not completely in love with the code I have written. Lots of \nconditional compilation which makes it hard to read. Looking forward to \nanother round of review to see what y'all think.\n\nFor what it's worth, I did try #2, but since psql installs its own \nSIGINT handler, the code became really hairy.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Wed, 29 Nov 2023 11:48:51 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed Nov 29, 2023 at 11:48 AM CST, Tristan Partin wrote:\n> I am not completely in love with the code I have written. Lots of \n> conditional compilation which makes it hard to read. Looking forward to \n> another round of review to see what y'all think.\n\nOk. Here is a patch which just uses select(2) with a timeout of 1s or \npselect(2) if it is available. 
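The select(2) fallback is shaped roughly like this (a sketch only, not the patch itself; the real code also drives PQconnectPoll() and leaves error reporting to the caller):

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <sys/select.h>
#include <unistd.h>

static volatile sig_atomic_t cancel_pressed = 0;

/*
 * Fallback for platforms without pselect(): wake up at least once per
 * second, so a SIGINT that slips in between the cancel_pressed check
 * and select() stalls us for at most one second instead of forever.
 * Returns 1 when readable, -2 when cancelled, -1 on error.
 */
static int
wait_readable_fallback(int sock)
{
	for (;;)
	{
		fd_set		rfds;
		struct timeval timeout;
		int			rc;

		if (cancel_pressed)
			return -2;

		FD_ZERO(&rfds);
		FD_SET(sock, &rfds);
		timeout.tv_sec = 1;
		timeout.tv_usec = 0;

		rc = select(sock + 1, &rfds, NULL, NULL, &timeout);
		if (rc > 0)
			return 1;			/* socket is ready */
		if (rc < 0 && errno != EINTR)
			return -1;			/* real error */
		/* rc == 0 (timeout) or EINTR: loop and re-check cancel_pressed */
	}
}
```

With this shape a missed signal costs at most one extra second of waiting, which is the trade-off discussed above.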
I also moved the state machine processing \ninto its own function.\n\nThanks for your comments thus far.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 05 Dec 2023 12:35:31 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:35 PM Tristan Partin <tristan@neon.tech> wrote:\n> On Wed Nov 29, 2023 at 11:48 AM CST, Tristan Partin wrote:\n> > I am not completely in love with the code I have written. Lots of\n> > conditional compilation which makes it hard to read. Looking forward to\n> > another round of review to see what y'all think.\n>\n> Ok. Here is a patch which just uses select(2) with a timeout of 1s or\n> pselect(2) if it is available. I also moved the state machine processing\n> into its own function.\n\nHmm, this adds a function called pqSocketPoll to psql/command.c. But\nthere already is such a function in libpq/fe-misc.c. It's not quite\nthe same, though. Having the same function in two different modules\nwith subtly different definitions seems like it's probably not the\nright idea.\n\nAlso, this seems awfully complicated for something that's supposed to\n(checks notes) wait for a file descriptor to become ready for I/O for\nup to 1 second. It's 160 lines of code in pqSocketPoll and another 50\nin the caller. If this is really the simplest way to do this, we\nreally need to rethink our libpq APIs or, uh, something. I wonder if\nwe could make this simpler by, say:\n\n- always use select\n- always use a 1-second timeout\n- if it returns faster because the race condition doesn't happen, cool\n- if it doesn't, well then you get to wait for a second, oh well\n\nI don't feel strongly that that's the right way to go and if Heikki or\nsome other committer wants to go with this more complex conditional\napproach, that's fine. 
But to me it seems like a lot.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 13:24:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri Jan 5, 2024 at 12:24 PM CST, Robert Haas wrote:\n> On Tue, Dec 5, 2023 at 1:35 PM Tristan Partin <tristan@neon.tech> wrote:\n> > On Wed Nov 29, 2023 at 11:48 AM CST, Tristan Partin wrote:\n> > > I am not completely in love with the code I have written. Lots of\n> > > conditional compilation which makes it hard to read. Looking forward to\n> > > another round of review to see what y'all think.\n> >\n> > Ok. Here is a patch which just uses select(2) with a timeout of 1s or\n> > pselect(2) if it is available. I also moved the state machine processing\n> > into its own function.\n>\n> Hmm, this adds a function called pqSocketPoll to psql/command.c. But\n> there already is such a function in libpq/fe-misc.c. It's not quite\n> the same, though. Having the same function in two different modules\n> with subtly different definitions seems like it's probably not the\n> right idea.\n\nYep, not tied to the function name. Happy to rename as anyone suggests.\n\n> Also, this seems awfully complicated for something that's supposed to\n> (checks notes) wait for a file descriptor to become ready for I/O for\n> up to 1 second. It's 160 lines of code in pqSocketPoll and another 50\n> in the caller. If this is really the simplest way to do this, we\n> really need to rethink our libpq APIs or, uh, something. 
I wonder if\n> we could make this simpler by, say:\n>\n> - always use select\n> - always use a 1-second timeout\n> - if it returns faster because the race condition doesn't happen, cool\n> - if it doesn't, well then you get to wait for a second, oh well\n>\n> I don't feel strongly that that's the right way to go and if Heikki or\n> some other committer wants to go with this more complex conditional\n> approach, that's fine. But to me it seems like a lot.\n\nI think the way to go is to expose some variation of libpq's\npqSocketPoll(), which I would be happy to put together a patch for.\nMaking frontends, psql in this case, have to reimplement the polling\nlogic doesn't strike me as fruitful, which is essentially what I have\ndone.\n\nThanks for your input!\n\nBut also wait a second. In my last email, I said:\n\n> Ok. Here is a patch which just uses select(2) with a timeout of 1s or\n> pselect(2) if it is available. I also moved the state machine processing\n> into its own function.\n\nThis is definitely not the patch I meant to send. What the? Here is what\nI meant to send, but I stand by my comment that we should just expose\na variation of pqSocketPoll().\n\n--\nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 08 Jan 2024 00:03:45 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon, Jan 8, 2024 at 1:03 AM Tristan Partin <tristan@neon.tech> wrote:\n> I think the way to go is to expose some variation of libpq's\n> pqSocketPoll(), which I would be happy to put together a patch for.\n> Making frontends, psql in this case, have to reimplement the polling\n> logic doesn't strike me as fruitful, which is essentially what I have\n> done.\n\nI encourage further exploration of this line of attack. 
I fear that if\nI were to commit something like what you've posted up until now,\npeople would complain that that code was too ugly to live, and I'd\nhave a hard time telling them that they're wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:45:06 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri Jan 12, 2024 at 10:45 AM CST, Robert Haas wrote:\n> On Mon, Jan 8, 2024 at 1:03 AM Tristan Partin <tristan@neon.tech> wrote:\n> > I think the way to go is to expose some variation of libpq's\n> > pqSocketPoll(), which I would be happy to put together a patch for.\n> > Making frontends, psql in this case, have to reimplement the polling\n> > logic doesn't strike me as fruitful, which is essentially what I have\n> > done.\n>\n> I encourage further exploration of this line of attack. I fear that if\n> I were to commit something like what you've posted up until now,\n> people would complain that that code was too ugly to live, and I'd\n> have a hard time telling them that they're wrong.\n\nCompletely agree. Let me look into this. 
Perhaps I can get something up \nnext week or the week after.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:13:40 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri Jan 12, 2024 at 11:13 AM CST, Tristan Partin wrote:\n> On Fri Jan 12, 2024 at 10:45 AM CST, Robert Haas wrote:\n> > On Mon, Jan 8, 2024 at 1:03 AM Tristan Partin <tristan@neon.tech> wrote:\n> > > I think the way to go is to expose some variation of libpq's\n> > > pqSocketPoll(), which I would be happy to put together a patch for.\n> > > Making frontends, psql in this case, have to reimplement the polling\n> > > logic doesn't strike me as fruitful, which is essentially what I have\n> > > done.\n> >\n> > I encourage further exploration of this line of attack. I fear that if\n> > I were to commit something like what you've posted up until now,\n> > people would complain that that code was too ugly to live, and I'd\n> > have a hard time telling them that they're wrong.\n>\n> Completely agree. Let me look into this. Perhaps I can get something up \n> next week or the week after.\n\nNot next week, but here is a respin. I've exposed pqSocketPoll as \nPQsocketPoll and am just using that. You can see the diff is so much \nsmaller, which is great!\n\nIn order to fight the race condition, I am just using a 1 second timeout \ninstead of trying to integrate pselect or ppoll. We could add \na PQsocketPPoll() to support those use cases, but I am not sure how \navailable pselect and ppoll are. I guess on Windows we don't have \npselect. I don't think using the pipe trick that Heikki mentioned \nearlier is suitable to expose via an API in libpq, but someone else \nmight have a different opinion.\n\nMaybe this is good enough until someone complains? 
Most people would \nprobably just chalk any latency between keypress and cancellation as \nnetwork latency and not a hardcoded 1 second.\n\nThanks for your feedback Robert!\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 30 Jan 2024 16:19:54 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Tue, 30 Jan 2024 at 23:20, Tristan Partin <tristan@neon.tech> wrote:\n> Not next week, but here is a respin. I've exposed pqSocketPoll as\n> PQsocketPoll and am just using that. You can see the diff is so much\n> smaller, which is great!\n\nThe exports.txt change should be made part of patch 0001, also docs\nare missing for the newly exposed function. PQsocketPoll does look\nlike quite a nice API to expose from libpq.\n\n\n", "msg_date": "Tue, 30 Jan 2024 23:42:50 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Tue Jan 30, 2024 at 4:42 PM CST, Jelte Fennema-Nio wrote:\n> On Tue, 30 Jan 2024 at 23:20, Tristan Partin <tristan@neon.tech> wrote:\n> > Not next week, but here is a respin. I've exposed pqSocketPoll as\n> > PQsocketPoll and am just using that. You can see the diff is so much\n> > smaller, which is great!\n>\n> The exports.txt change should be made part of patch 0001, also docs\n> are missing for the newly exposed function. PQsocketPoll does look\n> like quite a nice API to expose from libpq.\n\nI was looking for documentation of PQsocket(), but didn't find any \nstandalone (unless I completely missed it). So I just copied how \nPQsocket() is documented in PQconnectPoll(). I am happy to document it \nseparately if you think it would be useful.\n\nMy bad on the exports.txt change being in the wrong commit. Git \nthings... 
I will fix it on the next re-spin after resolving the previous \nparagraph.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 31 Jan 2024 12:07:12 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, 31 Jan 2024 at 19:07, Tristan Partin <tristan@neon.tech> wrote:\n> I was looking for documentation of PQsocket(), but didn't find any\n> standalone (unless I completely missed it). So I just copied how\n> PQsocket() is documented in PQconnectPoll(). I am happy to document it\n> separately if you think it would be useful.\n\nPQsocket its documentation is here:\nhttps://www.postgresql.org/docs/16/libpq-status.html#LIBPQ-PQSOCKET\n\nI think PQsocketPoll should probably have its API documentation\n(describing arguments and return value at minimum) in this section of\nthe docs: https://www.postgresql.org/docs/16/libpq-async.html\nAnd I think the example in the PQconnnectPoll API docs could benefit\nfrom having PQsocketPoll used in that example.\n\n\n", "msg_date": "Wed, 31 Jan 2024 19:34:30 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, Jan 31, 2024 at 1:07 PM Tristan Partin <tristan@neon.tech> wrote:\n> I was looking for documentation of PQsocket(), but didn't find any\n> standalone (unless I completely missed it). So I just copied how\n> PQsocket() is documented in PQconnectPoll(). I am happy to document it\n> separately if you think it would be useful.\n\nAs Jelte said back at the end of January, I think you just completely\nmissed it. 
The relevant part of libpq.sgml starts like this:\n\n <varlistentry id=\"libpq-PQsocket\">\n <term><function>PQsocket</function><indexterm><primary>PQsocket</primary></indexterm></term>\n\nAs far as I know, we document all of the exported libpq functions in\nthat SGML file.\n\nI think there's no real reason why we couldn't get at least 0001 and\nmaybe also 0002 into this release, but only if you move quickly on\nthis. Feature freeze is approaching rapidly.\n\nModulo the documentation changes, I think 0001 is pretty much ready to\ngo. 0002 needs comments. I'm also not so sure about the name\nprocess_connection_state_machine(); it seems a little verbose. How\nabout something like wait_until_connected()? And maybe put it below\ndo_connect() instead of above.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:59:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri Mar 22, 2024 at 9:59 AM CDT, Robert Haas wrote:\n> On Wed, Jan 31, 2024 at 1:07 PM Tristan Partin <tristan@neon.tech> wrote:\n> > I was looking for documentation of PQsocket(), but didn't find any\n> > standalone (unless I completely missed it). So I just copied how\n> > PQsocket() is documented in PQconnectPoll(). I am happy to document it\n> > separately if you think it would be useful.\n>\n> As Jelte said back at the end of January, I think you just completely\n> missed it. The relevant part of libpq.sgml starts like this:\n>\n> <varlistentry id=\"libpq-PQsocket\">\n> <term><function>PQsocket</function><indexterm><primary>PQsocket</primary></indexterm></term>\n>\n> As far as I know, we document all of the exported libpq functions in\n> that SGML file.\n>\n> I think there's no real reason why we couldn't get at least 0001 and\n> maybe also 0002 into this release, but only if you move quickly on\n> this. 
Feature freeze is approaching rapidly.\n>\n> Modulo the documentation changes, I think 0001 is pretty much ready to\n> go. 0002 needs comments. I'm also not so sure about the name\n> process_connection_state_machine(); it seems a little verbose. How\n> about something like wait_until_connected()? And maybe put it below\n> do_connect() instead of above.\n\nRobert, Jelte:\n\nSorry for taking a while to get back to y'all. I have taken your \nfeedback into consideration for v9. This is my first time writing \nPostgres docs, so I'm ready for another round of editing :).\n\nRobert, could you point out some places where comments would be useful \nin 0002? I did rename the function and moved it as suggested, thanks! In \nturn, I also realized I forgot a prototype, so also added it.\n\nThis patchset has also been rebased on master, so the libpq export \nnumber for PQsocketPoll was bumped.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Fri, 22 Mar 2024 12:05:20 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri, Mar 22, 2024 at 1:05 PM Tristan Partin <tristan@neon.tech> wrote:\n> Sorry for taking a while to get back to y'all. I have taken your\n> feedback into consideration for v9. This is my first time writing\n> Postgres docs, so I'm ready for another round of editing :).\n\nYeah, that looks like it needs some wordsmithing yet. I can take a\ncrack at that at some point, but here are a few notes:\n\n- \"takes care of\" and \"working through the state machine\" seem quite\nvague to me.\n- the meanings of forRead and forWrite don't seem to be documented.\n- \"retuns\" is a typo.\n\n> Robert, could you point out some places where comments would be useful\n> in 0002? I did rename the function and moved it as suggested, thanks! 
In\n> turn, I also realized I forgot a prototype, so also added it.\n\nWell, for starters, I'd give the function a header comment.\n\nTelling me that a 1 second timeout is a 1 second timeout is less\nuseful than telling me why we've picked a 1 second timeout. Presumably\nthe answer is \"so we can respond to cancel interrupts in a timely\nfashion\", but I'd make that explicit.\n\nIt might be worth a comment saying that PQsocket() shouldn't be\nhoisted out of the loop because it could be a multi-host connection\nstring and the socket might change under us. Unless that's not true,\nin which case we should hoist the PQsocket() call out of the loop.\n\nI think it would also be worth a comment saying that we don't need to\nreport errors here, as the caller will do that; we just need to wait\nuntil the connection is known to have either succeeded or failed, or\nuntil the user presses cancel.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:17:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri Mar 22, 2024 at 12:17 PM CDT, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 1:05 PM Tristan Partin <tristan@neon.tech> wrote:\n> > Sorry for taking a while to get back to y'all. I have taken your\n> > feedback into consideration for v9. This is my first time writing\n> > Postgres docs, so I'm ready for another round of editing :).\n>\n> Yeah, that looks like it needs some wordsmithing yet. I can take a\n> crack at that at some point, but here are a few notes:\n>\n> - \"takes care of\" and \"working through the state machine\" seem quite\n> vague to me.\n> - the meanings of forRead and forWrite don't seem to be documented.\n> - \"retuns\" is a typo.\n>\n> > Robert, could you point out some places where comments would be useful\n> > in 0002? I did rename the function and moved it as suggested, thanks! 
In\n> > turn, I also realized I forgot a prototype, so also added it.\n>\n> Well, for starters, I'd give the function a header comment.\n>\n> Telling me that a 1 second timeout is a 1 second timeout is less\n> useful than telling me why we've picked a 1 second timeout. Presumably\n> the answer is \"so we can respond to cancel interrupts in a timely\n> fashion\", but I'd make that explicit.\n>\n> It might be worth a comment saying that PQsocket() shouldn't be\n> hoisted out of the loop because it could be a multi-host connection\n> string and the socket might change under us. Unless that's not true,\n> in which case we should hoist the PQsocket() call out of the loop.\n>\n> I think it would also be worth a comment saying that we don't need to\n> report errors here, as the caller will do that; we just need to wait\n> until the connection is known to have either succeeded or failed, or\n> until the user presses cancel.\n\nThis is good feedback, thanks. I have added comments where you \nsuggested. I reworded the PQsocketPoll docs to hopefully meet your \nexpectations. I am using the term \"connection sequence\" which is from \nthe PQconnectStartParams docs, so hopefully people will be able to make \nthat connection. I wrote documentation for \"forRead\" and \"forWrite\" as \nwell.\n\nI had a question about parameter naming. Right now I have a mix of \ncamel-case and snake-case in the function signature since that is what \nI inherited. Should I change that to be consistent? If so, which case \nwould you like?\n\nThanks for your continued reviews.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Fri, 22 Mar 2024 15:58:42 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Fri, Mar 22, 2024 at 4:58 PM Tristan Partin <tristan@neon.tech> wrote:\n> I had a question about parameter naming. 
Right now I have a mix of\n> camel-case and snake-case in the function signature since that is what\n> I inherited. Should I change that to be consistent? If so, which case\n> would you like?\n\nUh... PostgreSQL is kind of the wild west in that regard. The thing to\ndo is look for nearby precedents, but that doesn't help much here\nbecause in the very same file, libpq-fe.h, we have:\n\nextern int PQsetResultAttrs(PGresult *res, int numAttributes,\nPGresAttDesc *attDescs);\nextern int PQsetvalue(PGresult *res, int tup_num, int field_num,\nchar *value, int len);\n\nSince the existing naming is consistent with one of those two styles,\nI'd probably just leave it be.\n\n+ The function returns a value greater than <literal>0</literal>\nif the specified condition\n+ is met, <literal>0</literal> if a timeout occurred, or\n<literal>-1</literal> if an error\n+ or interrupt occurred. In the event <literal>forRead</literal> and\n\nWe either need to tell people how to find out which error it was, or\nif that's not possible and we can't reasonably make it possible, we\nneed to tell them why they shouldn't care. Because there's nothing\nmore delightful than someone who shows up and says \"hey, I tried to do\nXYZ, and I got an error,\" as if that were sufficient information for\nme to do something useful.\n\n+ <literal>end_time</literal> is the time in the future in\nseconds starting from the UNIX\n+ epoch in which you would like the function to return if the\ncondition is not met.\n\nThis sentence seems a bit contorted to me, like maybe Yoda wrote it. I\nwas about to try to rephrase it and maybe split it in two when I\nwondered why we need to document how time_t works at all. 
Can't we\njust say something like \"If end_time is not -1, it specifies the time\nat which this function should stop waiting for the condition to be\nmet\" -- and maybe move it to the end of the first paragraph, so it's\nbefore where we list the meanings of the return values?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 14:44:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon Mar 25, 2024 at 1:44 PM CDT, Robert Haas wrote:\n> On Fri, Mar 22, 2024 at 4:58 PM Tristan Partin <tristan@neon.tech> wrote:\n> > I had a question about parameter naming. Right now I have a mix of\n> > camel-case and snake-case in the function signature since that is what\n> > I inherited. Should I change that to be consistent? If so, which case\n> > would you like?\n>\n> Uh... PostgreSQL is kind of the wild west in that regard. The thing to\n> do is look for nearby precedents, but that doesn't help much here\n> because in the very same file, libpq-fe.h, we have:\n>\n> extern int PQsetResultAttrs(PGresult *res, int numAttributes,\n> PGresAttDesc *attDescs);\n> extern int PQsetvalue(PGresult *res, int tup_num, int field_num,\n> char *value, int len);\n>\n> Since the existing naming is consistent with one of those two styles,\n> I'd probably just leave it be.\n>\n> + The function returns a value greater than <literal>0</literal>\n> if the specified condition\n> + is met, <literal>0</literal> if a timeout occurred, or\n> <literal>-1</literal> if an error\n> + or interrupt occurred. In the event <literal>forRead</literal> and\n>\n> We either need to tell people how to find out which error it was, or\n> if that's not possible and we can't reasonably make it possible, we\n> need to tell them why they shouldn't care. 
Because there's nothing\n> more delightful than someone who shows up and says \"hey, I tried to do\n> XYZ, and I got an error,\" as if that were sufficient information for\n> me to do something useful.\n>\n> + <literal>end_time</literal> is the time in the future in\n> seconds starting from the UNIX\n> + epoch in which you would like the function to return if the\n> condition is not met.\n>\n> This sentence seems a bit contorted to me, like maybe Yoda wrote it. I\n> was about to try to rephrase it and maybe split it in two when I\n> wondered why we need to document how time_t works at all. Can't we\n> just say something like \"If end_time is not -1, it specifies the time\n> at which this function should stop waiting for the condition to be\n> met\" -- and maybe move it to the end of the first paragraph, so it's\n> before where we list the meanings of the return values?\n\nIncorporated feedback, I have :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 01 Apr 2024 11:04:00 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Mon, Apr 1, 2024 at 12:04 PM Tristan Partin <tristan@neon.tech> wrote:\n> > This sentence seems a bit contorted to me, like maybe Yoda wrote it.\n>\n> Incorporated feedback, I have :).\n\nCommitted it, I did. My thanks for working on this issue, I extend.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 10:32:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Tue Apr 2, 2024 at 9:32 AM CDT, Robert Haas wrote:\n> On Mon, Apr 1, 2024 at 12:04 PM Tristan Partin <tristan@neon.tech> wrote:\n> > > This sentence seems a bit contorted to me, like maybe Yoda wrote it.\n> >\n> > Incorporated feedback, I have :).\n>\n> Committed it, I did. 
My thanks for working on this issue, I extend.\n\nThanks to everyone for their reviews! Patch is in a much better place \nthan when it started.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 02 Apr 2024 09:35:09 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Tue, 2 Apr 2024 at 16:33, Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed it, I did. My thanks for working on this issue, I extend.\n\nLooking at the committed version of this patch, the pg_unreachable\ncalls seemed weird to me. 1 is actually incorrect, thus possibly\nresulting in undefined behaviour. And for the other call an imho\nbetter fix would be to remove the now 21 year unused enum variant,\ninstead of introducing its only reference in the whole codebase.\n\nAttached are two trivial patches, feel free to remove both of the\npg_unreachable calls.", "msg_date": "Wed, 3 Apr 2024 15:32:39 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed Apr 3, 2024 at 8:32 AM CDT, Jelte Fennema-Nio wrote:\n> On Tue, 2 Apr 2024 at 16:33, Robert Haas <robertmhaas@gmail.com> wrote:\n> > Committed it, I did. My thanks for working on this issue, I extend.\n>\n> Looking at the committed version of this patch, the pg_unreachable\n> calls seemed weird to me. 1 is actually incorrect, thus possibly\n> resulting in undefined behaviour. And for the other call an imho\n> better fix would be to remove the now 21 year unused enum variant,\n> instead of introducing its only reference in the whole codebase.\n>\n> Attached are two trivial patches, feel free to remove both of the\n> pg_unreachable calls.\n\nPatches look good. 
Sorry about causing you to do some work.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 03 Apr 2024 09:20:36 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> Looking at the committed version of this patch, the pg_unreachable\n> calls seemed weird to me. 1 is actually incorrect, thus possibly\n> resulting in undefined behaviour. And for the other call an imho\n> better fix would be to remove the now 21 year unused enum variant,\n> instead of introducing its only reference in the whole codebase.\n\nIf we do the latter, we will almost certainly get pushback from\ndistros who check for library ABI breaks. I fear the comment\nsuggesting that we could remove it someday is too optimistic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Apr 2024 10:31:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, 3 Apr 2024 at 16:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we do the latter, we will almost certainly get pushback from\n> distros who check for library ABI breaks. I fear the comment\n> suggesting that we could remove it someday is too optimistic.\n\nAlright, changed patch 0002 to keep the variant. But remove it from\nthe recently added switch/case. 
And also updated the comment to remove\nthe \"for awhile\".", "msg_date": "Wed, 3 Apr 2024 16:50:07 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed Apr 3, 2024 at 9:50 AM CDT, Jelte Fennema-Nio wrote:\n> On Wed, 3 Apr 2024 at 16:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If we do the latter, we will almost certainly get pushback from\n> > distros who check for library ABI breaks. I fear the comment\n> > suggesting that we could remove it someday is too optimistic.\n>\n> Alright, changed patch 0002 to keep the variant. But remove it from\n> the recently added switch/case. And also updated the comment to remove\n> the \"for awhile\".\n\nRemoving from the switch statement causes a warning:\n\n> [1/2] Compiling C object src/bin/psql/psql.p/command.c.o\n> ../src/bin/psql/command.c: In function ‘wait_until_connected’:\n> ../src/bin/psql/command.c:3803:17: warning: enumeration value ‘PGRES_POLLING_ACTIVE’ not handled in switch [-Wswitch]\n> 3803 | switch (PQconnectPoll(conn))\n> | ^~~~~~\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 03 Apr 2024 09:55:08 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, 3 Apr 2024 at 16:55, Tristan Partin <tristan@neon.tech> wrote:\n> Removing from the switch statement causes a warning:\n>\n> > [1/2] Compiling C object src/bin/psql/psql.p/command.c.o\n> > ../src/bin/psql/command.c: In function ‘wait_until_connected’:\n> > ../src/bin/psql/command.c:3803:17: warning: enumeration value ‘PGRES_POLLING_ACTIVE’ not handled in switch [-Wswitch]\n> > 3803 | switch (PQconnectPoll(conn))\n> > | ^~~~~~\n\n\nOfcourse... 
fixed now", "msg_date": "Wed, 3 Apr 2024 17:05:44 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed Apr 3, 2024 at 10:05 AM CDT, Jelte Fennema-Nio wrote:\n> On Wed, 3 Apr 2024 at 16:55, Tristan Partin <tristan@neon.tech> wrote:\n> > Removing from the switch statement causes a warning:\n> >\n> > > [1/2] Compiling C object src/bin/psql/psql.p/command.c.o\n> > > ../src/bin/psql/command.c: In function ‘wait_until_connected’:\n> > > ../src/bin/psql/command.c:3803:17: warning: enumeration value ‘PGRES_POLLING_ACTIVE’ not handled in switch [-Wswitch]\n> > > 3803 | switch (PQconnectPoll(conn))\n> > > | ^~~~~~\n>\n>\n> Ofcourse... fixed now\n\nI think patch 2 makes it worse. The value in -Wswitch is that when new \nenum variants are added, the developer knows the locations to update. \nAdding a default case makes -Wswitch pointless.\n\nPatch 1 is still good. The comment change in patch 2 is good too!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 03 Apr 2024 10:16:58 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, Apr 3, 2024 at 11:17 AM Tristan Partin <tristan@neon.tech> wrote:\n> I think patch 2 makes it worse. The value in -Wswitch is that when new\n> enum variants are added, the developer knows the locations to update.\n> Adding a default case makes -Wswitch pointless.\n>\n> Patch 1 is still good. The comment change in patch 2 is good too!\n\nIt seems to me that 0001 should either remove the pg_unreachable()\ncall or change the break to a return, but not both. 
The commit message\ntries to justify doing both by saying that the pg_unreachable() call\ndoesn't have much value, but there's not much value in omitting\npg_unreachable() from unreachable places, either, so I'm not\nconvinced.\n\nI agree with Tristan's analysis of 0002.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 15:49:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Wed, 3 Apr 2024 at 21:49, Robert Haas <robertmhaas@gmail.com> wrote:\n> It seems to me that 0001 should either remove the pg_unreachable()\n> call or change the break to a return, but not both.\n\nUpdated the patch to only remove the pg_unreachable call (and keep the breaks).\n\n> but there's not much value in omitting\n> pg_unreachable() from unreachable places, either, so I'm not\n> convinced.\n\nBut I don't agree with this. Having pg_unreachable in places where it\nbrings no perf benefit has two main downsides:\n\n1. It's extra lines of code\n2. If a programmer/reviewer is not careful about maintaining this\nunreachable invariant (now or in the future), undefined behavior can\nbe introduced quite easily.\n\nAlso, I'd expect any optimizing compiler to know that the code after\nthe loop is already unreachable if there are no break/goto statements\nin its body.\n\n> I agree with Tristan's analysis of 0002.\n\nUpdated 0002, to only change the comment. 
But honestly I don't think\nusing \"default\" really introduces many chances for future bugs in this\ncase, since it seems very unlikely we'll ever add new variants to the\nPostgresPollingStatusType enum.", "msg_date": "Thu, 4 Apr 2024 12:43:29 +0200", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "On Thu, Apr 4, 2024 at 6:43 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> But I don't agree with this. Having pg_unreachable in places where it\n> brings no perf benefit has two main downsides:\n>\n> 1. It's extra lines of code\n> 2. If a programmer/reviewer is not careful about maintaining this\n> unreachable invariant (now or in the future), undefined behavior can\n> be introduced quite easily.\n>\n> Also, I'd expect any optimizing compiler to know that the code after\n> the loop is already unreachable if there are no break/goto statements\n> in its body.\n\nI'm not super-well-informed about the effects of pg_unreachable(), but\nI feel like it's a little strange to complain that it's an extra line\nof code. Presumably, it translates into no executable instructions,\nand might even reduce the size of the generated code. Now, it could\nstill be a cognitive burden on human readers, but hopefully it should\ndo the exact opposite: it should make the code easier to understand by\ndocumenting that the author thought that a certain point in the code\ncould not be reached. In that sense, it's like a code comment.\n\nLikewise, it certainly is true that one must be careful to put\npg_unreachable() only in places that aren't reachable, and to add or\nremove it as appropriate if the set of unreachable places changes. 
But\nit's also true that one must be careful to put any other thing that\nlooks like a function call only in places where that thing is\nappropriate.\n\n> > I agree with Tristan's analysis of 0002.\n>\n> Updated 0002, to only change the comment. But honestly I don't think\n> using \"default\" really introduces many chances for future bugs in this\n> case, since it seems very unlikely we'll ever add new variants to the\n> PostgresPollingStatusType enum.\n\nThat might be true, but we've avoided using default: for switches over\nlots of other enum types in lots of other places in PostgreSQL, and\noverall it's saved us from lots of bugs. I'm not a stickler about\nthis; if there's some reason not to do it in some particular case,\nthat's fine with me. But where it's possible to do it without any real\ndisadvantage, I think it's a healthy practice.\n\nI have committed these versions of the patches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 16:32:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" }, { "msg_contents": "Thanks Jelte and Robert for the extra effort to correct my mistake. \nI apologize. Bad copy-paste from a previous revision of the patch at \nsome point.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 04 Apr 2024 15:40:34 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": true, "msg_subject": "Re: psql not responding to SIGINT upon db reconnection" } ]
[ { "msg_contents": "We got a complaint at [1] about how a not-so-unreasonable LDAP\nconfiguration can hit the \"authentication file token too long,\nskipping\" error case in hba.c's next_token(). I think we've\nseen similar complaints before, although a desultory archives\nsearch didn't turn one up.\n\nA minimum-change response would be to increase the MAX_TOKEN\nconstant from 256 to (say) 1K or 10K. But it wouldn't be all\nthat hard to replace the fixed-size buffer with a StringInfo,\nas attached.\n\nGiven the infrequency of complaints, I'm inclined to apply\nthe more thorough fix only in HEAD, and to just raise MAX_TOKEN\nin the back branches. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/PH0PR04MB8294A4C5A65D9D492CBBD349C002A%40PH0PR04MB8294.namprd04.prod.outlook.com", "msg_date": "Mon, 24 Jul 2023 13:53:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "On Mon, Jul 24, 2023 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> We got a complaint at [1] about how a not-so-unreasonable LDAP\n> configuration can hit the \"authentication file token too long,\n> skipping\" error case in hba.c's next_token(). I think we've\n> seen similar complaints before, although a desultory archives\n> search didn't turn one up.\n>\n> A minimum-change response would be to increase the MAX_TOKEN\n> constant from 256 to (say) 1K or 10K. But it wouldn't be all\n> that hard to replace the fixed-size buffer with a StringInfo,\n> as attached.\n>\n\n+1 for replacing it with StringInfo. And the patch LGTM!\n\n\n>\n> Given the infrequency of complaints, I'm inclined to apply\n> the more thorough fix only in HEAD, and to just raise MAX_TOKEN\n> in the back branches. 
Thoughts?\n>\n\nIt makes sense to change it only in HEAD.\n\nRegards,\n\n--\nFabrízio de Royes Mello\n\n", "msg_date": "Mon, 24 Jul 2023 15:07:07 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "On Mon, Jul 24, 2023 at 03:07:07PM -0300, Fabrízio de Royes Mello wrote:\n> On Mon, Jul 24, 2023 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> We got a complaint at [1] about how a not-so-unreasonable LDAP\n>> configuration can hit the \"authentication file token too long,\n>> skipping\" error case in hba.c's next_token(). I think we've\n>> seen similar complaints before, although a desultory archives\n>> search didn't turn one up.\n>>\n>> A minimum-change response would be to increase the MAX_TOKEN\n>> constant from 256 to (say) 1K or 10K. But it wouldn't be all\n>> that hard to replace the fixed-size buffer with a StringInfo,\n>> as attached.\n>>\n> \n> +1 for replacing it with StringInfo. 
And the patch LGTM!\n\nI'd suggest noting that next_token() clears buf, but otherwise it looks\nreasonable to me, too.\n\n>> Given the infrequency of complaints, I'm inclined to apply\n>> the more thorough fix only in HEAD, and to just raise MAX_TOKEN\n>> in the back branches. Thoughts?\n>>\n> \n> It makes sense to change it only in HEAD.\n\nI wouldn't be opposed to back-patching the more thorough fix, but I don't\nfeel too strongly about it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 24 Jul 2023 15:58:31 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "On Mon, Jul 24, 2023 at 03:58:31PM -0700, Nathan Bossart wrote:\n> On Mon, Jul 24, 2023 at 03:07:07PM -0300, Fabrízio de Royes Mello wrote:\n>>> Given the infrequency of complaints, I'm inclined to apply\n>>> the more thorough fix only in HEAD, and to just raise MAX_TOKEN\n>>> in the back branches. Thoughts?\n>>>\n>> \n>> It makes sense to change it only in HEAD.\n> \n> I wouldn't be opposed to back-patching the more thorough fix, but I don't\n> feel too strongly about it.\n\n- * The token, if any, is returned at *buf (a buffer of size bufsz), and\n+ * The token, if any, is returned into *buf, and\n * *lineptr is advanced past the token.\n\nThe comment indentation is a bit off here.\n\n * In event of an error, log a message at ereport level elevel, and also\n- * set *err_msg to a string describing the error. Currently the only\n- * possible error is token too long for buf.\n+ * set *err_msg to a string describing the error. Currently no error\n+ * conditions are defined.\n\nI find the choice to keep err_msg in next_token() a bit confusing in\nnext_field_expand(). 
If no errors are possible, why not just get rid\nof it?\n\nFWIW, I don't feel strongly about backpatching that either, so for the\nback branches I'd choose to bump up the token size limit and call it a\nday.\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 09:14:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I find the choice to keep err_msg in next_token() a bit confusing in\n> next_field_expand(). If no errors are possible, why not just get rid\n> of it?\n\nYeah, I dithered about that. I felt like removing it only to put it\nback later would be silly, but then again maybe there won't ever be\na need to put it back. I'm OK with removing it if there's not\nobjections.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 24 Jul 2023 20:21:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "On Mon, Jul 24, 2023 at 08:21:54PM -0400, Tom Lane wrote:\n> Yeah, I dithered about that. I felt like removing it only to put it\n> back later would be silly, but then again maybe there won't ever be\n> a need to put it back. 
I'm OK with removing it if there's not\n> objections.\n\nSeeing what this code does, the odds of needing an err_msg seem rather\nlow to me, so I would just remove it for now for the sake of clarity.\nIt can always be added later if need be.\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 09:45:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Seeing what this code does, the odds of needing an err_msg seem rather\n> low to me, so I would just remove it for now for the sake of clarity.\n> It can always be added later if need be.\n\nDone that way. Thanks for looking at it!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jul 2023 12:11:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" }, { "msg_contents": "On Thu, Jul 27, 2023 at 12:11:38PM -0400, Tom Lane wrote:\n> Done that way. Thanks for looking at it!\n\nThanks for applying!\n--\nMichael", "msg_date": "Fri, 28 Jul 2023 07:42:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Removing the fixed-size buffer restriction in hba.c" } ]
[ { "msg_contents": "Hi,\n\nCurrently InitCatalogCache() has only one Assert() for cacheinfo[]\nthat checks .reloid. The proposed patch adds sanity checks for the\nrest of the fields.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 24 Jul 2023 21:58:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "On Mon, Jul 24, 2023 at 09:58:15PM +0300, Aleksander Alekseev wrote:\n> Currently InitCatalogCache() has only one Assert() for cacheinfo[]\n> that checks .reloid. The proposed patch adds sanity checks for the\n> rest of the fields.\n\n- Assert(cacheinfo[cacheId].reloid != 0);\n+ Assert(cacheinfo[cacheId].reloid != InvalidOid);\n+ Assert(cacheinfo[cacheId].indoid != InvalidOid);\nNo objections about checking for the index OID given out to catch\nany failures at an early stage before doing an actual lookup? I guess\nthat you've added an incorrect entry and noticed the problem only when\ntriggering a syscache lookup for the new incorrect entry?\n\n+ Assert(key[i] != InvalidAttrNumber);\n\nYeah, same here.\n\n+ Assert((cacheinfo[cacheId].nkeys > 0) && (cacheinfo[cacheId].nkeys <= 4));\n\nIs this addition actually useful? The maximum number of lookup keys\nis enforced at API level with the SearchSysCache() family. 
Or perhaps\nyou've been able to break this layer in a way I cannot imagine and\nwere looking at a quick way to spot the inconsistency?\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 08:02:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Hi Michael,\n\n> - Assert(cacheinfo[cacheId].reloid != 0);\n> + Assert(cacheinfo[cacheId].reloid != InvalidOid);\n> + Assert(cacheinfo[cacheId].indoid != InvalidOid);\n> No objections about checking for the index OID given out to catch\n> any failures at an early stage before doing an actual lookup? I guess\n> that you've added an incorrect entry and noticed the problem only when\n> triggering a syscache lookup for the new incorrect entry?\n> + Assert(key[i] != InvalidAttrNumber);\n>\n> Yeah, same here.\n\nNot sure if I understand your question or suggestion. Thes proposed\npatch adds Asserts() to InitCatalogCache() and InitCatCache() called\nby it, before any actual lookups.\n\n> + Assert((cacheinfo[cacheId].nkeys > 0) && (cacheinfo[cacheId].nkeys <= 4));\n>\n> Is this addition actually useful?\n\nI believe it is. 
One can mistakenly use 5+ keys in cacheinfo[] declaration, e.g:\n\n```\ndiff --git a/src/backend/utils/cache/syscache.c\nb/src/backend/utils/cache/syscache.c\nindex 4883f36a67..c7a8a5b8bb 100644\n--- a/src/backend/utils/cache/syscache.c\n+++ b/src/backend/utils/cache/syscache.c\n@@ -159,6 +159,7 @@ static const struct cachedesc cacheinfo[] = {\n KEY(Anum_pg_amop_amopfamily,\n Anum_pg_amop_amoplefttype,\n Anum_pg_amop_amoprighttype,\n+ Anum_pg_amop_amoplefttype,\n Anum_pg_amop_amopstrategy),\n 64\n },\n```\n\nWhat happens in this case, at least with GCC - the warning is shown\nbut the code compiles fine:\n\n```\n1032/1882] Compiling C object\nsrc/backend/postgres_lib.a.p/utils_cache_syscache.c.o\nsrc/include/catalog/pg_amop_d.h:30:35: warning: excess elements in\narray initializer\n 30 | #define Anum_pg_amop_amopstrategy 5\n | ^\n../src/backend/utils/cache/syscache.c:127:48: note: in definition of macro ‘KEY’\n 127 | #define KEY(...) VA_ARGS_NARGS(__VA_ARGS__), { __VA_ARGS__ }\n | ^~~~~~~~~~~\n../src/backend/utils/cache/syscache.c:163:25: note: in expansion of\nmacro ‘Anum_pg_amop_amopstrategy’\n 163 | Anum_pg_amop_amopstrategy),\n | ^~~~~~~~~~~~~~~~~~~~~~~~~\nsrc/include/catalog/pg_amop_d.h:30:35: note: (near initialization for\n‘cacheinfo[4].key’)\n 30 | #define Anum_pg_amop_amopstrategy 5\n | ^\n../src/backend/utils/cache/syscache.c:127:48: note: in definition of macro ‘KEY’\n 127 | #define KEY(...) VA_ARGS_NARGS(__VA_ARGS__), { __VA_ARGS__ }\n | ^~~~~~~~~~~\n../src/backend/utils/cache/syscache.c:163:25: note: in expansion of\nmacro ‘Anum_pg_amop_amopstrategy’\n 163 | Anum_pg_amop_amopstrategy),\n | ^~~~~~~~~~~~~~~~~~~~~~~~~\n```\n\nAll in all I'm not claiming that these proposed new Asserts() make a\nhuge difference. I merely noticed that InitCatalogCache checks only\ncacheinfo[cacheId].reloid. 
Checking the rest of the values would be\nhelpful for the developers and will not impact the performance of the\nrelease builds.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 25 Jul 2023 14:35:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "On Tue, Jul 25, 2023 at 02:35:31PM +0300, Aleksander Alekseev wrote:\n>> - Assert(cacheinfo[cacheId].reloid != 0);\n>> + Assert(cacheinfo[cacheId].reloid != InvalidOid);\n>> + Assert(cacheinfo[cacheId].indoid != InvalidOid);\n>> No objections about checking for the index OID given out to catch\n>> any failures at an early stage before doing an actual lookup? I guess\n>> that you've added an incorrect entry and noticed the problem only when\n>> triggering a syscache lookup for the new incorrect entry?\n>> + Assert(key[i] != InvalidAttrNumber);\n>>\n>> Yeah, same here.\n> \n> Not sure if I understand your question or suggestion. Thes proposed\n> patch adds Asserts() to InitCatalogCache() and InitCatCache() called\n> by it, before any actual lookups.\n\nThat was more a question. I was wondering if it was something you've\nnoticed while working on a different patch because you somewhat\nassigned incorrect values in the syscache array, but it looks like you\nhave noticed that while scanning the code.\n\n>> + Assert((cacheinfo[cacheId].nkeys > 0) && (cacheinfo[cacheId].nkeys <= 4));\n>>\n>> Is this addition actually useful?\n> \n> I believe it is. One can mistakenly use 5+ keys in cacheinfo[] declaration, e.g:\n\nStill it's hard to miss at compile time. I think that I would remove\nthis one.\n\n> All in all I'm not claiming that these proposed new Asserts() make a\n> huge difference. I merely noticed that InitCatalogCache checks only\n> cacheinfo[cacheId].reloid. 
Checking the rest of the values would be\n> helpful for the developers and will not impact the performance of the\n> release builds.\n\nNo counter-arguments on that. The checks for the index OID and the\nkeys allow to catch failures in these structures at an earlier stage\nwhen initializing a backend. Agreed that it can be useful for\ndevelopers.\n--\nMichael", "msg_date": "Wed, 26 Jul 2023 08:45:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Hi Michael,\n\n> That was more a question. I was wondering if it was something you've\n> noticed while working on a different patch because you somewhat\n> assigned incorrect values in the syscache array, but it looks like you\n> have noticed that while scanning the code.\n\nOh, got it. That's correct.\n\n> Still it's hard to miss at compile time. I think that I would remove\n> this one.\n\nFair enough. Here is the updated patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 26 Jul 2023 15:50:53 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "HI,\n\n> On Jul 26, 2023, at 20:50, Aleksander Alekseev <aleksander@timescale.com> wrote:\n> \n> Hi Michael,\n> \n>> That was more a question. I was wondering if it was something you've\n>> noticed while working on a different patch because you somewhat\n>> assigned incorrect values in the syscache array, but it looks like you\n>> have noticed that while scanning the code.\n> \n> Oh, got it. That's correct.\n> \n>> Still it's hard to miss at compile time. I think that I would remove\n>> this one.\n> \n> Fair enough. 
Here is the updated patch.\n> \n> -- \n> Best regards,\n> Aleksander Alekseev\n> <v2-0001-Check-more-invariants-during-syscache-initializat.patch>\n\n\n\nLGTM.\n\n```\n-\t\tAssert(cacheinfo[cacheId].reloid != 0);\n+\t\tAssert(cacheinfo[cacheId].reloid != InvalidOid);\n```\n\nThat remind me to have a look other codes, and a grep search `oid != 0` show there are several files using old != 0.\n\n```\n.//src/bin/pg_resetwal/pg_resetwal.c:\tif (set_oid != 0)\n.//src/bin/pg_resetwal/pg_resetwal.c:\tif (set_oid != 0)\n.//src/bin/pg_dump/pg_backup_tar.c:\t\t\tif (oid != 0)\n.//src/bin/pg_dump/pg_backup_custom.c:\twhile (oid != 0)\n.//src/bin/pg_dump/pg_backup_custom.c:\twhile (oid != 0)\n```\nThat is another story…I would like provide a patch if it worths.\n\n\nZhang Mingli\nhttps://www.hashdata.xyz\n\n\nHI,On Jul 26, 2023, at 20:50, Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Michael,That was more a question.  I was wondering if it was something you'venoticed while working on a different patch because you somewhatassigned incorrect values in the syscache array, but it looks like youhave noticed that while scanning the code.Oh, got it. That's correct.Still it's hard to miss at compile time.  I think that I would removethis one.Fair enough. 
Here is the updated patch.-- Best regards,Aleksander Alekseev<v2-0001-Check-more-invariants-during-syscache-initializat.patch>LGTM.```- Assert(cacheinfo[cacheId].reloid != 0);+ Assert(cacheinfo[cacheId].reloid != InvalidOid);```That remind me to have a look other codes, and a grep search `oid != 0` show there are several files using old != 0.```.//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0).//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0).//src/bin/pg_dump/pg_backup_tar.c: if (oid != 0).//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0).//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0)```That is another story…I would like provide a patch if it worths.\nZhang Minglihttps://www.hashdata.xyz", "msg_date": "Wed, 26 Jul 2023 21:29:53 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Hi Zhang,\n\n> That remind me to have a look other codes, and a grep search `oid != 0` show there are several files using old != 0.\n>\n> ```\n> .//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0)\n> .//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0)\n> .//src/bin/pg_dump/pg_backup_tar.c: if (oid != 0)\n> .//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0)\n> .//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0)\n> ```\n> That is another story…I would like provide a patch if it worths.\n\nGood catch. 
Please do so.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 26 Jul 2023 16:59:31 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n\n> Hi Zhang,\n>\n>> That remind me to have a look other codes, and a grep search `oid != 0` show there are several files using old != 0.\n>>\n>> ```\n>> .//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0)\n>> .//src/bin/pg_resetwal/pg_resetwal.c: if (set_oid != 0)\n>> .//src/bin/pg_dump/pg_backup_tar.c: if (oid != 0)\n>> .//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0)\n>> .//src/bin/pg_dump/pg_backup_custom.c: while (oid != 0)\n>> ```\n>> That is another story…I would like provide a patch if it worths.\n>\n> Good catch. Please do so.\n\nShouldn't these be calling `OidIsValid(…)`, not comparing directly to\n`InvalidOid`?\n\n~/src/postgresql (master $)$ git grep '[Oo]id [!=]= 0' | wc -l\n18\n~/src/postgresql (master $)$ git grep '[!=]= InvalidOid' | wc -l\n296\n~/src/postgresql (master $)$ git grep 'OidIsValid' |\nwc -l\n1462\n\n- ilmari\n\n\n", "msg_date": "Wed, 26 Jul 2023 15:28:55 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Hi,\n\n> Shouldn't these be calling `OidIsValid(…)`, not comparing directly to\n> `InvalidOid`?\n\nThey should, thanks. Here is the updated patch. I made sure there are\nno others != InvalidOid checks in syscache.c and catcache.c which are\naffected by my patch. 
I didn't change any other files since Zhang\nwants to propose the corresponding patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 26 Jul 2023 19:01:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "On Wed, Jul 26, 2023 at 03:28:55PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Shouldn't these be calling `OidIsValid(…)`, not comparing directly to\n> `InvalidOid`?\n\nI think that they should use OidIsValid() on correctness ground, and\nthat's the style I prefer. Now, I don't think that these are worth\nchanging now except if some of the surrounding code is changed for a\ndifferent reason. One reason is that this increases the odds of\nconflicts when backpatching on a stable branch.\n--\nMichael", "msg_date": "Thu, 27 Jul 2023 10:05:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "Hi,\n\n> On Jul 27, 2023, at 09:05, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> One reason is that this increases the odds of\n> conflicts when backpatching on a stable branch.\n\nAgree. We could suggest to use OidIsValid() for new patches during review.\n\nZhang Mingli\nhttps://www.hashdata.xyz\n\n\nHi,On Jul 27, 2023, at 09:05, Michael Paquier <michael@paquier.xyz> wrote: One reason is that this increases the odds ofconflicts when backpatching on a stable branch.Agree. 
We could suggest to use OidIsValid() for new patches during review.\nZhang Minglihttps://www.hashdata.xyz", "msg_date": "Thu, 27 Jul 2023 09:48:10 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" }, { "msg_contents": "On Wed, Jul 26, 2023 at 07:01:11PM +0300, Aleksander Alekseev wrote:\n> Hi,\n> \n> > Shouldn't these be calling `OidIsValid(…)`, not comparing directly to\n> > `InvalidOid`?\n> \n> They should, thanks. Here is the updated patch. I made sure there are\n> no others != InvalidOid checks in syscache.c and catcache.c which are\n> affected by my patch. I didn't change any other files since Zhang\n> wants to propose the corresponding patch.\n\nWhile arguing about OidIsValid() for the relation OID part, I found a\nbit surprising that you did not consider AttributeNumberIsValid() for\nthe new check on the keys. I've switched the second check to that,\nand applied v3.\n--\nMichael", "msg_date": "Thu, 27 Jul 2023 10:57:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check more invariants during syscache initialization" } ]
[ { "msg_contents": "Hi,\r\n\r\nWhile recently looking into partition maintenance, I found a case in which\r\nDETACH PARTITION FINALIZE could case deadlocks. This occurs when a\r\nALTER TABLE DETACH CONCURRENTLY, followed by a cancel, the followed by\r\nan ALTER TABLE DETACH FINALIZE, and this sequence of steps occur in the\r\nmiddle of other REPEATABLE READ/SERIALIZABLE transactions. These RR/SERIALIZABLE\r\nshould not accessed the partition until the FINALIZE step starts. See the attached\r\nrepro.\r\n\r\nThis seems to occur as the FINALIZE is calling WaitForOlderSnapshot [1]\r\nto make sure that all older snapshots are completed before the finalize\r\nis completed and the detached partition is removed from the parent table\r\nassociation.\r\n\r\nWaitForOlderSnapshots is used here to ensure that snapshots older than\r\nthe start of the ALTER TABLE DETACH CONCURRENTLY are completely removed\r\nto guarantee consistency, however it does seem to cause deadlocks for at\r\nleast RR/SERIALIZABLE transactions.\r\n\r\nThere are cases [2] in which certain operations are accepted as not being\r\nMVCC-safe, and now I am wondering if this is another case? Deadlocks are\r\nnot a good scenario, and WaitForOlderSnapshot does not appear to do\r\nanything for READ COMMITTED transactions.\r\n\r\nSo do we actually need WaitForOlderSnapshot in the FINALIZE code? 
and\r\nCould be acceptable that this operation is marked as not MVCC-safe like\r\nthe other aforementioned operations?\r\n\r\nPerhaps I am missing some important point here, so any feedback will be\r\nappreciated.\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n[1] https://github.com/postgres/postgres/blob/master/src/backend/commands/tablecmds.c#L18757-L18764\r\n[2] https://www.postgresql.org/docs/current/mvcc-caveats.html", "msg_date": "Mon, 24 Jul 2023 19:40:04 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "WaitForOlderSnapshots in DETACH PARTITION causes deadlocks" }, { "msg_contents": "On Mon, Jul 24, 2023 at 07:40:04PM +0000, Imseih (AWS), Sami wrote:\n> WaitForOlderSnapshots is used here to ensure that snapshots older than\n> the start of the ALTER TABLE DETACH CONCURRENTLY are completely removed\n> to guarantee consistency, however it does seem to cause deadlocks for at\n> least RR/SERIALIZABLE transactions.\n> \n> There are cases [2] in which certain operations are accepted as not being\n> MVCC-safe, and now I am wondering if this is another case? Deadlocks are\n> not a good scenario, and WaitForOlderSnapshot does not appear to do\n> anything for READ COMMITTED transactions.\n> \n> So do we actually need WaitForOlderSnapshot in the FINALIZE code? and\n> Could be acceptable that this operation is marked as not MVCC-safe like\n> the other aforementioned operations?\n\nI guess that there is an argument with lifting that a bit. Based on\nthe test case your are providing, a deadlock occuring between the\nFINALIZE and a scan of the top-level partitioned table in a\ntransaction that began before the DETACH CONCURRENTLY does not seem\nlike the best answer to have from the user perspective. 
I got to\nwonder whether there is room to make the wait for older snapshots in\nthe finalize phase more robust, though, and actually make it wait\nuntil the first transaction commits rather than fail because of a\ndeadlock like that.\n\n> Perhaps I am missing some important point here, so any feedback will be\n> appreciated.\n\nAdding Alvaro in CC as the author of 71f4c8c6 for input, FYI.\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 16:03:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WaitForOlderSnapshots in DETACH PARTITION causes deadlocks" }, { "msg_contents": "On 2023-Jul-25, Michael Paquier wrote:\n\n> On Mon, Jul 24, 2023 at 07:40:04PM +0000, Imseih (AWS), Sami wrote:\n> > WaitForOlderSnapshots is used here to ensure that snapshots older than\n> > the start of the ALTER TABLE DETACH CONCURRENTLY are completely removed\n> > to guarantee consistency, however it does seem to cause deadlocks for at\n> > least RR/SERIALIZABLE transactions.\n> > \n> > There are cases [2] in which certain operations are accepted as not being\n> > MVCC-safe, and now I am wondering if this is another case? Deadlocks are\n> > not a good scenario, and WaitForOlderSnapshot does not appear to do\n> > anything for READ COMMITTED transactions.\n> > \n> > So do we actually need WaitForOlderSnapshot in the FINALIZE code? and\n> > Could be acceptable that this operation is marked as not MVCC-safe like\n> > the other aforementioned operations?\n> \n> I guess that there is an argument with lifting that a bit. Based on\n> the test case your are providing, a deadlock occuring between the\n> FINALIZE and a scan of the top-level partitioned table in a\n> transaction that began before the DETACH CONCURRENTLY does not seem\n> like the best answer to have from the user perspective. 
I got to\n> wonder whether there is room to make the wait for older snapshots in\n> the finalize phase more robust, though, and actually make it wait\n> until the first transaction commits rather than fail because of a\n> deadlock like that.\n\nIt took a lot of work to get SERIALIZABLE/RR transactions to work with\nDETACHED CONCURRENTLY, so I'm certainly not in a hurry to give up and\njust \"document that it doesn't work\". I don't think that would be an\nacceptable feature regression.\n\nI spent some time yesterday trying to turn the reproducer into an\nisolationtester spec file. I have not succeeded yet, but I'll continue\nwith that this afternoon.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n", "msg_date": "Wed, 26 Jul 2023 15:57:03 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: WaitForOlderSnapshots in DETACH PARTITION causes deadlocks" } ]
[ { "msg_contents": "Please find attached a patch to jumble savepoint name, to prevent certain\ntransactional commands from filling up pg_stat_statements.\nThis has been a problem with some busy systems that use django, which likes\nto wrap everything in uniquely named savepoints. Soon, over 50% of\nyour pg_stat_statements buffer is filled with savepoint stuff, pushing out\nthe more useful queries. As each query is unique, it looks like this:\n\npostgres=# select calls, query, queryid from pg_stat_statements where query\n~ 'save|release|rollback' order by 2;\n\n calls | query | queryid\n-------+----------------------------------------------+----------------------\n 1 | release b900150983cd24fb0d6963f7d28e17f7 | 8797482500264589878\n 1 | release ed9407630eb1000c0f6b63842defa7de | -9206510099095862114\n 1 | rollback | -2049453941623996126\n 1 | rollback to c900150983cd24fb0d6963f7d28e17f7 | -5335832667999552746\n 1 | savepoint b900150983cd24fb0d6963f7d28e17f7 | -1888817254996647181\n 1 | savepoint c47bce5c74f589f4867dbd57e9ca9f80 | 355123032993044571\n 1 | savepoint c900150983cd24fb0d6963f7d28e17f7 | -5921314469994822125\n 1 | savepoint d8f8e0260c64418510cefb2b06eee5cd | -981090856656063578\n 1 | savepoint ed9407630eb1000c0f6b63842defa7de | -25952890433218603\n\nAs the actual name of the savepoint is not particularly useful, the patch\nwill basically ignore the savepoint name and allow things to be collapsed:\n\n calls | query | queryid\n-------+----------------+----------------------\n 2 | release $1 | -7998168840889089775\n 1 | rollback | 3749380189022910195\n 1 | rollback to $1 | -1816677871228308673\n 5 | savepoint $1 | 6160699978368237767\n\nWithout the patch, the only solution is to keep raising\npg_stat_statements.max to larger and larger values to compensate for the\npollution of the\nstatement pool.\n\nCheers,\nGreg", "msg_date": "Mon, 24 Jul 2023 16:09:23 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": 
"Improve pg_stat_statements by making jumble handle savepoint names\n better" }, { "msg_contents": "On Mon, Jul 24, 2023 at 04:09:23PM -0400, Greg Sabino Mullane wrote:\n> Without the patch, the only solution is to keep raising\n> pg_stat_statements.max to larger and larger values to compensate for the\n> pollution of the\n> statement pool.\n\nYes, that can be painful depending on your workload.\n\n bool chain; /* AND CHAIN option */\n+ int location; /* token location, or -1 if unknown */\n } TransactionStmt;\n[...]\n+ if ($f eq 'savepoint_name') {\n+ print $jff \"\\tJUMBLE_LOCATION(location);\\n\";\n+ next;\n+ }\n\nShouldn't this new field be marked as query_jumble_location instead of\nforcing the perl script to do that? Or is there something in the\nstructure of TransactionStmt that makes the move difficult because of\nthe other transaction commands supported?\n\nTesting for new query patterns is very important for such patches.\nCould you add something in pg_stat_statements's utility.sql? I think\nthat you should check the compilations of at least two savepoints with\ndifferent names to see that they compile the same query ID, for a\nbunch of patterns in the grammar, say:\nBEGIN;\nSAVEPOINT p1;\nROLLBACK TO SAVEPOINT p1;\nROLLBACK TRANSACTION TO SAVEPOINT p1;\nRELEASE SAVEPOINT p1;\nSAVEPOINT p2;\nROLLBACK TO SAVEPOINT p2;\nROLLBACK TRANSACTION TO SAVEPOINT p2;\nRELEASE SAVEPOINT p2;\nCOMMIT;\n--\nMichael", "msg_date": "Tue, 25 Jul 2023 07:46:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve pg_stat_statements by making jumble handle savepoint\n names better" }, { "msg_contents": "On Mon, Jul 24, 2023 at 6:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Shouldn't this new field be marked as query_jumble_location\n>\n\nYes, it should. I had some trouble getting it to work that way in the first\nplace, but now I realize it was just my unfamiliarity with this part of the\ncode. 
So thanks for the hint: v2 of the patch is much simplified by adding\ntwo attributes to theTransactionStmt node. I've also added some tests per\nyour suggestion.\n\nUnrelated to this patch, I'm struggling with meson testing. Why doesn't\nthis update the postgres test binary?:\n\nmeson test --suite pg_stat_statements\n\nIt runs \"ninja\" as expected, but it does not put a new\nbuild/tmp_install/home/greg/pg/17/bin/postgres in place until I do a \"meson\ntest\"\n\nCheers,\nGreg", "msg_date": "Tue, 25 Jul 2023 12:37:18 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve pg_stat_statements by making jumble handle savepoint\n names better" }, { "msg_contents": "On Tue, Jul 25, 2023 at 12:37:18PM -0400, Greg Sabino Mullane wrote:\n> Yes, it should. I had some trouble getting it to work that way in the first\n> place, but now I realize it was just my unfamiliarity with this part of the\n> code. So thanks for the hint: v2 of the patch is much simplified by adding\n> two attributes to theTransactionStmt node. I've also added some tests per\n> your suggestion.\n\n- char *savepoint_name; /* for savepoint commands */\n+ char *savepoint_name pg_node_attr(query_jumble_ignore);\n char *gid; /* for two-phase-commit related commands */\n bool chain; /* AND CHAIN option */\n+ int location pg_node_attr(query_jumble_location);\n } TransactionStmt;\n\nYou got that right. Applying query_jumble_ignore makes sure that we\ndon't finish with different query IDs for different savepoint names.\n\n> Unrelated to this patch, I'm struggling with meson testing. Why doesn't\n> this update the postgres test binary?:\n> \n> meson test --suite pg_stat_statements\n\nYes, it should do so, I assume. I've seen that myself. 
That's\ncontrary to what make check does, but perhaps things are better\nintegrated this way with meson.\n\nI think that I'm OK with your proposal as savepoint names are in\ndefined places in these queries (contrary to some of the craziness\nwith BEGIN and the node structure of TransactionStmt, for example).\n\nHas somebody an opinion or a comment to share?\n--\nMichael", "msg_date": "Wed, 26 Jul 2023 07:53:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve pg_stat_statements by making jumble handle savepoint\n names better" }, { "msg_contents": "On Wed, Jul 26, 2023 at 07:53:02AM +0900, Michael Paquier wrote:\n> I think that I'm OK with your proposal as savepoint names are in\n> defined places in these queries (contrary to some of the craziness\n> with BEGIN and the node structure of TransactionStmt, for example).\n> \n> Has somebody an opinion or a comment to share?\n\nWell, on second look this is really simple, so I've applied that after\nadding back some comments that ought to be there and simplifying a bit the\ntests (less queries, same coverage).\n--\nMichael", "msg_date": "Thu, 27 Jul 2023 09:57:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improve pg_stat_statements by making jumble handle savepoint\n names better" } ]
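The effect of the change committed in the thread above can be mimicked in a few lines. The `normalize` and `query_id` helpers below are hypothetical Python stand-ins for PostgreSQL's real jumbling code (which hashes the parse tree, not the text), shown only to illustrate why marking the savepoint name with query_jumble_ignore collapses uniquely named savepoints into a single pg_stat_statements entry.

```python
# A rough sketch (plain Python, not the real jumbling code): once the
# savepoint name no longer contributes to the query ID, every
# SAVEPOINT / RELEASE / ROLLBACK TO statement maps to one entry.

import hashlib
import re

# Hypothetical normalizer: replace the savepoint name with a $1 placeholder,
# mirroring the normalized query text shown earlier in the thread.
SAVEPOINT_RE = re.compile(
    r"^(savepoint|release(?: savepoint)?|rollback(?: transaction)? to(?: savepoint)?)\s+\S+$",
    re.IGNORECASE,
)

def normalize(query):
    m = SAVEPOINT_RE.match(query.strip())
    return f"{m.group(1).lower()} $1" if m else query.strip().lower()

def query_id(query):
    # Stand-in for the 64-bit jumble hash.
    return int.from_bytes(hashlib.sha256(normalize(query).encode()).digest()[:8], "big")

# Uniquely named savepoints now share a single query ID:
assert query_id("savepoint b900150983cd24fb0d6963f7d28e17f7") == \
       query_id("savepoint c47bce5c74f589f4867dbd57e9ca9f80")
assert normalize("release ed9407630eb1000c0f6b63842defa7de") == "release $1"
```

A plain `rollback` has no name to strip, so it keeps its own entry, matching the normalized output Greg showed at the start of the thread.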
[ { "msg_contents": "I've been working on a variety of improvements to nbtree's native\nScalarArrayOpExpr execution. This builds on Tom's work in commit\n9e8da0f7.\n\nAttached patch is still at the prototype stage. I'm posting it as v1 a\nlittle earlier than I usually would because there has been much back\nand forth about it on a couple of other threads involving Tomas Vondra\nand Jeff Davis -- seems like it would be easier to discuss with\nworking code available.\n\nThe patch adds two closely related enhancements to ScalarArrayOp\nexecution by nbtree:\n\n1. Execution of quals with ScalarArrayOpExpr clauses during nbtree\nindex scans (for equality-strategy SK_SEARCHARRAY scan keys) can now\n\"advance the scan's array keys locally\", which sometimes avoids\nsignificant amounts of unneeded pinning/locking of the same set of\nindex pages.\n\nSAOP index scans become capable of eliding primitive index scans for\nthe next set of array keys in line in cases where it isn't truly\nnecessary to descend the B-Tree again. Index scans are now capable of\n\"sticking with the existing leaf page for now\" when it is determined\nthat the end of the current set of array keys is physically close to\nthe start of the next set of array keys (the next set in line to be\nmaterialized by the _bt_advance_array_keys state machine). This is\noften possible.\n\nNaturally, we still prefer to advance the array keys in the\ntraditional way (\"globally\") much of the time. That means we'll\nperform another _bt_first/_bt_search descent of the index, starting a\nnew primitive index scan. Whether we try to skip pages on the leaf\nlevel or stick with the current primitive index scan (by advancing\narray keys locally) is likely to vary a great deal. Even during the\nsame index scan. 
Everything is decided dynamically, which is the only\napproach that really makes sense.\n\nThis optimization can significantly lower the number of buffers pinned\nand locked in cases with significant locality, and/or with many array\nkeys with no matches. The savings (when measured in buffers\npinned/locked) can be as high as 10x, 100x, or even more. Benchmarking\nhas shown that transaction throughput for variants of \"pgbench -S\"\ndesigned to stress the implementation (hundreds of array constants)\nunder concurrent load can have up to 5.5x higher transaction\nthroughput with the patch. Less extreme cases (10 array constants,\nspaced apart) see about a 20% improvement in throughput. There are\nsimilar improvements to latency for the patch, in each case.\n\n2. The optimizer now produces index paths with multiple SAOP clauses\n(or other clauses we can safely treat as \"equality constraints\") on\neach of the leading columns from a composite index -- all while\npreserving index ordering/useful pathkeys in most cases.\n\nThe nbtree work from item 1 is useful even with the simplest IN() list\nquery involving a scan of a single column index. Obviously, it's very\ninefficient for the nbtree code to use 100 primitive index scans when\n1 is sufficient. But that's not really why I'm pursuing this project.\nMy real goal is to implement (or to enable the implementation of) a\nwhole family of useful techniques for multi-column indexes. I call\nthese \"MDAM techniques\", after the 1995 paper \"Efficient Search of\nMultidimensional B-Trees\" [1][2] -- MDAM is short for \"multidimensional\naccess method\". In the context of the paper, \"dimension\" refers to\ndimensions in a decision support system.\n\nThe most compelling cases for the patch all involve multiple index\ncolumns with multiple SAOP clauses (especially where each column\nrepresents a separate \"dimension\", in the DSS sense). It's important\nthat index sort order be preserved whenever possible, too. 
Sometimes this is\ndirectly useful (e.g., because the query has an ORDER BY), but it's\nalways indirectly needed, on the nbtree side (when the optimizations\nare applicable at all). The new nbtree code now has special\nrequirements surrounding SAOP search type scan keys with composite\nindexes. These requirements make changes in the optimizer all but\nessential.\n\nIndex order\n===========\n\nAs I said, there are cases where preserving index order is immediately\nand obviously useful, in and of itself. Let's start there.\n\nHere's a test case that you can run against the regression test database:\n\npg@regression:5432 =# create index order_by_saop on tenk1(two,four,twenty);\nCREATE INDEX\n\npg@regression:5432 =# EXPLAIN (ANALYZE, BUFFERS)\nselect ctid, thousand from tenk1\nwhere two in (0,1) and four in (1,2) and twenty in (1,2)\norder by two, four, twenty limit 20;\n\nWith the patch, this query gets 13 buffer hits. On the master branch,\nit gets 1377 buffer hits -- which exceeds the number you'll get from a\nsequential scan by about 4x. No coaxing was required to get the\nplanner to produce this plan on the master branch. Almost all of the\nsavings shown here are related to heap page buffer hits -- the nbtree\nchanges don't directly help in this particular example (strictly\nspeaking, you only need the optimizer changes to get this result).\n\nObviously, the immediate reason why the patch wins by so much is\nbecause it produces a plan that allows the LIMIT to terminate the scan\nfar sooner. Benoit Tigeot (CC'd) happened to run into this issue\norganically -- that was also due to heap hits, a LIMIT, and so on. As\nluck would have it, I stumbled upon his problem report (in the\nPostgres slack channel) while I was working on this patch. He produced\na fairly complete test case, which was helpful [3]. 
This example is\nmore or less just a distillation of his test case, designed to be easy\nfor a Postgres hacker to try out for themselves.\n\nThere are also variants of this query where a LIMIT isn't the crucial\nfactor, and where index page hits are the problem. This query uses an\nindex-only scan, both on master and with the patch (same index as\nbefore):\n\nselect count(*), two, four, twenty\nfrom tenk1\nwhere two in (0, 1) and four in (1, 2, 3, 4) and\ntwenty in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,14,15)\ngroup by two, four, twenty\norder by two, four, twenty;\n\nThe patch gets 18 buffer hits for this query. That outcome makes\nintuitive sense, since this query is highly unselective -- it's\napproaching the selectivity of the query \"select count(*) from tenk1\".\nThe simple count(*) query gets 19 buffer hits for its own index-only\nscan, confirming that the patch managed to skip only one or two leaf\npages in the complicated \"group by\" variant of the count(*) query.\nOverall, the GroupAggregate plan used by the patch is slower than the\nsimple count(*) case (despite touching fewer pages). But both plans\nhave *approximately* the same execution cost, which makes sense, since\nthey both have very similar selectivities.\n\nThe master branch gets 245 buffer hits for the same group by query.\nThis is almost as many hits as a sequential scan would require -- even\nthough there are precisely zero heap accesses needed by the underlying\nindex-only scan. As with the first example, no planner coaxing was\nrequired to get this outcome on master. It is inherently very\ndifficult to predict how selective a query like this will be using\nconventional statistics. But that's not actually the problem in this\nexample -- the planner gets that part right, on this occasion. The\nreal problem is that there is a multiplicative factor to worry about\non master, when executing multiple SAOPs. That makes it almost\nimpossible to predict the number of pages we'll pin. 
While with the\npatch, scans with multiple SAOPs are often fairly similar to scans\nthat happen to just have one on the leading column.\n\nWith the patch, it is simply impossible for an SAOP index scan to\nvisit any single leaf page more than once. Just like a conventional\nindex scan. Whereas right now, on master, using more than one SAOP\nclause for a multi column index seems to me to be a wildly risky\nproposition. You can easily have cases that work just fine on master,\nwhile only slight variations of the same query see costs explode\n(especially likely with a LIMIT). ISTM that there is significant value\nin having a pretty accurate idea of the worst case in the planner.\n\nGiving nbtree the ability to skip or not skip dynamically, based on\nactual conditions in the index (not on statistics), seems like it has\na lot of potential as a way of improving performance *stability*.\nPersonally I'm most interested in this aspect of the project.\n\nNote: we can visit internal pages more than once, but that seems to\nmake a negligible difference to the overall cost profile of scans. Our\npolicy is to not charge an I/O cost for those pages. Plus, the number\nof internal page accesses is dramatically reduced (it's just not\nguaranteed that there won't be any repeat accesses for internal pages,\nis all).\n\nNote also: there are hard-to-pin-down interactions between the\nimmediate problem on the nbtree side, and the use of filter quals\nrather than true index quals, where the use of index quals is possible\nin principle. Some problematic cases see excessive amounts of heap\npage hits only (as with my first example query). 
Other problematic\ncases see excessive amounts of index page hits, with little to no\nimpact on heap page hits at all (as with my second example query).\nSome combination of the two is also possible.\n\nSafety\n======\n\nAs mentioned already, the ability to \"advance the current set of array\nkeys locally\" during a scan (the nbtree work in item 1) actually\nrelies on the optimizer work in item 2 -- it's not just a question of\nunlocking the potential of the nbtree work. Now I'll discuss those\naspects in a bit more detail.\n\nWithout the optimizer work, nbtree will produce wrong answers to\nqueries, in a way that resembles the complaint addressed by historical\nbugfix commit 807a40c5. This incorrect behavior (if the optimizer were\nto permit it) would only be seen when there are multiple\narrays/columns, and an inequality on a leading column -- just like\nwith that historical bug. (It works both ways, though -- the nbtree\nchanges also make the optimizer changes safe by limiting the worst\ncase, which would otherwise be too much of a risk to countenance. You\ncan't separate one from the other.)\n\nThe primary change on the optimizer side is the addition of logic to\ndifferentiate between the following two cases when building an index\npath in indxpath.c:\n\n* Unsafe: Cases where it's fundamentally unsafe to treat\nmulti-column-with-SAOP-clause index paths as returning tuples in a\nuseful sort order.\n\nFor example, the test case committed as part of that bugfix involves\nan inequality, so it continues to be treated as unsafe.\n\n* Safe: Cases where (at least in theory) bugfix commit 807a40c5 went\nfurther than it really had to.\n\nThose cases get to use the optimization, and usually get to have\nuseful path keys.\n\nMy optimizer changes are very kludgey. I came up with various ad-hoc\nrules to distinguish between the safe and unsafe cases, without ever\nreally placing those changes into some kind of larger framework. 
That\nwas enough to validate the general approach in nbtree, but it\ncertainly has problems -- glaring problems. The biggest problem of all\nmay be my whole \"safe vs unsafe\" framing itself. I know that many of\nthe ostensibly unsafe cases are in fact safe (with the right\ninfrastructure in place), because the MDAM paper says just that. The\noptimizer can't support inequalities right now, but the paper\ndescribes how to support \"NOT IN( )\" lists -- clearly an inequality!\nThe current ad-hoc rules are at best incomplete, and at worst are\naddressing the problem in fundamentally the wrong way.\n\nCNF -> DNF conversion\n=====================\n\nLike many great papers, the MDAM paper takes one core idea, and finds\nways to leverage it to the hilt. Here the core idea is to take\npredicates in conjunctive normal form (an \"AND of ORs\"), and convert\nthem into disjunctive normal form (an \"OR of ANDs\"). DNF quals are\nlogically equivalent to CNF quals, but ideally suited to SAOP-array\nstyle processing by an ordered B-Tree index scan -- they reduce\neverything to a series of non-overlapping primitive index scans, that\ncan be processed in keyspace order. We already do this today in the\ncase of SAOPs, in effect. The nbtree \"next array keys\" state machine\nalready materializes values that can be seen as MDAM style DNF single\nvalue predicates. The state machine works by outputting the cartesian\nproduct of each array as a multi-column index is scanned, but that\ncould be taken a lot further in the future. We can use essentially the\nsame kind of state machine to do everything described in the paper --\nultimately, it just needs to output a list of disjuncts, like the DNF\nclauses that the paper shows in \"Table 3\".\n\nIn theory, anything can be supported via a sufficiently complete CNF\n-> DNF conversion framework. There will likely always be the potential\nfor unsafe/unsupported clauses and/or types in an extensible system\nlike Postgres, though. 
So we will probably need to retain some notion\nof safety. It seems like more of a job for nbtree preprocessing (or\nsome suitably index-AM-agnostic version of the same idea) than the\noptimizer, in any case. But that's not entirely true, either (that\nwould be far too easy).\n\nThe optimizer still needs to optimize. It can't very well do that\nwithout having some kind of advanced notice of what is and is not\nsupported by the index AM. And, the index AM cannot just unilaterally\ndecide that index quals actually should be treated as filter/qpquals,\nafter all -- it doesn't get a veto. So there is a mutual dependency\nthat needs to be resolved. I suspect that there needs to be a two way\nconversation between the optimizer and nbtree code to break the\ndependency -- a callback that does some of the preprocessing work\nduring planning. Tom said something along the same lines in passing,\nwhen discussing the MDAM paper last year [2]. Much work remains here.\n\nSkip Scan\n=========\n\nMDAM encompasses something that people tend to call \"skip scan\" --\nterminology with a great deal of baggage. These days I prefer to call\nit \"filling in missing key predicates\", per the paper. That's much\nmore descriptive, and makes it less likely that people will conflate\nthe techniques with InnoDB style \"loose Index scans\" -- the latter is\na much more specialized/targeted optimization. (I now believe that\nthese are very different things, though I was thrown off by the\nsuperficial similarities for a long time. It's pretty confusing.)\n\nI see this work as a key enabler of \"filling in missing key\npredicates\". MDAM describes how to implement this technique by\napplying the same principles that it applies everywhere else: it\nproposes a scheme that converts predicates from CNF to DNF. 
With just\na little extra logic required to do index probes to feed the\nDNF-generating state machine, on demand.\n\nMore concretely, in Postgres terms: skip scan can be implemented by\ninventing a new placeholder clause that can be composed alongside\nScalarArrayOpExprs, in the same way that multiple ScalarArrayOpExprs\ncan be composed together in the patch already. I'm thinking of a type\nof clause that makes the nbtree code materialize a set of \"array keys\"\nfor a SK_SEARCHARRAY scan key dynamically, via ad-hoc index probes\n(perhaps static approaches would be better for types like boolean,\nwhich the paper contemplates). It should be possible to teach the\n_bt_advance_array_keys state machine to generate those values in\napproximately the same fashion as it already does for\nScalarArrayOpExprs -- and, it shouldn't be too hard to do it in a\nlocalized fashion, allowing everything else to continue to work in the\nsame way without any special concern. This separation of concerns is a\nnice consequence of the way that the MDAM design really leverages\npreprocessing/DNF for everything.\n\nBoth types of clauses can be treated as part of a general class of\nScalarArrayOpExpr-like clauses. Making the rules around\n\"composability\" simple will be important.\n\nAlthough skip scan gets a lot of attention, it's not necessarily the\nmost compelling MDAM technique. It's also not especially challenging\nto implement on top of everything else. It really isn't that special.\nRight now I'm focussed on the big picture, in any case. I want to\nemphasize the very general nature of these techniques. Although I'm\nfocussed on SOAPs in the short term, many queries that don't make use\nof SAOPs should ultimately see similar benefits. For example, the\npaper also describes transformations that apply to BETWEEN/range\npredicates. We might end up needing a third type of expression for\nthose. 
They're all just DNF single value predicates, under the hood.\n\nThoughts?\n\n[1] http://vldb.org/conf/1995/P710.PDF\n[2] https://www.postgresql.org/message-id/2587523.1647982549@sss.pgh.pa.us\n[3] https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d\n--\nPeter Geoghegan", "msg_date": "Mon, 24 Jul 2023 18:33:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, 25 Jul 2023 at 03:34, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I've been working on a variety of improvements to nbtree's native\n> ScalarArrayOpExpr execution. This builds on Tom's work in commit\n> 9e8da0f7.\n\nCool.\n\n> Attached patch is still at the prototype stage. I'm posting it as v1 a\n> little earlier than I usually would because there has been much back\n> and forth about it on a couple of other threads involving Tomas Vondra\n> and Jeff Davis -- seems like it would be easier to discuss with\n> working code available.\n>\n> The patch adds two closely related enhancements to ScalarArrayOp\n> execution by nbtree:\n>\n> 1. Execution of quals with ScalarArrayOpExpr clauses during nbtree\n> index scans (for equality-strategy SK_SEARCHARRAY scan keys) can now\n> \"advance the scan's array keys locally\", which sometimes avoids\n> significant amounts of unneeded pinning/locking of the same set of\n> index pages.\n>\n> SAOP index scans become capable of eliding primitive index scans for\n> the next set of array keys in line in cases where it isn't truly\n> necessary to descend the B-Tree again. Index scans are now capable of\n> \"sticking with the existing leaf page for now\" when it is determined\n> that the end of the current set of array keys is physically close to\n> the start of the next set of array keys (the next set in line to be\n> materialized by the _bt_advance_array_keys state machine). 
This is\n> often possible.\n>\n> Naturally, we still prefer to advance the array keys in the\n> traditional way (\"globally\") much of the time. That means we'll\n> perform another _bt_first/_bt_search descent of the index, starting a\n> new primitive index scan. Whether we try to skip pages on the leaf\n> level or stick with the current primitive index scan (by advancing\n> array keys locally) is likely to vary a great deal. Even during the\n> same index scan. Everything is decided dynamically, which is the only\n> approach that really makes sense.\n>\n> This optimization can significantly lower the number of buffers pinned\n> and locked in cases with significant locality, and/or with many array\n> keys with no matches. The savings (when measured in buffers\n> pined/locked) can be as high as 10x, 100x, or even more. Benchmarking\n> has shown that transaction throughput for variants of \"pgbench -S\"\n> designed to stress the implementation (hundreds of array constants)\n> under concurrent load can have up to 5.5x higher transaction\n> throughput with the patch. Less extreme cases (10 array constants,\n> spaced apart) see about a 20% improvement in throughput. There are\n> similar improvements to latency for the patch, in each case.\n\nConsidering that it caches/reuses the page across SAOP operations, can\n(or does) this also improve performance for index scans on the outer\nside of a join if the order of join columns matches the order of the\nindex?\nThat is, I believe this caches (leaf) pages across scan keys, but can\n(or does) it also reuse these already-cached leaf pages across\nrestarts of the index scan/across multiple index lookups in the same\nplan node, so that retrieval of nearby index values does not need to\ndo an index traversal?\n\n> [...]\n> Skip Scan\n> =========\n>\n> MDAM encompasses something that people tend to call \"skip scan\" --\n> terminology with a great deal of baggage. 
These days I prefer to call\n> it \"filling in missing key predicates\", per the paper. That's much\n> more descriptive, and makes it less likely that people will conflate\n> the techniques with InnoDB style \"loose Index scans\" -- the latter is\n> a much more specialized/targeted optimization. (I now believe that\n> these are very different things, though I was thrown off by the\n> superficial similarities for a long time. It's pretty confusing.)\n\nI'm not sure I understand. MDAM seems to work on an index level to\nreturn full ranges of values, while \"skip scan\" seems to try to allow\nsystems to signal to the index to skip to some other index condition\nbased on arbitrary cutoffs. This would usually be cases where the\ninformation is not stored in the index, such as \"SELECT user_id FROM\norders GROUP BY user_id HAVING COUNT(*) > 10\", where the scan would go\nthrough the user_id index and skip to the next user_id value when it\ngets enough rows of a matching result (where \"enough\" is determined\nabove the index AM's plan node, or otherwise is impossible to\ndetermine with only the scan key info in the index AM). I'm not sure\nhow this could work without specifically adding skip scan-related\nindex AM functionality, and I don't see how it fits in with this\nMDAM/SAOP system.\n\n> [...]\n>\n> Thoughts?\n\nMDAM seems to require exponential storage for \"scan key operations\"\nfor conditions on N columns (to be precise, the product of the number\nof distinct conditions on each column); e.g. an index on mytable\n(a,b,c,d,e,f,g,h) with conditions \"a IN (1, 2) AND b IN (1, 2) AND ...\nAND h IN (1, 2)\" would require 2^8 entries. If 4 conditions were used\nfor each column, that'd be 4^8, etc...\nWith an index column limit of 32, that's quite a lot of memory\npotentially needed to execute the statement.\nSo, this begs the question: does this patch have the same issue? 
Does\nit fail with OOM, does it gracefully fall back to the old behaviour\nwhen the clauses are too complex to linearize/compose/fold into the\nbtree ordering clauses, or are scan keys dynamically constructed using\njust-in-time- or generator patterns?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Wed, 26 Jul 2023 14:29:45 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Jul 26, 2023 at 5:29 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Considering that it caches/reuses the page across SAOP operations, can\n> (or does) this also improve performance for index scans on the outer\n> side of a join if the order of join columns matches the order of the\n> index?\n\nIt doesn't really cache leaf pages at all. What it does is advance the\narray keys locally, while the original buffer lock is still held on\nthat same page.\n\n> That is, I believe this caches (leaf) pages across scan keys, but can\n> (or does) it also reuse these already-cached leaf pages across\n> restarts of the index scan/across multiple index lookups in the same\n> plan node, so that retrieval of nearby index values does not need to\n> do an index traversal?\n\nI'm not sure what you mean. There is no reason why you need to do more\nthan one single descent of an index to scan many leaf pages using many\ndistinct sets of array keys. Obviously, this depends on being able to\nobserve that we really don't need to redescend the index to advance\nthe array keys, again and again. Note in particularly that this\nusually works across leaf pages.\n\n> I'm not sure I understand. 
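To make the "generator pattern" alternative concrete, here is a toy sketch in Python (illustration only; all names are invented, and the real implementation would be C code inside the nbtree AM). It shows that the cross product of array keys can be consumed lazily, one combination at a time, without ever materializing the exponential number of entries:

```python
# Toy sketch: lazily enumerate MDAM-style single-value predicates.
# Invented names; not code from the patch.
from itertools import product

def single_value_predicates(column_quals):
    """Yield one combination of single-value keys at a time.

    column_quals holds, per index column, the sorted list of accepted
    values. Memory use stays proportional to the sum of the list
    lengths; the (potentially exponential) cross product is never
    stored.
    """
    yield from product(*column_quals)

# "a IN (1, 2) AND b IN (1, 2) AND ... AND h IN (1, 2)": 2^8 = 256
# combinations exist, but the generator can be abandoned at any point.
quals = [[1, 2]] * 8
gen = single_value_predicates(quals)
first = next(gen)
```

A scan that terminates early simply stops pulling from the generator, so the cost is bounded by how far the scan actually gets, not by the size of the cross product.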
MDAM seems to work on an index level to\n> return full ranges of values, while "skip scan" seems to try to allow\n> systems to signal to the index to skip to some other index condition\n> based on arbitrary cutoffs. These cutoffs are usually based on\n> information that is not stored in the index, such as "SELECT user_id FROM\n> orders GROUP BY user_id HAVING COUNT(*) > 10", where the scan would go\n> through the user_id index and skip to the next user_id value when it\n> gets enough rows of a matching result (where "enough" is determined\n> above the index AM's plan node, or otherwise is impossible to\n> determine with only the scan key info in the index AM). I'm not sure\n> how this could work without specifically adding skip scan-related\n> index AM functionality, and I don't see how it fits in with this\n> MDAM/SAOP system.\n\nI think of that as being quite a different thing.\n\nBasically, the patch that added that feature had to revise the index\nAM API, in order to support a mode of operation where scans return\ngroupings rather than tuples. Whereas this patch requires none of\nthat. It makes affected index scans as similar as possible to\nconventional index scans.\n\n> > [...]\n> >\n> > Thoughts?\n>\n> MDAM seems to require exponential storage for "scan key operations"\n> for conditions on N columns (to be precise, the product of the number\n> of distinct conditions on each column); e.g. an index on mytable\n> (a,b,c,d,e,f,g,h) with conditions "a IN (1, 2) AND b IN (1, 2) AND ...\n> AND h IN (1, 2)" would require 2^8 entries.\n\nNote that I haven't actually changed anything about the way that the\nstate machine generates new sets of single value predicates -- it's\nstill just cycling through each distinct set of array keys in the\npatch.\n\nWhat you describe is a problem in theory, but I doubt that it's a\nproblem in practice. You don't actually have to materialize the\npredicates up-front, or at all. 
Plus you can skip over them using the\nnext index tuple. So skipping works both ways.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 26 Jul 2023 06:41:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, 26 Jul 2023 at 15:42, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jul 26, 2023 at 5:29 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Considering that it caches/reuses the page across SAOP operations, can\n> > (or does) this also improve performance for index scans on the outer\n> > side of a join if the order of join columns matches the order of the\n> > index?\n>\n> It doesn't really cache leaf pages at all. What it does is advance the\n> array keys locally, while the original buffer lock is still held on\n> that same page.\n\nHmm, then I had a mistaken understanding of what we do in _bt_readpage\nwith _bt_saveitem.\n\n> > That is, I believe this caches (leaf) pages across scan keys, but can\n> > (or does) it also reuse these already-cached leaf pages across\n> > restarts of the index scan/across multiple index lookups in the same\n> > plan node, so that retrieval of nearby index values does not need to\n> > do an index traversal?\n>\n> I'm not sure what you mean. There is no reason why you need to do more\n> than one single descent of an index to scan many leaf pages using many\n> distinct sets of array keys. Obviously, this depends on being able to\n> observe that we really don't need to redescend the index to advance\n> the array keys, again and again. Note in particularly that this\n> usually works across leaf pages.\n\nIn a NestedLoop(inner=seqscan, outer=indexscan), the index gets\nrepeatedly scanned from the root, right? 
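The "skipping works both ways" point can be sketched like this (a toy model in Python, not the actual nbtree code; every name here is invented). A value taken from the next index tuple is used to advance the array keys with a binary search, so the scan never visits array keys that cannot have matches:

```python
# Toy model: advance the current array key using a value taken from
# the next index tuple. Invented names; illustration only.
from bisect import bisect_left

def advance_array_keys(array_keys, tuple_value):
    """Return the smallest array key >= tuple_value, or None.

    Instead of stepping through array keys one by one, a binary
    search jumps straight to the first key that could still match.
    """
    i = bisect_left(array_keys, tuple_value)
    return array_keys[i] if i < len(array_keys) else None

keys = [10, 20, 30, 40]
hit = advance_array_keys(keys, 25)    # 20 is skipped; 30 is next
done = advance_array_keys(keys, 45)   # no key remains; scan can end
```

The symmetric case (skipping index tuples that match no array key) works the same way, with the roles of tuple value and array key swapped.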
It seems that right now, we\ncopy matching index entries into a local cache (that is deleted on\namrescan), then we drop our locks and pins on the buffer, and then\nstart returning values from our local cache (in _bt_saveitem).\nWe could cache the last accessed leaf page across amrescan operations\nto reduce the number of index traversals needed when the join key of\nthe left side is highly (but not necessarily strictly) correlated.\nThe worst case overhead of this would be 2 _bt_compares (to check if\nthe value is supposed to be fully located on the cached leaf page)\nplus one memcpy( , , BLCKSZ) in the previous loop. With some smart\nheuristics (e.g. page fill factor, number of distinct values, and\nwhether we previously hit this same leaf page in the previous scan of\nthis Node) we can probably also reduce this overhead to a minimum if\nthe joined keys are not correlated, but accelerate the query\nsignificantly when we find out they are correlated.\n\nOf course, in the cases where we'd expect very few distinct join keys\nthe planner would likely put a Memoize node above the index scan, but\nfor mostly unique join keys I think this could save significant\namounts of time, if only on buffer pinning and locking.\n\nI guess I'll try to code something up when I have the time, as it\nsounds not exactly related to your patch, but is an interesting\nimprovement nonetheless.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n
necessarily strictly) correlated.\n\nThat sounds like block nested loop join. It's possible that that could\nreuse some infrastructure from this patch, but I'm not sure.\n\nIn general, SAOP execution/MDAM performs "duplicate elimination before\nit reads the data" by sorting and deduplicating the arrays up front.\nWhile my patch sometimes elides a primitive index scan, primitive\nindex scans are already disjuncts that are combined to create what can\nbe considered one big index scan (that's how the planner and executor\nthink of them). The patch takes that one step further by recognizing\nthat it could quite literally be one big index scan in some cases (or\nfewer, larger scans, at least). It's a natural incremental\nimprovement, as opposed to inventing a new kind of index scan. If\nanything the patch makes SAOP execution more similar to traditional\nindex scans, especially when costing them.\n\nLike InnoDB style loose index scan (for DISTINCT and GROUP BY\noptimization), block nested loop join would require inventing a new\ntype of index scan. Both of these other two optimizations involve the\nuse of semantic information that spans multiple levels of abstraction.\nLoose scan requires duplicate elimination (that's the whole point),\nwhile IIUC block nested loop join needs to "simulate multiple inner\nindex scans" by deliberately returning duplicates for each would-be\ninner index scan. These are specialized things.\n\nTo be clear, I think that all of these ideas are reasonable. I just\nfind it useful to classify these sorts of techniques according to\nwhether or not the index AM API would have to change, and the\ngeneral nature of any required changes. 
MDAM can do a lot of cool\nthings without requiring any revisions to the index AM API, which\nshould allow it to play nice with everything else (index path clause\nsafety issues notwithstanding).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jul 2023 00:13:47 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, 26 Jul 2023 at 15:42, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jul 26, 2023 at 5:29 AM Matthias van de Meent\n> > I'm not sure I understand. MDAM seems to work on an index level to\n> > return full ranges of values, while \"skip scan\" seems to try to allow\n> > systems to signal to the index to skip to some other index condition\n> > based on arbitrary cutoffs. This would usually be those of which the\n> > information is not stored in the index, such as \"SELECT user_id FROM\n> > orders GROUP BY user_id HAVING COUNT(*) > 10\", where the scan would go\n> > though the user_id index and skip to the next user_id value when it\n> > gets enough rows of a matching result (where \"enough\" is determined\n> > above the index AM's plan node, or otherwise is impossible to\n> > determine with only the scan key info in the index AM). I'm not sure\n> > how this could work without specifically adding skip scan-related\n> > index AM functionality, and I don't see how it fits in with this\n> > MDAM/SAOP system.\n>\n> I think of that as being quite a different thing.\n>\n> Basically, the patch that added that feature had to revise the index\n> AM API, in order to support a mode of operation where scans return\n> groupings rather than tuples. Whereas this patch requires none of\n> that. It makes affected index scans as similar as possible to\n> conventional index scans.\n\nHmm, yes. I see now where my confusion started. 
You called it out in\nyour first paragraph of the original mail, too, but that didn't help\nme then:\n\nThe wiki does not distinguish "Index Skip Scans" and "Loose Index\nScans", but these are not the same.\n\nIn the one page on "Loose indexscan", it refers to MySQL's "loose\nindex scan" documentation, which does handle groupings, and this was\ntargeted with the previous, mislabeled "Index skipscan" patchset.\nHowever, crucially, it also refers to other databases' Index Skip Scan\ndocumentation, which describes how those systems implement this\napproach of 'skipping to the next potential key range to get efficient\nnon-prefix qual results', giving me the false impression that those\ntwo features are one and the same when they are not.\n\nIt seems like I'll have to wait a bit longer for the functionality of\nLoose Index Scans.\n\n> > > [...]\n> > >\n> > > Thoughts?\n> >\n> > MDAM seems to require exponential storage for "scan key operations"\n> > for conditions on N columns (to be precise, the product of the number\n> > of distinct conditions on each column); e.g. an index on mytable\n> > (a,b,c,d,e,f,g,h) with conditions "a IN (1, 2) AND b IN (1, 2) AND ...\n> > AND h IN (1, 2)" would require 2^8 entries.\n>\n> Note that I haven't actually changed anything about the way that the\n> state machine generates new sets of single value predicates -- it's\n> still just cycling through each distinct set of array keys in the\n> patch.\n>\n> What you describe is a problem in theory, but I doubt that it's a\n> problem in practice. You don't actually have to materialize the\n> predicates up-front, or at all.\n\nYes, that's why I asked: The MDAM paper's examples seem to materialize\nthe full predicate up-front, which would require a product of all\nindexed columns' quals in size, so that materialization has a good\nchance to get really, really large. 
But if we're not doing that\nmaterialization upfront, then there is no issue with resource\nconsumption (except CPU time, which can likely be improved with other\nmethods)\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Thu, 27 Jul 2023 13:59:13 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, 27 Jul 2023 at 06:14, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jul 26, 2023 at 12:07 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > We could cache the last accessed leaf page across amrescan operations\n> > to reduce the number of index traversals needed when the join key of\n> > the left side is highly (but not necessarily strictly) correllated.\n>\n> That sounds like block nested loop join. It's possible that that could\n> reuse some infrastructure from this patch, but I'm not sure.\n\nMy idea is not quite block nested loop join. It's more 'restart the\nindex scan at the location the previous index scan ended, if\nheuristics say there's a good chance that might save us time'. I'd say\nit is comparable to the fast tree descent optimization that we have\nfor endpoint queries, and comparable to this patch's scankey\noptimization, but across AM-level rescans instead of internal rescans.\n\nSee also the attached prototype and loosely coded patch. It passes\ntests, but it might not be without bugs.\n\nThe basic design of that patch is this: We keep track of how many\ntimes we've rescanned, and the end location of the index scan. If a\nnew index scan hits the same page after _bt_search as the previous\nscan ended, we register that. 
Those two values - num_rescans and\nnum_samepage - are used as heuristics for the following:\n\nIf 50% or more of rescans hit the same page as the end location of the\nprevious scan, we start saving the scan's end location's buffer into\nthe BTScanOpaque, so that the next _bt_first can check whether that\npage might be the right leaf page, and if so, immediately go to that\nbuffer instead of descending the tree - saving one tree descent in the\nprocess.\n\nFurther optimizations of this mechanism could easily be implemented by\ne.g. only copying the min/max index tuples instead of the full index\npage, reducing the overhead at scan end.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 27 Jul 2023 15:59:51 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Jul 27, 2023 at 7:59 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Basically, the patch that added that feature had to revise the index\n> > AM API, in order to support a mode of operation where scans return\n> > groupings rather than tuples. Whereas this patch requires none of\n> > that. It makes affected index scans as similar as possible to\n> > conventional index scans.\n>\n> Hmm, yes. I see now where my confusion started. You called it out in\n> your first paragraph of the original mail, too, but that didn't help\n> me then:\n>\n> The wiki does not distinguish \"Index Skip Scans\" and \"Loose Index\n> Scans\", but these are not the same.\n\nA lot of people (myself included) were confused on this point for\nquite a while. To make matters even more confusing, one of the really\ncompelling cases for the MDAM design is scans that feed into\nGroupAggregates -- preserving index sort order for naturally big index\nscans will tend to enable it. 
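In outline, the heuristic described above amounts to something like the following sketch (Python used for brevity; names are invented, and the actual prototype is C code working against BTScanOpaque):

```python
# Toy model of the rescan heuristic: once at least half of all
# rescans land on the page where the previous scan ended, start
# caching that end page so the next _bt_first can skip the descent.
# Invented names; illustration only.
class RescanTracker:
    SAMEPAGE_THRESHOLD = 0.5

    def __init__(self):
        self.num_rescans = 0
        self.num_samepage = 0

    def note_rescan(self, hit_same_page):
        self.num_rescans += 1
        if hit_same_page:
            self.num_samepage += 1

    def should_cache_end_page(self):
        if self.num_rescans == 0:
            return False
        return self.num_samepage / self.num_rescans >= self.SAMEPAGE_THRESHOLD

tracker = RescanTracker()
for same in (True, False, True):
    tracker.note_rescan(same)
decision = tracker.should_cache_end_page()   # 2 of 3 hit the same page
```

The point of the threshold is that uncorrelated join keys quickly drive the ratio down, so the copy-the-end-page overhead is only paid when rescans really do cluster on one leaf page.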
One of my examples from the start of\nthis thread showed just that. (It just so happened that that example\nwas faster because of all the \"skipping\" that nbtree *wasn't* doing\nwith the patch.)\n\n> Yes, that's why I asked: The MDAM paper's examples seem to materialize\n> the full predicate up-front, which would require a product of all\n> indexed columns' quals in size, so that materialization has a good\n> chance to get really, really large. But if we're not doing that\n> materialization upfront, then there is no issue with resource\n> consumption (except CPU time, which can likely be improved with other\n> methods)\n\nI get why you asked. I might have asked the same question.\n\nAs I said, the MDAM paper has *surprisingly* little to say about\nB-Tree executor stuff -- it's almost all just describing the\npreprocessing/transformation process. It seems as if optimizations\nlike the one from my patch were considered too obvious to talk about\nand/or out of scope by the authors. Thinking about the MDAM paper like\nthat was what made everything fall into place for me. Remember,\n\"missing key predicates\" isn't all that special.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jul 2023 10:00:31 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, 27 Jul 2023 at 16:01, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jul 27, 2023 at 7:59 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > Basically, the patch that added that feature had to revise the index\n> > > AM API, in order to support a mode of operation where scans return\n> > > groupings rather than tuples. Whereas this patch requires none of\n> > > that. It makes affected index scans as similar as possible to\n> > > conventional index scans.\n> >\n> > Hmm, yes. I see now where my confusion started. 
You called it out in\n> > your first paragraph of the original mail, too, but that didn't help\n> > me then:\n> >\n> > The wiki does not distinguish \"Index Skip Scans\" and \"Loose Index\n> > Scans\", but these are not the same.\n>\n> A lot of people (myself included) were confused on this point for\n> quite a while.\n\nI've taken the liberty to update the \"Loose indexscan\" wiki page [0],\nadding detail that Loose indexscans are distinct from Skip scans, and\nshowing some high-level distinguishing properties.\nI also split the TODO entry for `` \"loose\" or \"skip\" scan `` into two,\nand added links to the relevant recent threads so that it's clear\nthese are different (and that some previous efforts may have had a\nconfusing name).\n\nI hope this will reduce the chance of future confusion between the two\ndifferent approaches to improving index scan performance.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0]: https://wiki.postgresql.org/wiki/Loose_indexscan\n\n\n", "msg_date": "Fri, 28 Jul 2023 13:11:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hi, all!\n\n> CNF -> DNF conversion\n> =====================\n>\n> Like many great papers, the MDAM paper takes one core idea, and finds\n> ways to leverage it to the hilt. Here the core idea is to take\n> predicates in conjunctive normal form (an \"AND of ORs\"), and convert\n> them into disjunctive normal form (an \"OR of ANDs\"). DNF quals are\n> logically equivalent to CNF quals, but ideally suited to SAOP-array\n> style processing by an ordered B-Tree index scan -- they reduce\n> everything to a series of non-overlapping primitive index scans, that\n> can be processed in keyspace order. We already do this today in the\n> case of SAOPs, in effect. 
The nbtree \"next array keys\" state machine\n> already materializes values that can be seen as MDAM style DNF single\n> value predicates. The state machine works by outputting the cartesian\n> product of each array as a multi-column index is scanned, but that\n> could be taken a lot further in the future. We can use essentially the\n> same kind of state machine to do everything described in the paper --\n> ultimately, it just needs to output a list of disjuncts, like the DNF\n> clauses that the paper shows in \"Table 3\".\n>\n> In theory, anything can be supported via a sufficiently complete CNF\n> -> DNF conversion framework. There will likely always be the potential\n> for unsafe/unsupported clauses and/or types in an extensible system\n> like Postgres, though. So we will probably need to retain some notion\n> of safety. It seems like more of a job for nbtree preprocessing (or\n> some suitably index-AM-agnostic version of the same idea) than the\n> optimizer, in any case. But that's not entirely true, either (that\n> would be far too easy).\n>\n> The optimizer still needs to optimize. It can't very well do that\n> without having some kind of advanced notice of what is and is not\n> supported by the index AM. And, the index AM cannot just unilaterally\n> decide that index quals actually should be treated as filter/qpquals,\n> after all -- it doesn't get a veto. So there is a mutual dependency\n> that needs to be resolved. I suspect that there needs to be a two way\n> conversation between the optimizer and nbtree code to break the\n> dependency -- a callback that does some of the preprocessing work\n> during planning. Tom said something along the same lines in passing,\n> when discussing the MDAM paper last year [2]. 
Much work remains here.\n>\nHonestly, I'm just reading and delving into this thread and other topics \nrelated to it, so excuse me if I ask you a few obvious questions.\n\nI noticed that you are going to add the CNF->DNF transformation at the index \nconstruction stage. If I understand correctly, you will rewrite the \nrestrictinfo node,\nchanging boolean "AND" expressions to "OR" expressions, but would it be \npossible to apply such a procedure earlier? Otherwise I suppose you \ncould face the problem of an\nincorrect selectivity calculation and, consequently, an incorrect \ncardinality calculation?\nI can't clearly understand at what stage it becomes clear that such a \ntransformation needs to be applied.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional
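For readers following along, the CNF -> DNF transformation under discussion can be illustrated with a toy example (Python, invented names; the patch itself builds on nbtree's existing array-key state machine rather than literally rewriting planner nodes like this):

```python
# Toy CNF -> DNF rewrite: an AND of per-column OR-lists (IN-lists)
# becomes an OR of single-value AND clauses, as in "Table 3" of the
# MDAM paper. Invented names; illustration only.
from itertools import product

def cnf_to_dnf(cnf):
    """cnf: list of (column, accepted_values) pairs, AND-ed together.

    Returns the equivalent disjuncts, each mapping every column to a
    single value. Note that the output size is the product of the
    input list lengths, which is why materializing it up-front is
    avoided in practice.
    """
    columns = [col for col, _ in cnf]
    value_lists = [values for _, values in cnf]
    return [dict(zip(columns, combo)) for combo in product(*value_lists)]

# WHERE a IN (1, 2) AND b = 5
#   -> (a = 1 AND b = 5) OR (a = 2 AND b = 5)
disjuncts = cnf_to_dnf([("a", [1, 2]), ("b", [5])])
```

Each disjunct corresponds to one primitive index scan over a contiguous, non-overlapping part of the keyspace, which is what makes the DNF form a natural fit for an ordered B-Tree scan.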
This\npath is used whenever the best we can do in _bt_first is to descend to\nthe rightmost or leftmost page.)\n\n> The basic design of that patch is this: We keep track of how many\n> times we've rescanned, and the end location of the index scan. If a\n> new index scan hits the same page after _bt_search as the previous\n> scan ended, we register that.\n\nI can see one advantage that block nested loop join would retain here:\nit does block-based accesses on both sides of the join. Since it\n\"looks ahead\" on both sides of the join, more repeat accesses are\nlikely to be avoided.\n\nNot too sure how much that matters in practice, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 31 Jul 2023 12:34:47 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Jul 31, 2023 at 12:24 PM Alena Rybakina\n<lena.ribackina@yandex.ru> wrote:\n> I noticed that you are going to add CNF->DNF transformation at the index\n> construction stage. If I understand correctly, you will rewrite\n> restrictinfo node,\n> change boolean \"AND\" expressions to \"OR\" expressions, but would it be\n> possible to apply such a procedure earlier?\n\nSort of. I haven't really added any new CNF->DNF transformations. The\ncode you're talking about is really just checking that every index\npath has clauses that we know that nbtree can handle. That's a big,\nugly modularity violation -- many of these details are quite specific\nto the nbtree index AM (in theory we could have other index AMs that\nare amsearcharray).\n\nAt most, v1 of the patch makes greater use of an existing\ntransformation that takes place in the nbtree index AM, as it\npreprocesses scan keys for these types of queries (it's not inventing\nnew transformations at all). This is a slightly creative\ninterpretation, too. 
Tom's commit 9e8da0f7 didn't actually say\nanything about CNF/DNF.\n\n> Otherwise I suppose you\n> could face the problem of an\n> incorrect selectivity calculation and, consequently, an incorrect\n> cardinality calculation?\n\nI can't think of any reason why that should happen as a direct result\nof what I have done here. Multi-column index paths + multiple SAOP\nclauses are not a new thing. The number of rows returned does not\ndepend on whether we have some columns as filter quals or not.\n\nOf course that doesn't mean that the costing has no problems. The\ncosting definitely has several problems right now.\n\nIt also isn't necessarily okay that it's "just as good as before" if\nit turns out that it needs to be better now. But I don't see why it\nwould be. (Actually, my hope is that selectivity estimation might be\n*less* important as a practical matter with the patch.)\n\n> I can't clearly understand at what stage it becomes clear that such a\n> transformation needs to be applied.\n\nI don't know either.\n\nI think that most of this work needs to take place in the nbtree code,\nduring preprocessing. But it's not so simple. There is a mutual\ndependency between the code that generates index paths in the planner\nand nbtree scan key preprocessing. The planner needs to know what\nkinds of index paths are possible/safe up-front, so that it can choose\nthe fastest plan (the fastest that the index AM knows how to execute\ncorrectly). But there are lots of small annoying nbtree\nimplementation details that might matter, and can change.\n\nI think we need to have nbtree register a callback, so that the\nplanner can initialize some preprocessing early. 
I think that we\nrequire a \"two way conversation\" between the planner and the index AM.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 31 Jul 2023 13:04:09 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Jul 26, 2023 at 6:41 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > MDAM seems to require exponential storage for \"scan key operations\"\n> > for conditions on N columns (to be precise, the product of the number\n> > of distinct conditions on each column); e.g. an index on mytable\n> > (a,b,c,d,e,f,g,h) with conditions \"a IN (1, 2) AND b IN (1, 2) AND ...\n> > AND h IN (1, 2)\" would require 2^8 entries.\n\n> What you describe is a problem in theory, but I doubt that it's a\n> problem in practice. You don't actually have to materialize the\n> predicates up-front, or at all. Plus you can skip over them using the\n> next index tuple. So skipping works both ways.\n\nAttached is v2, which makes all array key advancement take place using\nthe \"next index tuple\" approach (using binary searches to find array\nkeys using index tuple values). This approach was necessary for fairly\nmundane reasons (it limits the amount of work required while holding a\nbuffer lock), but it also solves quite a few other problems that I\nfind far more interesting.\n\nIt's easy to imagine the state machine from v2 of the patch being\nextended for skip scan. My approach \"abstracts away\" the arrays. For\nskip scan, it would more or less behave as if the user had written a\nquery \"WHERE a in (<Every possible value for this column>) AND b = 5\n... \" -- without actually knowing what the so-called array keys for\nthe high-order skipped column are (not up front, at least). We'd only\nneed to track the current \"array key\" for the scan key on the skipped\ncolumn, \"a\". 
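A deliberately hand-wavy sketch of what such a two-way conversation could look like (purely hypothetical interfaces, in Python for brevity; none of these names correspond to actual PostgreSQL APIs):

```python
# Purely hypothetical: the index AM registers a preprocessing callback
# that the planner invokes early, so the planner knows up-front which
# clauses the AM can execute as index quals. No such API exists today.
class HypotheticalBtreeAM:
    SUPPORTED_OPS = {"=", "<", "<=", ">", ">=", "IN"}

    def preprocess(self, clauses):
        index_quals = [c for c in clauses if c["op"] in self.SUPPORTED_OPS]
        filter_quals = [c for c in clauses if c["op"] not in self.SUPPORTED_OPS]
        return index_quals, filter_quals

class HypotheticalPlanner:
    def __init__(self):
        self.preprocess_callbacks = {}

    def register_am_callback(self, am_name, callback):
        self.preprocess_callbacks[am_name] = callback

    def build_index_path(self, am_name, clauses):
        index_quals, filter_quals = self.preprocess_callbacks[am_name](clauses)
        # Costing can now assume the AM really will execute index_quals,
        # and the AM never has to veto a plan after the fact.
        return {"indexquals": index_quals, "qpquals": filter_quals}

planner = HypotheticalPlanner()
planner.register_am_callback("btree", HypotheticalBtreeAM().preprocess)
path = planner.build_index_path("btree", [{"op": "IN"}, {"op": "~~"}])
```

The mutual dependency is broken because the safety decision is made once, early, by AM code that the planner merely calls, rather than being second-guessed on both sides.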
The state machine would notice when the scan had reached\nthe next-greatest \"a\" value in the index (whatever that might be), and\nthen make that the current value. Finally, the state machine would\neffectively instruct its caller to consider repositioning the scan via\na new descent of the index. In other words, almost everything for skip\nscan would work just like regular SAOPs -- and any differences would\nbe well encapsulated.\n\nBut it's not just skip scan. This approach also enables thinking of\nSAOP index scans (using nbtree) as just another type of indexable\nclause, without any special restrictions (compared to true indexable\noperators such as \"=\", say). Particularly in the planner. That was\nalways the general thrust of teaching nbtree about SAOPs, from the\nstart. But it's something that should be totally embraced IMV. That's\njust what the patch proposes to do.\n\nIn particular, the patch now:\n\n1. Entirely removes the long-standing restriction on generating path\nkeys for index paths with SAOPs, even when there are inequalities on a\nhigh order column present. 
You can mix SAOPs together with other\nclause types, arbitrarily, and everything still works and works\nefficiently.\n\nFor example, the regression test expected output for this query/test\n(from bugfix commit 807a40c5) is updated by the patch, as shown here:\n\n explain (costs off)\n SELECT thousand, tenthous FROM tenk1\n WHERE thousand < 2 AND tenthous IN (1001,3000)\n ORDER BY thousand;\n- QUERY PLAN\n---------------------------------------------------------------------------------------\n- Sort\n- Sort Key: thousand\n- -> Index Scan using tenk1_thous_tenthous on tenk1\n- Index Cond: ((thousand < 2) AND (tenthous = ANY\n('{1001,3000}'::integer[])))\n-(4 rows)\n+ QUERY PLAN\n+--------------------------------------------------------------------------------\n+ Index Scan using tenk1_thous_tenthous on tenk1\n+ Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n+(2 rows)\n\nWe don't need a sort node anymore -- even though the leading column\nhere (thousand) uses an inequality, a particularly tricky case. Now\nit's an index scan, much like any other, with no particular\nrestrictions caused by using a SAOP.\n\n2. Adds an nbtree strategy for non-required equality array scan keys,\nwhich is built on the same state machine, with only minor differences\nto deal with column values \"appearing out of key space order\".\n\n3. Simplifies the optimizer side of things by consistently avoiding\nfilter quals (except when it's truly unavoidable). The optimizer\ndoesn't even consider alternative index paths with filter quals for\nlower-order SAOP columns, because they have no possible advantage\nanymore. On the other hand, as we saw already, upthread, filter quals\nhave huge disadvantages. By always using true index quals, we\nautomatically avoid any question of getting excessive amounts of heap\npage accesses just to eliminate non-matching rows. 
AFAICT we don't\nneed to make a trade-off here.\n\nThe first version of the patch added some crufty code to the\noptimizer, to account for various restrictions on sort order. This\nrevised version actually removes existing cruft from the same place\n(indxpath.c) instead.\n\nItems 1, 2, and 3 are all closely related. Take the query I've shown\nfor item 1. Bugfix commit 807a40c5 (which added the test query in\nquestion) dealt with an oversight in the then-recent original nbtree\nSAOP patch (commit 9e8da0f7): when nbtree combines two primitive index\nscans with an inequality on their leading column, we cannot be sure\nthat the output will appear in the same order as the order that one\nbig continuous index scan returns rows in. We can only expect to\nmaintain the illusion that we're doing one continuous index scan when\nindividual primitive index scans access earlier columns via the\nequality strategy -- we need \"equality constraints\".\n\nIn practice, the optimizer (indxpath.c) is very conservative (more\nconservative than it really needs to be) when it comes to trusting the\nindex scan to output rows in index order, in the presence of SAOPs.\nAll of that now seems totally unnecessary. Again, I don't see a need\nto make a trade-off here.\n\nMy observation about this query (and others like it) is: why not\nliterally perform one continuous index scan instead (not multiple\nprimitive index scans)? That is strictly better, given all the\nspecifics here. Once we have a way to do that (which the nbtree\nexecutor work listed under item 2 provides), it becomes safe to assume\nthat the tuples will be output in index order -- there is no illusion\nleft to preserve. Who needs an illusion that isn't actually helping\nus? We actually do less I/O by using this strategy, for the usual\nreasons (we can avoid repeating index page accesses).\n\nA more concrete benefit of the non-required-scankeys stuff can be seen\nby running Benoit Tigeot's test case [1] with v2. 
He had a query like\nthis:\n\nSELECT * FROM docs\nWHERE status IN ('draft', 'sent') AND\nsender_reference IN ('Custom/1175', 'Client/362', 'Custom/280')\nORDER BY sent_at DESC NULLS LAST LIMIT 20;\n\nAnd, his test case had an index on \"sent_at DESC NULLS LAST,\nsender_reference, status\". This variant was a weak spot for v1.\n\nv2 of the patch is vastly more efficient here, since we don't have to\ngo to the heap to eliminate non-matching tuples -- that can happen in\nthe index AM instead. This can easily be 2x-3x faster on a warm cache,\nand have *hundreds* of times fewer buffer accesses (which Benoit\nverified with an early version of this v2). All because we now require\nvastly less heap access -- the quals are fairly selective here, and we\nhave to scan hundreds of leaf pages before the scan can terminate.\nAvoiding filter quals is a huge win.\n\nThis particular improvement is hard to squarely attribute to any one\nof my 3 items. The immediate problem that the query presents us with\non the master branch is the problem of filter quals that require heap\naccesses to do visibility checks (a problem that index quals can never\nhave). That makes it tempting to credit my item 3. But you can't\nreally have item 3 without also having items 1 and 2. Taken together,\nthey eliminate all possible downsides from using index quals.\n\nThat high level direction (try to have one good choice for the\noptimizer) seems important to me. Both for this project, and in\ngeneral.\n\nOther changes in v2:\n\n* Improved costing, that takes advantage of the fact that nbtree now\npromises to not repeat any leaf page accesses (unless the scan is\nrestarted or the direction of the scan changes). This makes the worst\ncase far more predictable, and more related to selectivity estimation\n-- you can't scan more pages than you have in the whole index. 
Just\nlike with every other sort of index scan.\n\n* Support for parallel index scans.\n\nThe existing approach to array keys for parallel index scan has been\nadapted to work with individual primitive index scans, not individual\narray keys. I haven't tested this very thoroughly just yet, but it\nseems to work well enough already. I think that it's important to not\nhave very much variation between parallel and serial index scans,\nwhich I seem to have mostly avoided.\n\n[1] https://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d?permalink_comment_id=4690491#gistcomment-4690491\n-- \nPeter Geoghegan", "msg_date": "Sun, 17 Sep 2023 16:47:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sun, Sep 17, 2023 at 4:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v2, which makes all array key advancement take place using\n> the \"next index tuple\" approach (using binary searches to find array\n> keys using index tuple values).\n\nAttached is v3, which fixes bitrot caused by today's bugfix commit 714780dc.\n\nNo notable changes here compared to v2.\n\n--\nPeter Geoghegan", "msg_date": "Thu, 28 Sep 2023 17:32:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Sep 28, 2023 at 5:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Sep 17, 2023 at 4:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v2, which makes all array key advancement take place using\n> > the \"next index tuple\" approach (using binary searches to find array\n> > keys using index tuple values).\n>\n> Attached is v3, which fixes bitrot caused by today's bugfix commit 714780dc.\n\nAttached is v4, which applies cleanly on top of HEAD.
This was needed\ndue to Alexander Korotkov's commit e0b1ee17, \"Skip checking of scan\nkeys required for directional scan in B-tree\".\n\nUnfortunately I have more or less dealt with the conflicts on HEAD by\ndisabling the optimization from that commit, for the time being. The\ncommit in question is rather poorly documented, and it's not\nimmediately clear how to integrate it with my work. I just want to\nmake sure that there's a testable patch available.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 15 Oct 2023 13:50:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sun, Oct 15, 2023 at 1:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v4, which applies cleanly on top of HEAD. This was needed\n> due to Alexander Korotkov's commit e0b1ee17, \"Skip checking of scan\n> keys required for directional scan in B-tree\".\n>\n> Unfortunately I have more or less dealt with the conflicts on HEAD by\n> disabling the optimization from that commit, for the time being.\n\nAttached is v5, which deals with the conflict with the optimization\nadded by Alexander Korotkov's commit e0b1ee17 sensibly: the\noptimization is now only disabled in cases without array scan keys.\n(It'd be very hard to make it work with array scan keys, since an\nimportant principle for my patch is that we can change search-type\nscan keys right in the middle of any _bt_readpage() call).\n\nv5 also fixes a longstanding open item for the patch: we no longer\ncall _bt_preprocess_keys() with a buffer lock held, which was a bad\nidea at best, and unsafe (due to the syscache lookups within\n_bt_preprocess_keys) at worst. A new, minimal version of the function\n(called _bt_preprocess_keys_leafbuf) is called at the same point\ninstead.
That change, combined with the array binary search stuff\n(which was added back in v2), makes the total amount of work performed\nwith a buffer lock held totally reasonable in all cases. It's even\nokay in extreme or adversarial cases with many millions of array keys.\n\nMaking this _bt_preprocess_keys_leafbuf approach work has a downside:\nit requires that _bt_preprocess_keys be a little less aggressive about\nremoving redundant scan keys, in order to meet certain assumptions\nheld by the new _bt_preprocess_keys_leafbuf function. Essentially,\n_bt_preprocess_keys must now worry about current and future array key\nvalues when determining redundancy among scan keys -- not just the\ncurrent array key values. _bt_preprocess_keys knows nothing about\nSK_SEARCHARRAY scan keys on HEAD, because on HEAD there is a strict\n1:1 correspondence between the number of primitive index scans and the\nnumber of array keys (actually, the number of distinct combinations of\narray keys). Obviously that's no longer the case with the patch\n(that's the whole point of the patch).\n\nIt's easiest to understand how elimination of redundant quals needs to\nwork in v5 by way of an example. Consider the following query:\n\nselect count(*), two, four, twenty, hundred\nfrom\n tenk1\nwhere\n two in (0, 1) and four in (1, 2, 3)\n and two < 1;\n\nNotice that \"two\" appears in the where clause twice. First it appears\nas an SAOP, and then as an inequality. Right now, on HEAD, the\nprimitive index scan where the SAOP's scankey is \"two = 0\" renders\n\"two < 1\" redundant. However, the subsequent primitive index scan\nwhere \"two = 1\" does *not* render \"two < 1\" redundant. 
This has\nimplications for the mechanism in the patch, since the patch will\nperform one big primitive index scan for all array constants, with\nonly a single _bt_preprocess_keys call at the start of its one and\nonly _bt_first call (but with multiple _bt_preprocess_keys_leafbuf\ncalls once we reach the leaf level).\n\nThe compromise that I've settled on in v5 is to teach\n_bt_preprocess_keys to *never* treat \"two < 1\" as redundant with such\na query -- even though there is some squishy sense in which \"two < 1\"\nis indeed still redundant (for the first SAOP key of value 0). My\napproach is reasonably well targeted in that it mostly doesn't affect\nqueries that don't need it. But it will add cycles to some badly\nwritten queries that wouldn't have had them in earlier Postgres\nversions. I'm not entirely sure how much this matters, but my current\nsense is that it doesn't matter all that much. This is the kind of\nthing that is hard to test and poorly tested, so simplicity is even\nmore of a virtue than usual.\n\nNote that the changes to _bt_preprocess_keys in v5 *don't* affect how\nwe determine if the scan has contradictory quals, which is generally\nmore important. With contradictory quals, _bt_first can avoid reading\nany data from the index. OTOH eliminating redundant quals (i.e. the\nthing that v5 *does* change) merely makes evaluating index quals less\nexpensive via preprocessing-away unneeded scan keys. 
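That redundancy rule can be sketched in a few lines of Python (purely illustrative, with an invented helper name -- the real logic lives in C, in _bt_preprocess_keys):

```python
import operator

def inequality_droppable(array_elems, ineq_op, bound):
    """Can an inequality such as "two < 1" be discarded as redundant?
    With one primitive index scan per array element (as on HEAD), the
    question can be answered per element.  With one continuous scan
    over the whole array (as in the patch), the inequality may only be
    dropped when every current *and future* array element already
    implies it.  Invented helper for illustration, not nbtree code."""
    return all(ineq_op(elem, bound) for elem in array_elems)

# "WHERE two = ANY ('{0,1}') AND two < 1":
inequality_droppable([0], operator.lt, 1)     # per-element view: droppable
inequality_droppable([0, 1], operator.lt, 1)  # whole-array view: keep it
```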
In other words,\nwhile it's possible that the approach taken by v5 will add CPU cycles\nin a small number of cases, it should never result in more page\naccesses.\n\n--\nPeter Geoghegan", "msg_date": "Fri, 20 Oct 2023 15:39:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sat, 21 Oct 2023 at 00:40, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Oct 15, 2023 at 1:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v4, which applies cleanly on top of HEAD. This was needed\n> > due to Alexandar Korotkov's commit e0b1ee17, \"Skip checking of scan\n> > keys required for directional scan in B-tree\".\n> >\n> > Unfortunately I have more or less dealt with the conflicts on HEAD by\n> > disabling the optimization from that commit, for the time being.\n>\n> Attached is v5, which deals with the conflict with the optimization\n> added by Alexandar Korotkov's commit e0b1ee17 sensibly: the\n> optimization is now only disabled in cases without array scan keys.\n> (It'd be very hard to make it work with array scan keys, since an\n> important principle for my patch is that we can change search-type\n> scan keys right in the middle of any _bt_readpage() call).\n\nI'm planning on reviewing this patch tomorrow, but in an initial scan\nthrough the patch I noticed there's little information about how the\narray keys state machine works in this new design. 
Do you have a more\ntoplevel description of the full state machine used in the new design?\nIf not, I'll probably be able to discover my own understanding of the\nmechanism used in the patch, but if there is a framework to build that\nunderstanding on (rather than having to build it from scratch) that'd\nbe greatly appreciated.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 6 Nov 2023 22:28:01 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Nov 6, 2023 at 1:28 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I'm planning on reviewing this patch tomorrow, but in an initial scan\n> through the patch I noticed there's little information about how the\n> array keys state machine works in this new design. Do you have a more\n> toplevel description of the full state machine used in the new design?\n\nThis is an excellent question. You're entirely right: there isn't\nenough information about the design of the state machine.\n\nIn v1 of the patch, from all the way back in July, the \"state machine\"\nadvanced in the hackiest way possible: via repeated \"incremental\"\nadvancement (using logic from the function that we call\n_bt_advance_array_keys() on HEAD) in a loop -- we just kept doing that\nuntil the function I'm now calling _bt_tuple_before_array_skeys()\neventually reported that the array keys were now sufficiently\nadvanced. 
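In Python-ish terms, that v1 scheme looked roughly like the sketch below (all names are invented for illustration, and it assumes a forward scan over all-ascending columns; the real code is C inside nbtree):

```python
def array_keys_behind_tuple(arrays, cur, tup):
    # Simplified stand-in for _bt_tuple_before_array_skeys(): are the
    # current array keys still lexicographically behind the tuple?
    keys = [arrays[col][cur[col]] for col in range(len(arrays))]
    return keys < list(tup)

def advance_one(arrays, cur):
    # Odometer-style "incremental" advancement: bump the
    # least-significant column, rolling over into higher-order ones.
    for col in reversed(range(len(arrays))):
        if cur[col] + 1 < len(arrays[col]):
            cur[col] += 1
            return True
        cur[col] = 0
    return False  # every combination has been visited

def advance_incrementally(arrays, cur, tup):
    # The v1 approach: keep bumping until the keys catch up with the
    # tuple, or the arrays are exhausted.  O(number of combinations)
    # in the worst case -- which is what binary searches later fixed.
    while array_keys_behind_tuple(arrays, cur, tup):
        if not advance_one(arrays, cur):
            return False  # top-level scan is done
    return True
```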
v2 greatly improved matters by totally overhauling\n_bt_advance_array_keys(): it was taught to use binary searches to\nadvance the array keys, with limited remaining use of \"incremental\"\narray key advancement.\n\nHowever, version 2 (and all later versions to date) have somewhat\nwonky state machine transitions, in one important respect: calls to\nthe new _bt_advance_array_keys() won't always advance the array keys\nto the maximum extent possible (possible while still getting correct\nbehavior, that is). There were still various complicated scenarios\ninvolving multiple \"required\" array keys (SK_BT_REQFWD + SK_BT_REQBKWD\nscan keys that use BTEqualStrategyNumber), where one single call to\n_bt_advance_array_keys() would advance the array keys to a point that\nwas still < caller's tuple. AFAICT this didn't cause wrong answers to\nqueries (that would require failing to find a set of exactly matching\narray keys where a matching set exists), but it was kludgey. It was\nsloppy in roughly the same way as the approach in my v1 prototype was\nsloppy (just to a lesser degree).\n\nI should be able to post v6 later this week. My current plan is to\ncommit the other nbtree patch first (the backwards scan \"boundary\ncases\" one from the ongoing CF) -- since I saw your review earlier\ntoday. I think that you should probably wait for this v6 before\nstarting your review. The upcoming version will have simple\npreconditions and postconditions for the function that advances the\narray key state machine (the new _bt_advance_array_keys). These are\nenforced by assertions at the start and end of the function. 
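For comparison, a v2-style advancement step can be sketched with binary searches (again illustrative Python with invented names, limited to forward scans over ascending equality-type columns). The returned keys are always >= the caller's tuple, matching the postcondition described above:

```python
from bisect import bisect_left

def advance_to_tuple(arrays, tup):
    """Return the smallest combination of array element indexes whose
    values are lexicographically >= tup, or None once the arrays are
    exhausted for the top-level scan."""
    cur = []
    for col, val in enumerate(tup):
        pos = bisect_left(arrays[col], val)
        if pos == len(arrays[col]):
            # val is past this column's largest element: increment
            # some higher-order column instead
            return _increment(arrays, cur)
        cur.append(pos)
        if arrays[col][pos] > val:
            # Inexact match: lower-order columns restart at minimum
            return cur + [0] * (len(arrays) - col - 1)
    return cur  # exact match on every column

def _increment(arrays, cur):
    for col in reversed(range(len(cur))):
        if cur[col] + 1 < len(arrays[col]):
            return cur[:col] + [cur[col] + 1] + [0] * (len(arrays) - col - 1)
    return None  # no combination >= tup exists
```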
So the\nrules for the state machine become crystal clear and fairly easy to\nkeep in your head (e.g., tuple must be >= required array keys on entry\nand <= required array keys on exit, the array keys must always either\nadvance by one increment or be completely exhausted for the top-level\nscan in the current scan direction).\n\nUnsurprisingly, I found that adding and enforcing these invariants led\nto a simpler and more general design within _bt_advance_array_keys.\nThat code is still the most complicated part of the patch, but it's\nmuch less of a bag of tricks. Another reason for you to hold off for a\nfew more days.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Nov 2023 15:02:57 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, 7 Nov 2023 at 00:03, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Nov 6, 2023 at 1:28 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I'm planning on reviewing this patch tomorrow, but in an initial scan\n> > through the patch I noticed there's little information about how the\n> > array keys state machine works in this new design. Do you have a more\n> > toplevel description of the full state machine used in the new design?\n>\n> This is an excellent question. You're entirely right: there isn't\n> enough information about the design of the state machine.\n>\n> I should be able to post v6 later this week. My current plan is to\n> commit the other nbtree patch first (the backwards scan \"boundary\n> cases\" one from the ongoing CF) -- since I saw your review earlier\n> today. 
I think that you should probably wait for this v6 before\n> starting your review.\n\nOkay, thanks for the update, then I'll wait for v6 to be posted.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 7 Nov 2023 13:20:39 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Nov 7, 2023 at 4:20 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Tue, 7 Nov 2023 at 00:03, Peter Geoghegan <pg@bowt.ie> wrote:\n> > I should be able to post v6 later this week. My current plan is to\n> > commit the other nbtree patch first (the backwards scan \"boundary\n> > cases\" one from the ongoing CF) -- since I saw your review earlier\n> > today. I think that you should probably wait for this v6 before\n> > starting your review.\n>\n> Okay, thanks for the update, then I'll wait for v6 to be posted.\n\nOn second thought, I'll just post v6 now (there won't be conflicts\nagainst the master branch once the other patch is committed anyway).\n\nHighlights:\n\n* Major simplifications to the array key state machine, already\ndescribed by my recent email.\n\n* Added preprocessing of \"redundant and contradictory\" array elements\nto _bt_preprocess_array_keys().\n\nThis makes the special preprocessing pass just for array keys\n(\"preprocessing preprocessing\") within _bt_preprocess_array_keys()\nmake this query into a no-op:\n\nselect * from tab where a in (180, 345) and a in (230, 300); -- contradictory\n\nSimilarly, it can make this query only attempt one single primitive\nindex scan for \"230\":\n\nselect * from tab where a in (180, 230) and a in (230, 300); -- has\nredundancies, plus some individual elements contradict each other\n\nThis duplicates some of what _bt_preprocess_keys can do already. 
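The cross-array merge itself is just a sorted-list intersection; a minimal sketch in Python (not the real _bt_merge_arrays C code, which works on scan keys via ORDER procs):

```python
def merge_saop_arrays(a, b):
    """Intersect two sorted, de-duplicated element arrays for the same
    index column.  An empty result means the quals are contradictory,
    so the scan need not read the index at all."""
    i = j = 0
    merged = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            merged.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return merged

merge_saop_arrays([180, 345], [230, 300])  # [] -- contradictory
merge_saop_arrays([180, 230], [230, 300])  # [230] -- one primitive scan
```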
But\n_bt_preprocess_keys can only do this stuff at the level of individual\narray elements/primitive index scans. Whereas this works \"one level\nup\", allowing preprocessing to see the full picture rather than just\nseeing the start of one particular primitive index scan. It explicitly\nworks across array keys, saving repeat work inside\n_bt_preprocess_keys. That could really add up with thousands of array\nkeys and/or multiple SAOPs. (Note that _bt_preprocess_array_keys\nalready does something like this, to deal with SAOP inequalities such\nas \"WHERE my_col >= any (array[1, 2])\" -- it's a little surprising\nthat this obvious optimization wasn't part of the original nbtree SAOP\npatch.)\n\nThis reminds me: you might want to try breaking the patch by coming up\nwith adversarial cases, Matthias. The patch needs to be able to deal\nwith absurdly large amounts of array keys reasonably well, because it\nproposes to normalize passing those to the nbtree code. It's\nespecially important that the patch never takes too much time to do\nsomething (e.g., binary searching through array keys) while holding a\nbuffer lock -- even with very silly adversarial queries.\n\nSo, for example, queries like this one (specifically designed to\nstress the implementation) *need* to work reasonably well:\n\nwith a as (\n select i from generate_series(0, 500000) i\n)\nselect\n count(*), thousand, tenthous\nfrom\n tenk1\nwhere\n thousand = any (array[(select array_agg(i) from a)]) and\n tenthous = any (array[(select array_agg(i) from a)])\ngroup by\n thousand, tenthous\norder by\n thousand, tenthous;\n\n (You can run this yourself after the regression tests finish, of course.)\n\nThis takes about 130ms on my machine, hardly any of which takes place\nin the nbtree code with the patch (think tens of microseconds per\n_bt_readpage call, at most) -- the plan is an index-only scan that\ngets only 30 buffer hits. On the master branch, it's vastly slower --\n1000025 buffer hits. 
The query as a whole takes about 3 seconds there.\n\nIf you have 3 or 4 SAOPs (with a composite index that has as many\ncolumns) you can quite easily DOS the master branch, since the planner\nmakes a generic assumption that each of these SAOPs will have only 10\nelements. The planner also makes that assumption with the patch applied, with\none important difference: it doesn't matter to nbtree. The cost of\nscanning each index page should be practically independent of the\ntotal size of each array, at least past a certain point. Similarly,\nthe maximum cost of an index scan should be approximately fixed: it\nshould be capped at the cost of a full index scan (with the added cost\nof these relatively expensive quals still capped, still essentially\nindependent of array sizes past some point).\n\nI notice that if I remove the \"thousand = any (array[(select\narray_agg(i) from a)]) and\" line from the adversarial query, executing\nthe resulting query still gets 30 buffer hits with the patch -- though\nit only takes 90ms this time (it's faster for reasons that likely have\nless than you'd think to do with nbtree overheads). This is just\nanother way of getting roughly the same full index scan. That's a\ncompletely new way of thinking about nbtree SAOPs from a planner\nperspective (also from a user's perspective, I suppose).\n\nIt's important that the planner's new optimistic assumptions about the\ncost profile of SAOPs (that it can expect reasonable\nperformance/access patterns with wildly unreasonable/huge/arbitrarily\ncomplicated SAOPs) always be met by nbtree -- no repeat index page\naccesses, no holding a buffer lock for more than (say) a small\nfraction of 1 millisecond (no matter the complexity of the query), and\npossibly other things I haven't thought of yet.\n\nIf you end up finding a bug in this v6, it'll most likely be a case\nwhere nbtree fails to live up to that.
This project is as much about\nrobust/predictable performance as anything else -- nbtree needs to be\nable to cope with practically anything. I suggest that your review\nstart by trying to break the patch along these lines.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 7 Nov 2023 17:53:01 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Nov 7, 2023 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you end up finding a bug in this v6, it'll most likely be a case\n> where nbtree fails to live up to that. This project is as much about\n> robust/predictable performance as anything else -- nbtree needs to be\n> able to cope with practically anything. I suggest that your review\n> start by trying to break the patch along these lines.\n\nI spent some time on this myself today (which I'd already planned on).\n\nAttached is an adversarial stress-test, which shows something that\nmust be approaching the worst case for the patch in terms of time\nspent with a buffer lock held, due to spending so much time evaluating\nunusually expensive SAOP index quals. The array binary searches that\ntake place with a buffer lock held aren't quite like anything else\nthat nbtree can do right now, so it's worthy of special attention.\n\nI thought of several factors that maximize both the number of binary\nsearches within any given _bt_readpage, as well as the cost of each\nbinary search -- the SQL file has full details. My test query is\n*extremely* unrealistic, since it combines multiple independent\nunrealistic factors, all of which aim to make life hard for the\nimplementation in one way or another. 
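As a rough back-of-the-envelope model of why those factors multiply (illustrative Python only -- the helper and constants are invented, and the real code short-circuits in many cases), the comparisons performed under one buffer lock scale like this:

```python
import math

def worst_case_compares_per_page(tuples_per_page, saop_columns, array_len):
    # Roughly one binary search per SAOP column per tuple, each costing
    # about log2(array_len) ORDER-proc comparisons -- all performed
    # under a single shared buffer lock in _bt_readpage/_bt_checkkeys.
    return tuples_per_page * saop_columns * math.ceil(math.log2(array_len))

# e.g. a page of 100 tuples, 32 SAOP columns, 1024-element arrays:
worst_case_compares_per_page(100, 32, 1024)  # 32000 comparisons
```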
I hesitate to say that it\ncouldn't be much worse (I only spent a few hours on this), but I'm\nprepared to say that it seems very unlikely that any real world query\ncould make the patch spend as many cycles in\n_bt_readpage/_bt_checkkeys as this one does.\n\nPerhaps you can think of some other factor that would make this test\ncase even less sympathetic towards the patch, Matthias? The only thing\nI thought of that I've left out was the use of a custom btree opclass,\n\"unrealistically_slow_ops\". Something that calls pg_usleep in its\norder proc. (I left it out because it wouldn't prove anything.)\n\nOn my machine, custom instrumentation shows that each call to\n_bt_readpage made while this query executes (on a patched server)\ntakes just under 1.4 milliseconds. While that is far longer than it\nusually takes, it's basically acceptable IMV. It's not significantly\nlonger than I'd expect heap_index_delete_tuples() to take on an\naverage day with EBS (or other network-attached storage). But that's a\nprocess that happens all the time, with an exclusive buffer lock held\non the leaf page throughout -- whereas this is only a shared buffer\nlock, and involves a query that's just absurd.\n\nAnother factor that makes this seem acceptable is just how sensitive\nthe test case is to everything going exactly and perfectly wrong, all\nat the same time, again and again. The test case uses a 32 column\nindex (the INDEX_MAX_KEYS maximum), with a query that has 32 SAOP\nclauses (one per index column). If I reduce the number of SAOP clauses\nin the query to (say) 8, I still have a test case that's almost as\nsilly as my original -- but now we only spend ~225 microseconds in\neach _bt_readpage call (i.e. we spend over 6x less time in each\n_bt_readpage call).
(Admittedly if I also make the CREATE INDEX use\nonly 8 columns, we can fit more index tuples on one page, leaving us\nat ~800 microseconds).\n\nI'm a little surprised that it isn't a lot worse than this, given how\nfar I went. I was a little concerned that it would prove necessary to\nlock this kind of thing down at some higher level (e.g., in the\nplanner), but that now looks unnecessary. There are much better ways\nto DOS the server than this. For example, you could run this same\nquery while forcing a sequential scan! That appears to be quite a lot\nless responsive to interrupts (in addition to being hopelessly slow),\nprobably because it uses parallel workers, each of which will use\nwildly expensive filter quals that just do a linear scan of the SAOP.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 9 Nov 2023 15:57:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Fri, 10 Nov 2023 at 00:58, Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Nov 7, 2023 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If you end up finding a bug in this v6, it'll most likely be a case\n> > where nbtree fails to live up to that. This project is as much about\n> > robust/predictable performance as anything else -- nbtree needs to be\n> > able to cope with practically anything. I suggest that your review\n> > start by trying to break the patch along these lines.\n>\n> I spent some time on this myself today (which I'd already planned on).\n>\n> Attached is an adversarial stress-test, which shows something that\n> must be approaching the worst case for the patch in terms of time\n> spent with a buffer lock held, due to spending so much time evaluating\n> unusually expensive SAOP index quals. 
The array binary searches that
> take place with a buffer lock held aren't quite like anything else
> that nbtree can do right now, so it's worthy of special attention.
>
> I thought of several factors that maximize both the number of binary
> searches within any given _bt_readpage, as well as the cost of each
> binary search -- the SQL file has full details. My test query is
> *extremely* unrealistic, since it combines multiple independent
> unrealistic factors, all of which aim to make life hard for the
> implementation in one way or another. I hesitate to say that it
> couldn't be much worse (I only spent a few hours on this), but I'm
> prepared to say that it seems very unlikely that any real world query
> could make the patch spend as many cycles in
> _bt_readpage/_bt_checkkeys as this one does.
>
> Perhaps you can think of some other factor that would make this test
> case even less sympathetic towards the patch, Matthias? The only thing
> I thought of that I've left out was the use of a custom btree opclass,
> "unrealistically_slow_ops". Something that calls pg_usleep in its
> order proc. (I left it out because it wouldn't prove anything.)

Have you tried using text index columns that are sorted with
non-default locales? I've seen non-default locales use significantly
more resources during compare operations than any other ordering
operation I know of (most of that cost being in finding the locale). I
use them extensively to test improvements for worst-case index shapes
in my btree patchsets, because locales are loaded dynamically during
text comparison and non-default locales are not cached at all. I
suspect that this would be even worse if an even slower locale path
than the one I'm using for testing were available; this could be the
case with complex custom ICU locales.

> On my machine, custom instrumentation shows that each call to
> _bt_readpage made while this query executes (on a patched server)
> takes just under 1.4 milliseconds. While that is far longer than it
> usually takes, it's basically acceptable IMV. It's not significantly
> longer than I'd expect heap_index_delete_tuples() to take on an
> average day with EBS (or other network-attached storage). But that's a
> process that happens all the time, with an exclusive buffer lock held
> on the leaf page throughout -- whereas this is only a shared buffer
> lock, and involves a query that's just absurd.
>
> Another factor that makes this seem acceptable is just how sensitive
> the test case is to everything going exactly and perfectly wrong, all
> at the same time, again and again. The test case uses a 32 column
> index (the INDEX_MAX_KEYS maximum), with a query that has 32 SAOP
> clauses (one per index column). If I reduce the number of SAOP clauses
> in the query to (say) 8, I still have a test case that's almost as
> silly as my original -- but now we only spend ~225 microseconds in
> each _bt_readpage call (i.e. we spend over 6x less time in each
> _bt_readpage call). (Admittedly, if I also make the CREATE INDEX use
> only 8 columns, we can fit more index tuples on one page, leaving us
> at ~800 microseconds.)

Quickly updating the table definition to use the various installed
'fr-%-x-icu' locales on text hash columns (instead of numeric), with a
different collation for each column, gets me to EXPLAIN (ANALYZE)
showing 2.07ms spent per buffer hit inside the index scan node, as
opposed to 1.76ms when using numeric. But, as you mention, the value
of this metric is probably not very high.


As for the patch itself, I'm probably about 50% through it now.
While reviewing, I noticed the following two user-visible items,
related to SAOP but not broken by or touched upon in this patch:

1. We don't seem to plan `column opr ALL (...)` as index conditions,
while this should be trivial to optimize for at least btree. Example:

SET enable_bitmapscan = OFF;
WITH a AS (select generate_series(1, 1000) a)
SELECT * FROM tenk1
WHERE thousand = ANY (array(table a))
  AND thousand < ALL (array(table a));

This will never return any rows, but it does hit 9990 buffers in the
new btree code, while I expected that to be 0 buffers based on the
query and index (that is, I expected to hit 0 buffers, until I
realized that we don't push ALL into index filters). I shall assume
ALL isn't used all that often (heh), but it sure feels like we're
missing out on performance here.

2. We also don't seem to support array keys for row compares, which
probably is an even more niche use case:

SELECT count(*)
FROM tenk1
WHERE (thousand, tenthous) = ANY (ARRAY[(1, 1), (1, 2), (2, 1)]);

This is no different from master, but it'd be nice if there were
support for arrays of row comparisons too, just so that composite
primary keys can also be looked up with SAOPs.


Kind regards,

Matthias van de Meent
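Matthias's first item -- that `col < ALL (...)` should be trivial for
btree to use as an index condition -- comes down to a simple reduction.
The following is a hypothetical standalone illustration (int keys only,
names like array_min/lt_all are invented here, not PostgreSQL code):

```c
#include <assert.h>

/*
 * Hypothetical illustration (not PostgreSQL code, int keys only): for a
 * btree-orderable type, "col < ALL (arr)" holds iff "col < min(arr)",
 * so the whole ALL clause could be reduced to one ordinary scalar
 * comparison that an index scan can use as a boundary condition.  The
 * analogous reductions hold for <=, >, and >= against min/max.
 */
static int
array_min(const int *arr, int n)
{
	int		min = arr[0];

	for (int i = 1; i < n; i++)
		if (arr[i] < min)
			min = arr[i];
	return min;
}

static int
lt_all(int col, const int *arr, int n)
{
	return col < array_min(arr, n);
}
```

In the example query above, this reduction would turn the `< ALL`
clause into `thousand < 1`, which (combined with the `= ANY` clause)
lets the scan prove immediately that no rows can match.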
My current plan is to
> > > commit the other nbtree patch first (the backwards scan "boundary
> > > cases" one from the ongoing CF) -- since I saw your review earlier
> > > today. I think that you should probably wait for this v6 before
> > > starting your review.
> >
> > Okay, thanks for the update, then I'll wait for v6 to be posted.
>
> On second thought, I'll just post v6 now (there won't be conflicts
> against the master branch once the other patch is committed anyway).

Thanks. Here's my review of the btree-related code:

> +++ b/src/backend/access/nbtree/nbtsearch.c
> @@ -1625,8 +1633,9 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
> * set flag to true if all required keys are satisfied and false
> * otherwise.
> */
> - (void) _bt_checkkeys(scan, itup, indnatts, dir,
> - &requiredMatchedByPrecheck, false);
> + _bt_checkkeys(scan, &pstate, itup, false, false);
> + requiredMatchedByPrecheck = pstate.continuescan;
> + pstate.continuescan = true; /* reset */

The comment above the updated section needs to be updated.

> @@ -1625,8 +1633,9 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
> * set flag to true if all required keys are satisfied and false
> * otherwise.
> */
> - (void) _bt_checkkeys(scan, itup, indnatts, dir,
> - &requiredMatchedByPrecheck, false);
> + _bt_checkkeys(scan, &pstate, itup, false, false);

This 'false' finaltup argument is surely wrong for the rightmost
page's rightmost tuple, no?

> +++ b/src/backend/access/nbtree/nbtutils.c
> @@ -357,6 +431,46 @@ _bt_preprocess_array_keys(IndexScanDesc scan)
> + /* We could pfree(elem_values) after, but not worth the cycles */
> + num_elems = _bt_merge_arrays(scan, cur,
> + (indoption[cur->sk_attno - 1] & INDOPTION_DESC) != 0,
> + prev->elem_values, prev->num_elems,
> + elem_values, num_elems);

This code can get hit several times when there are multiple = ANY
clauses, which may result in repeated leakage of these arrays during
this scan. I think cleaning up may well be worth the cycles when the
total size of the arrays is large enough.

> @@ -496,6 +627,48 @@ _bt_sort_array_elements(IndexScanDesc scan, ScanKey skey,
> _bt_compare_array_elements, &cxt);
> +_bt_merge_arrays(IndexScanDesc scan, ScanKey skey, bool reverse,
> + Datum *elems_orig, int nelems_orig,
> + Datum *elems_next, int nelems_next)
> [...]
> + /*
> + * Incrementally copy the original array into a temp buffer, skipping over
> + * any items that are missing from the "next" array
> + */

Given that we only keep the members that both arrays have in common,
the result array will be a strict subset of the original array. So, I
don't quite see why we need the temporary buffer here - we can reuse
the entries of the elems_orig array that we've already compared
against the elems_next array.

We may want to optimize this further by iterating over only the
smallest array: With the current code, [1, 2] + [1..1000] is faster
to merge than [1..1000] + [1000, 1001], because 2 * log(1000) is much
smaller than 1000 * log(2). In practice this may matter very little,
though.
An even better optimized version would do a merge join on the two
arrays, rather than loop + binary search.


> @@ -515,6 +688,161 @@ _bt_compare_array_elements(const void *a, const void *b, void *arg)
> [...]
> +_bt_binsrch_array_skey(FmgrInfo *orderproc,

Is there a reason for this complex initialization of high/low_elem,
rather than this easier to understand and more compact
initialization?:

+ low_elem = 0;
+ high_elem = array->num_elems - 1;
+ if (cur_elem_start)
+ {
+ if (ScanDirectionIsForward(dir))
+ low_elem = array->cur_elem;
+ else
+ high_elem = array->cur_elem;
+ }


> @@ -661,20 +1008,691 @@ _bt_restore_array_keys(IndexScanDesc scan)
> [...]
> + _bt_array_keys_remain(IndexScanDesc scan, ScanDirection dir)
> [...]
> + if (scan->parallel_scan != NULL)
> + _bt_parallel_done(scan);
> +
> + /*
> + * No more primitive index scans. Terminate the top-level scan.
> + */
> + return false;

I think the conditional _bt_parallel_done(scan) feels misplaced here,
as the comment immediately below indicates the scan is to be
terminated after that comment.
So, either move this _bt_parallel_done
call outside the function (which by name would imply it is read-only,
without side effects like this) or move it below the comment
"terminate the top-level scan".

> +_bt_advance_array_keys(IndexScanDesc scan, BTReadPageState *pstate,
> [...]
> + * Set up ORDER 3-way comparison function and array state
> [...]
> + * Optimization: Skip over non-required scan keys when we know that

These two sections should probably be swapped, as the skip makes the
setup useless.
Also, the comment here is wrong; the scan keys that are skipped are
'required', not 'non-required'.


> +++ b/src/test/regress/expected/join.out
> @@ -8620,10 +8620,9 @@ where j1.id1 % 1000 = 1 and j2.id1 % 1000 = 1 and j2.id1 >= any (array[1,5]);
> Merge Cond: (j1.id1 = j2.id1)
> Join Filter: (j2.id2 = j1.id2)
> -> Index Scan using j1_id1_idx on j1
> - -> Index Only Scan using j2_pkey on j2
> + -> Index Scan using j2_id1_idx on j2
> Index Cond: (id1 >= ANY ('{1,5}'::integer[]))
> - Filter: ((id1 % 1000) = 1)
> -(7 rows)
> +(6 rows)

I'm a bit surprised that we don't have the `id1 % 1000 = 1` filter
anymore. The output otherwise matches (quite possibly because the
other join conditions don't match) and I don't have time to
investigate the intricacies between IOS vs normal IS, but this feels
off.

----

As for the planner changes, I don't think I'm familiar enough with the
planner to make any authoritative comments on this. However, it does
look like you've changed the meaning of 'amsearcharray', and I'm not
sure it's OK to assume all indexes that support amsearcharray will
also support this new assumption of ordered retrieval of SAOPs.
For one, the pgroonga extension [0] does mark
amcanorder+amsearcharray.


Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

[0] https://github.com/pgroonga/pgroonga/blob/115414723c7eb8ce9eb667da98e008bd10fbae0a/src/pgroonga.c#L8782-L8788
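The merge-join alternative suggested above for _bt_merge_arrays can be
sketched in a few lines. This is only a minimal illustration with int
keys (the real function works on Datums through a 3-way ORDER proc);
the function name merge_intersect is invented here:

```c
#include <assert.h>

/*
 * Hypothetical sketch (int keys, not PostgreSQL code): intersect two
 * sorted, duplicate-free arrays with a single merge pass, writing the
 * result back into a[].  Because the intersection is a strict subset
 * of a[], no temp buffer is needed, and the cost is O(m + n)
 * comparisons rather than O(m log n) for loop + binary search.
 */
static int
merge_intersect(int *a, int na, const int *b, int nb)
{
	int		i = 0,
			j = 0,
			nout = 0;

	while (i < na && j < nb)
	{
		if (a[i] < b[j])
			i++;				/* a[i] missing from b: drop it */
		else if (a[i] > b[j])
			j++;				/* b[j] missing from a: skip it */
		else
		{
			a[nout++] = a[i];	/* common element: keep it */
			i++;
			j++;
		}
	}
	return nout;				/* new length of a[] */
}
```

Note that the merge pass is insensitive to which input is larger, which
sidesteps the [1, 2] + [1..1000] versus [1..1000] + [1000, 1001]
asymmetry described above.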
In other words, cases where we
didn't give up and start another primitive index scan, even though
(with a repro of the issue) it's obviously not sensible. An accidental
full index scan.

While I'm still not completely happy with the way that inequalities
are handled, things in this area are much improved in v7.

It should be noted that the patch isn't strictly guaranteed to always
read fewer index pages than master, for a given query plan and index.
This is by design. Though the patch comes close, it's not quite a
certainty. There are known cases where the patch reads the occasional
extra page (relative to what master would have done under the same
circumstances). These are cases where the implementation just cannot
know for sure whether the next/sibling leaf page has key space covered
by any of the scan's array keys (at least not in a way that seems
practical). The implementation has simple heuristics that infer (a
polite word for "make an educated guess") about what will be found on
the next page. Theoretically we could be more conservative in how we
go about this, but that seems like a bad idea to me. It's really easy
to find cases where the maximally conservative approach loses by a
lot, and really hard to show cases where it wins at all.

These heuristics are more or less a limited form of the heuristics
that skip scan would need. A *very* limited form. We're still
conservative. Here's how it works, at a high level: if the scan can
make it all the way to the end of the page without having to start a
new primitive index scan (before reaching the end), and then finds
that "finaltup" itself (which is usually the page high key) advances
the array keys, we speculate: we move on to the sibling page. It's
just about possible that we'll discover (once on the next page) that
finaltup actually advanced the array keys by so much (in one single
advancement step) that the current/new keys cover key space beyond the
sibling page we just arrived at. The sibling page access will have
been wasted (though I prefer to think of it as a cost of doing
business).

I go into a lot of detail on the trade-offs in this area in comments
at the end of the new _bt_checkkeys(), just after it calls
_bt_advance_array_keys(). Hopefully this is reasonably clear. It's
always much easier to understand these things when you've written lots
of test cases, though. So I wouldn't at all be surprised to hear that
my explanation needs more work. I suspect that I'm spending more time
on the topic than it actually warrants, but you have to spend a lot of
time on it for yourself to be able to see why that is.

> > +++ b/src/backend/access/nbtree/nbtsearch.c
> > @@ -1625,8 +1633,9 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
> > * set flag to true if all required keys are satisfied and false
> > * otherwise.
> > */
> > - (void) _bt_checkkeys(scan, itup, indnatts, dir,
> > - &requiredMatchedByPrecheck, false);
> > + _bt_checkkeys(scan, &pstate, itup, false, false);
> > + requiredMatchedByPrecheck = pstate.continuescan;
> > + pstate.continuescan = true; /* reset */
>
> The comment above the updated section needs to be updated.

Updated.

> > @@ -1625,8 +1633,9 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum)
> > * set flag to true if all required keys are satisfied and false
> > * otherwise.
> > */
> > - (void) _bt_checkkeys(scan, itup, indnatts, dir,
> > - &requiredMatchedByPrecheck, false);
> > + _bt_checkkeys(scan, &pstate, itup, false, false);
>
> This 'false' finaltup argument is surely wrong for the rightmost
> page's rightmost tuple, no?

Not in any practical sense, since finaltup means "the tuple that you
should use to decide whether to go to the next page or not", and a
rightmost page doesn't have a next page.

There are exactly two ways that the top-level scan can end (not to be
confused with the primitive scan), at least in v7. They are:

1. The state machine can exhaust the scan's array keys, ending the
top-level scan.

2. The scan can just run out of pages, without ever running out of
array keys (some array keys can sort higher than any real value from
the index). This is just like how an index scan ends when it lacks any
required scan keys to terminate the scan, and eventually runs out of
pages to scan (think of an index-only scan that performs a full scan
of the index, feeding into a group aggregate).

Note that it wouldn't be okay if the design relied on _bt_checkkeys
advancing and exhausting the array keys -- we really do need both 1
and 2 to deal with various edge cases. For example, there is no way
that we'll ever be able to call _bt_checkkeys with a completely empty
index. It simply doesn't have any tuples at all. In fact, it doesn't
even have any pages (apart from the metapage), so clearly we can't
expect any calls to _bt_readpage (much less _bt_checkkeys).

> > +++ b/src/backend/access/nbtree/nbtutils.c
> > @@ -357,6 +431,46 @@ _bt_preprocess_array_keys(IndexScanDesc scan)
> > + /* We could pfree(elem_values) after, but not worth the cycles */
> > + num_elems = _bt_merge_arrays(scan, cur,
> > + (indoption[cur->sk_attno - 1] & INDOPTION_DESC) != 0,
> > + prev->elem_values, prev->num_elems,
> > + elem_values, num_elems);
>
> This code can get hit several times when there are multiple = ANY
> clauses, which may result in repeated leakage of these arrays during
> this scan. I think cleaning up may well be worth the cycles when the
> total size of the arrays is large enough.

They won't leak because the memory is allocated in the same dedicated
memory context.

That said, I added a pfree(). It couldn't hurt.
> In practice this may matter very little,
> though.
> An even better optimized version would do a merge join on the two
> arrays, rather than loop + binary search.

v7 allocates the temp buffer using the size of whatever array is the
smaller of the two, just because it's an easy marginal improvement.

> > @@ -515,6 +688,161 @@ _bt_compare_array_elements(const void *a, const void *b, void *arg)
> > [...]
> > +_bt_binsrch_array_skey(FmgrInfo *orderproc,
>
> Is there a reason for this complex initialization of high/low_elem,
> rather than this easier to understand and more compact
> initialization?:
>
> + low_elem = 0;
> + high_elem = array->num_elems - 1;
> + if (cur_elem_start)
> + {
> + if (ScanDirectionIsForward(dir))
> + low_elem = array->cur_elem;
> + else
> + high_elem = array->cur_elem;
> + }

I agree that it's better your way. Done that way in v7.

> I think the conditional _bt_parallel_done(scan) feels misplaced here,
> as the comment immediately below indicates the scan is to be
> terminated after that comment. So, either move this _bt_parallel_done
> call outside the function (which by name would imply it is read-only,
> without side effects like this) or move it below the comment
> "terminate the top-level scan".

v7 moves the comment up, so that it's just before the _bt_parallel_done() call.

> > +_bt_advance_array_keys(IndexScanDesc scan, BTReadPageState *pstate,
> > [...]
> > + * Set up ORDER 3-way comparison function and array state
> > [...]
> > + * Optimization: Skip over non-required scan keys when we know that
>
> These two sections should probably be swapped, as the skip makes the
> setup useless.

Not quite: we need to increment arrayidx for later loop iterations/scan keys.

> Also, the comment here is wrong; the scan keys that are skipped are
> 'required', not 'non-required'.

Agreed. Fixed.

> > +++ b/src/test/regress/expected/join.out
> > @@ -8620,10 +8620,9 @@ where j1.id1 % 1000 = 1 and j2.id1 % 1000 = 1 and j2.id1 >= any (array[1,5]);
> > Merge Cond: (j1.id1 = j2.id1)
> > Join Filter: (j2.id2 = j1.id2)
> > -> Index Scan using j1_id1_idx on j1
> > - -> Index Only Scan using j2_pkey on j2
> > + -> Index Scan using j2_id1_idx on j2
> > Index Cond: (id1 >= ANY ('{1,5}'::integer[]))
> > - Filter: ((id1 % 1000) = 1)
> > -(7 rows)
> > +(6 rows)
>
> I'm a bit surprised that we don't have the `id1 % 1000 = 1` filter
> anymore. The output otherwise matches (quite possibly because the
> other join conditions don't match) and I don't have time to
> investigate the intricacies between IOS vs normal IS, but this feels
> off.

This happens because the new plan uses a completely different index --
which happens to be a partial index whose predicate exactly matches
the old plan's filter quals. That factor makes the filter quals
unnecessary. That's all this is.

> As for the planner changes, I don't think I'm familiar enough with the
> planner to make any authoritative comments on this. However, it does
> look like you've changed the meaning of 'amsearcharray', and I'm not
> sure it's OK to assume all indexes that support amsearcharray will
> also support this new assumption of ordered retrieval of SAOPs.
> For one, the pgroonga extension [0] does mark
> amcanorder+amsearcharray.

The changes that I've made to the planner are subtractive. We more or
less go back to how things were just after the initial work on nbtree
amsearcharray support. That work was (at least tacitly) assumed to
have no impact on ordered scans. Because why should it? What other
type of index clause has ever affected what seems like a rather
unrelated thing (namely the sort order of the scan)? The oversight was
understandable. The kinds of plans that master cannot produce output
for in standard index order are really silly plans, independent of
this issue; it makes zero sense to allow a non-required array scan key
to affect how or when we skip.

The code that I'm removing from the planner is code that quite
obviously assumes nbtree-like behavior. So I'm taking away code like
that, rather than adding new code like that. That said, I am really
surprised that any extension creates an index AM with amcanorder=true
(not to be confused with amcanorderbyop=true, which is less
surprising). That means that it promises the planner that it behaves
just like nbtree. To quote the docs, it must have "btree-compatible
strategy numbers for their [its] equality and ordering operators". Is
that really something that pgroonga even attempts? And if so, why?

I also find it bizarre that pgroonga's handler-stated capabilities
include "amcanunique=true". So pgroonga is a full text search engine,
but also supports unique indexes? I find that particularly hard to
believe, and suspect that the way that they set things up in the AM
handler just isn't very well thought out.

-- 
Peter Geoghegan
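The simplified low_elem/high_elem initialization adopted in v7 is
easier to see in a standalone form. The following is a hypothetical
sketch of the array-key binary search (int keys, forward scan
direction only; the function name binsrch_array_skey is invented here
and this is not the real _bt_binsrch_array_skey):

```c
#include <assert.h>

/*
 * Hypothetical sketch: find the index of the smallest array element
 * >= target, restricting the search to start at cur_elem.  A forward
 * scan's array keys only ever advance, so elements before cur_elem can
 * be excluded up front -- the "low_elem = array->cur_elem" case from
 * the simplified initialization above.  Returns nelems if every
 * remaining element is < target (array exhausted).
 */
static int
binsrch_array_skey(const int *elems, int nelems, int cur_elem, int target)
{
	int		low = cur_elem;		/* forward scan: never look behind cur_elem */
	int		high = nelems - 1;

	while (low <= high)
	{
		int		mid = low + (high - low) / 2;

		if (elems[mid] < target)
			low = mid + 1;
		else
			high = mid - 1;
	}
	return low;
}
```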
I
don't know if any such extensions exist, and I don't think we should
jump through any hoops to preserve backwards compatibility here, but
it probably deserves a notice in the release notes if nothing else.

- You use the term "primitive index scan" a lot, but it's not clear to
me what it means. Does one ScalarArrayOp turn into one "primitive
index scan"? Or does each element in the array turn into a separate
primitive index scan? Or something in between? Maybe add a new section
to the README explaining how that works.

- _bt_preprocess_array_keys() is called for each btrescan(). It performs
a lot of work like cmp function lookups and deconstructing and merging
the arrays, even if none of the SAOP keys change in the rescan. That
could make queries with nested loop joins like this slower than before:
"select * from generate_series(1, 50) g, tenk1 WHERE g = tenk1.unique1
and tenk1.two IN (1,2);".

- nbtutils.c is pretty large now. Perhaps create a new file
nbtpreprocesskeys.c or something?

- You and Matthias talked about an implicit state machine. I wonder if
this could be refactored to have a more explicit state machine. The
state transitions and interactions between _bt_checkkeys(),
_bt_advance_array_keys() and friends feel complicated.


And then some details:

> --- a/doc/src/sgml/monitoring.sgml
> +++ b/doc/src/sgml/monitoring.sgml
> @@ -4035,6 +4035,19 @@ description | Waiting for a newly initialized WAL file to reach durable storage
> </para>
> </note>
> 
> + <note>
> + <para>
> + Every time an index is searched, the index's
> + <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>
> + field is incremented. This usually happens once per index scan node
> + execution, but might take place several times during execution of a scan
> + that searches for multiple values together. Only queries that use certain
> + <acronym>SQL</acronym> constructs to search for rows matching any value
> + out of a list (or an array) of multiple scalar values are affected. See
> + <xref linkend="functions-comparisons"/> for details.
> + </para>
> + </note>
> +

Is this true even without this patch? Maybe commit this separately.

The "Only queries ..." sentence feels difficult. Maybe something like
"For example, queries using IN (...) or = ANY(...) constructs.".

> * _bt_preprocess_keys treats each primitive scan as an independent piece of
> * work. That structure pushes the responsibility for preprocessing that must
> * work "across array keys" onto us. This division of labor makes sense once
> * you consider that we're typically called no more than once per btrescan,
> * whereas _bt_preprocess_keys is always called once per primitive index scan.

"That structure ..." is a garden-path sentence. I kept parsing "that
must work" as one unit, the same way as "that structure", and it didn't
make sense. Took me many re-reads to parse it correctly. Now that I get
it, it doesn't bother me anymore, but maybe it could be rephrased.

Is there _any_ situation where _bt_preprocess_array_keys() is called
more than once per btrescan?

>     /*
>      * Look up the appropriate comparison operator in the opfamily.
>      *
>      * Note: it's possible that this would fail, if the opfamily is
>      * incomplete, but it seems quite unlikely that an opfamily would omit
>      * non-cross-type comparison operators for any datatype that it supports
>      * at all. ...
>      */

I agree that's unlikely. I cannot come up with an example where you
would have cross-type operators between A and B, but no same-type
operators between B and B. For any real-world opfamily, that would be an
omission you'd probably want to fix.

Still I wonder if we could easily fall back if it doesn't exist? And
maybe add a check in the 'opr_sanity' test for that.

In _bt_readpage():
>     /*
>      * Prechecking the page with scan keys required for direction scan. We
>      * check these keys with the last item on the page (according to our scan
>      * direction). If these keys are matched, we can skip checking them with
>      * every item on the page. Scan keys for our scan direction would
>      * necessarily match the previous items. Scan keys required for opposite
>      * direction scan are already matched by the _bt_first() call.
>      *
>      * With the forward scan, we do this check for the last item on the page
>      * instead of the high key. It's relatively likely that the most
>      * significant column in the high key will be different from the
>      * corresponding value from the last item on the page. So checking with
>      * the last item on the page would give a more precise answer.
>      *
>      * We skip this for the first page in the scan to evade the possible
>      * slowdown of point queries. Never apply the optimization with a scans
>      * that uses array keys, either, since that breaks certain assumptions.
>      * (Our search-type scan keys change whenever _bt_checkkeys advances the
>      * arrays, invalidating any precheck. Tracking all that would be tricky.)
>      */
>     if (!so->firstPage && !numArrayKeys && minoff < maxoff)
>     {

It's sad to disable this optimization completely for array keys. It's
actually a regression from current master, isn't it? There's no
fundamental reason we couldn't do it for array keys, so I think we should
do it.

_bt_checkkeys() is called in an assertion in _bt_readpage, but it has
the side-effect of advancing the array keys. Side-effects from an
assertion seem problematic.

Vague idea: refactor _bt_checkkeys() into something that doesn't have
side-effects, and have a separate function or an argument to
_bt_checkkeys() to advance to the next array key. The prechecking
optimization and the assertion could both use the side-effect-free
function.

-- 
Heikki Linnakangas
Neon (https://neon.tech)
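The split Heikki suggests -- a side-effect-free predicate plus an
explicit array-advance step -- can be sketched in miniature. This is a
hypothetical illustration only (int keys; the names ArrayKeyState,
check_tuple, and advance_to are invented here, not nbtree code):

```c
#include <assert.h>

/*
 * Hypothetical sketch of the suggested refactoring: check_tuple() is a
 * pure predicate that assertions and the precheck optimization could
 * call safely, while advance_to() is the only function that mutates
 * the array-key state.
 */
typedef struct ArrayKeyState
{
	const int  *elems;			/* sorted array keys */
	int			nelems;
	int			cur_elem;		/* only advance_to() ever moves this */
} ArrayKeyState;

/* Side-effect free: does tupval match the current array key? */
static int
check_tuple(const ArrayKeyState *key, int tupval)
{
	return key->elems[key->cur_elem] == tupval;
}

/*
 * Explicit state transition: advance cur_elem until the current
 * element is >= tupval.  Returns 0 once the array is exhausted,
 * signalling that the scan can end.
 */
static int
advance_to(ArrayKeyState *key, int tupval)
{
	while (key->cur_elem < key->nelems &&
		   key->elems[key->cur_elem] < tupval)
		key->cur_elem++;
	return key->cur_elem < key->nelems;
}
```

With this shape, calling check_tuple() twice (once from an Assert, once
from the real scan loop) cannot change what the scan does next.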
But
maybe it's fixable ...


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I consider both of these extremes to be fairly unrealistic.\nWith fewer array constants, the speed-ups you'll see in sympathetic\ncases are still very significant, but nothing like 4.8x. They're more\nlike the 30% numbers that you saw.\n\nAs you know, I'm not actually all that excited about cases like 2 --\nit's not where users are likely to benefit from the patch. The truly\ninteresting cases are those cases where we can completely avoid heap\naccesses in the first place (not just *repeat* accesses to the same\nindex pages), due to the patch's ability to consistently use index\nquals rather than filter quals. It's not that hard to show cases where\nthere are 100x+ fewer pages accessed -- often in cases that have very few\narray constants. It's just that these cases aren't that interesting\nfrom a performance validation point of view -- it's obvious that\nfilter quals are terrible.\n\nAnother thing that the patch does particularly well on is cases where\nthe array keys don't have any matches at all, but there is significant\nclustering (relatively common when multiple SAOPs are used as index\nquals, which becomes far more likely due to the planner changes). We\ndon't just skip over parts of the index that aren't relevant -- we\nalso skip over parts of the arrays that aren't relevant. Some of my\nadversarial test cases that take ~1 millisecond for the patch to\nexecute will practically take forever on master (I had to have my test\nsuite not run those tests against master). You just need lots of array\nkeys.\n\nWhat master does on those adversarial cases with billions of distinct\ncombinations of array keys (not that unlikely if there are 4 or 5\nSAOPs with mere hundreds or thousands of array keys each) is so\ninefficient that we might as well call it infinitely slower than the\npatch. This is interesting to me from a performance\nrobustness/stability point of view. 
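To make the array-skipping idea concrete, here is a hedged, self-contained sketch -- hypothetical names, plain ints standing in for Datums, not the actual nbtree code -- of how a scan with sorted array keys can binary search for the next array element that could possibly still match, instead of testing exhausted elements one at a time:

```c
#include <assert.h>

/*
 * Hypothetical illustration (not the real nbtree code): arr[] holds the
 * scan's sorted SAOP constants.  Given the key value of the index tuple
 * that the scan just landed on, return the index of the first array
 * element >= that value.  Elements before it cannot have any remaining
 * matches in a forward scan, so the scan can skip over them wholesale --
 * the "skip over parts of the arrays that aren't relevant" behavior.
 */
static int
next_possible_array_elem(const int *arr, int nelems, int tupval)
{
	int		lo = 0;
	int		hi = nelems;

	while (lo < hi)
	{
		int		mid = lo + (hi - lo) / 2;

		if (arr[mid] < tupval)
			lo = mid + 1;		/* arr[mid] already exhausted */
		else
			hi = mid;
	}
	return lo;					/* nelems means "all elements exhausted" */
}
```

Advancing the arrays this way (rather than restarting once per combination of array keys) is what keeps the worst case bounded by a single full index scan.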
The slowest kind of SAOP index\nscan with the patch becomes a full index scan -- just as it would be\nif we were using any type of non-SAOP qual before now. The worst case\nis a lot easier to reason about.\n\n> I'm not convinced this is a problem we have to solve. It's possible it\n> only affects cases that are implausible in practice (the script forces a\n> particular scan type, and maybe it would not be picked in practice). But\n> maybe it's fixable ...\n\nI would expect the patch to do quite well (relative to what is\nactually possible) on cases like the two extremes that I've focussed\non so far. It seems possible that it will do less well on cases that\nare somewhere in the middle (that also have lots of distinct values on\neach page).\n\nWe effectively do a linear search on a page that we know has at least\none more match (following a precheck that uses the high key). We hope\nthat the next match (for the next array value) closely follows an\ninitial match. But what if there are only 2 or 3 matches on each leaf\npage, that are spaced relatively far apart? You're going to have to\ngrovel through the whole page.\n\nIt's not obvious that that's a problem to be fixed -- we're still only\ndescending the index once and still only locking the leaf page once,\nso we'll probably still win relative to master. And it's not that easy\nto imagine beating a linear search -- it's not like there is just one\n\"next value\" to search for in these cases. But it's something that\ndeserves further consideration.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Nov 2023 09:19:21 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Nov 28, 2023 at 9:19 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I'm not convinced this is a problem we have to solve. 
It's possible it\n> > only affects cases that are implausible in practice (the script forces a\n> > particular scan type, and maybe it would not be picked in practice). But\n> > maybe it's fixable ...\n>\n> I would expect the patch to do quite well (relative to what is\n> actually possible) on cases like the two extremes that I've focussed\n> on so far. It seems possible that it will do less well on cases that\n> are somewhere in the middle (that also have lots of distinct values on\n> each page).\n\nActually, I think that it's more likely that the problems that you saw\nare related to low cardinality data, which seems like it might not be\na great fit for the heuristics that the patch uses to decide whether\nto continue the ongoing primitive index scan, or start a new one\ninstead. I'm referring to the heuristics I describe here:\n\nhttps://postgr.es/m/CAH2-WzmTHoCsOmSgLg=yyft9LoERtuCKXyG2GZn+28PzonFA_g@mail.gmail.com\n\nThe patch itself discusses these heuristics in a large comment block\nafter the point that _bt_checkkeys() calls _bt_advance_array_keys().\n\nI hardly paid any attention to low cardinality data in my performance\nvalidation work -- it was almost always indexes that had few or no\nduplicates (just pgbench_accounts if we're talking pure stress-tests),\njust because those are more complicated, and so seemed more important.\nI'm not quite prepared to say that there is definitely a problem here,\nright this minute, but if there was then it wouldn't be terribly\nsurprising (the problems are usually wherever it is that I didn't look\nfor them).\n\nAttached is a sample of my debug instrumentation for one such query,\nbased on running the test script that Tomas posted -- thanks for\nwriting this script, Tomas (I'll use it as the basis for some of my\nown performance validation work going forward). 
I don't mind sharing\nthe patch that outputs this stuff if anybody is interested (it's kind\nof a monstrosity, so I'm disinclined to post it with the patch until I\nhave a reason). Even without this instrumentation, you can get some\nidea of the kinds of issues I'm talking about just by viewing EXPLAIN\nANALYZE output for a bitmap index scan -- that breaks out the index\npage accesses separately, which is a number that we should expect to\nremain lower than what the master branch shows in approximately all\ncases.\n\nWhile I still think that we need heuristics that apply speculative\ncriteria to decide whether or not going to the next page directly\n(when we have a choice at all), that doesn't mean that the v7\nheuristics can't be improved on, with a little more thought. It's a\nbit tricky, since we're probably also benefiting from the same\nheuristics all the time -- probably even for this same test case. We\ndo lose against the master branch, on balance, and by enough to\nconcern me, though. (I don't want to promise that it'll never happen\nat all, but it should be very limited, which this wasn't.)\n\nI didn't bother to ascertain how much longer it takes to execute the\nquery, since that question seems rather beside the point. The\nimportant thing to me is whether or not this behavior actually makes\nsense, all things considered, and what exactly can be done about it if\nit doesn't make sense. I will need to think about this some more. 
This\nis just a status update.\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Tue, 28 Nov 2023 15:52:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Nov 28, 2023 at 3:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> While I still think that we need heuristics that apply speculative\n> criteria to decide whether or not going to the next page directly\n> (when we have a choice at all), that doesn't mean that the v7\n> heuristics can't be improved on, with a little more thought. It's a\n> bit tricky, since we're probably also benefiting from the same\n> heuristics all the time -- probably even for this same test case.\n\nCorrection: this particular test case happens to be one where the\noptimal strategy is to do *exactly* what the master branch does\ncurrently. The master branch is unbeatable, so the only reasonable\ngoal for the patch is to not lose (or to lose by no more than a\ncompletely negligible amount).\n\nI'm now prepared to say that this behavior is not okay -- I definitely\nneed to fix this. It's a bug.\n\nBecause each distinct value never fits on one leaf page (it's more\nlike 1.5 - 2 pages, even though we're deduplicating heavily), and\nbecause Postgres 12 optimizations are so effective with low\ncardinality/posting-list-heavy indexes such as this, we're bound to\nlose quite often. The only reason it doesn't happen _every single\ntime_ we descend the index is because the test script uses CREATE\nINDEX, rather than using retail inserts (I tend to prefer the latter\nfor this sort of analysis). 
Since nbtsort.c isn't as clever/aggressive\nabout suffix truncation as the nbtsplitloc.c split strategies would\nhave been, had we used them (had there been retail inserts), many\nindividual leaf pages are left with high keys that aren't particularly\ngood targets for the high key precheck optimization (see Postgres 12\ncommit 29b64d1d).\n\nIf I wanted to produce a truly adversarial case for this issue (which\nthis is already close to), I'd go with the following:\n\n1. Retail inserts that leave each leaf page full of one single value,\nwhich will allow each high key to still make a \"clean break\" from the\nright sibling page -- it'll have the right sibling's value. Maybe\ninsert 1200 - 1300 tuples per distinct index value for this.\n\nIn other words, bulk loading that results in an index that never has\nto append a heap TID tiebreaker during suffix truncation, but comes\nvery close to needing to. Bulk loading where nbtsplitloc.c needs to\nuse SPLIT_MANY_DUPLICATES all the time, but never quite gets to the\npoint of needing a SPLIT_SINGLE_VALUE split.\n\n2. A SAOP query with an array with every second value in the index as\nan element. Something like \"WHERE arr in (2, 4, 6, 8, ...)\".\n\nThe patch will read every single leaf page, whereas master will\n*reliably* only read every second leaf page. I didn't need to \"trick\"\nthe patch in a contrived sort of way to get this bad outcome -- this\nscenario is fairly realistic. So this behavior is definitely not\nsomething that I'm prepared to defend. As I said, it's a bug.\n\nIt'll be fixed in the next revision.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 28 Nov 2023 16:57:52 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Nov 27, 2023 at 5:39 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> - +1 on the general idea. 
Hard to see any downsides if implemented right.\n\nGlad you think so. The \"no possible downside\" perspective is one that\nthe planner sort of relies on, so this isn't just a nice-to-have -- it\nmight actually be a condition of committing the patch. It's important\nthat the planner can be very aggressive about using SAOP index quals,\nwithout us suffering any real downside at execution time.\n\n> - This changes the meaning of amsearcharray==true to mean that the\n> ordering is preserved with ScalarArrayOps, right? You change B-tree to\n> make that true, but what about any out-of-tree index AM extensions? I\n> don't know if any such extensions exist, and I don't think we should\n> jump through any hoops to preserve backwards compatibility here, but\n> probably deserves a notice in the release notes if nothing else.\n\nMy interpretation is that the planner changes affect amcanorder +\namsearcharray index AMs, but have no impact on mere amsearcharray\nindex AMs. If anything this is a step *away* from knowing about nbtree\nimplementation details in the planner (though the planner's definition\nof amcanorder is very close to the behavior from nbtree, down to\nthings like knowing all about nbtree strategy numbers). The planner\nchanges from the patch are all subtractive -- I'm removing kludges\nthat were added by bug fix commits. Things that weren't in the\noriginal feature commit at all.\n\nI used the term \"my interpretation\" here because it seems hard to\nthink of this in abstract terms, and to write a compatibility note for\nthis imaginary audience. I'm happy to go along with whatever you want,\nthough. Perhaps you can suggest a wording for this?\n\n> - You use the term \"primitive index scan\" a lot, but it's not clear to\n> me what it means. Does one ScalarArrayOps turn into one \"primitive index\n> scan\"? Or each element in the array turns into a separate primitive\n> index scan? Or something in between? 
Maybe add a new section to the\n> README explain how that works.\n\nThe term primitive index scan refers to the thing that happens each\ntime _bt_first() is called -- with and without the patch. In other\nwords, it's what happens when pg_stat_all_indexes.idx_scan is\nincremented.\n\nYou could argue that that's not quite the right thing to be focussing\non, with this new design. But it has precedent going for it. As I\nsaid, it's the thing that pg_stat_all_indexes.idx_scan counts, which\nis a pretty exact and tangible thing. So it's consistent with\nhistorical practice, but also with what other index AMs do when\nexecuting ScalarArrayOps non-natively.\n\n> - _bt_preprocess_array_keys() is called for each btrescan(). It performs\n> a lot of work like cmp function lookups and deconstructing and merging\n> the arrays, even if none of the SAOP keys change in the rescan. That\n> could make queries with nested loop joins like this slower than before:\n> \"select * from generate_series(1, 50) g, tenk1 WHERE g = tenk1.unique1\n> and tenk1.two IN (1,2);\".
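For reference, the per-btrescan array work being described amounts to roughly the following -- a simplified sketch with ints and qsort() standing in for Datums and opfamily-supplied comparators; the function name is hypothetical, not the actual code:

```c
#include <assert.h>
#include <stdlib.h>

/* Standard three-way int comparator for qsort() */
static int
cmp_int(const void *a, const void *b)
{
	int		av = *(const int *) a;
	int		bv = *(const int *) b;

	return (av > bv) - (av < bv);
}

/*
 * Hedged sketch of SAOP array preprocessing: sort the array constants
 * and eliminate duplicates, in place.  Returns the deduplicated element
 * count.  The real _bt_preprocess_array_keys must also look up the
 * comparison support function per attribute, which is part of the
 * per-rescan overhead the quoted text worries about.
 */
static int
preprocess_array_keys(int *arr, int nelems)
{
	int		nunique = 0;

	if (nelems == 0)
		return 0;

	qsort(arr, nelems, sizeof(int), cmp_int);
	for (int i = 1; i < nelems; i++)
	{
		if (arr[i] != arr[nunique])
			arr[++nunique] = arr[i];
	}
	return nunique + 1;
}
```

Doing this (plus the function lookups) on every rescan of a nested loop inner scan is the overhead at issue; caching the results when the SAOP keys are unchanged would avoid repeating it.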
That's the main problem that the patch\nhas, by far. It used to be even more complicated, but it's hard to see\na way to make it a lot simpler at this point. If you can think of a\nway to simplify it then I'll definitely give it a go.\n\nCan you elaborate on \"more explicit state machine\"? That seems like it\ncould have value by adding more invariants, and making things a bit\nmore explicit in one or two areas. It could also help us to verify\nthat they hold from assertions. But that isn't really the same thing\nas simplification. I wouldn't use that word, at least.\n\n> > + <note>\n> > + <para>\n> > + Every time an index is searched, the index's\n> > + <structname>pg_stat_all_indexes</structname>.<structfield>idx_scan</structfield>\n> > + field is incremented. This usually happens once per index scan node\n> > + execution, but might take place several times during execution of a scan\n> > + that searches for multiple values together. Only queries that use certain\n> > + <acronym>SQL</acronym> constructs to search for rows matching any value\n> > + out of a list (or an array) of multiple scalar values are affected. See\n> > + <xref linkend=\"functions-comparisons\"/> for details.\n> > + </para>\n> > + </note>\n> > +\n>\n> Is this true even without this patch? Maybe commit this separately.\n\nYes, it is. The patch doesn't actually change anything in this area.\nHowever, something in this area is new: it's a bit weird (but still\nperfectly consistent and logical) that the count shown by\npg_stat_all_indexes.idx_scan will now show a value that is often\ninfluenced by low-level implementation details now. Things that are\nfairly far removed from the SQL query now affect that count -- that\npart is new. That's what I had in mind when I wrote this, FWIW.\n\n> The \"Only queries ...\" sentence feels difficult. Maybe something like\n> \"For example, queries using IN (...) or = ANY(...) 
constructs.\".\n\nI'll get back to you on this.\n\n> > * _bt_preprocess_keys treats each primitive scan as an independent piece of\n> > * work. That structure pushes the responsibility for preprocessing that must\n> > * work \"across array keys\" onto us. This division of labor makes sense once\n> > * you consider that we're typically called no more than once per btrescan,\n> > * whereas _bt_preprocess_keys is always called once per primitive index scan.\n>\n> \"That structure ...\" is a garden-path sentence. I kept parsing \"that\n> must work\" as one unit, the same way as \"that structure\", and it didn't\n> make sense. Took me many re-reads to parse it correctly. Now that I get\n> it, it doesn't bother me anymore, but maybe it could be rephrased.\n\nI'll do that ahead of the next revision.\n\n> Is there _any_ situation where _bt_preprocess_array_keys() is called\n> more than once per btrescan?\n\nNo. Note that we don't know the scan direction within\n_bt_preprocess_array_keys(). We need a separate function to set up the\narray keys to their initial positions (this is nothing new).\n\n> > /*\n> > * Look up the appropriate comparison operator in the opfamily.\n> > *\n> > * Note: it's possible that this would fail, if the opfamily is\n> > * incomplete, but it seems quite unlikely that an opfamily would omit\n> > * non-cross-type comparison operators for any datatype that it supports\n> > * at all. ...\n> > */\n>\n> I agree that's unlikely. I cannot come up with an example where you\n> would have cross-type operators between A and B, but no same-type\n> operators between B and B. For any real-world opfamily, that would be an\n> omission you'd probably want to fix.\n>\n> Still I wonder if we could easily fall back if it doesn't exist? 
And\n> maybe add a check in the 'opr_sanity' test for that.\n\nI'll see about an opr_sanity test.\n\n> In _bt_readpage():\n\n> > if (!so->firstPage && !numArrayKeys && minoff < maxoff)\n> > {\n>\n> It's sad to disable this optimization completely for array keys. It's\n> actually a regression from current master, isn't it? There's no\n> fundamental reason we couldn't do it for array keys so I think we should\n> do it.\n\nI'd say whether or not there's any kind of regression in this area is\nquite ambiguous, though in a way that isn't really worth discussing\nnow. If it makes sense to extend something like this optimization to\narray keys (or to add a roughly equivalent optimization), then we\nshould do it. Otherwise we shouldn't.\n\nNote that the patch actually disables two distinct and independent\noptimizations when the scan has array keys. Both of these were added\nby recent commit e0b1ee17, but they are still independent things. They\nare:\n\n1. This skipping thing inside _bt_readpage, which you've highlighted.\n\nThis is only applied on the second or subsequent leaf page read by the\nscan. Right now, in the case of a scan with array keys, that means the\nsecond or subsequent page from the current primitive index scan --\nwhich doesn't seem particularly principled to me.\n\nI'd need to invent a heuristic that works with my design to adapt the\noptimization. Plus I'd need to be able to invalidate the precheck\nwhenever the array keys advanced. And I'd probably need a way of\nguessing whether or not it's likely that the arrays will advance,\nahead of time, so that the precheck doesn't almost always go to waste,\nin a way that just doesn't make sense.\n\nNote that all required scan keys are relevant here. I like to think of\nplain required equality strategy scan keys without any array as\n\"degenerate single value arrays\". Something similar can be said of\ninequality strategy required scan keys (those required in the\n*current* scan direction), too. 
So it's not as if I can \"just do the\nprecheck stuff for the non-array scan keys\". All required scan keys\nare virtually the same thing as required array-type scan keys -- they\ncan trigger \"roll over\", affecting array key advancement for the scan\nkeys that are associated with arrays.\n\n2. The optimization that has _bt_checkkeys skip non-required scan keys\nthat are *only* required in the direction *opposite* the current scan\ndirection -- this can work even without any precheck from\n_bt_readpage.\n\nNote that this second optimization relies on various behaviors in\n_bt_first() that make it impossible for _bt_checkkeys() to ever see a\ntuple that could fail to satisfy such a scan key -- we must always\nhave passed over non-matching tuples, thanks to _bt_first(). That\nprevents my patch with a problem: the logic for determining whether or\nnot we need a new primitive index scan only promises to never require\nthe scan to grovel through many leaf pages that _bt_first() could and\nshould just skip over instead. This new logic makes no promises about\nskipping over small numbers of tuples. So it's possible that\n_bt_checkkeys() will see a handful of tuples \"after the end of the\n_bt_first-wise primitive index scan\", but \"before the _bt_first-wise\nstart of the next would-be primitive index scan\".\n\nNote that this stuff matters even without bringing optimization 2 into\nit. There are similar considerations for required equality strategy\nscan keys, which (by definition) must be required in both scan\ndirections. The new mechanism must never act as if it's past the end\nof matches in the current scan direction, when in fact it's really\nbefore the beginning of matches (that would lead to totally ignoring a\ngroup of equal matches). 
The existing _bt_checkkeys() logic can't\nreally tell the difference on its own, since it only has an = operator\nto work with (well, I guess it knows about this context, since there\nis a comment about the dependency on _bt_first behaviors in\n_bt_checkkeys on HEAD -- very old comments).\n\n> _bt_checkkeys() is called in an assertion in _bt_readpage, but it has\n> the side-effect of advancing the array keys. Side-effects from an\n> assertion seems problematic.\n\nI agree that that's a concern, but just to be clear: there are no\nside-effects presently. You can't mix the array stuff with the\noptimization stuff. We won't actually call _bt_checkkeys() in an\nassertion when it can cause side-effects.\n\nAssuming that we ultimately conclude that the optimizations *aren't*\nworth preserving in any form, it might still be worth making it\nobvious that the assertions have no side-effects. But that question is\nunsettled right now.\n\nThanks for the review!\n\nI'll try to get the next revision out soon. It'll also have bug fixes\nfor mark + restore and for a similar issue seen when the scan changes\ndirection in just the wrong way. (In short, the array key state\nmachine can be confused about scan direction in certain corner cases.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Dec 2023 19:25:55 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Dec 4, 2023 at 7:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> 2. 
The optimization that has _bt_checkkeys skip non-required scan keys\n> that are *only* required in the direction *opposite* the current scan\n> direction -- this can work even without any precheck from\n> _bt_readpage.\n>\n> Note that this second optimization relies on various behaviors in\n> _bt_first() that make it impossible for _bt_checkkeys() to ever see a\n> tuple that could fail to satisfy such a scan key -- we must always\n> have passed over non-matching tuples, thanks to _bt_first(). That\n> prevents my patch with a problem: the logic for determining whether or\n> not we need a new primitive index scan only promises to never require\n> the scan to grovel through many leaf pages that _bt_first() could and\n> should just skip over instead. This new logic makes no promises about\n> skipping over small numbers of tuples. So it's possible that\n> _bt_checkkeys() will see a handful of tuples \"after the end of the\n> _bt_first-wise primitive index scan\", but \"before the _bt_first-wise\n> start of the next would-be primitive index scan\".\n\nBTW, I have my doubts about this actually being correct without the\npatch. The following comment block appears above _bt_preprocess_keys:\n\n * Note that one reason we need direction-sensitive required-key flags is\n * precisely that we may not be able to eliminate redundant keys. Suppose\n * we have \"x > 4::int AND x > 10::bigint\", and we are unable to determine\n * which key is more restrictive for lack of a suitable cross-type operator.\n * _bt_first will arbitrarily pick one of the keys to do the initial\n * positioning with. If it picks x > 4, then the x > 10 condition will fail\n * until we reach index entries > 10; but we can't stop the scan just because\n * x > 10 is failing. On the other hand, if we are scanning backwards, then\n * failure of either key is indeed enough to stop the scan. 
(In general, when\n * inequality keys are present, the initial-positioning code only promises to\n * position before the first possible match, not exactly at the first match,\n * for a forward scan; or after the last match for a backward scan.)\n\nAs I understand it, this might still be okay, because the optimization\nin question from Alexander's commit e0b1ee17 (what I've called\noptimization 2) is careful about NULLs, which were the one case that\ndefinitely had problems. Note that IS NOT NULL works kind of like\nWHERE foo < NULL here (see old bug fix commit 882368e8, \"Fix btree\nstop-at-nulls logic properly\", for more context on this NULLs\nbehavior).\n\nIn any case, my patch isn't compatible with \"optimization 2\" (as in my\ntests break in a rather obvious way) due to a behavior that these old\ncomments claim is normal within any scan (or perhaps normal in any\nscan with scan keys that couldn't be deemed redundant due to a lack of\ncross-type support in the opfamily).\n\nSomething has to be wrong here -- could just be the comment, I\nsuppose. But I find it easy to believe that Alexander's commit\ne0b1ee17 might not have been properly tested with opfamilies that lack\na suitable cross-type operator. That's a pretty niche thing. (My patch\ndoesn't need that niche thing to be present to easily break when\ncombined with \"optimization 2\", which could hint at an existing and\nfar more subtle problem.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 4 Dec 2023 21:01:37 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Nov 20, 2023 at 6:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It should be noted that the patch isn't strictly guaranteed to always\n> read fewer index pages than master, for a given query plan and index.\n> This is by design. 
Though the patch comes close, it's not quite a\n> certainty. There are known cases where the patch reads the occasional\n> extra page (relative to what master would have done under the same\n> circumstances). These are cases where the implementation just cannot\n> know for sure whether the next/sibling leaf page has key space covered\n> by any of the scan's array keys (at least not in a way that seems\n> practical). The implementation has simple heuristics that infer (a\n> polite word for \"make an educated guess\") about what will be found on\n> the next page. Theoretically we could be more conservative in how we\n> go about this, but that seems like a bad idea to me. It's really easy\n> to find cases where the maximally conservative approach loses by a\n> lot, and really hard to show cases where it wins at all.\n\nAttached is v8, which pretty much rips all of this stuff out.\n\nI definitely had a point when I said that it made sense to be\noptimistic about finding matches on the next page in respect of any\ntruncated -inf attributes in high keys, though. And so we still do\nthat much in v8. But, there is no reason why we need to go any further\nthan that -- there is no reason why we should *also* be optimistic\nabout *untruncated* high key/finaltup attributes that *aren't* exact\nmatches for any of the scan's required array keys finding exact\nmatches once we move onto the next sibling page.\n\nI reached this conclusion when working on a fix for the low\ncardinality index regression that Tomas' tests brought to my attention\n[1]. I started out with the intention of just fixing that one case, in\na very targeted way, but quickly realized that it made almost no sense\nto just limit myself to the low cardinality cases. Even with Tomas'\nproblematic low cardinality test cases, I saw untruncated high key\nattributes that were \"close by to matching tuples\" -- they just\nweren't close enough (i.e. exactly on the next leaf page) for us to\nactually win (so v7 lost). 
Being almost correct again and again, but
still losing again and again is a good sign that certain basic
assumptions were faulty (at least if it's realistic, which it was in
this instance).

To be clear, and to repeat, even in v8 we'll still "make guesses"
about -inf truncated attributes. But it's a much more limited form of
guessing. If we didn't at least do the -inf thing, then backwards
scans would weirdly work better than forward scans in some cases -- a
particular concern with queries that have index quals for each of
multiple columns. I don't think that this remaining "speculative"
behavior needs to be discussed at very much length in code comments,
though. That's why v8 is a great deal simpler than v7 was here. No
more huge comment block at the end of the new _bt_checkkeys.

Notable stuff that *hasn't* changed from v7:

I'm posting this v8 having not yet worked through all of Heikki's
feedback. In particular, v8 doesn't deal with the relatively hard
question of what to do about the optimizations added by Alexander
Korotkov's commit e0b1ee17 (should I keep them disabled, selectively
re-enable one or both optimizations, or something else?). This is
partly due to at least one of the optimizations having problems of
its own that are still outstanding [2]. I also wanted to get a
revision out before travelling to Prague for PGConf.EU, which will be
followed by other Christmas travel. That's likely to keep me away from
the patch for weeks (that, plus I'm moving to NYC in early January).
So I just ran out of time to do absolutely everything.

Other notable changes:

* Bug fixes for cases where the array state machine gets confused by a
change in the scan direction, plus similar cases involving mark and
restore processing.

I'm not entirely happy with my approach here (mostly referring to the
new code in _bt_steppage). Feels like it needs to be a little
better integrated with mark/restore processing.

No doubt Heikki will have his own doubts about this. I've included my
test cases for the issues in this area. The problems are really hard
to reproduce, and writing these tests took a surprisingly large amount
of effort. The tests might not be suitable for commit, but you really
need to see the test cases to be able to review the code efficiently.
It's just fiddly.

* I've managed to make the array state machine just a little more
streamlined compared to v7.

Minor code polishing, not really worth describing in detail.

* Addressed concerns about incomplete opfamilies not working with the
patch by updating the error message within
_bt_preprocess_array_keys/_bt_sort_array_cmp_setup. It now exactly
matches the similar one in _bt_first.

I don't think that we need any new opr_sanity tests, since we already
have one for this. Making the error message match the one in _bt_first
ensures that anybody that runs into a problem here will see the same
error message that they'd have seen on earlier versions, anyway.

It's a more useful error compared to the one from v7 (in that it names
the index and its attribute directly). Plus it's good to be
consistent.

I don't see any potential for the underlying _bt_sort_array_cmp_setup
behavior to be seen as a regression, in terms of our ability to cope
with incomplete opfamilies (compared to earlier Postgres versions).
Opfamilies that lack a cross-type ORDER proc mixed with queries that
use the corresponding cross-type = operator were always very dicey.
That situation isn't meaningfully different with the patch.

(Actually, this isn't 100% true in the case of queries + indexes with
non-required arrays specifically -- they'll need a 3-way comparator in
_bt_preprocess_array_keys/_bt_sort_array_cmp_setup, and yet *won't*
need one moments later, in _bt_first. This is because non-required
scan keys don't end up in _bt_first's insertion scan key at all, in
general. This distinction just seems pedantic, though. We're talking
about a case where things accidentally failed to fail in previous
versions, for some queries but not most queries. Now it'll fail in
exactly the same way in slightly more cases. In reality, affected
opfamilies are practically non-existent, so this is a hypothetical
upon a hypothetical.)

[1] https://postgr.es/m/CAH2-WzmtV7XEWxf_rP1pw=vyDjGLi__zGOy6Me5MovR3e1kfdg@mail.gmail.com
[2] https://postgr.es/m/CAH2-Wzn0LeLcb1PdBnK0xisz8NpHkxRrMr3NWJ+KOK-WZ+QtTQ@mail.gmail.com
--
Peter Geoghegan
> No real changes here compared to v8.

I found 2 major issues: one correctness issue in the arraykey
processing code of _bt_preprocess_array_keys, and one assertion error
in _bt_advance_array_keys; both discussed in the relevant sections
below.

> Subject: [PATCH v9] Enhance nbtree ScalarArrayOp execution.
> [...]
> Bugfix commit 807a40c5 taught the planner to avoid generating unsafe
> path keys: path keys on a multicolumn index path, with a SAOP clause on
> any attribute beyond the first/most significant attribute. These cases
> are now all safe, so we go back to generating path keys without regard
> for the presence of SAOP clauses (just like with any other clause type).
> Also undo changes from follow-up bugfix commit a4523c5a, which taught
> the planner to produce alternative index paths without any low-order
> ScalarArrayOpExpr quals (making the SAOP quals into filter quals).
> We'll no longer generate these alternative paths, which can no longer
> offer any advantage over the index qual paths that we do still generate.

Can you pull these planner changes into their own commit(s)?
As mentioned upthread, it's a significant change in behavior that
should have separate consideration and reference in the commit log. I
really don't think it should be buried in the 5th paragraph of an
"Enhance nbtree ScalarArrayOp execution" commit. Additionally, the
changes of btree are arguably independent of the planner changes, as
the btree changes improve performance even if we ignore that it
implements strict result ordering.

An independent thought when reviewing the finaltup / HIKEY scan
optimization parts of this patch:
The 'use highkey to check for next page's data range' optimization is
useful, but can currently only be used for scans that go to the right.
Could we use a similar optimization in the inverse direction if we
marked the right page of a split with "the left page has a HIKEY based
on this page's (un)truncated leftmost value" or "left page's HIKEY was
truncated to N key attributes"? It'd give one bit of information,
specifically that it does (or does not) share some (or all) key
attributes with this page's minimum value, which allows for earlier
scan termination in boundary cases.

> +++ b/src/include/access/nbtree.h
> +#define SK_BT_RDDNARRAY 0x00040000 /* redundant in array preprocessing */

I think "made redundant" is more appropriate than just "redundant";
the array key is not itself redundant in array preprocessing (indeed
we actually work hard on that key during preprocessing to allow us to
mark it redundant)

> +++ b/src/backend/access/nbtree/nbtsearch.c
> * We skip this for the first page in the scan to evade the possible
> - * slowdown of the point queries.
> + * slowdown of point queries. Never apply the optimization with a scan
> + * that uses array keys, either, since that breaks certain assumptions.
> + * (Our search-type scan keys change whenever _bt_checkkeys advances the
> + * arrays, invalidating any precheck. Tracking all that would be tricky.)

I think this was mentioned before, but can't we have an argument or
version of _bt_checkkeys that makes it not advance the array keys, so
that we can keep this optimization? Or, isn't that now
_bt_check_compare?
For an orderlines table with 1000s of lines per order, we would
greatly benefit from a query `order_id = ANY ('{1, 3, 5}')` that
handles scan keys more efficiently; the optimization which is being
disabled here.

> +++ b/src/backend/access/nbtree/nbtutils.c
> _bt_preprocess_array_keys(IndexScanDesc scan)
The _bt_preprocess_array_keys code is now broken due to type
confusion, as it assumes there is only one array subtype being
compared per attribute in so.orderProcs. Reproducer:
CREATE TABLE test AS
SELECT generate_series(1, 10000, 1::bigint) num;
CREATE INDEX ON test (num); /* bigint typed */

SELECT num FROM test
WHERE num = ANY ('{1}'::smallint[])
 AND num = ANY ('{1}'::int[]) /* ensure matching
lastEqualityArrayAtt, lastOrderProc for next qual */
 AND num = ANY ('{65537}'::int[]); /* qual is broken due to
application of smallint compare operator on int values that do equal
mod 2^16, but do not equal in their own type */
 num
-----
 1

I'm also concerned about the implications of this in
_bt_binsrch_array_skey, as this too assumes the same compare operator
is always used for all array operations on each attribute. We may need
one so->orderProcs entry for each array key, but at least one per
sk_subtype of each array key.

I also notice that the merging of values doesn't seem to be applied
optimally with mixed typed array operations: num = int[] AND num =
bigint[] AND num = int[] doesn't seem to merge the first and last
array ops.
I'm also concerned about being (un)able to merge

> +/*
> + * _bt_merge_arrays() -- merge together duplicate array keys
> + *
> + * Both scan keys have array elements that have already been sorted and
> + * deduplicated.
> + */

As I mentioned upthread, I find this function to be very wasteful, as
it uses N binary searches to merge join two already sorted arrays,
resulting in an O(n log(m)) complexity, whereas a normal merge join
should be O(n + m) once the input datasets are sorted.
Please fix this, as it shows up in profiling of large array merges.
Additionally, as it merges two arrays of unique items into one,
storing only matching entries, I feel that it is quite wasteful to do
this additional allocation here. Why not reuse the original allocation
immediately?

> +_bt_tuple_before_array_skeys(IndexScanDesc scan, BTReadPageState *pstate,
> + IndexTuple tuple, int sktrig, bool validtrig)

I don't quite understand what the 'validtrig' argument is used for.
There is an assertion that it is false under some conditions in this
code, but it's not at all clear to me why that would have to be the
case - it is called with `true` in one of the three call sites. Could
the meaning of this be clarified?

I also feel that this patch includes several optimizations such as
this sktrig argument which aren't easy to understand. Could you pull
that into a separately reviewable patch?

Additionally, could you try to create a single point of entry for the
array key stuff that covers the new systems? I've been trying to wrap
my head around this, and it's taking a lot of effort.

> _bt_advance_array_keys

Thinking about the implementation here:
We require transitivity for btree opclasses, where A < B implies NOT A
= B, etc. Does this also carry over into cross-type operators? E.g. a
type 'truncatedint4' that compares only the highest 16 bits of an
integer would be strictly sorted, and could compare 0::truncatedint4 =
0::int4 as true, as well as 0::truncatedint4 = 2::int4, while 0::int4
= 2::int4 is false.
Would it be valid to add support methods for truncatedint4 to an int4
btree opclass, or is transitivity also required for all operations?
i.e. all values that one operator class considers unique within an
opfamily must be considered unique for all additional operators in the
opfamily, or is that not required?
If not, then that would pose problems for this patch, as the ordering
of A = ANY ('{1, 2, 3}'::int4[]) AND A = ANY
('{0,65536}'::truncatedint4[]) could potentially skip results.

I'm also no fan of the (tail) recursion. I would agree that this is
unlikely to consume a lot of stack, but it does consume stack space
nonetheless, and I'd prefer if it was not done this way.

I notice an assertion error here:
> + Assert(cur->sk_strategy != BTEqualStrategyNumber);
> + Assert(all_required_sk_satisfied);
> + Assert(!foundRequiredOppositeDirOnly);
> +
> + foundRequiredOppositeDirOnly = true;

This assertion can be hit with the following test case:

CREATE TABLE test AS
SELECT i a, i b, i c FROM generate_series(1, 1000) i;
CREATE INDEX ON test(a, b, c); ANALYZE;
SELECT count(*) FROM test
WHERE a = ANY ('{1,2,3}') AND b > 1 AND c > 1
AND b = ANY ('{1,2,3}');

> +_bt_update_keys_with_arraykeys(IndexScanDesc scan)

I keep getting confused by the mixing of integer increments and
pointer increments. Could you explain why in this code you chose to
increment a pointer for "ScanKey cur", while using array indexing for
other fields? It feels very arbitrary to me, and that makes the code
difficult to follow.

> +++ b/src/test/regress/sql/btree_index.sql
> +-- Add tests to give coverage of various subtle issues.
> +--
> +-- XXX This may not be suitable for commit, due to taking up too many cycles.
> +--
> +-- Here we don't remember the scan's array keys before processing a page, only
> +-- after processing a page (which is implicit, it's just the scan's current
> +-- keys). So when we move the scan backwards we think that the top-level scan
> +-- should terminate, when in reality it should jump backwards to the leaf page
> +-- that we last visited.

I notice this adds a complex test case that outputs many rows. Can we
do with fewer rows if we build the index after data insertion, and with
a lower (non-default) fillfactor?

Note: I did not yet do any in-depth review of the planner changes in
indxpath.c/selfuncs.c.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)
You could say the same thing about practically
any work that changes the planner. They're "buried" in the 5th
paragraph of the commit message. If an interested party can't even
read that far to gain some understanding of a legitimately complicated
piece of work such as this, I'm okay with that.

> An independent thought when reviewing the finaltup / HIKEY scan
> optimization parts of this patch:
> The 'use highkey to check for next page's data range' optimization is
> useful, but can currently only be used for scans that go to the right.
> Could we use a similar optimization in the inverse direction if we
> marked the right page of a split with "the left page has a HIKEY based
> on this page's (un)truncated leftmost value" or "left page's HIKEY was
> truncated to N key attributes"? It'd give one bit of information,
> specifically that it does (or does not) share some (or all) key
> attributes with this page's minimum value, which allows for earlier
> scan termination in boundary cases.

That would have to be maintained in all sorts of different places in
nbtree. And could be broken at any time by somebody inserting a
non-pivot tuple before every existing one on the leaf page. Doesn't
seem worth it to me.

If we were to do something like this then it would be discussed as
independent work. It's akin to adding a low key, which could be used
in several different places. As you say, it's totally independent.

> > +++ b/src/include/access/nbtree.h
> > +#define SK_BT_RDDNARRAY 0x00040000 /* redundant in array preprocessing */
>
> I think "made redundant" is more appropriate than just "redundant";
> the array key is not itself redundant in array preprocessing (indeed
> we actually work hard on that key during preprocessing to allow us to
> mark it redundant)

Meh. I did it that way to fit under 78 chars while staying on the same
line. I don't think that it matters.

> I think this was mentioned before, but can't we have an argument or
> version of _bt_checkkeys that makes it not advance the array keys, so
> that we can keep this optimization? Or, isn't that now
> _bt_check_compare?

As I said to Heikki, thinking about this some more is on my todo list.
I mean the way that this worked had substantial revisions on HEAD
right before I posted v9. v9 was only to fix the bit rot that that
caused.

> For an orderlines table with 1000s of lines per order, we would
> greatly benefit from a query `order_id = ANY ('{1, 3, 5}')` that
> handles scan keys more efficiently; the optimization which is being
> disabled here.

The way that these optimizations might work with the mechanism from
the patch isn't some kind of natural extension to what's there
already. We'll need new heuristics to not waste cycles. Applying all
of the optimizations together just isn't trivial, and it's not yet
clear what really makes sense. Combining the two optimizations more or
less adds another dimension of complexity.

> > +++ b/src/backend/access/nbtree/nbtutils.c
> > _bt_preprocess_array_keys(IndexScanDesc scan)
> The _bt_preprocess_array_keys code is now broken due to type
> confusion, as it assumes there is only one array subtype being
> compared per attribute in so.orderProcs.

I've been aware of this for some time, but didn't think that it was
worth bringing up before I had a solution to present...

> I'm also concerned about the implications of this in
> _bt_binsrch_array_skey, as this too assumes the same compare operator
> is always used for all array operations on each attribute. We may need
> one so->orderProcs entry for each array key, but at least one per
> sk_subtype of each array key.

...since right now it's convenient to make so->orderProcs have a 1:1
correspondence with index key columns....

> I also notice that the merging of values doesn't seem to be applied
> optimally with mixed typed array operations: num = int[] AND num =
> bigint[] AND num = int[] doesn't seem to merge the first and last
> array ops. I'm also concerned about being (un)able to merge

...which ought to work and be robust, once the cross-type support is
in place. That is, it should work once we really can assume that there
really must be exactly one so->orderProcs entry per equality-strategy
scan key in all cases -- including in highly obscure corner cases
involving a mix of cross-type comparisons that are also redundant.
(Though as I go into below, I can see at least 2 viable solutions to
the problem.)

> > +/*
> > + * _bt_merge_arrays() -- merge together duplicate array keys
> > + *
> > + * Both scan keys have array elements that have already been sorted and
> > + * deduplicated.
> > + */
>
> As I mentioned upthread, I find this function to be very wasteful, as
> it uses N binary searches to merge join two already sorted arrays,
> resulting in an O(n log(m)) complexity, whereas a normal merge join
> should be O(n + m) once the input datasets are sorted.

And as I mentioned upthread, I think that you're making a mountain out
of a molehill here. This is not a merge join. Even single digit
thousand of array elements should be considered huge. Plus this code
path is only hit with poorly written queries.

> Please fix this, as it shows up in profiling of large array merges.
> Additionally, as it merges two arrays of unique items into one,
> storing only matching entries, I feel that it is quite wasteful to do
> this additional allocation here.
> Why not reuse the original allocation
> immediately?

Because that's adding more complexity for a code path that will hardly
ever be used in practice. For something that happens once, during a
preprocessing step.

> > +_bt_tuple_before_array_skeys(IndexScanDesc scan, BTReadPageState *pstate,
> > + IndexTuple tuple, int sktrig, bool validtrig)
>
> I don't quite understand what the 'validtrig' argument is used for.
> There is an assertion that it is false under some conditions in this
> code, but it's not at all clear to me why that would have to be the
> case - it is called with `true` in one of the three call sites. Could
> the meaning of this be clarified?

Sure, I'll add a comment.

> I also feel that this patch includes several optimizations such as
> this sktrig argument which aren't easy to understand. Could you pull
> that into a separately reviewable patch?

It probably makes sense to add the extra preprocessing stuff out into
its own commit, since I tend to agree that that's an optimization that
can be treated as unrelated (and isn't essential to the main thrust of
the patch).

However, the sktrig thing isn't really like that. We need to do things
that way for required inequality scan keys. It doesn't make sense to
not just do it for all required scan keys (both equality and
inequality strategy scan keys) right from the start.

> Additionally, could you try to create a single point of entry for the
> array key stuff that covers the new systems? I've been trying to wrap
> my head around this, and it's taking a lot of effort.

I don't understand what you mean here.

> > _bt_advance_array_keys
>
> Thinking about the implementation here:
> We require transitivity for btree opclasses, where A < B implies NOT A
> = B, etc. Does this also carry over into cross-type operators?

Yes, it carries like that.

> Would it be valid to add support methods for truncatedint4 to an int4
> btree opclass, or is transitivity also required for all operations?
> i.e. all values that one operator class considers unique within an
> opfamily must be considered unique for all additional operators in the
> opfamily, or is that not required?
> If not, then that would pose problems for this patch, as the ordering
> of A = ANY ('{1, 2, 3}'::int4[]) AND A = ANY
> ('{0,65536}'::truncatedint4[]) could potentially skip results.

There are roughly two ways to deal with this sort of thing (which
sounds like a restatement of the issue shown by your test case?). They
are:

1. Add support for detecting redundant scan keys, even when cross-type
operators are involved. (Discussed earlier.)

2. Be prepared to have more than one scan key per index key column in
rare edge-cases. This means that I can no longer subscript
so->orderProcs using an attnum.

I intend to use whichever approach works out to be the simplest and
most maintainable. Note that there is usually nothing that stops me
from having redundant scan keys if it can't be avoided via
preprocessing -- just like today.

Actually...I might have to use a hybrid of 1 and 2. Because we need to
be prepared for the possibility that it just isn't possible to
determine redundancy at all due to a lack of suitable cross-type
operators -- I don't think that it would be okay to just throw an
error there (that really would be a regression against Postgres 16).
For that reason I will most likely need to find a way for
so->orderProcs to be subscripted using something that maps to equality
scan keys (rather than having it map to a key column).

> I'm also no fan of the (tail) recursion. I would agree that this is
> unlikely to consume a lot of stack, but it does consume stack space
> nonetheless, and I'd prefer if it was not done this way.

I disagree. The amount of stack space it consumes in the worst case is fixed.

> I notice an assertion error here:
> > + Assert(cur->sk_strategy != BTEqualStrategyNumber);
> > + Assert(all_required_sk_satisfied);
> > + Assert(!foundRequiredOppositeDirOnly);
> > +
> > + foundRequiredOppositeDirOnly = true;
>
> This assertion can be hit with the following test case:
>
> CREATE TABLE test AS
> SELECT i a, i b, i c FROM generate_series(1, 1000) i;
> CREATE INDEX ON test(a, b, c); ANALYZE;
> SELECT count(*) FROM test
> WHERE a = ANY ('{1,2,3}') AND b > 1 AND c > 1
> AND b = ANY ('{1,2,3}');

Will fix.

> > +_bt_update_keys_with_arraykeys(IndexScanDesc scan)
>
> I keep getting confused by the mixing of integer increments and
> pointer increments. Could you explain why in this code you chose to
> increment a pointer for "ScanKey cur", while using array indexing for
> other fields? It feels very arbitrary to me, and that makes the code
> difficult to follow.

Because in one case we follow the convention for scan keys, whereas
so->orderProcs is accessed via an attribute number subscript.

> > +++ b/src/test/regress/sql/btree_index.sql
> > +-- Add tests to give coverage of various subtle issues.
> > +--
> > +-- XXX This may not be suitable for commit, due to taking up too many cycles.
> > +--
> > +-- Here we don't remember the scan's array keys before processing a page, only
> > +-- after processing a page (which is implicit, it's just the scan's current
> > +-- keys). So when we move the scan backwards we think that the top-level scan
> > +-- should terminate, when in reality it should jump backwards to the leaf page
> > +-- that we last visited.
>
> I notice this adds a complex test case that outputs many rows. Can we
> do with fewer rows if we build the index after data insertion, and with
> a lower (non-default) fillfactor?

Probably not. It was actually very hard to come up with these test
cases, which tickle the implementation in just the right way to
demonstrate that the code in places like _bt_steppage() is actually
required. It took me a rather long time to just prove that much. Not
sure that we really need this. But thought I'd include it for the time
being, just so that reviewers could understand those changes.

-- 
Peter Geoghegan
> > I really don't think it should be buried in the 5th paragraph of an
> > "Enhance nbtree ScalarArrayOp execution" commit. Additionally, the
> > changes of btree are arguably independent of the planner changes, as
> > the btree changes improve performance even if we ignore that it
> > implements strict result ordering.
>
> I'm not going to break out the planner changes, because they're *not*
> independent in any way.

The planner changes depend on the btree changes, that I agree with.
However, I don't think that the btree changes depend on the planner
changes.

> You could say the same thing about practically
> any work that changes the planner. They're "buried" in the 5th
> paragraph of the commit message. If an interested party can't even
> read that far to gain some understanding of a legitimately complicated
> piece of work such as this, I'm okay with that.

I would agree with you if this was about new APIs and features, but
here existing APIs are being repurposed without changing them. A
maintainer of index AMs would not look at the commit title 'Enhance
nbtree ScalarArrayOp execution' and think "oh, now I have to make sure
my existing amsearcharray+amcanorder handling actually supports
non-prefix array keys and return data in index order".
There are also no changes in amapi.h that would signal any index AM
author that expectations have changed. I really don't think you can
just ignore all that, and I believe this to also be the basis of
Heikki's first comment.

> As I said to Heikki, thinking about this some more is on my todo list.
> I mean the way that this worked had substantial revisions on HEAD
> right before I posted v9. v9 was only to fix the bit rot that that
> caused.

Then I'll be waiting for that TODO item to be resolved.

> > > +++ b/src/backend/access/nbtree/nbtutils.c
> > > +/*
> > > + * _bt_merge_arrays() -- merge together duplicate array keys
> > > + *
> > > + * Both scan keys have array elements that have already been sorted and
> > > + * deduplicated.
> > > + */
> >
> > As I mentioned upthread, I find this function to be very wasteful, as
> > it uses N binary searches to merge join two already sorted arrays,
> > resulting in an O(n log(m)) complexity, whereas a normal merge join
> > should be O(n + m) once the input datasets are sorted.
>
> And as I mentioned upthread, I think that you're making a mountain out
> of a molehill here. This is not a merge join.

We're merging two sorted sets and want to retain only the matching
entries, while retaining the order of the data. AFAIK the best
algorithm available for this is a sort-merge join.
Sure, it isn't a MergeJoin plan node, but that's not what I was trying to argue.

> Even single digit
> thousand of array elements should be considered huge. Plus this code
> path is only hit with poorly written queries.

Would you accept suggestions?

> > Please fix this, as it shows up in profiling of large array merges.
> > Additionally, as it merges two arrays of unique items into one,
> > storing only matching entries, I feel that it is quite wasteful to do
> > this additional allocation here. Why not reuse the original allocation
> > immediately?
>
> Because that's adding more complexity for a code path that will hardly
> ever be used in practice. For something that happens once, during a
> preprocessing step.

Or many times, when we're in a parameterized loop, as was also pointed
out by Heikki. While I do think it is rare, the existence of this path
that merges these arrays implies the need for merging these arrays,
which thus

> > > +_bt_tuple_before_array_skeys(IndexScanDesc scan, BTReadPageState *pstate,
> > > + IndexTuple tuple, int sktrig, bool validtrig)
> >
> > I don't quite understand what the 'validtrig' argument is used for.
> > There is an assertion that it is false under some conditions in this
> > code, but it's not at all clear to me why that would have to be the
> > case - it is called with `true` in one of the three call sites. Could
> > the meaning of this be clarified?
>
> Sure, I'll add a comment.

Thanks!

> > I also feel that this patch includes several optimizations such as
> > this sktrig argument which aren't easy to understand. Could you pull
> > that into a separately reviewable patch?
>
> It probably makes sense to add the extra preprocessing stuff out into
> its own commit, since I tend to agree that that's an optimization that
> can be treated as unrelated (and isn't essential to the main thrust of
> the patch).
>
> However, the sktrig thing isn't really like that. We need to do things
> that way for required inequality scan keys. It doesn't make sense to
> not just do it for all required scan keys (both equality and
> inequality strategy scan keys) right from the start.

I understand that in some places the "triggered by scankey" concept is
required, but in other places the use of it is explicitly flagged as
an optimization (specifically, in _bt_advance_array_keys), which
confused me.

> > Additionally, could you try to create a single point of entry for the
> > array key stuff that covers the new systems?
> > I've been trying to wrap
> > my head around this, and it's taking a lot of effort.
>
> I don't understand what you mean here.

The documentation (currently mostly code comments) is extensive, but
still spread around various inline comments and comments on functions,
with no place where a clear top-level design is detailed.
I'll agree that we don't have that for the general systems in
_bt_next() either, but especially with this single large patch it's
difficult to grasp the specific differences between the various
functions.

> > > _bt_advance_array_keys
> >
> > Thinking about the implementation here:
> > We require transitivity for btree opclasses, where A < B implies NOT A
> > = B, etc. Does this also carry over into cross-type operators?
>
> Yes, it carries like that.
>
> > Would it be valid to add support methods for truncatedint4 to an int4
> > btree opclass, or is transitivity also required for all operations?
> > i.e. all values that one operator class considers unique within an
> > opfamily must be considered unique for all additional operators in the
> > opfamily, or is that not required?
> > If not, then that would pose problems for this patch, as the ordering
> > of A = ANY ('{1, 2, 3}'::int4[]) AND A = ANY
> > ('{0,65536}'::truncatedint4[]) could potentially skip results.
>
> There are roughly two ways to deal with this sort of thing (which
> sounds like a restatement of the issue shown by your test case?).

It wasn't really meant as a restatement: It is always unsafe to ignore
upper bits, as the index isn't organized by that. However, it *could*
be safe to ignore the bits with lowest significance, as the index
would still be organized correctly even in that case, for int4 at
least. Similar to how you can have required scankeys for the prefix of
an (int2, int2) index, but not the suffix (as of right now at least).

The issue I was trying to show is that if you have a type that ignores
some part of the key for comparison like truncatedint4 (which
hypothetically would do ((a>>16) < (b>>16)) on int4 types), then this
might cause issues if that key was advanced before more precise
equality checks.

This won't ever be an issue when there is a requirement that if a = b
and b = c then a = c must hold for all triples of typed values and
operations in a btree opclass, as you mention above.

> They
> are:
>
> 1. Add support for detecting redundant scan keys, even when cross-type
> operators are involved. (Discussed earlier.)
>
> 2. Be prepared to have more than one scan key per index key column in
> rare edge-cases. This means that I can no longer subscript
> so->orderProcs using an attnum.

> Actually...I might have to use a hybrid of 1 and 2. Because we need to
> be prepared for the possibility that it just isn't possible to
> determine redundancy at all due to a lack of suitable cross-type
> operators -- I don't think that it would be okay to just throw an
> error there (that really would be a regression against Postgres 16).

Agreed.

> > I'm also no fan of the (tail) recursion. I would agree that this is
> > unlikely to consume a lot of stack, but it does consume stack space
> > nonetheless, and I'd prefer if it was not done this way.
>
> I disagree. The amount of stack space it consumes in the worst case is fixed.

Is it fixed? Without digging very deep into it, it looks like it can
iterate on the order of n_scankeys deep? The comment above does hint
at 2 iterations, but is not very clear about the conditions and why.

> > I notice an assertion error here:
> > > + Assert(cur->sk_strategy != BTEqualStrategyNumber);
> > > + Assert(all_required_sk_satisfied);
> > > + Assert(!foundRequiredOppositeDirOnly);
> > > +
> > > + foundRequiredOppositeDirOnly = true;
> >
> > This assertion can be hit with the following test case:
> >
> > CREATE TABLE test AS
> > SELECT i a, i b, i c FROM generate_series(1, 1000) i;
> > CREATE INDEX ON test(a, b, c); ANALYZE;
> > SELECT count(*) FROM test
> > WHERE a = ANY ('{1,2,3}') AND b > 1 AND c > 1
> > AND b = ANY ('{1,2,3}');
>
> Will fix.
>
> > > +_bt_update_keys_with_arraykeys(IndexScanDesc scan)
> >
> > I keep getting confused by the mixing of integer increments and
> > pointer increments. Could you explain why in this code you chose to
> > increment a pointer for "ScanKey cur", while using array indexing for
> > other fields? It feels very arbitrary to me, and that makes the code
> > difficult to follow.
>
> Because in one case we follow the convention for scan keys, whereas
> so->orderProcs is accessed via an attribute number subscript.

Okay, but how about this in _bt_merge_arrays?

+ Datum *elem = elems_orig + i;

I'm not familiar with the scan key convention, as most other places
use reference+subscripting.

> > > +++ b/src/test/regress/sql/btree_index.sql
> > > +-- Add tests to give coverage of various subtle issues.
> > > +--
> > > +-- XXX This may not be suitable for commit, due to taking up too many cycles.
> > > +--
> > > +-- Here we don't remember the scan's array keys before processing a page, only
> > > +-- after processing a page (which is implicit, it's just the scan's current
> > > +-- keys).
So when we move the scan backwards we think that the top-level scan\n> > > +-- should terminate, when in reality it should jump backwards to the leaf page\n> > > +-- that we last visited.\n> >\n> > I notice this adds a complex test case that outputs many rows. Can we\n> > do with less rows if we build the index after data insertion, and with\n> > a lower (non-default) fillfactor?\n>\n> Probably not. It was actually very hard to come up with these test\n> cases, which tickle the implementation in just the right way to\n> demonstrate that the code in places like _bt_steppage() is actually\n> required. It took me a rather long time to just prove that much. Not\n> sure that we really need this. But thought I'd include it for the time\n> being, just so that reviewers could understand those changes.\n\nMakes sense, thanks for the explanation.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 18 Jan 2024 17:38:58 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Jan 18, 2024 at 11:39 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > I'm not going to break out the planner changes, because they're *not*\n> > independent in any way.\n>\n> The planner changes depend on the btree changes, that I agree with.\n> However, I don't think that the btree changes depend on the planner\n> changes.\n\nYeah, technically it would be possible to break out the indxpath.c\nchanges -- that wouldn't leave the tree in a state with visibly-wrong\nbehavior at any point. But that's far from the only thing that\nmatters. 
If that was the standard that we applied, then I might have\nto split the patch into 10 or more patches.\n\nWhat it boils down to is this: it is totally natural for me to think\nof the planner changes as integral to the nbtree changes, so I'm going\nto include them together in one commit. That's just how the code was\nwritten -- I always thought about it as one single thing. The majority\nof the queries that I've used to promote the patch directly rely on\nthe planner changes in one way or another (even back in v1, when the\nplanner side of things had lots of kludges).\n\nI don't necessarily always follow that standard. Sometimes it is\nuseful to further break things up. Like when it happens to make the\nhigh-level division of labor a little bit clearer. The example that\ncomes to mind is commit dd299df8 and commit fab25024. These nbtree\ncommits were essentially one piece of work that was broken into two,\nbut committed within minutes of each other, and left the tree in a\nmomentary state that was not-very-useful (though still correct). That\nmade sense to me at the time because the code in question was\nmechanically distinct, while at the same time modifying some of the\nsame nbtree files. Drawing a line under it seemed to make sense.\n\nI will admit that there is some amount of subjective judgement (gasp!)\nhere. Plus I'll acknowledge that it's *slightly* odd that the most\ncompelling cases for the SAOP patch almost all involve savings in heap\npage accesses, even though it is fundamentally a B-Tree patch. But\nthat's just how it came out. Slightly odd things happen all the time.\n\n> I would agree with you if this was about new APIs and features, but\n> here existing APIs are being repurposed without changing them. 
A\n> maintainer of index AMs would not look at the commit title 'Enhance\n> nbtree ScalarArrayOp execution' and think \"oh, now I have to make sure\n> my existing amsearcharray+amcanorder handling actually supports\n> non-prefix arrays keys and return data in index order\".\n\nThis is getting ridiculous.\n\nIt is quite likely that there are exactly zero affected out-of-core\nindex AMs. I don't count pgroonga as a counterexample (I don't think\nthat it actually fullfills the definition of a ). Basically,\n\"amcanorder\" index AMs more or less promise to be compatible with\nnbtree, down to having the same strategy numbers. So the idea that I'm\ngoing to upset amsearcharray+amcanorder index AM authors is a\ncompletely theoretical problem. The planner code evolved with nbtree,\nhand-in-glove.\n\nI'm more than happy to add a reminder to the commit message about\nadding a proforma listing to the compatibility section of the Postgres\n17 release notes. Can we actually agree on which theoretical index AM\ntypes are affected first, though? Isn't it actually\namsearcharray+amcanorder+amcanmulticol external index AMs only? Do I\nhave that right?\n\nBTW, how should we phrase this compatibility note, so that it can be\nunderstood? It's rather awkward.\n\n> > As I said to Heikki, thinking about this some more is on my todo list.\n> > I mean the way that this worked had substantial revisions on HEAD\n> > right before I posted v9. v9 was only to fix the bit rot that that\n> > caused.\n>\n> Then I'll be waiting for that TODO item to be resolved.\n\nMy TODO item is to resolve the question of whether and to what extent\nthese two optimizations should be combined. It's possible that I'll\ndecide that it isn't worth it, for whatever reason. At this point I'm\nstill totally neutral.\n\n> > Even single digit\n> > thousand of array elements should be considered huge. 
Plus this code\n> > path is only hit with poorly written queries.\n>\n> Would you accept suggestions?\n\nYou mean that you want to have a go at it yourself? Sure, go ahead.\n\nI cannot promise that I'll accept your suggested revisions, but if\nthey aren't too invasive/complicated compared to what I have now, then\nI will just accept them. I maintain that this isn't really necessary,\nbut it might be the path of least resistance at this point.\n\n> > > Please fix this, as it shows up in profiling of large array merges.\n> > > Additionally, as it merges two arrays of unique items into one,\n> > > storing only matching entries, I feel that it is quite wasteful to do\n> > > this additional allocation here. Why not reuse the original allocation\n> > > immediately?\n> >\n> > Because that's adding more complexity for a code path that will hardly\n> > ever be used in practice. For something that happens once, during a\n> > preprocessing step.\n>\n> Or many times, when we're in a parameterized loop, as was also pointed\n> out by Heikki.\n\nThat's not really what Heikki said. Heikki had a general concern about\nthe startup costs with nestloop joins.\n\n> > > Additionally, could you try to create a single point of entry for the\n> > > array key stuff that covers the new systems? I've been trying to wrap\n> > > my head around this, and it's taking a lot of effort.\n> >\n> > I don't understand what you mean here.\n>\n> The documentation (currently mostly code comments) is extensive, but\n> still spread around various inline comments and comments on functions;\n> with no place where a clear top-level design is detailed.\n> I'll agree that we don't have that for the general systems in\n> _bt_next() either, but especially with this single large patch it's\n> difficult to grasp the specific differences between the various\n> functions.\n\nBetween what functions? 
The guts of this work are in the new\n_bt_advance_array_keys and _bt_tuple_before_array_skeys (honorable\nmention for _bt_array_keys_remain). It seems pretty well localized to\nme.\n\nGranted, there are a few places where we rely on things being set up a\ncertain way by other code that's quite some distance away (from the\ncode doing the depending). For example, the new additions to\n_bt_preprocess_keys that we need later on, in\n_bt_advance_array_keys/_bt_update_keys_with_arraykeys. These bits and\npieces of code are required, but are not particularly crucial to\nunderstanding the general design.\n\nFor the most part the design is that we cede control of array key\nadvancement to _bt_checkkeys() -- nothing much else changes. There are\ncertainly some tricky details behind the scenes, which we should be\nverifying via testing and especially via robust invariants (verified\nwith assertions). But this is almost completely hidden from the\nnbtsearch.c caller -- there are no real changes required there.\n\n> It wasn't really meant as a restatement: It is always unsafe to ignore\n> upper bits, as the index isn't organized by that. However, it *could*\n> be safe to ignore the bits with lowest significance, as the index\n> would still be organized correctly even in that case, for int4 at\n> least. Similar to how you can have required scankeys for the prefix of\n> an (int2, int2) index, but not the suffix (as of right now at least).\n\nIt isn't safe to assume that types appearing together within an\nopfamily are binary coercible (or capable of being type-punned like\nthis) in the general case. That often works, but it isn't reliable.\nEven with integer_ops, it breaks on big-endian machines.\n\n> This won't ever be an issue when there is a requirement that if a = b\n> and b = c then a = c must hold for all triples of typed values and\n> operations in a btree opclass, as you mention above.\n\nRight. 
It also doesn't matter if there are redundant equality\nconditions that we cannot formally prove are redundant during\npreprocessing, for want of appropriate cross-type comparison routines\nfrom the opfamily. The actual behavior of index scans (in terms of\nwhen and how we skip) won't be affected by that at all. The only\nproblem that it creates is that we'll waste CPU cycles, relative to\nthe case where we can somehow detect this redundancy.\n\nIn case it wasn't clear before: the optimizations I've added to\n_bt_preprocess_array_keys are intended to compensate for the\npessimizations added to _bt_preprocess_keys (both the optimizations\nand the pessimizations only affect equality type array scan keys). I\ndon't think that this needs to be perfect; it just seems like a good\nidea.\n\n> > > I'm also no fan of the (tail) recursion. I would agree that this is\n> > > unlikely to consume a lot of stack, but it does consume stackspace\n> > > nonetheless, and I'd prefer if it was not done this way.\n> >\n> > I disagree. The amount of stack space it consumes in the worst case is fixed.\n>\n> Is it fixed? Without digging very deep into it, it looks like it can\n> iterate on the order of n_scankeys deep? The comment above does hint\n> on 2 iterations, but is not very clear about the conditions and why.\n\nThe recursive call to _bt_advance_array_keys is needed to deal with\nunsatisfied-and-required inequalities that were not detected by an\ninitial call to _bt_check_compare, following a second retry call to\n_bt_check_compare. 
We know that this recursive call to\n_bt_advance_array_keys won't cannot recurse again because we've\nalready identified which specific inequality scan key it is that's a\nstill-unsatisfied inequality, following an initial round of array key\nadvancement.\n\nWe're pretty much instructing _bt_advance_array_keys to perform\n\"beyond_end_advance\" type advancement for the specific known\nunsatisfied inequality scan key (which must be required in the current\nscan direction, per the assertions for that) here. So clearly it\ncannot end up recursing any further -- the recursive call is\nconditioned on \"all_arraylike_sk_satisfied && arrays_advanced\", but\nthat'll be unset when \"ikey == sktrig && !array\" from inside the loop.\n\nThis is a narrow edge-case -- it is an awkward one. Recursion does\nseem natural here, because we're essentially repeating the call to\n_bt_advance_array_keys from inside _bt_advance_array_keys, rather than\nleaving it up to the usual external caller to do later on. If you\nripped this code out it would barely be noticeable at all. But it\nseems worth it so that we can make a uniform promise to always advance\nthe array keys to the maximum extent possible, based on what the\ncaller's tuple tells us about the progress of the scan.\n\nSince all calls to _bt_advance_array_keys are guaranteed to advance\nthe array keys to the maximum extent that's safely possible (barring\nthis one edge-case with recursive calls), it almost follows that there\ncan't be another recursive call. 
This is the one edge case where the\nimplementation requires a second pass over the tuple -- we advanced\nthe array keys to all-matching values, but nevertheless couldn't\nfinish there due to the unsatisfied required inequality (which must\nhave become unsatisfied _at the same time_ as some earlier required\nequality scan key for us to end up requiring a recursive call).\n\n> > Because in one case we follow the convention for scan keys, whereas\n> > so->orderProcs is accessed via an attribute number subscript.\n>\n> Okay, but how about this in _bt_merge_arrays?\n>\n> + Datum *elem = elems_orig + i;\n>\n> I'm not familiar with the scan key convention, as most other places\n> use reference+subscripting.\n\nI meant the convention used in code like _bt_check_compare (which is\nwhat we call _bt_checkkeys on HEAD, basically).\n\nNote that the _bt_merge_arrays code that you've highlighted isn't\niterating through so->keyData[] -- it is iterating through the\nfunction caller's elements array, which actually come from\nso->arrayKeys[].\n\nLike every other Postgres contributor, I do my best to follow the\nconventions established by existing code. Sometimes that leads to\npretty awkward results, where CamelCase and underscore styles are\nclosely mixed together, because it works out to be the most consistent\nway of doing it overall.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 19 Jan 2024 17:41:59 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Fri, 19 Jan 2024 at 23:42, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n\nThank you for your replies so far.\n\n> On Thu, Jan 18, 2024 at 11:39 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I would agree with you if this was about new APIs and features, but\n> > here existing APIs are being repurposed without changing them. 
A\n> > maintainer of index AMs would not look at the commit title 'Enhance\n> > nbtree ScalarArrayOp execution' and think \"oh, now I have to make sure\n> > my existing amsearcharray+amcanorder handling actually supports\n> > non-prefix arrays keys and return data in index order\".\n>\n> This is getting ridiculous.\n>\n> It is quite likely that there are exactly zero affected out-of-core\n> index AMs. I don't count pgroonga as a counterexample (I don't think\n> that it actually fullfills the definition of a ). Basically,\n> \"amcanorder\" index AMs more or less promise to be compatible with\n> nbtree, down to having the same strategy numbers. So the idea that I'm\n> going to upset amsearcharray+amcanorder index AM authors is a\n> completely theoretical problem. The planner code evolved with nbtree,\n> hand-in-glove.\n\nAnd this is where I disagree with your (percieved) implicit argument\nthat this should be and always stay this way. I don't mind changes in\nthe planner for nbtree per se, but as I've mentioned before in other\nplaces, I really don't like how we handle amcanorder as if it were\namisbtree. But it's not called that, so we shouldn't expect that to\nremain the case; and definitely not keep building on those\nexpectations that it always is going to be the nbtree when amcanorder\nis set (or amsearcharray is set, or ..., or any combination of those\nthat is currently used by btree). By keeping that expectation alive,\nthis becomes a self-fulfilling prophecy, and I really don't like such\nrestrictive self-fulfilling prophecies. 
It's nice that we have index\nAM feature flags, but if we only effectively allow one implementation\nfor this set of flags by ignoring potential other users when changing\nbehavior, then what is the API good for (apart from abstraction\nlayering, which is nice)?\n\n/rant\n\nI'll see about the\n\n> I'm more than happy to add a reminder to the commit message about\n> adding a proforma listing to the compatibility section of the Postgres\n> 17 release notes. Can we actually agree on which theoretical index AM\n> types are affected first, though? Isn't it actually\n> amsearcharray+amcanorder+amcanmulticol external index AMs only? Do I\n> have that right?\n\nI think that may be right, but I could very well have built a\nbtree-lite that doesn't have the historical baggage of having to deal\nwith pages from before v12 (version 4) and some improvements that\nhaven't made it to core yet.\n\n> BTW, how should we phrase this compatibility note, so that it can be\n> understood? It's rather awkward.\n\nSomething like the following, maybe?\n\n\"\"\"\nCompatibility: The planner will now generate paths with array scan\nkeys in any column for ordered indexes, rather than on only a prefix\nof the index columns. The planner still expects fully ordered data\nfrom those indexes.\nHistorically, the btree AM couldn't output correctly ordered data for\nsuffix array scan keys, which was \"fixed\" by prohibiting the planner\nfrom generating array scan keys without an equality prefix scan key up\nto that attribute. In this commit, the issue has been fixed, and the\nplanner restriction has thus been removed as the only in-core IndexAM\nthat reports amcanorder now supports array scan keys on any column\nregardless of what prefix scan keys it has.\n\"\"\"\n\n> > > As I said to Heikki, thinking about this some more is on my todo list.\n> > > I mean the way that this worked had substantial revisions on HEAD\n> > > right before I posted v9. 
v9 was only to fix the bit rot that that\n> > > caused.\n> >\n> > Then I'll be waiting for that TODO item to be resolved.\n>\n> My TODO item is to resolve the question of whether and to what extent\n> these two optimizations should be combined. It's possible that I'll\n> decide that it isn't worth it, for whatever reason.\n\nThat's fine, this decision (and any work related to it) is exactly\nwhat I was referring to with that mention of the TODO.\n\n> > > Even single digit\n> > > thousand of array elements should be considered huge. Plus this code\n> > > path is only hit with poorly written queries.\n> >\n> > Would you accept suggestions?\n>\n> You mean that you want to have a go at it yourself? Sure, go ahead.\n>\n> I cannot promise that I'll accept your suggested revisions, but if\n> they aren't too invasive/complicated compared to what I have now, then\n> I will just accept them. I maintain that this isn't really necessary,\n> but it might be the path of least resistance at this point.\n\nAttached 2 patches: v11.patch-a and v11.patch-b. Both are incremental\non top of your earlier set, and both don't allocate additional memory\nin the merge operation in non-assertion builds.\n\npatch-a is a trivial and clean implementation of mergesort, which\ntends to reduce the total number of compare operations if both arrays\nare of similar size and value range. It writes data directly back into\nthe main array on non-assertion builds, and with assertions it reuses\nyour binary search join on scratch space for algorithm validation.\n\npatch-b is an improved version of patch-a with reduced time complexity\nin some cases: It applies exponential search to reduce work done where\nthere are large runs of unmatched values in either array, and does\nthat while keeping O(n+m) worst-case complexity, now getting a\nbest-case O(log(n)) complexity.\n\n> > > > I'm also no fan of the (tail) recursion. 
I would agree that this is\n> > > > unlikely to consume a lot of stack, but it does consume stackspace\n> > > > nonetheless, and I'd prefer if it was not done this way.\n> > >\n> > > I disagree. The amount of stack space it consumes in the worst case is fixed.\n> >\n> > Is it fixed? Without digging very deep into it, it looks like it can\n> > iterate on the order of n_scankeys deep? The comment above does hint\n> > on 2 iterations, but is not very clear about the conditions and why.\n>\n> The recursive call to _bt_advance_array_keys is needed to deal with\n> unsatisfied-and-required inequalities that were not detected by an\n> initial call to _bt_check_compare, following a second retry call to\n> _bt_check_compare. We know that this recursive call to\n> _bt_advance_array_keys won't cannot recurse again because we've\n> already identified which specific inequality scan key it is that's a\n> still-unsatisfied inequality, following an initial round of array key\n> advancement.\n\nSo, it's a case of this?\n\nscankey: a > 1 AND a = ANY (1 (=current), 2, 3)) AND b < 4 AND b = ANY\n(2 (=current), 3, 4)\ntuple: a=2, b=4\n\nWe first match the 'a' inequality key, then find an array-key mismatch\nbreaking out, then move the array keys forward to a=2 (of ANY\n(1,2,3)), b=4 (of ANY(2, 3, 4)); and now we have to recheck the later\nb < 4 inequality key, as that required inequality key wasn't checked\nbecause the earlier arraykey did not match in _bt_check_compare,\ncausing it to stop at the a=1 condition as opposed to check the b < 4\ncondition?\n\nIf so, then this visual explanation helped me understand the point and\nwhy it can't repeat more than once much better than all that text.\nMaybe this can be integrated?\n\n> This is a narrow edge-case -- it is an awkward one. 
Recursion does\n> seem natural here, because we're essentially repeating the call to\n> _bt_advance_array_keys from inside _bt_advance_array_keys, rather than\n> leaving it up to the usual external caller to do later on. If you\n> ripped this code out it would barely be noticeable at all. But it\n> seems worth it so that we can make a uniform promise to always advance\n> the array keys to the maximum extent possible, based on what the\n> caller's tuple tells us about the progress of the scan.\n\nAgreed on the awkwardness and recursion.\n\n> > > Because in one case we follow the convention for scan keys, whereas\n> > > so->orderProcs is accessed via an attribute number subscript.\n> >\n> > Okay, but how about this in _bt_merge_arrays?\n> >\n> > + Datum *elem = elems_orig + i;\n> >\n> > I'm not familiar with the scan key convention, as most other places\n> > use reference+subscripting.\n>\n> I meant the convention used in code like _bt_check_compare (which is\n> what we call _bt_checkkeys on HEAD, basically).\n>\n> Note that the _bt_merge_arrays code that you've highlighted isn't\n> iterating through so->keyData[] -- it is iterating through the\n> function caller's elements array, which actually come from\n> so->arrayKeys[].\n\nIt was exactly why I started asking about the use of pointer\nadditions: _bt_merge_arrays is new code and I can't think of any other\ncase of this style being used in new code, except maybe when\nsurrounding code has this style. The reason I first highlighted\n_bt_update_keys_with_arraykeys is because it had a clear case of 2\ndifferent styles in the same function.\n\n> Like every other Postgres contributor, I do my best to follow the\n> conventions established by existing code. 
Sometimes that leads to\n> pretty awkward results, where CamelCase and underscore styles are\n> closely mixed together, because it works out to be the most consistent\n> way of doing it overall.\n\nI'm slightly surprised by that, as after this patch I can't find any\ncode that wasn't touched by this patch in nbtutils that uses + for\npointer offsets.\nEither way, that's the style for indexing into a ScanKey, apparently?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Mon, 22 Jan 2024 22:12:55 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Jan 22, 2024 at 4:13 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Fri, 19 Jan 2024 at 23:42, Peter Geoghegan <pg@bowt.ie> wrote:\n> And this is where I disagree with your (percieved) implicit argument\n> that this should be and always stay this way.\n\nI never said that, and didn't intend to imply it. As I outlined to you\nback in November, my general philosophy around APIs (such as the index\nAM API) is that ambiguity about the limits and extent of an abstract\ninterface isn't necessarily a bad thing [1]. It can actually be a good\nthing! Ever hear of Hyrum's Law? Abstractions are very often quite\nleaky.\n\nAPIs like the index AM API inevitably make trade-offs. Trade-offs are\nalmost always political, in one way or another. Litigating every\npossible question up-front requires knowing ~everything before you\nreally get started. This mostly ends up being a waste of time, since\nmany theoretical contentious trade-offs just won't matter one bit in\nthe long run, for one reason or another (not necessarily because there\nis only ever one consumer of an API, for all sorts of reasons).\n\nYou don't have to agree with me. 
That's just what my experience\nindicates works best on average, and in the long run. I cannot justify\nit any further than that.\n\n> By keeping that expectation alive,\n> this becomes a self-fulfilling prophecy, and I really don't like such\n> restrictive self-fulfilling prophecies. It's nice that we have index\n> AM feature flags, but if we only effectively allow one implementation\n> for this set of flags by ignoring potential other users when changing\n> behavior, then what is the API good for (apart from abstraction\n> layering, which is nice)?\n\nI explicitly made it clear that I don't mean that.\n\n> I think that may be right, but I could very well have built a\n> btree-lite that doesn't have the historical baggage of having to deal\n> with pages from before v12 (version 4) and some improvements that\n> haven't made it to core yet.\n\nLet me know if you ever do that. Let me know what the problems you\nencounter are. I'm quite happy to revise my position on this in light\nof new information. I change my mind all the time!\n\n> Something like the following, maybe?\n>\n> \"\"\"\n> Compatibility: The planner will now generate paths with array scan\n> keys in any column for ordered indexes, rather than on only a prefix\n> of the index columns. The planner still expects fully ordered data\n> from those indexes.\n> Historically, the btree AM couldn't output correctly ordered data for\n> suffix array scan keys, which was \"fixed\" by prohibiting the planner\n> from generating array scan keys without an equality prefix scan key up\n> to that attribute. 
In this commit, the issue has been fixed, and the\n> planner restriction has thus been removed as the only in-core IndexAM\n> that reports amcanorder now supports array scan keys on any column\n> regardless of what prefix scan keys it has.\n> \"\"\"\n\nEven if I put something like this in the commit message, I doubt that\nBruce would pick it up in anything like this form (I have my doubts\nabout it happening no matter what wording is used, actually).\n\nI could include something less verbose, mentioning a theoretical risk\nto out-of-core amcanorder routines that coevolved with nbtree,\ninherited the same SAOP limitations, and then never got the same set\nof fixes.\n\n> patch-a is a trivial and clean implementation of mergesort, which\n> tends to reduce the total number of compare operations if both arrays\n> are of similar size and value range. It writes data directly back into\n> the main array on non-assertion builds, and with assertions it reuses\n> your binary search join on scratch space for algorithm validation.\n\nI'll think about this some more.\n\n> patch-b is an improved version of patch-a with reduced time complexity\n> in some cases: It applies exponential search to reduce work done where\n> there are large runs of unmatched values in either array, and does\n> that while keeping O(n+m) worst-case complexity, now getting a\n> best-case O(log(n)) complexity.\n\nThis patch seems massively over-engineered, though. Over 200 new lines\nof code, all for a case that is only used when queries are written\nwith obviously-contradictory quals? It's just not worth it.\n\n> > The recursive call to _bt_advance_array_keys is needed to deal with\n> > unsatisfied-and-required inequalities that were not detected by an\n> > initial call to _bt_check_compare, following a second retry call to\n> > _bt_check_compare. 
We know that this recursive call to\n> > _bt_advance_array_keys cannot recurse again because we've\n> > already identified which specific inequality scan key it is that's a\n> > still-unsatisfied inequality, following an initial round of array key\n> > advancement.\n>\n> So, it's a case of this?\n>\n> scankey: a > 1 AND a = ANY (1 (=current), 2, 3)) AND b < 4 AND b = ANY\n> (2 (=current), 3, 4)\n> tuple: a=2, b=4\n\nI don't understand why your example starts with \"scankey: a > 1\" and\nuses redundant/contradictory scan keys (for both \"a\" and \"b\"). For a\nforward scan the > scan key won't be seen as required by\n_bt_advance_array_keys, which means that it cannot be relevant to the\nbranch containing the recursive call to _bt_advance_array_keys. (The\nlater branch that calls _bt_check_compare is another matter, but that\ndoesn't call _bt_advance_array_keys recursively at all -- that's not\nwhat we're discussing.)\n\nI also don't get why you've added all of this tricky redundancy to the\nexample you've proposed. That seems to make the example a lot more\ncomplicated, without any apparent advantage. This particular piece of\ncode has nothing to do with redundant/contradictory scan keys.\n\n> We first match the 'a' inequality key, then find an array-key mismatch\n> breaking out, then move the array keys forward to a=2 (of ANY\n> (1,2,3)), b=4 (of ANY(2, 3, 4)); and now we have to recheck the later\n> b < 4 inequality key, as that required inequality key wasn't checked\n> because the earlier arraykey did not match in _bt_check_compare,\n> causing it to stop at the a=1 condition as opposed to check the b < 4\n> condition?\n\nI think that that's pretty close, yes.\n\nObviously, _bt_check_compare() is going to give up upon finding the\nmost significant now-unsatisfied scan key (which must be a required\nscan key in cases relevant to the code in question) -- at that point\nwe need to advance the array keys. 
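To make the mechanics concrete: for a forward scan, the per-array advancement step amounts to a binary search for the first array element that is >= the value taken from the caller's tuple. Here is a simplified sketch of that one step, using plain C ints in place of index Datums and 3-way ORDER procs -- an illustration with made-up names, not the actual nbtree code:

```c
/*
 * Simplified model of advancing one array scan key during a forward
 * scan: return the index of the first element in the sorted array that
 * is >= tupval.  A return value of nelems means every element is
 * behind the tuple -- modelling the "beyond end" advancement case.
 */
int
advance_array(const int *elems, int nelems, int tupval)
{
    int     low = 0;
    int     high = nelems;

    while (low < high)
    {
        int     mid = low + (high - low) / 2;

        if (elems[mid] < tupval)
            low = mid + 1;      /* element still behind tupval */
        else
            high = mid;         /* candidate; keep searching left */
    }

    return low;
}
```

With elems = {2, 3, 4} and tupval = 4 this lands on an exact match; with tupval = 5 it runs off the end, which is the situation where beyond_end_advance-style advancement kicks in.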
If we go on to successfully advance\nall required arrays to values that all match the corresponding values\nfrom the caller's tuple, we might still have to consider inequalities\n(that are required in the current scan direction).\n\nIt might still turn out that the relevant second call to\n_bt_check_compare() (the first one inside _bt_advance_array_keys)\nstill sets continuescan=false -- despite what we were able to do with\nthe scan's array keys. At that point, the only possible explanation is\nthat there is a required inequality that still isn't satisfied. We\nshould use \"beyond_end_advance\" advancement to advance the array keys\n\"incrementally\" a second time (in a recursive call \"against the same\noriginal tuple\").\n\nThis can only happen when two required scan keys\n(some equality scan key plus some later inequality scan key) both\ncease to be satisfied at the same time, *within the same tuple*. In\npractice this just doesn't happen very often. It could in theory be\navoided altogether (without any behavioral change) by forcing\n_bt_check_compare to always assess every scan key, rather than giving\nup upon finding any unsatisfied scan key (though that would be very\ninefficient).\n\nIt'll likely be much easier to see what I mean by considering a real\nexample. See my test case for this, which I don't currently plan to\ncommit:\n\nhttps://github.com/petergeoghegan/postgres/blob/saop-dynamic-skip-v10.0/src/test/regress/sql/dynamic_saop_advancement.sql#L3600\n\nI think that this test case is the only one that'll actually\nbreak when you just rip out the recursive _bt_advance_array_keys call.\nAnd it still won't give incorrect answers when it breaks -- it just\naccesses a single extra leaf page.\n\nAs I went into a bit already, upthread, this recursive call business\nis a good example of making the implementation more complicated, in\norder to preserve interface simplicity and generality. 
I think that\nthat's the right trade-off here, despite being kinda awkward.\n\n> If so, then this visual explanation helped me understand the point and\n> why it can't repeat more than once much better than all that text.\n> Maybe this can be integrated?\n\nAn example seems like it'd be helpful as a code comment. Can you come\nup with a simplified one?\n\n> It was exactly why I started asking about the use of pointer\n> additions: _bt_merge_arrays is new code and I can't think of any other\n> case of this style being used in new code, except maybe when\n> surrounding code has this style. The reason I first highlighted\n> _bt_update_keys_with_arraykeys is because it had a clear case of 2\n> different styles in the same function.\n\nMeh.\n\n[1] https://postgr.es/m/CAH2-WzmWn2_eS_4rWy90DRzC-NW-oponONR6PwMqy+OOuvVyFA@mail.gmail.com\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 23 Jan 2024 15:22:58 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Jan 22, 2024 at 4:13 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Attached 2 patches: v11.patch-a and v11.patch-b. Both are incremental\n> on top of your earlier set, and both don't allocate additional memory\n> in the merge operation in non-assertion builds.\n>\n> patch-a is a trivial and clean implementation of mergesort, which\n> tends to reduce the total number of compare operations if both arrays\n> are of similar size and value range. It writes data directly back into\n> the main array on non-assertion builds, and with assertions it reuses\n> your binary search join on scratch space for algorithm validation.\n\nThis patch fails some of my tests on non-assert builds only (assert\nbuilds pass all my tests, though). 
I'm using the first patch on its\nown here.\n\nWhile I tend to be relatively in favor of complicated assertions (I\ntend to think the risk of introducing side-effects is worth it), it\nlooks like you're only performing certain steps in release builds.\nThis is evident from just looking at the code (there is an #else block\njust for the release build in the loop). Note also that\n\"Assert(_bt_compare_array_elements(&merged[merged_nelems++], orig,\n&cxt) == 0)\" has side-effects in assert-enabled builds only (it\nincrements merged_nelems). While it's true that you *also* increment\nmerged_nelems *outside* of the assertion (or in an #else block used\nduring non-assert builds), that is conditioned on some other thing (so\nit's in no way equivalent to the debug #ifdef USE_ASSERT_CHECKING\ncase). It's also just really hard to understand what's going on here.\n\nIf I was going to do this kind of thing, I'd use two completely\nseparate loops, that were obviously completely separate (maybe even\ntwo functions). I'd then memcmp() each array at the end.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 15 Feb 2024 18:01:50 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Jan 23, 2024 at 3:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I could include something less verbose, mentioning a theoretical risk\n> to out-of-core amcanorder routines that coevolved with nbtree,\n> inherited the same SAOP limitations, and then never got the same set\n> of fixes.\n\nAttached is v11, which now says something like that in the commit\nmessage. 
Other changes:\n\n* Fixed buggy sorting of arrays using cross-type ORDER procs, by\nrecognizing that we need to consistently use same-type ORDER procs for\nsorting and merging the arrays during array preprocessing.\n\nObviously, when we sort, we compare array elements to other array\nelements (all of the same type). This is true independent of whether\nthe query itself happens to use a cross type operator/ORDER proc, so\nwe will need to do two separate ORDER proc lookups in cross-type\nscenarios.\n\n* No longer subscript the ORDER proc used for array binary searches\nusing a scankey subscript. Now there is an additional indirection that\nworks even in the presence of multiple redundant scan keys that cannot\nbe detected as such due to a lack of appropriate cross-type support\nwithin an opfamily.\n\nThis was subtly buggy before now. Requires a little more coordination\nbetween array preprocessing and standard/primitive index scan\npreprocessing, which isn't ideal but seems unavoidable.\n\n* Lots of code polishing, especially within _bt_advance_array_keys().\n\nWhile _bt_advance_array_keys() still works in pretty much exactly the\nsame way as it did back in v10, there are now better comments.\nIncluding something about why its recursive call to itself is\nguaranteed to use a low, fixed amount of stack space, verified using\nan assertion. That addresses a concern held by Matthias.\n\nOutlook\n=======\n\nThis patch is approaching being committable now. Current plan is to\ncommit this within the next few weeks.\n\nAll that really remains now is to research how we might integrate this\nwork with the recently added continuescanPrechecked/haveFirstMatch\nstuff from Alexander Korotkov, if at all. I've put that off until now\nbecause it isn't particularly fundamental to what I'm doing here, and\nseems optional.\n\nI would also like to do more performance validation. Things like the\nparallel index scan code could stand to be revisited once again. 
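(As an aside on the array preprocessing mentioned above: conceptually, the sort step is simple once the right same-type comparator is in hand. A sketch of sorting an array's elements and squashing duplicates in place -- plain C ints standing in for Datums, a comparator standing in for the looked-up ORDER proc, not the actual nbtree code:

```c
#include <stdlib.h>

/* Comparator for qsort(); stands in for a same-type ORDER proc. */
int
cmp_int(const void *a, const void *b)
{
    int     la = *(const int *) a;
    int     lb = *(const int *) b;

    return (la > lb) - (la < lb);
}

/*
 * Sort the array's elements and squash duplicates in place, returning
 * the new element count.
 */
int
sort_and_unique(int *elems, int nelems)
{
    int     nunique = 0;

    if (nelems == 0)
        return 0;

    qsort(elems, nelems, sizeof(int), cmp_int);
    for (int i = 1; i < nelems; i++)
    {
        if (elems[i] != elems[nunique])
            elems[++nunique] = elems[i];
    }
    return nunique + 1;
}
```

The cross-type subtlety is entirely in *which* comparator gets used, not in the sort itself -- which is why two separate ORDER proc lookups suffice to fix the bug.)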
Plus\nI should think about the overhead of array preprocessing when\nbtrescan() is called many times, from a nested loop join -- I should\nhave something to say about that concern (raised by Heikki at one\npoint) before too long.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 15 Feb 2024 18:36:24 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Feb 15, 2024 at 6:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v11, which now says something like that in the commit\n> message.\n\nAttached is v12.\n\n> All that really remains now is to research how we might integrate this\n> work with the recently added continuescanPrechecked/haveFirstMatch\n> stuff from Alexander Korotkov, if at all.\n\nThe main change in v12 is that I've integrated both the\ncontinuescanPrechecked and the haveFirstMatch optimizations. Both of\nthese fields are now page-level state, shared between the _bt_readpage\ncaller and the _bt_checkkeys/_bt_advance_array_keys callees (so they\nappear next to the new home for _bt_checkkeys' continuescan field, in\nthe new page state struct).\n\nThe haveFirstMatch optimization (not a great name BTW) was the easier\nof the two to integrate. I just invalidate the flag (set it to false)\nwhen the array keys advance. It works in exactly the same way as in\nthe simple no-arrays case, except that there can be more than one\n\"first match\" per page. (This approach was directly enabled by the new\ndesign that came out of Alexander's bugfix commit 7e6fb5da)\n\nThe continuescanPrechecked optimization was trickier. 
The hard part\nwas avoiding confusion when the start of matches for the current set\nof array keys starts somewhere in the middle of the page -- that needs\nto be a gating condition on applying the optimization, applied within\n_bt_readpage.\n\ncontinuescanPrechecked\n======================\n\nTo recap, this precheck optimization is predicated on the idea that\nthe precheck _bt_checkkeys call tells us all we need to know about\nrequired-in-same-direction scan keys for every other tuple on the page\n(assuming continuescan=true is set during the precheck). Critically,\nthis assumes that there can be no confusion between \"before the start\nof matching tuples\" and \"after the end of matching tuples\" (in the\npresence of required equality strategy scan keys). In other words, it\ntacitly assumes that we can't break a very old rule that says that\n_bt_first must never call _bt_readpage with an offnum that's before\nthe start of the first match (the leaf page position located by our\ninsertion scan key). 
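The intuition can be shown with a toy model: tuples on a leaf page are in key order, so for a required inequality key on a forward scan, checking the page's last tuple tells you about every tuple on the page. (Plain ints in place of tuples and scan keys here; this is nothing like the real _bt_readpage code:)

```c
/*
 * Toy model of the precheck.  Page values are in ascending order and
 * the only scan key is a required "value <= bound" condition (forward
 * scan).  If the last value on the page satisfies the key, every value
 * on the page must, so the per-tuple recheck of that key can be
 * skipped.  Returns 1 when the precheck succeeds.
 */
int
precheck_page(const int *pagevals, int nvals, int bound)
{
    return nvals > 0 && pagevals[nvals - 1] <= bound;
}
```

Crucially, a passing precheck proves nothing about tuples that sit before the start of matches for required equality keys -- which is why the optimization has to be gated, as discussed below.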
Although it isn't obvious that this is relevant\nat all right now (partly because the _bt_first call to _bt_readpage\nwon't even use the optimization on the grounds that to do so would\nregress point queries), it becomes more obvious once my patch is added\nto the mix.\n\nTo recap again, making the arrays advance dynamically necessarily\nrequires that I teach _bt_checkkeys to avoid confusing \"before the\nstart of matching tuples\" with \"after the end of matching tuples\" --\nwe must give _bt_checkkeys a set of 3-way ORDER procs (and the\nnecessary context) to avoid that confusion -- even for any non-array\nrequired equality keys.\n\nPutting it all together (having laid the groundwork with my recaps):\nThis means that it is only safe (for my patch) to even attempt the\nprecheck optimization when we already know that the array keys cover\nkeyspace at the start of the page (and if the attempt actually\nsucceeds, it succeeds because the current array keys cover the entire\npage). And so in v12 I track that state in so->scanBehind. In\npractice, we only rarely need to avoid the continuescanPrechecked\noptimization as a result of this gating condition -- it hardly reduces\nthe effectiveness of the optimization at all. In fact, it isn't even\npossible for so->scanBehind to ever be set during a backward scan.\n\nSaving cycles in _bt_checkkeys with so->scanBehind\n--------------------------------------------------\n\nIt turns out that this so->scanBehind business is generally useful. We\nnow expect _bt_advance_array_keys to establish whether or not the\n_bt_readpage-wise scan will continue to the end of the current leaf\npage up-front, each time the arrays are advanced (advanced past the\ntuple being scanned by _bt_readpage). When _bt_advance_array_keys\ndecides to move onto the next page, it might also set so->scanBehind.\nThis structure allowed me to simplify the hot code paths within\n_bt_checkkeys. 
Now _bt_checkkeys doesn't need to check the page high\nkey (or check the first non-pivot tuple in the case of backwards\nscans) at all in the very common case where so->scanBehind hasn't been\nset by _bt_advance_array_keys.\n\nI mentioned that only forward scans can ever set the so->scanBehind\nflag variable. That's because only forward scans can advance the array\nkeys using the page high key, which (due to suffix truncation) creates\nthe possibility that the next _bt_readpage will start out on a page\nwhose earlier tuples are before the real start of matches (for the\ncurrent array keys) -- see the comments in _bt_advance_array_keys for\nthe full explanation.\n\nso->scanBehind summary\n----------------------\n\nIn summary, so->scanBehind serves two quite distinct purposes:\n\n1. Makes it safe to apply the continuescanPrechecked optimization\nduring scans that happen to use array keys -- with hardly any changes\nrequired to _bt_readpage to make this work (just testing the\nso->scanBehind flag).\n\n2. 
Gives _bt_checkkeys() a way to know when to check if an earlier\nspeculative choice (by _bt_advance_array_keys) to move on to the\ncurrent leaf page without it yet being 100% clear that its key space\nhas exact matches for all the scan's equality/array keys (which is\nonly possible when suffix truncation obscures the issue, hence the\nthing about so->scanBehind only ever being set during forward scans).\n\nThis speculative choice can only be made during forward scans of a\ncomposite index, whose high key has one or more truncated suffix\nattributes that correspond to a required scan key.\n_bt_advance_array_keys deems that the truncated attribute \"satisfies\"\nthe scan key, but remembers that it wasn't strictly speaking\n\"satisfied\", by setting so->scanBehind (so it is \"satisfied\" with the\ntruncated attributes being a \"match\", but not so satisfied that it's\nwilling to allow it without also giving _bt_checkkeys a way to back\nout if it turns out to have been the wrong choice once on the next\npage).\n\n_bt_preprocess_keys contradictory key array advancement\n=======================================================\n\nThe other big change in v12 concerns _bt_preprocess_keys (not to be\nconfused with _bt_preprocess_array_keys). It has been taught to treat\nthe scan's array keys as a distinct type of scan key for the purposes\nof detecting redundant and contradictory scan keys. In other words, it\ntreats scan keys with arrays as their own special type of equality\nscan key -- it doesn't treat them as just another type of equality\nstrategy key in v12. In other other words, _bt_preprocess_keys doesn't\n\"work at the level of individual primitive index scans\" in v12.\n\nTechnically this is an optimization that targets cases with many\ndistinct sets of array keys that have contradictory quals. 
But I\nmostly did this because it has what I now consider to be far better\ncode structure than what we had in previous versions (I changed my\nmind, having previously considered the changes I'd made to\n_bt_preprocess_keys for arrays to be less than desirable). And because\nit defensively makes the B-Tree code tolerant of scans that have an\nabsurdly large number of distinct sets of array keys (e.g., 3 arrays\nwith 1,000 elements, which gives us a billion distinct sets of array\nkeys in total).\n\nIn v12 an index scan can use (say) a billion distinct sets of array\nkeys, and still expect optimal performance -- even in the corner case\nwhere (for whatever reason) the scan's quals also happen to be\ncontradictory. There is no longer any risk of calling\n_bt_preprocess_keys a billion times, making the query time explode,\nall because somebody added an \"a = 5 AND a = 6\" to the end of the\nquery's qual. In fact, there isn't ever a need to call\n_bt_preprocess_keys() more than once, no matter the details of the\nscan -- not now that _bt_preprocess_keys literally operates on whole\narrays as a separate thing to other equality keys. Once you start to\nthink in terms of array scan keys (not primitive index scans), it\nbecomes obvious that a single call to _bt_preprocess_keys is all you\nreally need.\n\n(Note: While we actually do still always call _bt_preprocess_keys from\n_bt_first, every call to _bt_preprocess_keys after the first one can\nnow simply reuse the existing scan keys that were output during the\nfirst call. So effectively there's only one call to\n_bt_preprocess_keys required per top-level scan.)\n\nPotential downsides\n-------------------\n\nAdmittedly, this new approach in _bt_preprocess_keys has some\npotential downsides. _bt_preprocess_keys is now largely incapable of\ntreating quals like \"WHERE a IN(1,2,3) AND a = 2\" as \"partially\ncontradictory\". 
What it'll actually do is allow the qual to contain\nduplicative scan keys (just like when we can't detect redundancy due\nto a lack of cross-type support), and leave it all up to\n_bt_checkkeys/_bt_advance_array_keys to figure it out later on. (At\none point I actually taught _bt_preprocesses_keys to perform\n\"incremental advancement\" of the scan's arrays itself, which worked,\nbut that still left the code vulnerable to having to call\n_bt_preprocesses_keys an enormous number of times in extreme cases.\nThat was deemed unacceptable, because it still seemed like a POLA\nviolation.)\n\nThis business with just leaving in some extra scan keys might seem a\nlittle bit sloppy. I thought so myself, for a while, but now I think\nthat it's actually fine. For the following reasons:\n\n* Even with arrays, contradictory quals like \"WHERE a = 4 AND a = 5\"\nare still detected as contradictory, while quals like \"WHERE a = 5 and\na = 5\" are still detected as containing a redundancy. We'll only \"fail\nto detect redundancy/contradictoriness\" with cases such as \"WHERE a IN\n(1, 2, 3) and a = 2\" -- cases that (once you buy into this new way of\nthinking about arrays and preprocessing) aren't really\ncontradictory/redundant at all.\n\n* We have _bt_preprocess_array_keys, which was first taught to merge\ntogether redundant array keys in an earlier version of this patch. So,\nfor example, quals like \"WHERE a IN (1, 2, 3) AND a IN (3,4,5)\" can be\npreprocessed into \"WHERE a in (3)\". Plus quals like \"WHERE a IN (1, 2,\n3) AND a IN (18,72)\" can be ruled contradictory there and then, in\n_bt_preprocess_array_keys , ending the scan immediately.\n\nThese cases seem like the really important ones for array\nredundancy/contradictoriness. 
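The merge step behind this can be sketched as a simple merge-join over the two sorted, duplicate-free arrays (illustrative C with plain ints and made-up names, not the actual _bt_merge_arrays code):

```c
/*
 * Merge-join two sorted, duplicate-free arrays, writing the
 * intersection to "dst" (which must have room for the smaller of n1
 * and n2 elements).  Returns the number of matching elements; a result
 * of 0 models a contradictory qual, where the scan can end
 * immediately.
 */
int
intersect_sorted(const int *a1, int n1, const int *a2, int n2, int *dst)
{
    int     i = 0;
    int     j = 0;
    int     nmatch = 0;

    while (i < n1 && j < n2)
    {
        if (a1[i] < a2[j])
            i++;                /* a1's element has no partner */
        else if (a1[i] > a2[j])
            j++;                /* a2's element has no partner */
        else
        {
            dst[nmatch++] = a1[i];
            i++;
            j++;
        }
    }
    return nmatch;
}
```

Intersecting {1, 2, 3} with {3, 4, 5} leaves just {3}, while intersecting {1, 2, 3} with {18, 72} leaves nothing -- the contradictory case where the scan ends immediately.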
We're not going to miss these array\nredundancies.\n\n* The new approach to array key advancement is very good at quickly\neliminating redundant subsets of the array keys that couldn't be\ndetected as such by preprocessing, due to a lack of suitable\ninfrastructure. While there is some downside from allowing\npreprocessing to fail to detect \"partially contradictory\" quals, the\nnew code is so good at advancing the array keys efficiently at runtime\nthat the added overhead is hardly noticeable at all. We're talking maybe\none or two extra descents of the index, for just a subset of all badly\nwritten queries that show some kind of redundancy/contradictoriness\ninvolving array scan keys.\n\nEven if somebody takes the position that this approach isn't\nacceptable (not that I expect that anybody will), the problem\nwon't be that I've gone too far. The problem will just be that I\nhaven't gone far enough. If it somehow becomes a blocker to commit,\nI'd rather handle the problem by compensating for the minor downsides\nthat the v12 _bt_preprocess_keys changes arguably create, by adding\nmore types of array-specific preprocessing to\n_bt_preprocess_array_keys. You know, preprocessing that can actually\neliminate a subset of array keys given a qual like \"WHERE a IN (1, 2,\n3) AND a < 2\".\n\nTo be clear, I really don't think it's worth adding new types of array\npreprocessing to _bt_preprocess_array_keys, just for these cases --\nmy current approach works just fine, by any relevant metric (even if\nyou assume that the sorts of array redundancy/contradictoriness we can\nmiss out on are common and important, it's small potatoes). 
Just\nbringing the idea of adding more stuff to _bt_preprocess_array_keys to\nthe attention of the group, as an option (this idea might also provide\nuseful context, that makes my high level thinking on this a bit easier\nto follow).\n\n--\nPeter Geoghegan", "msg_date": "Fri, 1 Mar 2024 20:30:15 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sat, 2 Mar 2024 at 02:30, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Feb 15, 2024 at 6:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v11, which now says something like that in the commit\n> > message.\n>\n> Attached is v12.\n\nSome initial comments on the documentation:\n\n> + that searches for multiple values together. Queries that use certain\n> + <acronym>SQL</acronym> constructs to search for rows matching any value\n> + out of a list (or an array) of multiple scalar values might perform\n> + multiple <quote>primitive</quote> index scans (up to one primitive scan\n> + per scalar value) at runtime. See <xref linkend=\"functions-comparisons\"/>\n> + for details.\n\nI don't think the \"see <functions-comparisons> for details\" is\ncorrectly worded: The surrounding text implies that it would contain\ndetails about in which precise situations multiple primitive index\nscans would be consumed, while it only contains documentation about\nIN/NOT IN/ANY/ALL/SOME.\n\nSomething like the following would fit better IMO:\n\n+ that searches for multiple values together. 
Queries that use certain\n+ <acronym>SQL</acronym> constructs to search for rows matching any value\n+ out of a list or array of multiple scalar values (such as those\ndescribed in\n+ <functions-comparisons> might perform multiple <quote>primitive</quote>\n+ index scans (up to one primitive scan per scalar value) at runtime.\n\nThen there is a second issue in the paragraph: Inverted indexes such\nas GIN might well decide to start counting more than one \"primitive\nscan\" per scalar value, because they may need to go through their\ninternal structure more than once to produce results for a single\nscalar value; e.g. with queries WHERE textfield LIKE '%some%word%', a\ntrigram index would likely use at least 4 descents here: one for each\nof \"som\", \"ome\", \"wor\", \"ord\".\n\n> > All that really remains now is to research how we might integrate this\n> > work with the recently added continuescanPrechecked/haveFirstMatch\n> > stuff from Alexander Korotkov, if at all.\n>\n> The main change in v12 is that I've integrated both the\n> continuescanPrechecked and the haveFirstMatch optimizations. Both of\n> these fields are now page-level state, shared between the _bt_readpage\n> caller and the _bt_checkkeys/_bt_advance_array_keys callees (so they\n> appear next to the new home for _bt_checkkeys' continuescan field, in\n> the new page state struct).\n\nCool. I'm planning to review the rest of this patch this\nweek/tomorrow, could you take some time to review some of my btree\npatches, too?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 4 Mar 2024 21:51:37 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Mar 4, 2024 at 3:51 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > + that searches for multiple values together. 
Queries that use certain\n> > + <acronym>SQL</acronym> constructs to search for rows matching any value\n> > + out of a list (or an array) of multiple scalar values might perform\n> > + multiple <quote>primitive</quote> index scans (up to one primitive scan\n> > + per scalar value) at runtime. See <xref linkend=\"functions-comparisons\"/>\n> > + for details.\n>\n> I don't think the \"see <functions-comparisons> for details\" is\n> correctly worded: The surrounding text implies that it would contain\n> details about in which precise situations multiple primitive index\n> scans would be consumed, while it only contains documentation about\n> IN/NOT IN/ANY/ALL/SOME.\n>\n> Something like the following would fit better IMO:\n>\n> + that searches for multiple values together. Queries that use certain\n> + <acronym>SQL</acronym> constructs to search for rows matching any value\n> + out of a list or array of multiple scalar values (such as those\n> described in\n> + <functions-comparisons> might perform multiple <quote>primitive</quote>\n> + index scans (up to one primitive scan per scalar value) at runtime.\n\nI think that there is supposed to be a closing parenthesis here? So\n\"... (such as those described in <functions-comparisons>\") might\nperform...\".\n\nFWM, if that's what you meant.\n\n> Then there is a second issue in the paragraph: Inverted indexes such\n> as GIN might well decide to start counting more than one \"primitive\n> scan\" per scalar value, because they may need to go through their\n> internal structure more than once to produce results for a single\n> scalar value; e.g. 
with queries WHERE textfield LIKE '%some%word%', a\n> trigram index would likely use at least 4 descents here: one for each\n> of \"som\", \"ome\", \"wor\", \"ord\".\n\nCalling anything other than an executor invocation of\namrescan/index_rescan an \"index scan\" (or \"primitive index scan\") is a\nbit silly IMV, since, as you point out, whether the index AM manages\nto do things like descend the underlying physical index structure is\nreally just an implementation detail. GIN has more than one B-Tree, so\nit's not like there's necessarily just one \"index structure\", anyway.\nIt seems to me the only meaningful definition of index scan is\nsomething directly tied to how the index AM gets invoked. That's a\nlogical, index-AM-agnostic concept. It has a fixed relationship to how\nEXPLAIN ANALYZE displays information.\n\nBut we're not living in a perfect world. The fact is that the\npg_stat_*_tables system views follow a definition that brings these\nimplementation details into it. Its definition of index scan is\nprobably a legacy of when SAOPs could only be executed in a way that\nwas totally opaque to the index AM (which is how it still works for\nevery non-nbtree index AM). Back then, \"index scan\" really was\nsynonymous with an executor invocation of amrescan/index_rescan with\nSAOPs.\n\nI've described the issues in this area (in the docs) in a way that is\nmost consistent with historical conventions. That seems to have the\nfewest problems, despite everything I've said about it.\n\n> > > All that really remains now is to research how we might integrate this\n> > > work with the recently added continuescanPrechecked/haveFirstMatch\n> > > stuff from Alexander Korotkov, if at all.\n> >\n> > The main change in v12 is that I've integrated both the\n> > continuescanPrechecked and the haveFirstMatch optimizations. 
Both of\n> > these fields are now page-level state, shared between the _bt_readpage\n> > caller and the _bt_checkkeys/_bt_advance_array_keys callees (so they\n> > appear next to the new home for _bt_checkkeys' continuescan field, in\n> > the new page state struct).\n>\n> Cool. I'm planning to review the rest of this patch this\n> week/tomorrow, could you take some time to review some of my btree\n> patches, too?\n\nOkay, I'll take a look again.\n\nAttached is v13.\n\nAt one point Heikki suggested that I just get rid of\nBTScanOpaqueData.arrayKeyData (which has been there for as long as\nnbtree had native support for SAOPs), and use\nBTScanOpaqueData.keyData exclusively instead. I've finally got around\nto doing that now.\n\nThese simplifications were enabled by my new approach within\n_bt_preprocess_keys, described when I posted v12. v13 goes even\nfurther than v12 did, by demoting _bt_preprocess_array_keys to a\nhelper function for _bt_preprocess_keys. That means that we do all of\nour scan key preprocessing at the same time, at the start of _bt_first\n(though only during the first _bt_first, or to be more precise during\nthe first per btrescan). If we need fancier\npreprocessing/normalization for arrays, then it ought to be a lot\neasier with this structure.\n\nNote that we no longer need to have an independent representation of\nso->qual_okay for array keys (the convention of setting\nso->numArrayKeys to -1 for unsatisfiable array keys is no longer\nrequired). There is no longer any need for a separate pass to carry\nover the contents of BTScanOpaqueData.arrayKeyData to\nBTScanOpaqueData.keyData, which was confusing.\n\nAre you still interested in working directly on the preprocessing\nstuff? I have a feeling that I was slightly too optimistic about how\nlikely we were to be able to get away with not having certain kinds of\narray preprocessing, back when I posted v12. 
It's true that the\npropensity of the patch to not recognize \"partial\nredundancies/contradictions\" hardly matters with redundant equalities,\nbut inequalities are another matter. I'm slightly worried about cases\nlike this one:\n\nselect * from multi_test where a in (1,99, 182, 183, 184) and a < 183;\n\nMaybe we need to do better with that. What do you think?\n\n\n--\nPeter Geoghegan", "msg_date": "Tue, 5 Mar 2024 19:50:18 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, 6 Mar 2024 at 01:50, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 4, 2024 at 3:51 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > + that searches for multiple values together. Queries that use certain\n> > > + <acronym>SQL</acronym> constructs to search for rows matching any value\n> > > + out of a list (or an array) of multiple scalar values might perform\n> > > + multiple <quote>primitive</quote> index scans (up to one primitive scan\n> > > + per scalar value) at runtime. See <xref linkend=\"functions-comparisons\"/>\n> > > + for details.\n> >\n> > I don't think the \"see <functions-comparisons> for details\" is\n> > correctly worded: The surrounding text implies that it would contain\n> > details about in which precise situations multiple primitive index\n> > scans would be consumed, while it only contains documentation about\n> > IN/NOT IN/ANY/ALL/SOME.\n> >\n> > Something like the following would fit better IMO:\n> >\n> > + that searches for multiple values together. 
Queries that use certain\n> > + <acronym>SQL</acronym> constructs to search for rows matching any value\n> > + out of a list or array of multiple scalar values (such as those\n> > described in\n> > + <functions-comparisons> might perform multiple <quote>primitive</quote>\n> > + index scans (up to one primitive scan per scalar value) at runtime.\n>\n> I think that there is supposed to be a closing parenthesis here? So\n> \"... (such as those described in <functions-comparisons>\") might\n> perform...\".\n\nCorrect.\n\n> FWM, if that's what you meant.\n\nWFM, yes?\n\n> > Then there is a second issue in the paragraph: Inverted indexes such\n> > as GIN might well decide to start counting more than one \"primitive\n> > scan\" per scalar value,\n[...]\n> I've described the issues in this area (in the docs) in a way that is\n> most consistent with historical conventions. That seems to have the\n> fewest problems, despite everything I've said about it.\n\nClear enough, thank you for explaining your thoughts on this.\n\n> > > > All that really remains now is to research how we might integrate this\n> > > > work with the recently added continuescanPrechecked/haveFirstMatch\n> > > > stuff from Alexander Korotkov, if at all.\n> > >\n> > > The main change in v12 is that I've integrated both the\n> > > continuescanPrechecked and the haveFirstMatch optimizations. Both of\n> > > these fields are now page-level state, shared between the _bt_readpage\n> > > caller and the _bt_checkkeys/_bt_advance_array_keys callees (so they\n> > > appear next to the new home for _bt_checkkeys' continuescan field, in\n> > > the new page state struct).\n> >\n> > Cool. 
I'm planning to review the rest of this patch this\n> > week/tomorrow, could you take some time to review some of my btree\n> > patches, too?\n>\n> Okay, I'll take a look again.\n\nThanks, greatly appreciated.\n\n> At one point Heikki suggested that I just get rid of\n> BTScanOpaqueData.arrayKeyData (which has been there for as long as\n> nbtree had native support for SAOPs), and use\n> BTScanOpaqueData.keyData exclusively instead. I've finally got around\n> to doing that now.\n\nI'm not sure if it was worth the reduced churn when the changes for\nthe primary optimization are already 150+kB in size; every \"small\"\naddition increases the context needed to review the patch, and it's\nalready quite complex.\n\n> These simplifications were enabled by my new approach within\n> _bt_preprocess_keys, described when I posted v12. v13 goes even\n> further than v12 did, by demoting _bt_preprocess_array_keys to a\n> helper function for _bt_preprocess_keys. That means that we do all of\n> our scan key preprocessing at the same time, at the start of _bt_first\n> (though only during the first _bt_first, or to be more precise during\n> the first per btrescan). If we need fancier\n> preprocessing/normalization for arrays, then it ought to be a lot\n> easier with this structure.\n\nAgreed.\n\n> Note that we no longer need to have an independent representation of\n> so->qual_okay for array keys (the convention of setting\n> so->numArrayKeys to -1 for unsatisfiable array keys is no longer\n> required). 
There is no longer any need for a separate pass to carry\n> over the contents of BTScanOpaqueData.arrayKeyData to\n> BTScanOpaqueData.keyData, which was confusing.\n\nI wasn't very confused by it, but sure.\n\n> Are you still interested in working directly on the preprocessing\n> stuff?\n\nIf you mean my proposed change to merging two equality AOPs, then yes.\nI'll try to fit it in tomorrow with the rest of the review.\n\n> I have a feeling that I was slightly too optimistic about how\n> likely we were to be able to get away with not having certain kinds of\n> array preprocessing, back when I posted v12. It's true that the\n> propensity of the patch to not recognize \"partial\n> redundancies/contradictions\" hardly matters with redundant equalities,\n> but inequalities are another matter. I'm slightly worried about cases\n> like this one:\n>\n> select * from multi_test where a in (1,99, 182, 183, 184) and a < 183;\n>\n> Maybe we need to do better with that. What do you think?\n\nLet me come back to that when I'm done reviewing the final part of nbtutils.\n\n> Attached is v13.\n\nIt looks like there are few issues remaining outside the changes in\nnbtutils. I've reviewed the changes to those files, and ~half of\nnbtutils (up to _bt_advance_array_keys_increment) now. I think I can\nget the remainder done tomorrow.\n\n> +++ b/src/backend/access/nbtree/nbtutils.c\n> /*\n> * _bt_start_array_keys() -- Initialize array keys at start of a scan\n> *\n> * Set up the cur_elem counters and fill in the first sk_argument value for\n> - * each array scankey. 
We can't do this until we know the scan direction.\n> + * each array scankey.\n> */\n> void\n> _bt_start_array_keys(IndexScanDesc scan, ScanDirection dir)\n> @@ -519,159 +888,1163 @@ _bt_start_array_keys(IndexScanDesc scan, ScanDirection dir)\n> BTScanOpaque so = (BTScanOpaque) scan->opaque;\n> int i;\n>\n> + Assert(so->numArrayKeys);\n> + Assert(so->qual_ok);\n\nHas the requirement for a known scan direction been removed, or should\nthis still have an Assert(dir != NoMovementScanDirection)?\n\n> +++ b/src/backend/access/nbtree/nbtsearch.c\n> @@ -1713,17 +1756,18 @@ _bt_readpage(IndexScanDesc scan, ScanDirection dir, OffsetNumber offnum,\n> [...]\n> - _bt_checkkeys(scan, itup, truncatt, dir, &continuescan, false, false);\n> + pstate.prechecked = false; /* prechecked earlier tuple */\n\nI'm not sure that comment is correct, at least it isn't as clear as\ncan be. Maybe something more in the line of the following?\n+ pstate.prechecked = false; /* prechecked didn't cover HIKEY */\n\n\n+++ b/src/backend/access/nbtree/nbtutils.c\n> @@ -272,7 +319,32 @@ _bt_preprocess_array_keys(IndexScanDesc scan)\n> + elemtype = cur->sk_subtype;\n> + if (elemtype == InvalidOid)\n> + elemtype = rel->rd_opcintype[cur->sk_attno - 1];\n\nShould we Assert() that this elemtype matches the array element type\nARR_ELEMTYPE(arrayval) used to deconstruct the array?\n\n> + /*\n> + * If this scan key is semantically equivalent to a previous equality\n> + * operator array scan key, merge the two arrays together to eliminate\n> [...]\n> + if (prevArrayAtt == cur->sk_attno && prevElemtype == elemtype)\n\nThis is not \"a\" previous equality key, but \"the\" previous equality\noperator array scan key.\nDo we want to expend some additional cycles for detecting duplicate\nequality array key types in interleaved order like =int[] =bigint[]\n=int[]? 
I don't think it would be very expensive considering the\nlimited number of cross-type equality operators registered in default\nPostgreSQL, so a simple loop that checks matching element types\nstarting at the first array key scankey for this attribute should\nsuffice. We could even sort the keys by element type if we wanted to\nfix any O(n) issues for this type of behaviour (though this is\n_extremely_ unlikely to become a performance issue).\n\n> + * qual is unsatisfiable\n> + */\n> + if (num_elems == 0)\n> + {\n> + so->qual_ok = false;\n> + return NULL;\n> + }\n\nI think there's a memory context switch back to oldContext missing\nhere, as well as above at `if (num_nonnulls == 0)`. These were\nprobably introduced by changing from break to return; and the paths\naren't yet exercised in regression tests.\n\n> +++ b/src/backend/utils/adt/selfuncs.c\nhas various changes in comments that result in spelling and style issues:\n\n> + * primitive index scans that will be performed for caller\n+ * primitive index scans that will be performed for the caller.\n(missing \"the\", period)\n\n> - * share for each outer scan. 
(Don't pro-rate for ScalarArrayOpExpr,\n> - * since that's internal to the indexscan.)\n> + * share for each outer scan\n\nThe trailing period was lost:\n+ * share for each outer scan.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 6 Mar 2024 22:46:00 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, 6 Mar 2024 at 22:46, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 6 Mar 2024 at 01:50, Peter Geoghegan <pg@bowt.ie> wrote:\n> > At one point Heikki suggested that I just get rid of\n> > BTScanOpaqueData.arrayKeyData (which has been there for as long as\n> > nbtree had native support for SAOPs), and use\n> > BTScanOpaqueData.keyData exclusively instead. I've finally got around\n> > to doing that now.\n>\n> I'm not sure if it was worth the reduced churn when the changes for\n> the primary optimization are already 150+kB in size; every \"small\"\n> addition increases the context needed to review the patch, and it's\n> already quite complex.\n\nTo clarify, what I mean here is that merging the changes of both the\nSAOPs changes and the removal of arrayKeyData seems to increase the\npatch size and increases the maximum complexity of each component\npatch's review.\nSeparate patches may make this more reviewable, or not, but no comment\nwas given on why it is better to merge the changes into a single\npatch.\n\n- Matthias\n\n\n", "msg_date": "Wed, 6 Mar 2024 22:51:18 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hello,\n\nI am not up to date with the last version of patch but I did a regular\nbenchmark with version 11 and typical 
issues we have at the moment, and the\nresults are still very good. [1]\n\nIn terms of performance improvement, the last proposals could be a real game\nchanger for 2 of our biggest databases. We hope that Postgres 17 will\ncontain those improvements.\n\nKind regards,\n\nBenoit\n\n[1] -\nhttps://gist.github.com/benoittgt/ab72dc4cfedea2a0c6a5ee809d16e04d?permalink_comment_id=4972955#gistcomment-4972955\n__________\nBenoit Tigeot\n\n\n\nOn Thu, Mar 7, 2024 at 15:36, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Jan 23, 2024 at 3:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I could include something less verbose, mentioning a theoretical risk\n> > to out-of-core amcanorder routines that coevolved with nbtree,\n> > inherited the same SAOP limitations, and then never got the same set\n> > of fixes.\n>\n> Attached is v11, which now says something like that in the commit\n> message. Other changes:\n>\n> * Fixed buggy sorting of arrays using cross-type ORDER procs, by\n> recognizing that we need to consistently use same-type ORDER procs for\n> sorting and merging the arrays during array preprocessing.\n>\n> Obviously, when we sort, we compare array elements to other array\n> elements (all of the same type). This is true independent of whether\n> the query itself happens to use a cross type operator/ORDER proc, so\n> we will need to do two separate ORDER proc lookups in cross-type\n> scenarios.\n>\n> * No longer subscript the ORDER proc used for array binary searches\n> using a scankey subscript. Now there is an additional indirection that\n> works even in the presence of multiple redundant scan keys that cannot\n> be detected as such due to a lack of appropriate cross-type support\n> within an opfamily.\n>\n> This was subtly buggy before now. 
Requires a little more coordination\n> between array preprocessing and standard/primitive index scan\n> preprocessing, which isn't ideal but seems unavoidable.\n>\n> * Lots of code polishing, especially within _bt_advance_array_keys().\n>\n> While _bt_advance_array_keys() still works in pretty much exactly the\n> same way as it did back in v10, there are now better comments.\n> Including something about why its recursive call to itself is\n> guaranteed to use a low, fixed amount of stack space, verified using\n> an assertion. That addresses a concern held by Matthias.\n>\n> Outlook\n> =======\n>\n> This patch is approaching being committable now. Current plan is to\n> commit this within the next few weeks.\n>\n> All that really remains now is to research how we might integrate this\n> work with the recently added continuescanPrechecked/haveFirstMatch\n> stuff from Alexander Korotkov, if at all. I've put that off until now\n> because it isn't particularly fundamental to what I'm doing here, and\n> seems optional.\n>\n> I would also like to do more performance validation. Things like the\n> parallel index scan code could stand to be revisited once again. Plus\n> I should think about the overhead of array preprocessing when\n> btrescan() is called many times, from a nested loop join -- I should\n> have something to say about that concern (raised by Heikki at one\n> point) before too long.\n>\n> --\n> Peter Geoghegan\n>\n", "msg_date": "Thu, 7 Mar 2024 15:41:55 +0000", "msg_from": "Benoit Tigeot <benoit.tigeot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Mar 6, 2024 at 4:46 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Wed, 6 Mar 2024 at 01:50, Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think that there is supposed to be a closing parenthesis here? So\n> > \"... (such as those described in <functions-comparisons>\") might\n> > perform...\".\n>\n> Correct.\n>\n> > FWM, if that's what you meant.\n>\n> WFM, yes?\n\nThen we're in agreement on this.\n\n> > At one point Heikki suggested that I just get rid of\n> > BTScanOpaqueData.arrayKeyData (which has been there for as long as\n> > nbtree had native support for SAOPs), and use\n> > BTScanOpaqueData.keyData exclusively instead. 
I've finally got around\n> > to doing that now.\n>\n> I'm not sure if it was worth the reduced churn when the changes for\n> the primary optimization are already 150+kB in size; every \"small\"\n> addition increases the context needed to review the patch, and it's\n> already quite complex.\n\nI agree that the patch is quite complex, especially relative to its size.\n\n> > Note that we no longer need to have an independent representation of\n> > so->qual_okay for array keys (the convention of setting\n> > so->numArrayKeys to -1 for unsatisfiable array keys is no longer\n> > required). There is no longer any need for a separate pass to carry\n> > over the contents of BTScanOpaqueData.arrayKeyData to\n> > BTScanOpaqueData.keyData, which was confusing.\n>\n> I wasn't very confused by it, but sure.\n>\n> > Are you still interested in working directly on the preprocessing\n> > stuff?\n>\n> If you mean my proposed change to merging two equality AOPs, then yes.\n> I'll try to fit it in tomorrow with the rest of the review.\n\nI didn't just mean that stuff. I was also suggesting that you could\njoin the project directly (not just as a reviewer). If you're\ninterested, you could do general work on the preprocessing of arrays.\nFancier array-specific preprocessing.\n\nFor example, something that can transform this: select * from foo\nwhere a in (1,2,3) and a < 3;\n\nInto this: select * from foo where a in (1,2);\n\nOr, something that can just set qual_okay=false, given a query such\nas: select * from foo where a in (1,2,3) and a < 1;\n\nThis is clearly doable by reusing the binary search code during\npreprocessing. The first example transformation could work via a\nbinary search for the constant \"3\", followed by a memmove() to shrink\nthe array in-place (plus the inequality itself would need to be fully\neliminated). 
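For what it's worth, that first transformation can be sketched in isolation like this. This is only a toy model, not patch code: the names prune_lt/prune_gt and the plain int arrays are invented for illustration, whereas real nbtree scan keys carry Datums and opclass comparators.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* index of first element >= bound in a sorted, duplicate-free array */
static int
first_ge(const int *elems, int nelems, int bound)
{
	int		lo = 0,
			hi = nelems;

	while (lo < hi)
	{
		int		mid = lo + (hi - lo) / 2;

		if (elems[mid] < bound)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}

/*
 * Apply "elem < bound" to the array in place.  Returns false when no
 * element survives, which corresponds to an unsatisfiable qual.
 */
static bool
prune_lt(int *elems, int *nelems, int bound)
{
	*nelems = first_ge(elems, *nelems, bound);
	return *nelems > 0;
}

/*
 * Apply "elem > bound" in place.  This is the direction that needs the
 * memmove(): the surviving elements form a suffix that must shift down.
 * (Toy code: ignores integer overflow in "bound + 1".)
 */
static bool
prune_gt(int *elems, int *nelems, int bound)
{
	int		keep = first_ge(elems, *nelems, bound + 1);

	memmove(elems, elems + keep, sizeof(int) * (*nelems - keep));
	*nelems -= keep;
	return *nelems > 0;
}
```

Run against arrays like the ones in the queries here: {1,2,3} pruned by "< 3" shrinks to {1,2}, while a bound that no element satisfies leaves zero elements, which is the case where preprocessing would set so->qual_ok to false.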
The second example would work a little like the array\nmerging thing that the patch has already, except that there'd only be\none array involved (there wouldn't be a pair of arrays).\n\n> > select * from multi_test where a in (1,99, 182, 183, 184) and a < 183;\n> >\n> > Maybe we need to do better with that. What do you think?\n>\n> Let me come back to that when I'm done reviewing the final part of nbtutils.\n\nEven if this case doesn't matter (which I now doubt), it's probably\neasier to just get it right than it would be to figure out if we can\nlive without it.\n\n> > void\n> > _bt_start_array_keys(IndexScanDesc scan, ScanDirection dir)\n> > @@ -519,159 +888,1163 @@ _bt_start_array_keys(IndexScanDesc scan, ScanDirection dir)\n> > BTScanOpaque so = (BTScanOpaque) scan->opaque;\n> > int i;\n> >\n> > + Assert(so->numArrayKeys);\n> > + Assert(so->qual_ok);\n>\n> Has the requirement for a known scan direction been removed, or should\n> this still have an Assert(dir != NoMovementScanDirection)?\n\nI agree that such an assertion is worth having. Added that locally.\n\n> I'm not sure that comment is correct, at least it isn't as clear as\n> can be. Maybe something more in the line of the following?\n> + pstate.prechecked = false; /* prechecked didn't cover HIKEY */\n\nI agree that that's a little better than what I had.\n\n> +++ b/src/backend/access/nbtree/nbtutils.c\n> > @@ -272,7 +319,32 @@ _bt_preprocess_array_keys(IndexScanDesc scan)\n> > + elemtype = cur->sk_subtype;\n> > + if (elemtype == InvalidOid)\n> > + elemtype = rel->rd_opcintype[cur->sk_attno - 1];\n>\n> Should we Assert() that this elemtype matches the array element type\n> ARR_ELEMTYPE(arrayval) used to deconstruct the array?\n\nYeah, good idea.\n\n> This is not \"a\" previous equality key, but \"the\" previous equality\n> operator array scan key.\n> Do we want to expend some additional cycles for detecting duplicate\n> equality array key types in interleaved order like =int[] =bigint[]\n> =int[]? 
I don't think it would be very expensive considering the\n> limited number of cross-type equality operators registered in default\n> PostgreSQL, so a simple loop that checks matching element types\n> starting at the first array key scankey for this attribute should\n> suffice. We could even sort the keys by element type if we wanted to\n> fix any O(n) issues for this type of behaviour (though this is\n> _extremely_ unlikely to become a performance issue).\n\nYes, I think that we should probably have this (though likely wouldn't\nbother sorting the scan keys themselves).\n\nYou'd need to look up cross-type operators for this. They could\npossibly fail, even when nothing else fails, because in principle you\ncould have 3 types involved for only 2 scan keys: the opclass/on-disk\ntype, plus 2 separate types. We could fail to find a cross-type\noperator for our \"2 separate types\", while still succeeding in finding\n2 cross type operators for each of the 2 separate scan keys (each\npaired up with the indexed column).\n\nWhen somebody writes a silly query with such obvious redundancies,\nthere needs to be a sense of proportion about it. Fixed preprocessing\ncosts don't seem that important. What's important is that we make a\nreasonable effort to avoid horrible runtime performance when it can be\navoided, such as scanning the whole index. (If a user has a complaint\nabout added cycles during preprocessing, then the fix for that is to\njust not write silly queries. I care much less about added cycles if\nthey have only a fixed, small-ish added cost, that isn't borne by\nsensibly-written queries.)\n\n> > + * qual is unsatisfiable\n> > + */\n> > + if (num_elems == 0)\n> > + {\n> > + so->qual_ok = false;\n> > + return NULL;\n> > + }\n>\n> I think there's a memory context switch back to oldContext missing\n> here, as well as above at `if (num_nonnulls == 0)`. 
These were\n> probably introduced by changing from break to return; and the paths\n> aren't yet exercised in regression tests.\n\nI agree that this is buggy. But it doesn't seem to actually fail in\npractice (it \"accidentally fails to fail\"). In practice, the whole\nindex scan is shut down before anything breaks anyway.\n\nI mention this only because it's worth understanding that I do have\ncoverage for this case (at least in my own expansive test suite). It\njust wasn't enough to detect this particular oversight.\n\n> > +++ b/src/backend/utils/adt/selfuncs.c\n> has various changes in comments that result in spelling and style issues:\n>\n> > + * primitive index scans that will be performed for caller\n> + * primitive index scans that will be performed for the caller.\n> (missing \"the\", period)\n\nI'll just keep the previous wording and punctuation, with one\ndifference: \"index scans\" will be changed to \"primitive index scans\".\n\n> > - * share for each outer scan. (Don't pro-rate for ScalarArrayOpExpr,\n> > - * since that's internal to the indexscan.)\n> > + * share for each outer scan\n>\n> The trailing period was lost:\n> + * share for each outer scan.\n\nI'm not in the habit of including a period/full stop after a\nstandalone sentence (actually, I'm in the habit of *not* doing so,\nspecifically). 
But in the interest of consistency with surrounding\ncode (and because it doesn't really matter), I'll add one back here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Mar 2024 15:34:52 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Mar 6, 2024 at 4:51 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> To clarify, what I mean here is that merging the changes of both the\n> SAOPs changes and the removal of arrayKeyData seems to increase the\n> patch size and increases the maximum complexity of each component\n> patch's review.\n\nRemoving arrayKeyData probably makes the patch very slightly smaller,\nactually. But even if it's really the other way around, I'd still like\nto get rid of it as part of the same commit as everything else. It\njust makes sense that way.\n\n> Separate patches may make this more reviewable, or not, but no comment\n> was given on why it is better to merge the changes into a single\n> patch.\n\nFair enough. Here's why:\n\nThe original SAOP design (commit 9e8da0f7, \"Teach btree to handle\nScalarArrayOpExpr quals natively\") added a layer of indirection\nbetween scan->keyData (input scan keys) and so->keyData (output scan\nkeys): it added another scan key array, so->arrayKeyData. There was\narray-specific preprocessing in _bt_preprocess_array_keys, that\nhappened before the first primitive index scan even began -- that\ntransformed our true input scan keys (scan->keyData) into a copy of\nthe array with limited amounts of array-specific preprocessing already\nperformed (so->arrayKeyData).\n\nThis made a certain amount of sense at the time, because\n_bt_preprocess_keys was intended to be called once per primitive index\nscan. 
Kind of like the inner side of a nested loop join's inner index\nscan, where we call _bt_preprocess_keys once per inner-side\nscan/btrescan call. (Actually, Tom's design has us call _bt_preprocess\nonce per primitive index scan per btrescan call, which might matter in\nthose rare cases where the inner side of a nestloop join had SAOP\nquals.)\n\nWhat I now propose to do is to just call _bt_preprocess_keys once per\nbtrescan (actually, it's still called once per primitive index scan,\nbut all calls after the first are just no-ops after v12 of the patch).\nThis makes sense because SAOP array constants aren't like nestloop\njoins with inner index scans, in one important way: we really can\nsee everything up-front. We can see all of the array elements, and\noperate on whole arrays as necessary during preprocessing (e.g.,\nperforming the array merging thing I added to\n_bt_preprocess_array_keys).\n\nIt's not like the next array element is only visible to preprocessing\nafter the outer side of a nestloop join runs, and next calls\nbtrescan -- so why treat it like that? Conceptually, \"WHERE a = 1\" is\nalmost the same thing as \"WHERE a = any('{1,2}')\", so why not use\nessentially the same approach to preprocessing in both cases? We don't\nneed to copy the input keys into so->arrayKeyData, because the\nindirection (which is a bit like a fake nested loop join) doesn't buy\nus anything.\n\nv13 of the patch doesn't quite 100% eliminate so->arrayKeyData. While\nit removes the arrayKeyData field from the BTScanOpaqueData struct, we\nstill use a temporary array (accessed via a pointer that's just a\nlocal variable) that's also called arrayKeyData. And that also stores\narray-preprocessing-only copies of the input scan keys. That happens\nwithin _bt_preprocess_keys.\n\nSo we do \"still need arrayKeyData\", but we only need it for a brief\nperiod at the start of the index scan. 
It just doesn't make any sense\nto keep it around for longer than that, in a world where\n_bt_preprocess_keys \"operates directly on arrays\". That only made\nsense (a bit of sense) back when _bt_preprocess_keys was subordinate\nto _bt_preprocess_array_keys, but it's kinda the other way around now.\nWe could probably even get rid of this remaining limited form of\narrayKeyData, but that doesn't seem like it would add much.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Mar 2024 16:07:28 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, 6 Mar 2024 at 22:46, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 6 Mar 2024 at 01:50, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > Are you still interested in working directly on the preprocessing\n> > stuff?\n>\n> If you mean my proposed change to merging two equality AOPs, then yes.\n> I'll try to fit it in tomorrow with the rest of the review.\n\nI've attached v14, where 0001 is v13, 0002 is a patch with small\nchanges + some large comment suggestions, and 0003 which contains\nsorted merge join code for _bt_merge_arrays.\n\nI'll try to work a bit on v13/14's _bt_preprocess_keys, and see what I\ncan make of it.\n\n> > I have a feeling that I was slightly too optimistic about how\n> > likely we were to be able to get away with not having certain kinds of\n> > array preprocessing, back when I posted v12. It's true that the\n> > propensity of the patch to not recognize \"partial\n> > redundancies/contradictions\" hardly matters with redundant equalities,\n> > but inequalities are another matter. I'm slightly worried about cases\n> > like this one:\n> >\n> > select * from multi_test where a in (1,99, 182, 183, 184) and a < 183;\n> >\n> > Maybe we need to do better with that. 
What do you think?\n>\n> Let me come back to that when I'm done reviewing the final part of nbtutils.\n>\n> > Attached is v13.\n>\n> It looks like there are few issues remaining outside the changes in\n> nbtutils. I've reviewed the changes to those files, and ~half of\n> nbtutils (up to _bt_advance_array_keys_increment) now. I think I can\n> get the remainder done tomorrow.\n\n> +_bt_rewind_nonrequired_arrays(IndexScanDesc scan, ScanDirection dir)\n [...]\n> + if (!(cur->sk_flags & SK_SEARCHARRAY) &&\n> + cur->sk_strategy != BTEqualStrategyNumber)\n> + continue;\n\nThis should use ||, not &&: if it's not an array, or not an equality\narray key, it's not using an array key slot and we're not interested.\nNote that _bt_verify_arrays_bt_first does have the right condition already.\n\n> + if (readpagetup || result != 0)\n> + {\n> + Assert(result != 0);\n> + return false;\n> + }\n\nI'm confused about this. By asserting !readpagetup after this exit, we\ncould save a branch condition for the !readpagetup result != 0 path.\nCan't we better assert the inverse just below, or is this specifically\nfor defense-in-depth against bug? 
E.g.\n\n+ if (result != 0)\n+ return false;\n+\n+ Assert(!readpagetup);\n\n> + /*\n> + * By here we have established that the scan's required arrays were\n> + * advanced, and that they haven't become exhausted.\n> + */\n> + Assert(arrays_advanced || !arrays_exhausted);\n\nShould use &&, based on the comment.\n\n> + * We generally permit primitive index scans to continue onto the next\n> + * sibling page when the page's finaltup satisfies all required scan keys\n> + * at the point where we're between pages.\n\nThis should probably describe that we permit primitive scans with\narray keys to continue until we get to the sibling page, rather than\nthis rather obvious and generic statement that would cover even the\nindex scan for id > 0 ORDER BY id asc; or this paragraph can be\nremoved.\n\n+ if (!all_required_satisfied && pstate->finaltup &&\n+ _bt_tuple_before_array_skeys(scan, dir, pstate->finaltup, false, 0,\n+ &so->scanBehind))\n+ goto new_prim_scan;\n\n> +_bt_verify_arrays_bt_first(IndexScanDesc scan, ScanDirection dir)\n[...]\n> + if (((cur->sk_flags & SK_BT_REQFWD) && ScanDirectionIsForward(dir)) ||\n> + ((cur->sk_flags & SK_BT_REQBKWD) && ScanDirectionIsBackward(dir)))\n> + continue;\n\nI think a simple check if any SK_BT_REQ flag is set should be OK here:\nThe key must be an equality key, and those must be required either in\nboth directions, or in neither direction.\n\n-----\n\nFurther notes:\n\nI have yet to fully grasp what so->scanBehind is supposed to mean. \"/*\nScan might be behind arrays? */\" doesn't give me enough hints here.\n\nI find it weird that we call _bt_advance_array_keys for non-required\nsktrig. Shouldn't that be as easy as doing a binary search through the\narray? 
Why does this need to hit the difficult path?\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Fri, 8 Mar 2024 15:00:14 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Fri, Mar 8, 2024 at 9:00 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I've attached v14, where 0001 is v13, 0002 is a patch with small\n> changes + some large comment suggestions, and 0003 which contains\n> sorted merge join code for _bt_merge_arrays.\n\nI have accepted your changes from 0003. Agree that it's better that\nway. It's at least a little faster, but not meaningfully more\ncomplicated.\n\nThis is part of my next revision, v15, which I've attached (along with\na test case that you might find useful, explained below).\n\n> I'll try to work a bit on v13/14's _bt_preprocess_keys, and see what I\n> can make of it.\n\nThat's been the big focus of this new v15, which now goes all out with\nteaching _bt_preprocess_keys with how to deal with arrays. We now do\ncomprehensive normalization of even very complicated combinations of\narrays and non-array scan keys in this version.\n\nFor example, consider this query:\n\nselect *\nfrom multi_test\nwhere\n a = any('{1, 2, 3, 4, 5}'::int[])\n and\n a > 2::bigint\n and\n a = any('{2, 3, 4, 5, 6}'::bigint[])\n and\n a < 6::smallint\n and\n a = any('{2, 3, 4, 5, 6}'::smallint[])\n and\n a < 4::int;\n\nThis has a total of 6 input scankeys -- 3 of which are arrays. But by\nthe time v15's _bt_preprocess_keys is done with it, it'll have only 1\nscan key -- which doesn't even have an array (not anymore). And so we\nwon't even need to advance the array keys one single time -- there'll\nsimply be no array left to advance. 
In other words, it'll be just as\nif the query was written this way from the start:\n\nselect *\nfrom multi_test\nwhere\n a = 3::int;\n\n(Though of course the original query will spend more cycles on\npreprocessing, compared to this manually simplified variant.)\n\nIn general, preprocessing can now simplify queries like this to the\nmaximum extent possible (without bringing the optimizer into it), no\nmatter how much crud like this is added -- even including adversarial\ncases, with massive arrays that have some amount of\nredundancy/contradictoriness to them.\n\nIt turned out to not be terribly difficult to teach\n_bt_preprocess_keys everything it could possibly need to know about\narrays, so that it can operate on them directly, as a variant of the\nstandard equality strategy (we do still need _bt_preprocess_array_keys\nfor basic preprocessing of arrays, mostly just merging them). This is\nbetter overall (in that it gets every little subtlety right), but it\nalso simplified a number of related issues. For example, there is no\nlonger any need to maintain a mapping between so->keyData[]-wise scan\nkeys (output scan keys), and scan->keyData[]-wise scan keys (input\nscan keys). We can just add a step to fix-up the references to the end\nof _bt_preprocess_keys, to make life easier within\n_bt_advance_array_keys.\n\nThis preprocessing work should all be happening during planning, not\nduring query execution -- that's the design that makes the most sense.\nThis is something we've discussed in the past in the context of skip\nscan (see my original email to this thread for the reference). 
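To make the multi_test example above concrete, the whole-qual reduction can be modeled in a few lines of Python. This is only an illustrative sketch of the outcome (intersect all equality arrays, then apply the tightest bounds); `simplify_quals` is a hypothetical helper, not code from the patch, and the real _bt_preprocess_keys operates on ScanKeys via opfamily procs rather than on Python sets:

```python
def simplify_quals(arrays, lower_bounds, upper_bounds):
    """Intersect all equality arrays, then drop elements violating the
    strict bounds (lower_bounds are 'a > x', upper_bounds are 'a < x')."""
    values = set(arrays[0])
    for arr in arrays[1:]:
        values &= set(arr)                               # a = ANY(x) AND a = ANY(y)
    lo = max(lower_bounds) if lower_bounds else None     # tightest a > lo
    hi = min(upper_bounds) if upper_bounds else None     # tightest a < hi
    values = {v for v in values
              if (lo is None or v > lo) and (hi is None or v < hi)}
    return sorted(values)

# The six scan keys from the example query:
keys = simplify_quals(
    arrays=[[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [2, 3, 4, 5, 6]],
    lower_bounds=[2],       # a > 2
    upper_bounds=[6, 4],    # a < 6 AND a < 4
)
print(keys)  # [3] -- equivalent to plain "a = 3"
```

A single-element result like this is what allows the array to be dropped entirely, leaving one ordinary equality key.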
It\nwould be especially useful for the very fancy kinds of preprocessing\nthat are described by the MDAM paper, like using an index scan for a\nNOT IN() list/array (this can actually make sense with a low\ncardinality index).\n\nThe structure for preprocessing that I'm working towards (especially\nin v15) sets the groundwork for making those shifts in the planner,\nbecause we'll no longer treat each array constant as its own primitive\nindex scan during preprocessing. Right now, on HEAD, preprocessing\nwith arrays kinda treats each array constant like the parameter of an\nimaginary inner index scan, from an imaginary nested loop join. But\nthe planner really needs to operate on the whole qual, all at once,\nincluding any arrays. An actual nestloop join's inner index scan\nnaturally cannot do that, and so might still require runtime/dynamic\npreprocessing in a world where that mostly happens in the planner --\nbut that clearly not appropriate for arrays (\"WHERE a = 5\" and \"WHERE\na in(4, 5)\" are almost the same thing, and so should be handled in\nalmost the same way by preprocessing).\n\n> > +_bt_rewind_nonrequired_arrays(IndexScanDesc scan, ScanDirection dir)\n> [...]\n> > + if (!(cur->sk_flags & SK_SEARCHARRAY) &&\n> > + cur->sk_strategy != BTEqualStrategyNumber)\n> > + continue;\n>\n> This should use ||, not &&\n\nFixed. Yeah, that was a bug.\n\n> > + if (readpagetup || result != 0)\n> > + {\n> > + Assert(result != 0);\n> > + return false;\n> > + }\n>\n> I'm confused about this. By asserting !readpagetup after this exit, we\n> could save a branch condition for the !readpagetup result != 0 path.\n> Can't we better assert the inverse just below, or is this specifically\n> for defense-in-depth against bug? E.g.\n>\n> + if (result != 0)\n> + return false;\n> +\n> + Assert(!readpagetup);\n\nYeah, that's what it was -- defensively. It seems slightly better\nas-is, because you'll get an assertion failure if a \"readpagetup\"\ncaller gets \"result == 0\". 
That's never supposed to happen (if it did\nhappen then our ORDER proc won't be in agreement with what our =\noperator indicated about the same tuple attribute value moments\nearlier, inside _bt_check_compare).\n\n> > + /*\n> > + * By here we have established that the scan's required arrays were\n> > + * advanced, and that they haven't become exhausted.\n> > + */\n> > + Assert(arrays_advanced || !arrays_exhausted);\n>\n> Should use &&, based on the comment.\n\nFixed by getting rid of the arrays_exhausted variable, which wasn't\nadding much anyway.\n\n> > + * We generally permit primitive index scans to continue onto the next\n> > + * sibling page when the page's finaltup satisfies all required scan keys\n> > + * at the point where we're between pages.\n>\n> This should probably describe that we permit primitive scans with\n> array keys to continue until we get to the sibling page, rather than\n> this rather obvious and generic statement that would cover even the\n> index scan for id > 0 ORDER BY id asc; or this paragraph can be\n> removed.\n\nIt's not quite obvious. The scan's array keys change as the scan makes\nprogress, up to once per tuple read. But even if it really was\nobvious, it's really supposed to frame the later discussion --\ndiscussion of those less common cases where this isn't what happens.\nThe exceptions. These exceptions are:\n\n1. When a required scan key is deemed \"satisfied\" only because its\nvalue was truncated in a high key finaltup. (Technically this is what\n_bt_checkkeys has always done, but we're much more sensitive to this\nstuff now, because we won't necessarily get to make another choice\nabout starting a new primitive index scan for a long time.)\n\n2. 
When we apply the has_required_opposite_direction_only stuff, and\ndecide to start a new primitive index scan, even though technically\nall of our required-in-this-direction scan keys are still satisfied.\n\nSeparately, there is also the potential for undesirable interactions\nbetween 1 and 2, which is why we don't let them mix. (We have the \"if\n(so->scanBehind && has_required_opposite_direction_only) goto\nnew_prim_scan\" gating condition.)\n\n> Further notes:\n>\n> I have yet to fully grasp what so->scanBehind is supposed to mean. \"/*\n> Scan might be behind arrays? */\" doesn't give me enough hints here.\n\nYes, it is complicated. The best explanation is the one I've added to\n_bt_readpage, next to the precheck. But that does need more work.\n\nNote that the so->scanBehind thing solves two distinct problems for\nthe patch (related, but still clearly distinct). These problems are:\n\n1. It makes the existing precheck/continuescan optimization in\n_bt_readpage safe -- we'd sometimes get wrong answers to queries if we\ndidn't limit application of the optimization to !so->scanBehind cases.\n\nI have a test case that proves that this is true -- the one I\nmentioned in my introduction. I'm attaching that as\nprecheck_testcase.sql now. It might help you to understand\nso->scanBehind, particularly this point 1 about the basic correctness\nof the precheck thing (possibly point 2 also).\n\n2. It is used to speculatively visit the next leaf page in corner\ncases where truncated -inf attributes from the high key are deemed\n\"satisfied\".\n\nGenerally speaking, we don't want to visit the next leaf page unless\nwe're already 100% sure that it might have matches (if any page has\nmatches for our current array keys at all, that is). But in a certain\nsense we're only guessing. It isn't guaranteed (and fundamentally\ncan't be guaranteed) to work out once on the next sibling page. 
We've\nmerely assumed that our array keys satisfy truncated -inf columns,\nwithout really knowing what it is that we'll find on the next page\nwhen we actually visit it at that point (we're not going to see -inf\nin the non-pivot tuples on the next page, we'll see some\nreal/non-sentinel low-sorting value).\n\nWe have good practical reasons to not want to treat them as\nnon-matching (though that would be correct), and to take this limited\ngamble (we can discuss that some more if you'd like). Once you accept\nthat we have to do this, it follows that we need to be prepared to:\n\nA. Notice that we're really just guessing in the sense I've described\n(before leaving for the next sibling leaf page), by setting\nso->scanBehind. We'll remember that it happened.\n\nand:\n\nB. Notice that that limited gamble didn't pay off once on the\nnext/sibling leaf page, so that we can cut our losses and start a new\nprimitive index scan at that point. We do this by checking\nso->scanBehind (along with the sibling page's high key), once on the\nsibling page. (We don't need explicit handling for the case when it\nworks out, as it almost always will.)\n\nIf you want to include discussion of problem 1 here too (not just\nproblem 2), then I should add a third thing that we need to notice:\n\nC. Notice that we can't do the precheck thing once in _bt_readpage,\nbecause it'd be wrong to allow it.\n\n> I find it weird that we call _bt_advance_array_keys for non-required\n> sktrig. Shouldn't that be as easy as doing a binary search through the\n> array? Why does this need to hit the difficult path?\n\nWhat difficult path?\n\nAdvancing non-required arrays isn't that different to advancing\nrequired ones. We will never advance required arrays when called just\nto advance a non-required one, obviously. 
But we can do the opposite.\nIn fact, we absolutely have to advance non-required arrays (as best we\ncan) when advancing a required one (or when the call was triggered by\na non-array required scan key).\n\nThat said, it could have been clearer than it was in earlier versions.\nv15 makes the difference between the non-required scan key trigger and\nrequired scan key trigger cases clearer within _bt_advance_array_keys.\n\n--\nPeter Geoghegan", "msg_date": "Fri, 15 Mar 2024 20:12:02 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sat, 16 Mar 2024 at 01:12, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Mar 8, 2024 at 9:00 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I've attached v14, where 0001 is v13, 0002 is a patch with small\n> > changes + some large comment suggestions, and 0003 which contains\n> > sorted merge join code for _bt_merge_arrays.\n>\n> I have accepted your changes from 0003. Agree that it's better that\n> way. It's at least a little faster, but not meaningfully more\n> complicated.\n\nThanks.\n\n> > I'll try to work a bit on v13/14's _bt_preprocess_keys, and see what I\n> > can make of it.\n>\n> That's been the big focus of this new v15, which now goes all out with\n> teaching _bt_preprocess_keys with how to deal with arrays. We now do\n> comprehensive normalization of even very complicated combinations of\n> arrays and non-array scan keys in this version.\n\nI was thinking about a more unified processing model, where\n_bt_preprocess_keys would iterate over all keys, including processing\nof array keys (by merging and reduction to normal keys) if and when\nfound. 
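The "merging" part of that model -- what _bt_merge_arrays does after the 0003 change -- amounts to intersecting two sorted, duplicate-free arrays in a single linear pass instead of one binary search per element. A Python sketch of the idea only (`merge_arrays` is a hypothetical stand-in, not the patch's C code, which compares Datums through an ORDER proc):

```python
def merge_arrays(a, b):
    """Intersect two sorted, duplicate-free arrays in O(len(a) + len(b))."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])   # element survives the merge
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1             # a[i] has no partner in b
        else:
            j += 1             # b[j] has no partner in a
    return out

print(merge_arrays([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # [2, 3, 4, 5]
```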
This would also reduce the effort expended when there are\ncontradictory scan keys, as array preprocessing is relatively more\nexpensive than other scankeys and contradictions are then found before\nprocessing of later keys.\nAs I wasn't very far into the work yet it seems I can reuse a lot of\nyour work here.\n\n> For example, consider this query:\n[...]\n> This has a total of 6 input scankeys -- 3 of which are arrays. But by\n> the time v15's _bt_preprocess_keys is done with it, it'll have only 1\n> scan key -- which doesn't even have an array (not anymore). And so we\n> won't even need to advance the array keys one single time -- there'll\n> simply be no array left to advance. In other words, it'll be just as\n> if the query was written this way from the start:\n>\n> select *\n> from multi_test\n> where\n> a = 3::int;\n>\n> (Though of course the original query will spend more cycles on\n> preprocessing, compared to this manually simplified variant.)\n\nThat's a good improvement, much closer to an optimal access pattern.\n\n> It turned out to not be terribly difficult to teach\n> _bt_preprocess_keys everything it could possibly need to know about\n> arrays, so that it can operate on them directly, as a variant of the\n> standard equality strategy (we do still need _bt_preprocess_array_keys\n> for basic preprocessing of arrays, mostly just merging them). This is\n> better overall (in that it gets every little subtlety right), but it\n> also simplified a number of related issues. For example, there is no\n> longer any need to maintain a mapping between so->keyData[]-wise scan\n> keys (output scan keys), and scan->keyData[]-wise scan keys (input\n> scan keys). 
We can just add a step to fix-up the references to the end\n> of _bt_preprocess_keys, to make life easier within\n> _bt_advance_array_keys.\n>\n> This preprocessing work should all be happening during planning, not\n> during query execution -- that's the design that makes the most sense.\n> This is something we've discussed in the past in the context of skip\n> scan (see my original email to this thread for the reference).\n\nYes, but IIRC we also agreed that it's impossible to do this fully in\nplanning, amongst others due to joins on array fields.\n\n> It\n> would be especially useful for the very fancy kinds of preprocessing\n> that are described by the MDAM paper, like using an index scan for a\n> NOT IN() list/array (this can actually make sense with a low\n> cardinality index).\n\nYes, indexes such as those on enums. Though, in those cases the NOT IN\n() could be transformed into IN()-lists by the planner, but not the\nindex.\n\n> The structure for preprocessing that I'm working towards (especially\n> in v15) sets the groundwork for making those shifts in the planner,\n> because we'll no longer treat each array constant as its own primitive\n> index scan during preprocessing.\n\nI hope that's going to be a fully separate patch. I don't think I can\nhandle much more complexity in this one.\n\n> Right now, on HEAD, preprocessing\n> with arrays kinda treats each array constant like the parameter of an\n> imaginary inner index scan, from an imaginary nested loop join. But\n> the planner really needs to operate on the whole qual, all at once,\n> including any arrays. 
An actual nestloop join's inner index scan\n> naturally cannot do that, and so might still require runtime/dynamic\n> preprocessing in a world where that mostly happens in the planner --\n> but that clearly not appropriate for arrays (\"WHERE a = 5\" and \"WHERE\n> a in(4, 5)\" are almost the same thing, and so should be handled in\n> almost the same way by preprocessing).\n\nYeah, if the planner could handle some of this that'd be great. At the\nsame time, I think that this might need to be gated behind a guc for\nmore expensive planner-time deductions.\n\n> > > + * We generally permit primitive index scans to continue onto the next\n> > > + * sibling page when the page's finaltup satisfies all required scan keys\n> > > + * at the point where we're between pages.\n> >\n> > This should probably describe that we permit primitive scans with\n> > array keys to continue until we get to the sibling page, rather than\n> > this rather obvious and generic statement that would cover even the\n> > index scan for id > 0 ORDER BY id asc; or this paragraph can be\n> > removed.\n>\n> It's not quite obvious.\n[...]\n> Separately, there is also the potential for undesirable interactions\n> between 1 and 2, which is why we don't let them mix. (We have the \"if\n> (so->scanBehind && has_required_opposite_direction_only) goto\n> new_prim_scan\" gating condition.)\n\nI see.\n\n> > Further notes:\n> >\n> > I have yet to fully grasp what so->scanBehind is supposed to mean. \"/*\n> > Scan might be behind arrays? */\" doesn't give me enough hints here.\n>\n> Yes, it is complicated. The best explanation is the one I've added to\n> _bt_readpage, next to the precheck. But that does need more work.\n\nYeah. The _bt_readpage comment doesn't actually contain the search\nterm scanBehind, so I wasn't expecting that to be documented there.\n\n> > I find it weird that we call _bt_advance_array_keys for non-required\n> > sktrig. 
Shouldn't that be as easy as doing a binary search through the\n> > array? Why does this need to hit the difficult path?\n>\n> What difficult path?\n\n\"Expensive\" would probably have been a better wording: we do a\ncomparative lot of processing in the !_bt_check_compare() +\n!continuescan path; much more than the binary searches you'd need for\nnon-required array key checks.\n\n> Advancing non-required arrays isn't that different to advancing\n> required ones. We will never advance required arrays when called just\n> to advance a non-required one, obviously. But we can do the opposite.\n> In fact, we absolutely have to advance non-required arrays (as best we\n> can) when advancing a required one (or when the call was triggered by\n> a non-array required scan key).\n\nI think it's a lot more expensive to do the non-required array key\nincrement for non-required triggers. What are we protecting against\n(or improving) by always doing advance_array_keys on non-required\ntrigger keys?\n\nI mean that we should just do the non-required array key binary search\ninside _bt_check_compare for non-required array keys, as that would\nskip a lot of the rather expensive other array key infrastructure, and\nonly if we're outside the minimum or maximum bounds of the\nnon-required scankeys should we trigger advance_array_keys (unless\nscan direction changed).\n\nA full review of the updated patch will follow soon.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 18 Mar 2024 14:24:46 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sat, 16 Mar 2024 at 01:12, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Mar 8, 2024 at 9:00 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > I've attached v14, where 0001 is v13, 0002 is a patch 
with small\n> > changes + some large comment suggestions, and 0003 which contains\n> > sorted merge join code for _bt_merge_arrays.\n>\n> This is part of my next revision, v15, which I've attached (along with\n> a test case that you might find useful, explained below).\n>\n> v15 makes the difference between the non-required scan key trigger and\n> required scan key trigger cases clearer within _bt_advance_array_keys.\n\nOK, here's a small additional review, with a suggestion for additional\nchanges to _bt_preprocess:\n\n> @@ -1117,6 +3160,46 @@ _bt_compare_scankey_args(IndexScanDesc scan, ScanKey op,\n> /*\n> * The opfamily we need to worry about is identified by the index column.\n> */\n> Assert(leftarg->sk_attno == rightarg->sk_attno);\n>\n> + /*\n> + * If either leftarg or rightarg are equality-type array scankeys, we need\n> + * specialized handling (since by now we know that IS NULL wasn't used)\n> + */\n> [...]\n> + }\n> +\n> opcintype = rel->rd_opcintype[leftarg->sk_attno - 1];\n\nHere, you insert your code between the comment about which opfamily to\nchoose and the code assigning the opfamily. I think this needs some\ncleanup.\n\n> + * Don't call _bt_preprocess_array_keys_final in this fast path\n> + * (we'll miss out on the single value array transformation, but\n> + * that's not nearly as important when there's only one scan key)\n\nWhy is it OK to ignore it? Or, why don't we apply it here?\n\n---\n\nAttached 2 patches for further optimization of the _bt_preprocess_keys\npath (on top of your v15), according to the following idea:\n\nRight now, we do \"expensive\" processing with xform merging for all\nkeys when we have more than 1 keys in the scan. 
However, we only do\nper-attribute merging of these keys, so if there is only one key for\nany attribute, the many cycles spent in that loop are mostly wasted.\nBy checking for single-scankey attributes, we can short-track many\nmulti-column index scans because they usually have only a single scan\nkey per attribute.\nThe first implements that idea, the second reduces the scope of\nvarious variables so as to improve compiler optimizability.\n\nI'll try to work a bit more on merging the _bt_preprocess steps into a\nsingle main iterator, but this is about as far as I got with clean\npatches. Merging the steps for array preprocessing with per-key\nprocessing and post-processing is proving a bit more complex than I'd\nanticipated, so I don't think I'll be able to finish that before the\nfeature freeze, especially with other things that keep distracting me.\n\nMatthias van de Meent", "msg_date": "Wed, 20 Mar 2024 20:26:38 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Mar 7, 2024 at 10:42 AM Benoit Tigeot <benoit.tigeot@gmail.com> wrote:\n> I am not up to date with the last version of patch but I did a regular benchmark with version 11 and typical issue we have at the moment and the result are still very very good. [1]\n\nThanks for providing the test case. It was definitely important back\nwhen the ideas behind the patch had not yet fully developed. It helped\nme to realize that my thinking around non-required arrays (meaning\narrays that cannot reposition the scan, and just filter out\nnon-matching tuples) was still sloppy.\n\n> In term of performance improvement the last proposals could be a real game changer for 2 of our biggest databases. 
We hope that Postgres 17 will contain those improvements.\n\nCurrent plan is to commit this patch in the next couple of weeks,\nahead of Postgres 17 feature freeze.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 21 Mar 2024 18:05:39 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Mar 18, 2024 at 9:25 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I was thinking about a more unified processing model, where\n> _bt_preprocess_keys would iterate over all keys, including processing\n> of array keys (by merging and reduction to normal keys) if and when\n> found. This would also reduce the effort expended when there are\n> contradictory scan keys, as array preprocessing is relatively more\n> expensive than other scankeys and contradictions are then found before\n> processing of later keys.\n\nDoes it really matter? I just can't get excited about maybe getting\none less syscache lookup in queries that involve *obviously*\ncontradictory keys at the SQL level. Especially because we're so much\nbetter off with the new design here anyway; calling\n_bt_preprocess_keys once rather than once per distinct set of array\nkeys is an enormous improvement, all on its own.\n\nMy concern with preprocessing overhead is almost entirely limited to\npathological performance issues involving extreme (even adversarial)\ninput scan keys/arrays. I feel that if we're going to guarantee that\nwe won't choke in _bt_checkkeys, even given billions of distinct sets\nof array keys, we should make the same guarantee in\n_bt_preprocess_keys -- just to avoid POLA violations. But that's the\nonly thing that seems particularly important in the general area of\npreprocessing performance. 
(Preprocessing performance probably does\nmatter quite a bit in more routine cases, where </<= and >/>= are\nmixed together on the same attribute. But there'll be no new\n_bt_preprocess_keys operator/function lookups for stuff like that.)\n\nThe advantage of not completely merging _bt_preprocess_array_keys with\n_bt_preprocess_keys is that preserving this much of the existing code\nstructure allows us to still decide how many array keys we'll need for\nthe scan up front (if the scan doesn't end up having an unsatisfiable\nqual). _bt_preprocess_array_keys will eliminate redundant arrays\nearly, in practically all cases (the exception is when it has to deal\nwith an incomplete opfamily lacking the appropriate cross-type ORDER\nproc), so we don't have to think about merging arrays after that\npoint. Rather like how we don't have to worry about \"WHERE a <\nany('{1, 2, 3}')\" type inequalities after _bt_preprocess_array_keys\ndoes an initial round of array-specific preprocessing\n(_bt_preprocess_keys can deal with those in the same way as it will\nwith standard inequalities).\n\n> > This preprocessing work should all be happening during planning, not\n> > during query execution -- that's the design that makes the most sense.\n> > This is something we've discussed in the past in the context of skip\n> > scan (see my original email to this thread for the reference).\n>\n> Yes, but IIRC we also agreed that it's impossible to do this fully in\n> planning, amongst others due to joins on array fields.\n\nEven with a nested loop join's inner index scan, where the constants\nused for each btrescan are not really predictable in the planner, we\ncan still do most preprocessing in the planner, at least most of the\ntime.\n\nWe could still easily do analysis that is capable of ruling out\nredundant or contradictory scan keys for any possible parameter value\n-- seen or unseen. 
I'd expect this to be the common case -- most of\nthe time these inner index scans need only one simple = operator\n(maybe 2 = operators). Obviously, that approach still requires that\nbtrescan at least accept a new set of constants for each new inner\nindex scan invocation. But that's still much cheaper than calling\n_bt_preprocess_keys from scratch every time btrescan is called.\n\n> > It\n> > would be especially useful for the very fancy kinds of preprocessing\n> > that are described by the MDAM paper, like using an index scan for a\n> > NOT IN() list/array (this can actually make sense with a low\n> > cardinality index).\n>\n> Yes, indexes such as those on enums. Though, in those cases the NOT IN\n> () could be transformed into IN()-lists by the planner, but not the\n> index.\n\nI think that it would probably be built as just another kind of index\npath, like the ones we build for SAOPs. Anti-SAOPs?\n\nJust as with SAOPs, the disjunctive accesses wouldn't really be\nsomething that the planner would need too much direct understanding of\n(except during costing). I'd only expect the plan to use such an index\nscan when it wasn't too far off needing a full index scan anyway. Just\nlike with skip scan, the distinction between these NOT IN() index scan\npaths and a simple full index scan path is supposed to be fuzzy (maybe\nthey'll actually turn out to be full scans at runtime, since the\nnumber of primitive index scans is determined dynamically, based on\nthe structure of the index at runtime).\n\n> > The structure for preprocessing that I'm working towards (especially\n> > in v15) sets the groundwork for making those shifts in the planner,\n> > because we'll no longer treat each array constant as its own primitive\n> > index scan during preprocessing.\n>\n> I hope that's going to be a fully separate patch. I don't think I can\n> handle much more complexity in this one.\n\nAllowing the planner to hook into _bt_preprocess_keys is absolutely\nnot in scope here. 
I was just making the point that treating array\nscan keys like just another type of scan key during preprocessing is\ngoing to help with that too.\n\n> Yeah. The _bt_readpage comment doesn't actually contain the search\n> term scanBehind, so I wasn't expecting that to be documented there.\n\nCan you think of a better way of explaining it?\n\n> > > I find it weird that we call _bt_advance_array_keys for non-required\n> > > sktrig. Shouldn't that be as easy as doing a binary search through the\n> > > array? Why does this need to hit the difficult path?\n> >\n> > What difficult path?\n>\n> \"Expensive\" would probably have been a better wording: we do a\n> comparative lot of processing in the !_bt_check_compare() +\n> !continuescan path; much more than the binary searches you'd need for\n> non-required array key checks.\n\nI've come around to your point of view on this -- at least to a\ndegree. I'm now calling _bt_advance_array_keys from within\n_bt_check_compare for non-required array keys only.\n\nIf nothing else, it's clearer this way because it makes it obvious\nthat non-required arrays cannot end the (primitive) scan. There's no\nfurther need for the wart in _bt_check_compare's contract about\n\"continuescan=false\" being associated with a non-required scan key in\nthis one special case.\n\n> > Advancing non-required arrays isn't that different to advancing\n> > required ones. We will never advance required arrays when called just\n> > to advance a non-required one, obviously. But we can do the opposite.\n> > In fact, we absolutely have to advance non-required arrays (as best we\n> > can) when advancing a required one (or when the call was triggered by\n> > a non-array required scan key).\n>\n> I think it's a lot more expensive to do the non-required array key\n> increment for non-required triggers. 
What are we protecting against\n> (or improving) by always doing advance_array_keys on non-required\n> trigger keys?\n\nWe give up on the first non-required array that fails to find an exact\nmatch, though. Regardless of whether the scan was triggered by a\nrequired or a non-required scan key (it's equally useful in both types\nof array advancement, because they're not all that different).\n\nThere were and are steps around starting a new primitive index scan at\nthe end of _bt_advance_array_keys, that aren't required when array\nadvancement was triggered by an unsatisfied non-required array scan\nkey. But those steps were skipped given a non-required trigger scan\nkey, anyway.\n\n> I mean that we should just do the non-required array key binary search\n> inside _bt_check_compare for non-required array keys, as that would\n> skip a lot of the rather expensive other array key infrastructure, and\n> only if we're outside the minimum or maximum bounds of the\n> non-required scankeys should we trigger advance_array_keys (unless\n> scan direction changed).\n\nI've thought about optimizing non-required arrays along those lines.\nBut it doesn't really seem to be necessary at all.\n\nIf we were going to do it, then it ought to be done in a way that\nworks independent of the trigger condition for array advancement (that\nis, it'd work for non-required arrays that have a required scan key\ntrigger, too).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 21 Mar 2024 19:21:53 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Mar 20, 2024 at 3:26 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> OK, here's a small additional review, with a suggestion for additional\n> changes to _bt_preprocess:\n\nAttached is v16. This has some of the changes that you asked for. 
But\nmy main focus has been on fixing regressions that were discovered\nduring recent performance validation.\n\nUp until now, my stress testing of the patch (not quite the same thing\nas benchmarking) more or less consisted of using pgbench to look at\ntwo different types of cases. These are:\n\n1. Extreme cases where the patch obviously helps the most, at least\ninsofar as navigating the index structure itself is concerned (note\nthat avoiding heap accesses isn't in scope for the pgbench stress\ntesting).\n\nThese are all variants of pgbench's SELECT workload, with a huge IN()\nlist containing contiguous integers (we can get maybe a 5.3x increase\nin TPS here), or integers that are spaced apart by maybe 20 - 50\ntuples (where we can get maybe a 20% - 30% improvement in TPS here).\n\n2. The opposite extreme, where an IN() list has integers that are\nessentially random, and so are spaced very far apart on average.\n\nAnother variant of pgbench's SELECT workload, with an IN() list. This\nis a case that the patch has hardly any chance of helping, so the goal\nhere is to just avoid regressions. Since I've been paying attention to\nthis for a long time, this wasn't really a problem for v15.\n\n3. Cases where matching tuples are spaced apart by 100 - 300\ntuples/integer values.\n\nThese cases are \"somewhere between 1 and 2\". Cases where I would\nexpect to win by a smaller amount (say a 5% improvement in TPS), but\nv15 nevertheless lost by as much as 12% here. This was a problem that\nseemed to demand a fix.\n\nThe underlying reason why all earlier versions of the patch had these\nregressions came down to their use of a pure linear search (by which I\nmean having _bt_readpage call _bt_checkkeys again and again on the same\npage). That was just too slow here. 
v15 could roughly halve the number\nof index descents compared to master, as expected -- but that wasn't\nenough to make up for the overhead of having to plow through hundreds\nof non-matching tuples on every leaf page. v15 couldn't even break\neven.\n\nv16 of the patch fixes the problem by adding a limited additional form\nof \"skipping\", used *within* a page, as it is scanned by _bt_readpage.\n(As opposed to skipping over the index by redescending anew, starting\nfrom the root page.)\n\nIn other words, v16 teaches the linear search process to occasionally\n\"look ahead\" when it seems like the linear search process has required\ntoo many comparisons at that point (comparisons by\n_bt_tuple_before_array_skeys made from within _bt_checkkeys). You can\nthink of this new \"look ahead\" mechanism as bridging the gap between\ncase 1 and case 2 (fixing case 3, a hybrid of 1 and 2). We still\nmostly want to use a \"linear search\" from within _bt_readpage, but we\ndo benefit from having this fallback when that just isn't working very\nwell. Now even these new type 3 stress tests see an increase in TPS of\nperhaps 5% - 12%, depending on just how far apart you arrange for\nmatching tuples to be spaced apart by (the total size of the IN list\nis also relevant, but much less so).\n\nSeparately, I added a new optimization to the binary search process\nthat selects the next array element via a binary search of the array\n(actually, to the function _bt_binsrch_array_skey), which lowers the\ncost of advancing required arrays that trigger advancement: we only\nreally need one comparison per _bt_binsrch_array_skey call in almost\nall cases. In practice we'll almost certainly find that the next array\nelement in line is <= the unsatisfied tuple datum (we already know\nfrom context that the current/former array element must now be < that\nsame tuple datum at that point, so the conditions under which this\ndoesn't work out right away are narrow). 
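To make the shape of that probe-before-binary-search idea concrete, here's a toy Python sketch (hypothetical names; not the actual _bt_binsrch_array_skey C code): before doing a full binary search of the sorted array, probe the element immediately next in line, since in practice a single comparison usually settles it.

```python
import bisect

def advance_array_cursor(elems, cur_idx, tuple_datum):
    """Toy model of advancing a sorted array of scan key constants.

    Precondition (from the caller's context): elems[cur_idx] is
    already known to be < tuple_datum.  Returns the index of the
    first element >= tuple_datum, or None if the array is exhausted.
    """
    nxt = cur_idx + 1
    if nxt >= len(elems):
        return None  # array exhausted: no remaining element can match
    if elems[nxt] >= tuple_datum:
        # Common case: one comparison settles it -- the very next
        # element in line is the first one >= tuple_datum.
        return nxt
    # Uncommon case: many elements must be skipped at once, so fall
    # back to a binary search over the remainder of the array.
    idx = bisect.bisect_left(elems, tuple_datum, lo=nxt + 1)
    return idx if idx < len(elems) else None
```

The point is only the control flow: one comparison in the common case, a binary search over the remaining elements otherwise.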
This second optimization is\nmuch more general than the first one. It helps with pretty much any\nkind of adversarial pgbench stress test.\n\n> Here, you insert your code between the comment about which opfamily to\n> choose and the code assigning the opfamily. I think this needs some\n> cleanup.\n\nFixed.\n\n> > + * Don't call _bt_preprocess_array_keys_final in this fast path\n> > + * (we'll miss out on the single value array transformation, but\n> > + * that's not nearly as important when there's only one scan key)\n>\n> Why is it OK to ignore it? Or, why don't we apply it here?\n\nIt doesn't seem worth adding any cycles to that fast path, given that\nthe array transformation in question can only help us to convert \"=\nany('{1}')\" into \"= 1\", because there's no other scan keys that can\ncut down the number of array constants first (through earlier\narray-specific preprocessing steps).\n\nNote that even this can't be said for converting \"IN (1)\" into \"= 1\",\nsince the planner already does that for us, without nbtree ever seeing\nit directly.\n\n> Attached 2 patches for further optimization of the _bt_preprocess_keys\n> path (on top of your v15), according to the following idea:\n>\n> Right now, we do \"expensive\" processing with xform merging for all\n> keys when we have more than 1 keys in the scan. However, we only do\n> per-attribute merging of these keys, so if there is only one key for\n> any attribute, the many cycles spent in that loop are mostly wasted.\n\nWhy do you say that? I agree that calling _bt_compare_scankey_args()\nmore than necessary might matter, but that doesn't come up when\nthere's only one scan key per attribute. We'll only copy each scan key\ninto the strategy-specific slot for the attribute's temp xform[]\narray.\n\nActually, it doesn't necessarily come up when there's more than one\nscan key per index attribute, depending on the strategies in use. 
That\nseems like an existing problem on HEAD; I think we should be doing\nthis more. We could stand to do better at preprocessing when\ninequalities are mixed together. Right now, we can't detect\ncontradictory quals, given a case such as \"SELECT * FROM\npgbench_accounts WHERE aid > 5 AND aid < 3\". It'll only work for\nsomething like \"WHERE aid = 5 AND aid < 3\", so we'll still uselessly\ndescend the index once (not a disaster, but not ideal either).\n\nHonestly, I don't follow what the goal is here. Does it really help to\nrestructure the loop like this? Feels like I'm missing something. We\ndo this when there's only one scan key, sort of, but that's probably\nnot really that important anyway. Maybe it helps a bit for nestloop\njoins with an inner index scan, where one array scan key is common.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 21 Mar 2024 20:32:42 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Mar 21, 2024 at 8:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v16. This has some of the changes that you asked for. But\n> my main focus has been on fixing regressions that were discovered\n> during recent performance validation.\n\nAttached is v17. This revision deals with problems with parallel index\nscans that use array keys.\n\nThis patch is now *very* close to being committable. My only\noutstanding concern is the costing/selfuncs.c aspects of the patch,\nwhich I haven't thought about in many months. I'd estimate that I'm\nnow less than a week away from pushing this patch. (Plus I need to\ndecide which tests from my massive test suite should be included in\nthe committed patch, as standard regression tests.)\n\nBack to v17. 
Earlier versions dealt with parallel index scans in a way\nthat could lead to very unbalanced utilization of worker processes (at\nleast with the wrong sort of query/index), which was clearly\nunacceptable. The underlying issue was the continued use of\nBTParallelScanDescData.btps_arrayKeyCount field in shared memory,\nwhich was still used to stop workers from moving on to the next\nprimitive scan. That old approach made little sense with this new high\nlevel design. (I have been aware of the problem for months, but didn't\nget around to it until now.)\n\nI wasn't specifically trying to win any benchmarks here -- I really\njust wanted to make the design for parallel index scans with SAOPs\ninto a coherent part of the wider design, without introducing any\nregressions. I applied my usual charter for the patch when working on\nv17, by more or less removing any nbtree.c parallel index scan code\nthat had direct awareness of array keys as distinct primitive index\nscans. This fixed the problem of unbalanced worker utilization, as\nexpected, but it also seems to have benefits particular to parallel\nindex scans with array keys -- which was unexpected.\n\nWe now use an approach that's almost identical to the approach taken\nby every other type of parallel scan. My new approach naturally\nreduces contention among backends that want to advance the scan. (Plus\nparallel index scans get all of the same benefits as serial index\nscans, of course.)\n\nv17 replaces _bt_parallel_advance_array_keys with a new function,\ncalled _bt_parallel_primscan_advance, which is now called from\n_bt_advance_array_keys. The new function doesn't need to increment\nbtscan->btps_arrayKeyCount (nor does it increment a local copy in\nBTScanOpaqueData.arrayKeyCount). It is called at the specific point\nthat array key advancement *tries* to start another primitive index\nscan. The difference (relative to the serial case) is that it might\nnot succeed in doing so. 
It might not be possible to start another\nprimitive index scan, since it might turn out that some other backend\n(one page ahead, or even one page behind) already did it for\neverybody. At that point it's easy for the backend that failed to\nstart the next primitive scan to have _bt_advance_array_keys back out\nof everything. Then the backend just goes back to consuming pages by\nseizing the scan.\n\nThis approach preserves the ability of parallel scans to have\n_bt_readpage release the next page before _bt_readpage even starts\nexamining tuples from the current page. It's possible that 2 or 3\nbackends (each of which scans its own leaf pages from a group of\nadjacent pages) will \"independently\" decide that it's time to start\nanother scan -- but at most one backend can be granted the right to\ndo so.\n\nIt's even possible that one such backend among several (all of which\nmight be expected to try to start the next primitive scan in unison)\nwill *not* want to do so -- it could know better than the rest. This\ncan happen when the backend lands one page ahead of the rest, and it\njust so happens to see no reason to not just continue the scan on the\nleaf level for now. Such a backend can effectively veto the idea of\nstarting another primitive index scan for the whole top-level parallel\nscan. This probably doesn't really matter much, in and of itself\n(skipping ahead by only one or two pages is suboptimal but not really\na problem), but that's beside the point. The point is that having very\nflexible rules like this works out to be both simpler and better\nperforming than a more rigid, pessimistic approach would be. 
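As a very rough model of that try-and-back-out behavior (a toy Python sketch with made-up names, not the real BTParallelScanDescData shared-memory state or its locking protocol):

```python
import threading

class ParallelScanCoordination:
    """Toy model of granting at most one backend the right to start
    the next primitive index scan for a parallel scan."""

    def __init__(self):
        self._mutex = threading.Lock()          # stand-in for shmem locking
        self._primscan_granted = False

    def try_schedule_primscan(self):
        # A backend *tries* to schedule the next primitive scan.  If a
        # neighboring backend already did so, this backend just backs
        # out and goes back to consuming pages from the scan.
        with self._mutex:
            if self._primscan_granted:
                return False   # someone else got there first
            self._primscan_granted = True
            return True
```

In the real patch the decision is of course tied to the scan's position and can even be vetoed, but the essential property is the same: several backends may try at around the same time, at most one wins, and the losers simply resume consuming pages.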
In\nparticular, keeping the time that the scan is seized by any one\nbackend to an absolute minimum is important -- it may be better to\ndetect and recover from contention than to try to prevent them\naltogether.\n\nJust to be clear, all of this only comes up during parts of the scan\nwhere primitive index scans actually look like a good idea in the\nfirst place. It's perfectly possible for the scan to process thousands\nof array keys, without any backend ever considering starting another\nprimitive index scan. Much of the time, every worker process can\nindependently advance their own private array keys, without that\nrequiring any sort of special coordination (nothing beyond the\ncoordination required by *all* parallel index scans, including those\nwithout any SAOP arrays).\n\nAs I've said many times already, the high level charter of this\nproject is to make scans that have arrays (whether serial or parallel)\nas similar as possible to any other type of scan.\n\n\n--\nPeter Geoghegan", "msg_date": "Tue, 26 Mar 2024 14:01:58 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Tue, Mar 26, 2024 at 2:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> v17 replaces _bt_parallel_advance_array_keys with a new function,\n> called _bt_parallel_primscan_advance, which is now called from\n> _bt_advance_array_keys. The new function doesn't need to increment\n> btscan->btps_arrayKeyCount (nor does it increment a local copy in\n> BTScanOpaqueData.arrayKeyCount). It is called at the specific point\n> that array key advancement *tries* to start another primitive index\n> scan. The difference (relative to the serial case) is that it might\n> not succeed in doing so. 
It might not be possible to start another\n> primitive index scan, since it might turn out that some other backend\n> (one page ahead, or even one page behind) already did it for\n> everybody.\n\nThere was a bug in v17 around how we deal with parallel index scans.\nUnder the right circumstances, a parallel scan with array keys put all\nof its worker processes to sleep, without subsequently waking them up\n(so the scan would hang indefinitely). The underlying problem was that\nv17 assumed that the backend that scheduled the next primitive index\nscan would also be the one that actually performs that same primitive\nscan. While it is natural and desirable for the backend that schedules\nthe primitive scan to be the one that actually calls _bt_search, v17\ntacitly relied on that to successfully finish the scan. It's\nparticularly important that the leader process always be able to stop\nparticipating as a worker immediately, whenever it needs to, and for\nas long as it needs to.\n\nAttached is v18, which adds an additional layer of indirection to fix\nthe bug: worker processes now schedule primitive index scans\nexplicitly. They store their current array keys in shared memory when\nprimitive scans are scheduled (during a parallel scan), allowing other\nbackends to pick things up where the scheduling backend left off. And\nso with v18, any other backend can be the one that performs the actual\ndescent of the index within _bt_search.\n\nThe design is essentially the same as before, though. We'd still\nprefer that it be the backend that scheduled the primitive index scan.\nOther backends might well be busy finishing off their scan of\nimmediately preceding leaf pages.\n\nSeparately, v18 also adds many new regression tests. This greatly\nimproves the general situation around test coverage, which I'd put off\nbefore now. 
Now there is coverage for parallel scans with array keys,\nas well as coverage for the new array-specific preprocessing code.\n\nNote: v18 doesn't have any adjustments to the costing, as originally\nplanned. I'll probably need to post a revised patch with improved (or\nat least polished) costing in the next few days, so that others will\nhave the opportunity to comment before I commit the patch.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 1 Apr 2024 18:33:22 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Apr 1, 2024 at 6:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Note: v18 doesn't have any adjustments to the costing, as originally\n> planned. I'll probably need to post a revised patch with improved (or\n> at least polished) costing in the next few days, so that others will\n> have the opportunity to comment before I commit the patch.\n\nAttached is v19, which dealt with remaining concerns I had about the\ncosting in selfuncs.c. My current plan is to commit this on Saturday\nmorning (US Eastern time).\n\nI ultimately concluded that selfuncs.c shouldn't be doing anything to\ncap the estimated number of primitive index scans for an SAOP clause,\nunless doing so is all but certain to make the estimate more accurate.\nAnd so v19 makes the changes in selfuncs.c much closer to the approach\nused to cost SAOP clauses on master. In short, v19 is a lot more\nconservative in the changes made to selfuncs.c.\n\nv19 does hold onto the idea of aggressively capping the estimate in\nextreme cases -- cases where the estimate begins to approach the total\nnumber of leaf pages in the index. 
This is clearly safe (it's\nliterally impossible to have more index descents than there are leaf\npages in the index), and probably necessary given that index paths\nwith SAOP filter quals (on indexed attributes) will no longer be\ngenerated by the planner.\n\nTo recap, since nbtree more or less behaves as if scans with SAOPs are\none continuous index scan under the new design from the patch, it was\ntempting to make code in genericcostestimate/btcostestimate treat it\nlike that, too (the selfuncs.c changes from earlier versions tried to\ndo that). As I said already, I've now given up on that approach in\nv19. This does mean that genericcostestimate continues to apply the\nMackert-Lohman formula for btree SAOP scans. This is a bit odd in a\nworld where nbtree specifically promises to not repeat any leaf page\naccesses. I was a bit uneasy about that aspect for a while, but\nultimately concluded that it wasn't very important in the grand scheme\nof things.\n\nThe problem with teaching code in genericcostestimate/btcostestimate\nto treat nbtree scans with SAOPs as one continuous index scan is that\nit hugely biases genericcostestimate's simplistic approach to\nestimating numIndexPages. genericcostestimate does this by simply\nprorating index->pages using numIndexTuples/index->tuples. That naive\napproach probably works alright with simple scalar operators, but it's\nnot going to work with SAOPs directly. I can't think of a good way of\nestimating numIndexPages with SAOPs, and suspect that the required\ninformation just isn't available. It seems best to treat it as a\npossible area for improvement in a later release. 
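The capping idea can be illustrated with a toy model (a hypothetical helper, not the actual btcostestimate logic): with several SAOP arrays the raw estimate of primitive index scans grows multiplicatively, but a scan can never descend the index more times than there are leaf pages.

```python
def estimate_num_sa_scans(array_lengths, num_leaf_pages):
    """Toy model of capping the estimated number of primitive index
    scans.  array_lengths holds the (estimated) number of elements in
    each SAOP array; num_leaf_pages is the number of leaf pages in
    the index, a hard upper bound on the number of index descents."""
    num_sa_scans = 1
    for n in array_lengths:
        num_sa_scans *= max(n, 1)   # worst case: product of array sizes
    return min(num_sa_scans, num_leaf_pages)
```

This only captures the "aggressive cap in extreme cases" behavior; everything below the cap is left to the existing cost model.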
The really important\nthing for v17 is that we never wildly overestimate the number of\ndescents when multiple SAOPs are used, each with a medium or large\narray.\n\nv19 also makes sure that genericcostestimate doesn't allow a\nnon-boundary SAOP clause/non-required array scan key to affect\nnum_sa_scans -- it'll just use the num_sa_scans values used by\nbtcostestimate in v19. This makes sense because nbtree is guaranteed\nto not start new primitive scans for these sorts of scan keys (plus\nthere's no need to calculate num_sa_scans twice, in two places, using\ntwo slightly different approaches).\n\n--\nPeter Geoghegan", "msg_date": "Wed, 3 Apr 2024 15:53:37 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hello Peter,\n\n03.04.2024 22:53, Peter Geoghegan wrote:\n> On Mon, Apr 1, 2024 at 6:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> Note: v18 doesn't have any adjustments to the costing, as originally\n>> planned. I'll probably need to post a revised patch with improved (or\n>> at least polished) costing in the next few days, so that others will\n>> have the opportunity to comment before I commit the patch.\n> Attached is v19, which dealt with remaining concerns I had about the\n> costing in selfuncs.c. 
My current plan is to commit this on Saturday\n> morning (US Eastern time).\n\nPlease look at an assertion failure (reproduced starting from 5bf748b86),\ntriggered by the following query:\nCREATE TABLE t (a int, b int);\nCREATE INDEX t_idx ON t (a, b);\nINSERT INTO t (a, b) SELECT g, g FROM generate_series(0, 999) g;\nANALYZE t;\nSELECT * FROM t WHERE a < ANY (ARRAY[1]) AND b < ANY (ARRAY[1]);\n\nTRAP: failed Assert(\"so->numArrayKeys\"), File: \"nbtutils.c\", Line: 560, PID: 3251267\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 7 Apr 2024 20:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sun, Apr 7, 2024 at 1:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Please look at an assertion failure (reproduced starting from 5bf748b86),\n> triggered by the following query:\n> CREATE TABLE t (a int, b int);\n> CREATE INDEX t_idx ON t (a, b);\n> INSERT INTO t (a, b) SELECT g, g FROM generate_series(0, 999) g;\n> ANALYZE t;\n> SELECT * FROM t WHERE a < ANY (ARRAY[1]) AND b < ANY (ARRAY[1]);\n>\n> TRAP: failed Assert(\"so->numArrayKeys\"), File: \"nbtutils.c\", Line: 560, PID: 3251267\n\nI immediately see what's up here. Will fix this in the next short\nwhile. There is no bug here in builds without assertions, but it's\nprobably best to keep the assertion, and to just make sure not to call\n_bt_preprocess_array_keys_final() unless it has real work to do.\n\nThe assertion failure demonstrates that\n_bt_preprocess_array_keys_final can currently be called when there can\nbe no real work for it to do. The problem is that we condition the\ncall to _bt_preprocess_array_keys_final() on whether or not we had to\ndo \"real work\" back when _bt_preprocess_array_keys() was called, at\nthe start of _bt_preprocess_keys(). 
That usually leaves us\nwith \"so->numArrayKeys > 0\", because the \"real work\" typically includes\nequality type array keys. But not here -- here you have two SAOP\ninequalities, which become simple scalar scan keys through early\narray-specific preprocessing in _bt_preprocess_array_keys(). There are\nno arrays left at this point, so \"so->numArrayKeys == 0\".\n\nFWIW I missed this because the tests only cover cases with one SAOP\ninequality, which will always return early from _bt_preprocess_keys\n(by taking its generic single scan key fast path). Your test case has\n2 scan keys, avoiding the fast path, allowing us to reach\n_bt_preprocess_array_keys_final().\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Apr 2024 13:18:29 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Coverity pointed out something that looks like a potentially live\nproblem in 5bf748b86:\n\n/srv/coverity/git/pgsql-git/postgresql/src/backend/access/nbtree/nbtutils.c: 2950 in _bt_preprocess_keys()\n2944 * need to make sure that we don't throw away an array\n2945 * scan key. 
_bt_compare_scankey_args expects us to\n2946 * always keep arrays (and discard non-arrays).\n2947 */\n2948 Assert(j == (BTEqualStrategyNumber - 1));\n2949 Assert(xform[j].skey->sk_flags & SK_SEARCHARRAY);\n>>> CID 1596256: Null pointer dereferences (FORWARD_NULL)\n>>> Dereferencing null pointer \"array\".\n2950 Assert(xform[j].ikey == array->scan_key);\n2951 Assert(!(cur->sk_flags & SK_SEARCHARRAY));\n2952 }\n2953 }\n2954 else if (j == (BTEqualStrategyNumber - 1))\n\nAbove this there is an assertion\n\n Assert(!array || array->num_elems > 0);\n\nwhich certainly makes it look like array->scan_key could be\na null-pointer dereference.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 20:48:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution,\n allowing multi-column ordered scans, skip scan" }, { "msg_contents": "On Sun, Apr 7, 2024 at 8:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Coverity pointed out something that looks like a potentially live\n> problem in 5bf748b86:\n>\n> /srv/coverity/git/pgsql-git/postgresql/src/backend/access/nbtree/nbtutils.c: 2950 in _bt_preprocess_keys()\n> 2944 * need to make sure that we don't throw away an array\n> 2945 * scan key. 
_bt_compare_scankey_args expects us to\n> 2946 * always keep arrays (and discard non-arrays).\n> 2947 */\n> 2948 Assert(j == (BTEqualStrategyNumber - 1));\n> 2949 Assert(xform[j].skey->sk_flags & SK_SEARCHARRAY);\n> >>> CID 1596256: Null pointer dereferences (FORWARD_NULL)\n> >>> Dereferencing null pointer \"array\".\n> 2950 Assert(xform[j].ikey == array->scan_key);\n> 2951 Assert(!(cur->sk_flags & SK_SEARCHARRAY));\n> 2952 }\n> 2953 }\n> 2954 else if (j == (BTEqualStrategyNumber - 1))\n>\n> Above this there is an assertion\n>\n> Assert(!array || array->num_elems > 0);\n>\n> which certainly makes it look like array->scan_key could be\n> a null-pointer dereference.\n\nBut the \"Assert(xform[j].ikey == array->scan_key)\" assertion is\nlocated in a block where it's been established that the scan key (the\none stored in xform[j] at this point in execution) must have an array.\nIt has been marked SK_SEARCHARRAY, and uses the equality strategy, so\nit had better have one or we're in big trouble either way.\n\nThis is probably very hard for tools like Coverity to understand. We\nalso rely on the fact that only one of the two scan keys (only one of\nthe pair of scan keys that were passed to _bt_compare_scankey_args)\ncan have an array at the point of the assertion that Coverity finds\nsuspicious. It's possible that both of those scan keys actually did\nhave arrays, but _bt_compare_scankey_args just treats that as a case\nof being unable to prove which scan key was redundant/contradictory\ndue to a lack of suitable cross-type support -- so the assertion won't\nbe reached.\n\nWould Coverity stop complaining if I just removed the assertion? 
I\ncould just do that, I suppose, but that seems backwards to me.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Apr 2024 21:11:48 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Apr 7, 2024 at 8:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Coverity pointed out something that looks like a potentially live\n>> problem in 5bf748b86:\n>> ... which certainly makes it look like array->scan_key could be\n>> a null-pointer dereference.\n\n> But the \"Assert(xform[j].ikey == array->scan_key)\" assertion is\n> located in a block where it's been established that the scan key (the\n> one stored in xform[j] at this point in execution) must have an array.\n\n> This is probably very hard for tools like Coverity to understand.\n\nIt's not obvious to human readers either ...\n\n> Would Coverity stop complaining if I just removed the assertion? 
I\n> could just do that, I suppose, but that seems backwards to me.\n\nPerhaps this'd help:\n\n- Assert(xform[j].ikey == array->scan_key);\n+ Assert(array && xform[j].ikey == array->scan_key);\n\nIf that doesn't silence it, I'd be prepared to just dismiss the\nwarning.\n\nSome work in the comment to explain why we must have an array here\nwouldn't be out of place either, perhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 21:25:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution,\n allowing multi-column ordered scans, skip scan" }, { "msg_contents": "On Sun, Apr 7, 2024 at 9:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps this'd help:\n>\n> - Assert(xform[j].ikey == array->scan_key);\n> + Assert(array && xform[j].ikey == array->scan_key);\n>\n> If that doesn't silence it, I'd be prepared to just dismiss the\n> warning.\n\nThe assertions in question are arguably redundant. There are very\nsimilar assertions just a little earlier on, as we initially set up\nthe array stuff (right before _bt_compare_scankey_args is called).\nI'll just remove the \"Assert(xform[j].ikey == array->scan_key)\"\nassertion that Coverity doesn't like, in addition to the\n\"Assert(!array || array->scan_key == i)\" assertion, on the grounds\nthat they're redundant.\n\n> Some work in the comment to explain why we must have an array here\n> wouldn't be out of place either, perhaps.\n\nThere is a comment block about this right above the assertion in question:\n\n/*\n * Both scan keys might have arrays, in which case we'll\n * arbitrarily pass only one of the arrays. 
That won't\n * matter, since _bt_compare_scankey_args is aware that two\n * SEARCHARRAY scan keys mean that _bt_preprocess_array_keys\n * failed to eliminate redundant arrays through array merging.\n * _bt_compare_scankey_args just returns false when it sees\n * this; it won't even try to examine either array.\n */\n\nDo you think it needs more work?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Apr 2024 21:36:17 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The assertions in question are arguably redundant. There are very\n> similar assertions just a little earlier on, as we initially set up\n> the array stuff (right before _bt_compare_scankey_args is called).\n> I'll just remove the \"Assert(xform[j].ikey == array->scan_key)\"\n> assertion that Coverity doesn't like, in addition to the\n> \"Assert(!array || array->scan_key == i)\" assertion, on the grounds\n> that they're redundant.\n\nIf you're doing that, then surely\n\n if (j != (BTEqualStrategyNumber - 1) ||\n !(xform[j].skey->sk_flags & SK_SEARCHARRAY))\n {\n ...\n }\n else\n {\n Assert(j == (BTEqualStrategyNumber - 1));\n Assert(xform[j].skey->sk_flags & SK_SEARCHARRAY);\n Assert(xform[j].ikey == array->scan_key);\n Assert(!(cur->sk_flags & SK_SEARCHARRAY));\n }\n\nthose first two Asserts are redundant with the \"if\" as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 21:50:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution,\n allowing multi-column ordered scans, skip scan" }, { "msg_contents": "On Sun, Apr 7, 2024 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If you're doing that, then surely\n>\n> if (j != (BTEqualStrategyNumber - 1) ||\n> !(xform[j].skey->sk_flags & SK_SEARCHARRAY))\n> 
{\n> ...\n> }\n> else\n> {\n> Assert(j == (BTEqualStrategyNumber - 1));\n> Assert(xform[j].skey->sk_flags & SK_SEARCHARRAY);\n> Assert(xform[j].ikey == array->scan_key);\n> Assert(!(cur->sk_flags & SK_SEARCHARRAY));\n> }\n>\n> those first two Asserts are redundant with the \"if\" as well.\n\nI'll get rid of those other two assertions as well, then.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Apr 2024 21:57:23 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Sun, Apr 7, 2024 at 9:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Apr 7, 2024 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > those first two Asserts are redundant with the \"if\" as well.\n>\n> I'll get rid of those other two assertions as well, then.\n\nDone that way.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Apr 2024 22:13:57 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hi Peter\n\nThere seems to be an assertion failure with this change in HEAD\nTRAP: failed Assert(\"leftarg->sk_attno == rightarg->sk_attno\"), File:\n\"../../src/backend/access/nbtree/nbtutils.c\", Line: 3246, PID: 1434532\n\nIt can be reproduced by:\ncreate table t(a int);\ninsert into t select 1 from generate_series(1,10);\ncreate index on t (a desc);\nset enable_seqscan = false;\nselect * from t where a IN (1,2) and a IN (1,2,3);\n\nIt's triggered when a scankey's strategy is set to invalid. While for a\ndescending ordered column,\nthe strategy needs to get fixed to its commute strategy. 
That doesn't work\nif the strategy is invalid.\n\nAttached a demo fix.\n\nRegards,\nDonghang Lin\n(ServiceNow)", "msg_date": "Wed, 17 Apr 2024 23:12:52 -0700", "msg_from": "Donghang Lin <donghanglin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Thu, Apr 18, 2024 at 2:13 AM Donghang Lin <donghanglin@gmail.com> wrote:\n> It's triggered when a scankey's strategy is set to invalid. While for a descending ordered column,\n> the strategy needs to get fixed to its commute strategy. That doesn't work if the strategy is invalid.\n\nThe problem is that _bt_fix_scankey_strategy shouldn't have been doing\nanything with already-eliminated array scan keys in the first place\n(whether or not they're on a DESC index column). I just pushed a fix\nalong those lines.\n\nThanks for the report!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:50:53 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hello Peter,\n\n07.04.2024 20:18, Peter Geoghegan wrote:\n> On Sun, Apr 7, 2024 at 1:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> SELECT * FROM t WHERE a < ANY (ARRAY[1]) AND b < ANY (ARRAY[1]);\n>>\n>> TRAP: failed Assert(\"so->numArrayKeys\"), File: \"nbtutils.c\", Line: 560, PID: 3251267\n> I immediately see what's up here. WIll fix this in the next short\n> while. 
There is no bug here in builds without assertions, but it's\n> probably best to keep the assertion, and to just make sure not to call\n> _bt_preprocess_array_keys_final() unless it has real work to do.\n\nPlease look at another case, where a similar Assert (but this time in\n_bt_preprocess_keys()) is triggered:\nCREATE TABLE t (a text, b text);\nINSERT INTO t (a, b) SELECT 'a', repeat('b', 100) FROM generate_series(1, 500) g;\nCREATE INDEX t_idx ON t USING btree(a);\nBEGIN;\nDECLARE c CURSOR FOR SELECT a FROM t WHERE a = 'a';\nFETCH FROM c;\nFETCH RELATIVE 0 FROM c;\n\nTRAP: failed Assert(\"so->numArrayKeys\"), File: \"nbtutils.c\", Line: 2582, PID: 1130962\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 22 Apr 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Apr 22, 2024 at 4:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> Please look at another case, where a similar Assert (but this time in\n> _bt_preprocess_keys()) is triggered:\n> CREATE TABLE t (a text, b text);\n> INSERT INTO t (a, b) SELECT 'a', repeat('b', 100) FROM generate_series(1, 500) g;\n> CREATE INDEX t_idx ON t USING btree(a);\n> BEGIN;\n> DECLARE c CURSOR FOR SELECT a FROM t WHERE a = 'a';\n> FETCH FROM c;\n> FETCH RELATIVE 0 FROM c;\n>\n> TRAP: failed Assert(\"so->numArrayKeys\"), File: \"nbtutils.c\", Line: 2582, PID: 1130962\n\nI'm pretty sure that I could fix this by simply removing the\nassertion. But I need to think about it a bit more before I push a\nfix.\n\nThe test case you've provided proves that _bt_preprocess_keys's\nnew no-op path isn't just used during scans that have array keys (your\ntest case doesn't have a SAOP at all). This was never intended. 
On the\nother hand, I think that it's still correct (or will be once the assertion is\ngone), and it seems like it would be simpler to allow this case (and\ndocument it) than to not allow it at all.\n\nThe general idea that we only need one \"real\" _bt_preprocess_keys call\nper btrescan (independent of the presence of array keys) still seems\nsound.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Apr 2024 11:13:44 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Apr 22, 2024 at 11:13 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm pretty sure that I could fix this by simply removing the\n> assertion. But I need to think about it a bit more before I push a\n> fix.\n>\n> The test case you've provided proves that _bt_preprocess_keys's\n> new no-op path isn't just used during scans that have array keys (your\n> test case doesn't have a SAOP at all). This was never intended. On the\n> other hand, I think that it's still correct (or will be once the assertion is\n> gone), and it seems like it would be simpler to allow this case (and\n> document it) than to not allow it at all.\n\nPushed a fix like that just now.\n\nThanks for the report, Alexander.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Apr 2024 13:59:22 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hi, Peter!\n\nOn 20.01.2024 01:41, Peter Geoghegan wrote:\n> It is quite likely that there are exactly zero affected out-of-core\n> index AMs. I don't count pgroonga as a counterexample (I don't think\n> that it actually fullfills the definition of a ). Basically,\n> \"amcanorder\" index AMs more or less promise to be compatible with\n> nbtree, down to having the same strategy numbers. 
So the idea that I'm\n> going to upset amsearcharray+amcanorder index AM authors is a\n> completely theoretical problem. The planner code evolved with nbtree,\n> hand-in-glove.\n\n\n From the 5bf748b86bc commit message:\n\n> There is a theoretical risk that removing restrictions on SAOP index\n> paths from the planner will break compatibility with amcanorder-based\n> index AMs maintained as extensions. Such an index AM could have the\n> same limitations around ordered SAOP scans as nbtree had up until now.\n> Adding a pro forma incompatibility item about the issue to the Postgres\n> 17 release notes seems like a good idea.\n\nSeems, this commit broke our posted knn_btree patch. [1]\nIf the point from which ORDER BY goes by distance is greater than the elements of ScalarArrayOp,\nthen knn_btree algorithm will give only the first tuple. It sorts the elements of ScalarArrayOp\nin descending order and starts searching from smaller to larger\nand always expects that for each element of ScalarArrayOp there will be a separate scan.\nAnd now it does not work. Reproduction is described in [2].\n\nSeems it is impossible to solve this problem only from the knn-btree patch side.\nCould you advise any ways how to deal with this. Would be very grateful.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n[1]\nhttps://commitfest.postgresql.org/48/4871/\n[2]\nhttps://www.postgresql.org/message-id/47adb0b0-6e65-4b40-8d93-20dcecc21395%40postgrespro.ru\n\n\n", "msg_date": "Wed, 31 Jul 2024 07:47:33 +0300", "msg_from": "\"Anton A. Melnikov\" <a.melnikov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Wed, Jul 31, 2024 at 12:47 AM Anton A. 
Melnikov\n<a.melnikov@postgrespro.ru> wrote:\n> From the 5bf748b86bc commit message:\n>\n> > There is a theoretical risk that removing restrictions on SAOP index\n> > paths from the planner will break compatibility with amcanorder-based\n> > index AMs maintained as extensions. Such an index AM could have the\n> > same limitations around ordered SAOP scans as nbtree had up until now.\n> > Adding a pro forma incompatibility item about the issue to the Postgres\n> > 17 release notes seems like a good idea.\n\nHere you've quoted the commit message's description of the risks\naround breaking third party index AMs maintained as extensions. FWIW\nthat seems like a rather different problem to the one that you ran\ninto with your KNN nbtree patch.\n\n> Seems, this commit broke our posted knn_btree patch. [1]\n> If the point from which ORDER BY goes by distance is greater than the elements of ScalarArrayOp,\n> then knn_btree algorithm will give only the first tuple. It sorts the elements of ScalarArrayOp\n> in descending order and starts searching from smaller to larger\n> and always expects that for each element of ScalarArrayOp there will be a separate scan.\n> And now it does not work. Reproduction is described in [2].\n>\n> Seems it is impossible to solve this problem only from the knn-btree patch side.\n> Could you advise any ways how to deal with this. Would be very grateful.\n\nWell, you're actually patching nbtree. Your patch isn't upstream of\nthe nbtree code. You're going to have to reconcile your design with\nthe design for advancing arrays that is primarily implemented by\n_bt_advance_array_keys.\n\nI'm not entirely sure what the best way to go about doing that is --\nthat would require a lot of context about the KNN patch. It wouldn't\nbe particularly hard to selectively reenable the old Postgres 16 SAOP\nbehavior, though. 
You could conditionally force \"goto new_prim_scan\"\nwhenever the KNN patch's mechanism was in use.\n\nThat kind of approach might have unintended consequences elsewhere,\nthough. For example it could break things like mark/restore processing\nby merge joins, which assumes that the scan's current array keys can\nalways be reconstructed after restoring a marked position by resetting\nthe arrays to their first elements (or final elements if it's a\nbackwards scan). On the other hand, I think that you already shouldn't\nexpect anything like that to work with your patch. After all, it\nchanges the order of the tuples returned by the scan, which is already\nbound to break certain assumptions made by \"regular\" index scans.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 7 Aug 2024 10:59:11 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "Hello Peter,\n\n22.04.2024 20:59, Peter Geoghegan wrote:\n>\n> Pushed a fix like that just now. \n\nI'm sorry to bother you again, but I've come across another assertion\nfailure. 
Please try the following query (I use a clean \"postgres\" database,\njust after initdb):\nEXPLAIN SELECT conname\n   FROM pg_constraint WHERE conname IN ('pkey', 'ID')\n   ORDER BY conname DESC;\n\nSELECT conname\n   FROM pg_constraint WHERE conname IN ('pkey', 'ID')\n   ORDER BY conname DESC;\n\nIt fails for me as below:\n                                                      QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n  Index Only Scan Backward using pg_constraint_conname_nsp_index on pg_constraint  (cost=0.14..4.18 rows=2 width=64)\n    Index Cond: (conname = ANY ('{pkey,ID}'::name[]))\n(2 rows)\n\nserver closed the connection unexpectedly\n...\nwith the stack trace:\n...\n#5  0x000055a49f81148d in ExceptionalCondition (conditionName=0x55a49f8bb540 \"ItemIdHasStorage(itemId)\",\n     fileName=0x55a49f8bb4a8 \"../../../../src/include/storage/bufpage.h\", lineNumber=355) at assert.c:66\n#6  0x000055a49f0f2ddd in PageGetItem (page=0x7f97cbf17000 \"\", itemId=0x7f97cbf2f064)\n     at ../../../../src/include/storage/bufpage.h:355\n#7  0x000055a49f0f9367 in _bt_checkkeys_look_ahead (scan=0x55a4a0ac4548, pstate=0x7ffd1a103670, tupnatts=2,\n     tupdesc=0x7f97cb5d7be8) at nbtutils.c:4105\n#8  0x000055a49f0f8ac3 in _bt_checkkeys (scan=0x55a4a0ac4548, pstate=0x7ffd1a103670, arrayKeys=true,\n     tuple=0x7f97cbf18890, tupnatts=2) at nbtutils.c:3612\n#9  0x000055a49f0ebb4b in _bt_readpage (scan=0x55a4a0ac4548, dir=BackwardScanDirection, offnum=20, firstPage=true)\n     at nbtsearch.c:1863\n...\n(gdb) f 7\n#7  0x000055a49f0f9367 in _bt_checkkeys_look_ahead (scan=0x55a4a0ac4548, pstate=0x7ffd1a103670, tupnatts=2,\n     tupdesc=0x7f97cb5d7be8) at nbtutils.c:4105\n4105            ahead = (IndexTuple) PageGetItem(pstate->page,\n(gdb) p aheadoffnum\n$1 = 24596\n(gdb) p pstate->offnum\n$2 = 20\n(gdb) p pstate->targetdistance\n$3 = -24576\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 26 
Aug 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" }, { "msg_contents": "On Mon, Aug 26, 2024 at 10:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> I'm sorry to bother you again, but I've come across another assertion\n> failure.\n\nYou've found a real bug. I should be the one apologizing - not you.\n\n> Please try the following query (I use a clean \"postgres\" database,\n> just after initdb):\n> EXPLAIN SELECT conname\n> FROM pg_constraint WHERE conname IN ('pkey', 'ID')\n> ORDER BY conname DESC;\n>\n> SELECT conname\n> FROM pg_constraint WHERE conname IN ('pkey', 'ID')\n> ORDER BY conname DESC;\n\nThe problem is that _bt_checkkeys_look_ahead didn't quite get\neverything right with sanitizing the page offset number it uses to\ncheck if a later tuple is before the recently advanced array scan\nkeys. The page offset itself was checked, but in a way that was faulty\nin cases where the int16 we use could overflow.\n\nI can fix the bug by making sure that pstate->targetdistance (an int16\nvariable) can only be doubled when it actually makes sense. 
That way\nthere can never be an int16 overflow, and so the final offnum cannot\nunderflow to a value that's much higher than the page's max offset\nnumber.\n\nThis approach works:\n\n /*\n * The look ahead distance starts small, and ramps up as each call here\n * allows _bt_readpage to skip over more tuples\n */\n if (!pstate->targetdistance)\n pstate->targetdistance = LOOK_AHEAD_DEFAULT_DISTANCE;\n- else\n+ else if (pstate->targetdistance < MaxIndexTuplesPerPage / 2)\n pstate->targetdistance *= 2;\n\nI'll push a fix along these lines shortly.\n\nThanks for the report!\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 26 Aug 2024 10:58:50 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Optimizing nbtree ScalarArrayOp execution, allowing multi-column\n ordered scans, skip scan" } ]
[ { "msg_contents": "Hello,\nPartition pruning is not working on the updates query, am I missing something?\nI am able to get around this by manually pinning the table partition to the date partition, but it's a manual process.\nPostgreSQL 13.8 on AWS\nPartition table using pg_partman v4.5.1\nEach daily partition contains about 15 million rows which need to be processed using an update on the column p_state.\nTable:  public.stat\n                              Partitioned table \"public.stat\"\n       Column        |            Type             | Collation | Nullable |         Default\n---------------------+-----------------------------+-----------+----------+-------------------------\n creationdate        | timestamp without time zone |           | not null |\n ver                 | text                        |           |          |\n id                  | text                        |           |          |\n name                | text                        |           |          |\n ip                  | text                        |           | not null |\n tags                | jsonb                       |           |          |\n list                | jsonb                       |           |          |\n updated_tstamp      | timestamp without time zone |           |          |\n p_state             | character varying           |           |          | 'N'::character varying\n insert_tstamp       | timestamp without time zone |           |          | CURRENT_TIMESTAMP\nPartition key: RANGE (creationdate)\nIndexes:\n    \"pk_public.stat\" PRIMARY KEY, btree (creationdate, ip)\n    \"idx_public.p_state\" btree (p_state)\nNumber of partitions: 47 (Use \\\\d+ to list them.)\n\nQuery: explain WITH update_pr as (SELECT * FROM public.stat WHERE p_state = 'N' AND creationdate > current_timestamp - INTERVAL '5' day ORDER BY creationdate LIMIT 4000000) UPDATE public.stat s set p_state = 'PR' FROM update_pr u WHERE s.p_state = 'N' AND s.ip = u.ip AND s.creationdate = u.creationdate;
                                                                     QUERY PLAN                                                                                ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Update on public.stat s  (cost=11.45..193994.25 rows=47 width=858)   Update on public.stat_p2023_06_25 s_1   Update on public.stat_p2023_06_26 s_2   ...   Update on public.stat_p2023_08_09 s_46   Update on public.stat_default s_47   ->  Merge Join  (cost=11.45..4128.23 rows=1 width=966)         Merge Cond: (u.creationdate = s_1.creationdate)         Join Filter: (s_1.ip = u.ip)         ->  Subquery Scan on u  (cost=11.30..4036.97 rows=34432 width=322)               ->  Limit  (cost=11.30..3692.65 rows=34432 width=272)                     ->  Merge Append  (cost=11.30..3692.65 rows=34432 width=272)                           Sort Key: public.stat.creationdate                           Subplans Removed: 29                           ->  Index Scan using public.stat_p2023_07_24_pkey on public.stat_p2023_07_24 public.stat_1  (cost=0.29..1474.75 rows=23641 width=272)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_07_25_pkey on public.stat_p2023_07_25 public.stat_2  (cost=0.29..673.21 rows=10746 width=273)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_07_26_pkey on public.stat_p2023_07_26 public.stat_3  (cost=0.15..11.89 rows=1 width=664) ..                       
->  Index Scan using public.stat_p2023_08_02_pkey on public.stat_p2023_08_02 public.stat_10  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_03_pkey on public.stat_p2023_08_03 public.stat_11  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_04_pkey on public.stat_p2023_08_04 public.stat_12  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_05_pkey on public.stat_p2023_08_05 public.stat_13  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_06_pkey on public.stat_p2023_08_06 public.stat_14  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_07_pkey on public.stat_p2023_08_07 public.stat_15  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan 
using public.stat_p2023_08_08_pkey on public.stat_p2023_08_08 public.stat_16  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_p2023_08_09_pkey on public.stat_p2023_08_09 public.stat_17  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)                           ->  Index Scan using public.stat_default_pkey on public.stat_default public.stat_18  (cost=0.15..11.89 rows=1 width=664)                                 Index Cond: (creationdate > (CURRENT_TIMESTAMP - '1 day'::interval day))                                 Filter: ((p_state)::text = 'N'::text)         ->  Materialize  (cost=0.15..2.18 rows=1 width=638)               ->  Index Scan using public.stat_p2023_06_25_pkey on public.stat_p2023_06_25 s_1  (cost=0.15..2.17 rows=1 width=638)                     Index Cond: ((creationdate >= (CURRENT_TIMESTAMP - '1 day'::interval day)) AND (creationdate <= (CURRENT_TIMESTAMP - '1 day'::interval day)))                     Filter: ((p_state)::text = 'N'::text)   ->  Merge Join  (cost=11.45..4128.23 rows=1 width=966)         Merge Cond: (u_1.creationdate = s_2.creationdate)         Join Filter: (s_2.ip = u_1.ip)         ->  Subquery Scan on u_1  (cost=11.30..4036.97 rows=34432 width=322)               ->  Limit  (cost=11.30..3692.65 rows=34432 width=272)                     ->  Merge Append  (cost=11.30..3692.65 rows=34432 width=272)\nAny help on this is appreciated...B.", "msg_date": "Tue, 25 Jul 2023 03:17:55 +0000 (UTC)", "msg_from": "\"Mr.Bim\" <bmopat@yahoo.com>", "msg_from_op": true, "msg_subject": "Partition pruning not working on updates" }, { "msg_contents": "On Tue, 25 Jul 2023 at 20:45, 
Mr.Bim <bmopat@yahoo.com> wrote:\n> Partition pruning is not working on the updates query, am I missing something?\n\nIn PG13, partition pruning for UPDATE and DELETE only works during\nquery planning. Because you're using CURRENT_TIMESTAMP, that's not an\nimmutable expression which can be evaluated during query planning.\nThis means that the only possible way to prune partitions when using\nCURRENT_TIMESTAMP is during query execution. Unfortunately, execution\ntime pruning does not work for UPDATE/DELETE in PG13. PG14 is the\nfirst version to have that.\n\nThere's a note in the documents [1] about this which reads:\n\n\"Execution-time partition pruning currently only occurs for the Append\nand MergeAppend node types. It is not yet implemented for the\nModifyTable node type, but that is likely to be changed in a future\nrelease of PostgreSQL.\"\n\nIf you were never to imbed that query in a view or use PREPAREd\nstatements, then you could replace CURRENT_TIMESTAMP with\n'now'::timestamp. 'now' will be evaluated at parse time and that\nmeans it'll be a known value to the planner. However, unless you\nsomehow can be completely certain that this query will never be put\ninside a view or used with prepared statements, then I'd highly\nrecommend not doing this. SQLs tend to get copied and pasted and it\none day might get pasted into the wrong place (e.g. 
a view) and that\ncould cause problems.\n\nExample with a view:\npostgres=# create view v_test as select 'now'::timestamp;\nCREATE VIEW\npostgres=# explain verbose select * from v_test;\n QUERY PLAN\n---------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=8)\n Output: '2023-07-26 00:17:32.370399'::timestamp without time zone\n(2 rows)\n\n\npostgres=# explain verbose select * from v_test;\n QUERY PLAN\n---------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=8)\n Output: '2023-07-26 00:17:32.370399'::timestamp without time zone\n(2 rows)\n\nnote that the view always just returns the time when the view was created.\n\nMy recommendation, if this is a show-stopper for you, would be to\nconsider using a newer version of PostgreSQL.\n\nDavid\n\n[1] https://www.postgresql.org/docs/13/ddl-partitioning.html\n\n\n", "msg_date": "Wed, 26 Jul 2023 00:19:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Partition pruning not working on updates" } ]
[ { "msg_contents": "When forming an outer join's joinrel, we have the is_pushed_down flag in\nRestrictInfo nodes to distinguish those quals that are in that join's\nJOIN/ON condition from those that were pushed down to the joinrel and\nthus act as filter quals. Since now we have the outer-join-aware-Var\ninfrastructure, I think we can check to see whether a qual clause's\nrequired_relids reference the outer join(s) being formed, in order to\ntell if it's a join or filter clause. This seems like a more principled\nway. (Interesting that optimizer/README actually describes this way in\nsection 'Relation Identification and Qual Clause Placement'.)\n\nSo I gave it a try to retire is_pushed_down, as attached. But there are\nseveral points that may need more thought.\n\n* When we form an outer join, it's possible that more than one outer\njoin relid is added to the join's relid set, if there are any pushed\ndown outer joins per identity 3. And it's also possible that no outer\njoin relid is added, for an outer join that has been pushed down. So\ninstead of checking if a qual clause's required_relids include the outer\njoin's relid, I think we should check if its required_relids overlap\nthe outer join relids that are being formed, which means that we should\nuse bms_overlap(rinfo->required_relids, ojrelids) rather than\nbms_is_member(ojrelid, rinfo->required_relids). And we should do this\ncheck only for outer joins.\n\n* This patch calculates the outer join relids that are being formed\ngenerally in this way:\n\n bms_difference(joinrelids, bms_union(outerrelids, innerrelids))\n\nOf course this can only be used after the outer join relids has been\nadded by add_outer_joins_to_relids(). This calculation is performed\nmultiple times during planning. I'm not sure if this has performance\nissues. 
Maybe we can calculate it only once and store the result in\nsome place (such as in JoinPath)?\n\n* ANTI joins are treated as outer joins but sometimes they do not have\nan rtindex (such as ANTI joins derived from SEMI). This would be a problem\nwith this new check. As an example, consider the query\n\nselect * from a where not exists (select 1 from b where a.i = b.i) and\nCURRENT_USER = SESSION_USER;\n\nThe pseudoconstant clause 'CURRENT_USER = SESSION_USER' is supposed to\nbe treated as a filter clause but the new check would treat it as a join\nclause because the outer join relid set being formed is empty since the\nANTI join here does not have an rtindex. To solve this problem, this\npatch manually adds an RTE for an ANTI join derived from SEMI.\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Tue, 25 Jul 2023 15:39:58 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Retiring is_pushed_down" }, { "msg_contents": "On Tue, Jul 25, 2023 at 3:39 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> * This patch calculates the outer join relids that are being formed\n> generally in this way:\n>\n> bms_difference(joinrelids, bms_union(outerrelids, innerrelids))\n>\n> Of course this can only be used after the outer join relids has been\n> added by add_outer_joins_to_relids(). This calculation is performed\n> multiple times during planning. I'm not sure if this has performance\n> issues. Maybe we can calculate it only once and store the result in\n> some place (such as in JoinPath)?\n>\n\nIn the v2 patch, I added a member in JoinPath to store the relid set of\nany outer joins that will be calculated at this join, and this would\navoid repeating this calculation when creating nestloop/merge/hash join\nplan nodes. 
Also fixed a comment in v2.\n\nThanks\nRichard", "msg_date": "Thu, 27 Jul 2023 10:55:07 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retiring is_pushed_down" }, { "msg_contents": "On Thu, 27 Jul 2023 at 08:25, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Tue, Jul 25, 2023 at 3:39 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>> * This patch calculates the outer join relids that are being formed\n>> generally in this way:\n>>\n>> bms_difference(joinrelids, bms_union(outerrelids, innerrelids))\n>>\n>> Of course this can only be used after the outer join relids has been\n>> added by add_outer_joins_to_relids(). This calculation is performed\n>> multiple times during planning. I'm not sure if this has performance\n>> issues. Maybe we can calculate it only once and store the result in\n>> some place (such as in JoinPath)?\n>\n>\n> In the v2 patch, I added a member in JoinPath to store the relid set of\n> any outer joins that will be calculated at this join, and this would\n> avoid repeating this calculation when creating nestloop/merge/hash join\n> plan nodes. Also fixed a comment in v2.\n\nI'm seeing that there has been no activity in this thread for nearly 6\nmonths, I'm planning to close this in the current commitfest unless\nsomeone is planning to take it forward. It can be opened again when\nthere is more interest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 21 Jan 2024 18:07:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retiring is_pushed_down" }, { "msg_contents": "On Sun, Jan 21, 2024 at 8:37 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> I'm seeing that there has been no activity in this thread for nearly 6\n> months, I'm planning to close this in the current commitfest unless\n> someone is planning to take it forward. It can be opened again when\n> there is more interest.\n\n\nI'm planning to take it forward. 
The v2 patch does not compile any\nmore. Attached is an updated patch that is rebased over master and\nfixes the compilation issues. Nothing else has changed.\n\nThanks\nRichard", "msg_date": "Mon, 29 Jan 2024 19:50:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retiring is_pushed_down" }, { "msg_contents": "Here is another rebase over 3af7040985. Nothing else has changed.\n\nThanks\nRichard", "msg_date": "Thu, 18 Apr 2024 15:34:19 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Retiring is_pushed_down" }, { "msg_contents": "I see the following issue with the latest patch:\n\nrelnode.c:2122:32: error: use of undeclared identifier 'ojrelids'\n RINFO_IS_PUSHED_DOWN(rinfo, ojrelids,\njoinrel->relids))\n\nOn Thu, 18 Apr 2024 at 09:34, Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Here is another rebase over 3af7040985. Nothing else has changed.\n>\n> Thanks\n> Richard\n>\n\n\n-- \nRegards,\nRafia Sabih", "msg_date": "Tue, 10 Sep 2024 14:22:11 +0200", "msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Retiring is_pushed_down" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> When forming an outer join's joinrel, we have the is_pushed_down flag in\n> RestrictInfo nodes to distinguish those quals that are in that join's\n> JOIN/ON condition from those that were pushed down to the joinrel and\n> thus act as filter quals. 
Since now we have the outer-join-aware-Var\n> infrastructure, I think we can check to see whether a qual clause's\n> required_relids reference the outer join(s) being formed, in order to\n> tell if it's a join or filter clause. This seems like a more principled\n> way. (Interesting that optimizer/README actually describes this way in\n> section 'Relation Identification and Qual Clause Placement'.)\n\nSorry for being so slow to look at this patch. The idea you're\nfollowing is one that I spent a fair amount of time on while working\non what became 2489d76c4 (\"Make Vars be outer-join-aware\"). I failed\nto make it work though. Digging in my notes from the time:\n\n-----\nHow about is_pushed_down?\n\nWould really like to get rid of that, because it's ugly/sloppily defined,\nand it's hard to determine the correct value for EquivClass-generated\nclauses once we allow derivations from OJ clauses. However, my original\nidea of checking for join's ojrelid present in clause's required_relids\nhas issues:\n* fails if clause is not pushed as far down as it can possibly be (and\nlateral refs mean that that's hard to do sometimes)\n* getting the join's ojrelid to everywhere we need to check this is messy.\nI'd tolerate the mess if it worked nicely, but ...\n-----\n\nSo I'm worried that the point about lateral refs is still a problem\nin your version. To be clear, the hazard is that if a WHERE clause\nends up getting placed at an outer join that's higher than any of\nthe OJs specifically listed in its required_relids, we'd misinterpret\nit as being a join clause for that OJ although it should be a filter\nclause.\n\nThe other thing I find in my old notes is speculation that we could\nuse the concept of JoinDomains to replace is_pushed_down. That is,\nwe'd have to label every RestrictInfo with the JoinDomain of its\nsyntactic source location, and then we could tell if the RI was\n\"pushed down\" relative to a particular join by seeing if the JD was\nabove or below that join. 
This ought to be impervious to\nnot-pushed-down-all-the-way problems. The thing I'd not figured\nout was how to make this work with quals of full joins: they don't\nbelong to either the upper JoinDomain or either of the lower ones.\nWe could possibly fix this by giving a full join its very own\nJoinDomain that is understood to be a parent of both lower domains,\nbut I ran out of energy to pursue that.\n\nIf we went this route, we'd basically be replacing the is_pushed_down\nfield with a JoinDomain field, which is surely not simpler. But it\nseems more crisply defined and perhaps more amenable to my long-term\ndesire to be able to use the EquivalenceClass machinery with outer\njoin clauses. (The idea being that an EC would describe equalities\nthat hold within a JoinDomain, but not necessarily elsewhere.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Sep 2024 16:06:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Retiring is_pushed_down" } ]
[ { "msg_contents": "\n-------------------- Start of forwarded message --------------------\nFrom: huchangqi <huchangqi@loongson.cn>\nTo: YANG Xudong <yangxudong@ymatrix.cn>\nSubject: Re: [PATCH] Add loongarch native checksum implementation.\nDate: Tue, 25 Jul 2023 15:51:43 +0800\n\nBoth cisticola and nuthatch are on the buildfarm now.\n\ncisticola is \"old world ABI\".\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=cisticola&br=HEAD\n\nnuthatch is \"new world ABI\".\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=nuthatch&br=HEAD\n\n----------\nBest regards,\nhuchangqi\n\nYANG Xudong <yangxudong@ymatrix.cn> writes:\n\n> On 2023/7/6 15:14, huchangqi wrote:\n>> Hi, I have a loongarch machine running Loongnix-server (based on redhat/centos, it has gcc-8.3 on it), and I am trying to test the buildfarm on it. When I edit build-farm.conf it seems to need an animal and secret to connect to the buildfarm server.\n>> I then went to https://buildfarm.postgresql.org/cgi-bin/register-form.pl and registered for the buildfarm a week ago, but didn't receive any response. So what else do I need to do next?\n>> \n>> \n>\n> Is it possible for loongson to also provide a build farm instance for the \n> new world ABI of loongarch? 
It will be really appreciated.\n>\n> Thanks!\n>\n>> \n>>> -----Original Messages-----\n>>> From: \"YANG Xudong\" <yangxudong@ymatrix.cn>\n>>> Send time:Wednesday, 07/05/2023 10:15:51\n>>> To: \"John Naylor\" <john.naylor@enterprisedb.com>\n>>> Cc: pgsql-hackers@lists.postgresql.org, wengyanqing@ymatrix.cn, wanghao@ymatrix.cn\n>>> Subject: Re: [PATCH] Add loongarch native checksum implementation.\n>>>\n>>> Is there any other comment?\n>>>\n>>> If the patch looks OK, I would like to update its status to ready for\n>>> committer in the commitfest.\n>>>\n>>> Thanks!\n>>>\n>>> On 2023/6/16 09:28, YANG Xudong wrote:\n>>>> Updated the patch based on the comments.\n>>>>\n>>>> On 2023/6/15 18:30, John Naylor wrote:\n>>>>>\n>>>>> On Wed, Jun 14, 2023 at 9:20 AM YANG Xudong <yangxudong@ymatrix.cn\n>>>>> <mailto:yangxudong@ymatrix.cn>> wrote:\n>>>>>  >\n>>>>>  > Attached a new patch with fixes based on the comment below.\n>>>>>\n>>>>> Note: It's helpful to pass \"-v\" to git format-patch, to have different\n>>>>> versions.\n>>>>>\n>>>>\n>>>> Added v2\n>>>>\n>>>>>  > > For x86 and Arm, if it fails to link without an -march flag, we\n>>>>> allow\n>>>>>  > > for a runtime check. The flags \"-march=armv8-a+crc\" and\n>>>>> \"-msse4.2\" are\n>>>>>  > > for instructions not found on all platforms. The patch also\n>>>>> checks both\n>>>>>  > > ways, and each one results in \"Use LoongArch CRC instruction\n>>>>>  > > unconditionally\". The -march flag here is general, not specific. 
In\n>>>>>  > > other words, if this only runs inside \"+elif host_cpu ==\n>>>>> 'loongarch64'\",\n>>>>>  > > why do we need both with -march and without?\n>>>>>  > >\n>>>>>  >\n>>>>>  > Removed the elif branch.\n>>>>>\n>>>>> Okay, since we've confirmed that no arch flag is necessary, some other\n>>>>> places can be simplified:\n>>>>>\n>>>>> --- a/src/port/Makefile\n>>>>> +++ b/src/port/Makefile\n>>>>> @@ -98,6 +98,11 @@ pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>>   pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>>   pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>>\n>>>>> +# all versions of pg_crc32c_loongarch.o need CFLAGS_CRC\n>>>>> +pg_crc32c_loongarch.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>> +pg_crc32c_loongarch_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>> +pg_crc32c_loongarch_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>>>>>\n>>>>> This was copy-and-pasted from platforms that use a runtime check, so\n>>>>> should be unnecessary.\n>>>>>\n>>>>\n>>>> Removed these lines.\n>>>>\n>>>>> +# If the intrinsics are supported, sets\n>>>>> pgac_loongarch_crc32c_intrinsics,\n>>>>> +# and CFLAGS_CRC.\n>>>>>\n>>>>> +# Check if __builtin_loongarch_crcc_* intrinsics can be used\n>>>>> +# with the default compiler flags.\n>>>>> +# CFLAGS_CRC is set if the extra flag is required.\n>>>>>\n>>>>> Same here -- it seems we don't need to set CFLAGS_CRC at all. Can you\n>>>>> confirm?\n>>>>>\n>>>>\n>>>> We don't need to set CFLAGS_CRC as commented. I have updated the\n>>>> configure script to make it align with the logic in meson build script.\n>>>>\n>>>>>  > > Also, I don't have a Loongarch machine for testing. 
Could you\n>>>>> show that\n>>>>>  > > the instructions are found in the binary, maybe using objdump and\n>>>>> grep?\n>>>>>  > > Or a performance test?\n>>>>>  > >\n>>>>>  >\n>>>>>  > The output of the objdump command `objdump -dS\n>>>>>  > ../postgres-build/tmp_install/usr/local/pgsql/bin/postgres  | grep\n>>>>> -B 30\n>>>>>  > -A 10 crcc` is attached.\n>>>>>\n>>>>> Thanks for confirming.\n>>>>>\n>>>>> -- \n>>>>> John Naylor\n>>>>> EDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n>>>\n>> \n>> \n>> This email and its attachments contain confidential information from Loongson Technology , which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it.\n-------------------- End of forwarded message --------------------\n\n\n\n", "msg_date": "Tue, 25 Jul 2023 16:12:21 +0800", "msg_from": "huchangqi <huchangqi@loongson.cn>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add loongarch native checksum implementation." } ]
[ { "msg_contents": "Hi,\n\nI've observed the following failure once in one of my Cirrus CI runs\non Windows Server on HEAD:\n\ntimed out waiting for match: (?^:User was holding shared buffer pin\nfor too long) at\nC:/cirrus/src/test/recovery/t/031_recovery_conflict.pl line 318.\n# Postmaster PID for node \"primary\" is 696\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5272062399348736/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict\n\nhttps://github.com/BRupireddy/postgres/runs/15296698158\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 25 Jul 2023 13:56:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "A failure in 031_recovery_conflict.pl on Cirrus CI Windows Server" }, { "msg_contents": "On Tue, Jul 25, 2023 at 01:56:41PM +0530, Bharath Rupireddy wrote:\n> I've observed the following failure once in one of my Cirrus CI runs\n> on Windows Server on HEAD:\n> \n> timed out waiting for match: (?^:User was holding shared buffer pin\n> for too long) at\n> C:/cirrus/src/test/recovery/t/031_recovery_conflict.pl line 318.\n> # Postmaster PID for node \"primary\" is 696\n> \n> https://api.cirrus-ci.com/v1/artifact/task/5272062399348736/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict\n> \n> https://github.com/BRupireddy/postgres/runs/15296698158\n\nKnown:\nhttps://postgr.es/m/flat/20220409045515.35ypjzddp25v72ou%40alap3.anarazel.de\nhttps://postgr.es/m/flat/CA+hUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO=nEYKSg@mail.gmail.com\n\nA good next step would be patch \"Fix recovery conflict SIGUSR1 handling\" from\nhttps://postgr.es/m/CA+hUKG+Hi5P1j_8cVeqLLwNSVyJh4RntF04fYWkeNXfTrH2MYA@mail.gmail.com\n(near the end of that second thread).\n\n\n", "msg_date": "Tue, 25 Jul 2023 03:54:56 -0700", "msg_from": "Noah Misch 
<noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Cirrus CI Windows Server" }, { "msg_contents": "Re: Noah Misch\n> On Tue, Jul 25, 2023 at 01:56:41PM +0530, Bharath Rupireddy wrote:\n> > I've observed the following failure once in one of my Cirrus CI runs\n> > on Windows Server on HEAD:\n> > \n> > timed out waiting for match: (?^:User was holding shared buffer pin\n> > for too long) at\n> > C:/cirrus/src/test/recovery/t/031_recovery_conflict.pl line 318.\n> > # Postmaster PID for node \"primary\" is 696\n> > \n> > https://api.cirrus-ci.com/v1/artifact/task/5272062399348736/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict\n> > \n> > https://github.com/BRupireddy/postgres/runs/15296698158\n> \n> Known:\n> https://postgr.es/m/flat/20220409045515.35ypjzddp25v72ou%40alap3.anarazel.de\n> https://postgr.es/m/flat/CA+hUKGK3PGKwcKqzoosamn36YW-fsuTdOPPF1i_rtEO=nEYKSg@mail.gmail.com\n> \n> A good next step would be patch \"Fix recovery conflict SIGUSR1 handling\" from\n> https://postgr.es/m/CA+hUKG+Hi5P1j_8cVeqLLwNSVyJh4RntF04fYWkeNXfTrH2MYA@mail.gmail.com\n> (near the end of that second thread).\n\nI am also seeing problems with t/031_recovery_conflict.pl, on the\nnewly added s390x architecture on apt.postgresql.org. The failure is\nflaky, but has been popping up in build logs often enough.\n\nI managed to reproduce it on the shell by running the test in a loop a\nfew times. 
The failure looks like this:\n\necho \"# +++ tap check in src/test/recovery +++\" && rm -rf '/home/myon/postgresql/pg/postgresql/build/src/test/recovery'/tmp_check && /bin/mkdir -p '/home/myon/postgresql/pg/postgresql/build/src/test/recovery'/tmp_check && cd /home/myon/postgresql/pg/postgresql/build/../src/test/recovery && TESTLOGDIR='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check/log' TESTDATADIR='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/tmp_check' PATH=\"/home/myon/postgresql/pg/postgresql/build/tmp_install/usr/lib/postgresql/17/bin:/home/myon/postgresql/pg/postgresql/build/src/test/recovery:$PATH\" LD_LIBRARY_PATH=\"/home/myon/postgresql/pg/postgresql/build/tmp_install/usr/lib/s390x-linux-gnu\" PGPORT='65432' top_builddir='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../..' PG_REGRESS='/home/myon/postgresql/pg/postgresql/build/src/test/recovery/../../../src/test/regress/pg_regress' /usr/bin/prove -I /home/myon/postgresql/pg/postgresql/build/../src/test/perl/ -I /home/myon/postgresql/pg/postgresql/build/../src/test/recovery --verbose t/031_recovery_conflict.pl\n# +++ tap check in src/test/recovery +++\nt/031_recovery_conflict.pl ..\n# issuing query via background psql:\n# BEGIN;\n# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n# FETCH FORWARD FROM test_recovery_conflict_cursor;\nok 1 - buffer pin conflict: cursor with conflicting pin established\nok 2 - buffer pin conflict: logfile contains terminated connection due to recovery conflict\nok 3 - buffer pin conflict: stats show conflict on standby\n# issuing query via background psql:\n# BEGIN;\n# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n# FETCH FORWARD FROM test_recovery_conflict_cursor;\n#\nok 4 - snapshot conflict: cursor with conflicting snapshot established\nok 5 - snapshot conflict: logfile contains terminated connection due to recovery conflict\nok 
6 - snapshot conflict: stats show conflict on standby\n# issuing query via background psql:\n# BEGIN;\n# LOCK TABLE test_recovery_conflict_table1 IN ACCESS SHARE MODE;\n# SELECT 1;\n#\nok 7 - lock conflict: conflicting lock acquired\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 255 just after 7.\nDubious, test returned 255 (wstat 65280, 0xff00)\nAll 7 subtests passed\n\nTest Summary Report\n-------------------\nt/031_recovery_conflict.pl (Wstat: 65280 Tests: 7 Failed: 0)\n Non-zero exit status: 255\n Parse errors: No plan found in TAP output\nFiles=1, Tests=7, 186 wallclock secs ( 0.01 usr 0.00 sys + 0.74 cusr 0.71 csys = 1.46 CPU)\nResult: FAIL\nmake: *** [Makefile:23: check] Error 1\n\nI.e. the test file just exits after test 7, without running the rest.\n\nIs it dying because it's running into a deadlock instead of some other\nexpected error message?\n\ntimed out waiting for match: (?^:User was holding a relation lock for too long) at t/031_recovery_conflict.pl line 318.\n2023-08-04 12:20:38.061 UTC [1920073] 031_recovery_conflict.pl FATAL: terminating connection due to conflict with recovery\n\nChristoph", "msg_date": "Fri, 4 Aug 2023 14:43:29 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Sat, Aug 5, 2023 at 12:43 AM Christoph Berg <myon@debian.org> wrote:\n> I managed to reproduce it on the shell by running the test in a loop a\n> few times. The failure looks like this:\n\nIt's great that you can reproduce this semi-reliably! 
I've rebased\nthe patch, hoping you can try it out.\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGJDcyW8Hbq3UyG-5-5Y7WqqOTjrXbFTMxxmhiofFraE-Q%40mail.gmail.com\n\n\n", "msg_date": "Sat, 5 Aug 2023 13:42:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> It's great that you can reproduce this semi-reliably! I've rebased\n> the patch, hoping you can try it out.\n\nUnfortunately very semi, today I didn't get to the same point where it\nexited after test 7, but got some other timeouts. Not even sure they\nare related to this (?) problem. Anyway:\n\n> https://www.postgresql.org/message-id/CA%2BhUKGJDcyW8Hbq3UyG-5-5Y7WqqOTjrXbFTMxxmhiofFraE-Q%40mail.gmail.com\n\nThis patch makes the testsuite hang (and later exit) after this:\n\nok 17 - 5 recovery conflicts shown in pg_stat_database\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 255 just after 17.\n\nI haven't seen any other problems with the patch attached, but since\nthat test was hanging and hence very slow, I couldn't do many runs.\n\nPerhaps that's progress, I don't know. :) Logs attached.\n\nChristoph", "msg_date": "Sun, 6 Aug 2023 22:40:37 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Mon, Aug 7, 2023 at 8:40 AM Christoph Berg <myon@debian.org> wrote:\n> 2023-08-06 17:21:24.078 UTC [1275555] 031_recovery_conflict.pl FATAL: unrecognized conflict mode: 7\n\nThanks for testing! Would you mind trying v8 from that thread? 
V7\nhad a silly bug (I accidentally deleted a 'case' label while cleaning\nsome stuff up, resulting in the above error...)\n\n\n", "msg_date": "Mon, 7 Aug 2023 10:15:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> Thanks for testing! Would you mind trying v8 from that thread? V7\n> had a silly bug (I accidentally deleted a 'case' label while cleaning\n> some stuff up, resulting in the above error...)\n\nv8 worked better. It succeeded a few times (at least 12, my screen\nscrollback didn't catch more) before erroring like this:\n\n[10:21:58.410](0.151s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict\n[10:21:58.463](0.053s) not ok 16 - startup deadlock: stats show conflict on standby\n[10:21:58.463](0.000s)\n[10:21:58.463](0.000s) # Failed test 'startup deadlock: stats show conflict on standby'\n# at t/031_recovery_conflict.pl line 332.\n[10:21:58.463](0.000s) # got: '0'\n# expected: '1'\n\nChristoph", "msg_date": "Mon, 7 Aug 2023 12:57:40 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Hi,\n\nOn 2023-08-07 12:57:40 +0200, Christoph Berg wrote:\n> Re: Thomas Munro\n> > Thanks for testing! Would you mind trying v8 from that thread? V7\n> > had a silly bug (I accidentally deleted a 'case' label while cleaning\n> > some stuff up, resulting in the above error...)\n> \n> v8 worked better. 
It succeeded a few times (at least 12, my screen\n> scrollback didn't catch more) before erroring like this:\n\n> [10:21:58.410](0.151s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict\n> [10:21:58.463](0.053s) not ok 16 - startup deadlock: stats show conflict on standby\n> [10:21:58.463](0.000s)\n> [10:21:58.463](0.000s) # Failed test 'startup deadlock: stats show conflict on standby'\n> # at t/031_recovery_conflict.pl line 332.\n> [10:21:58.463](0.000s) # got: '0'\n> # expected: '1'\n\nHm, that could just be a \"harmless\" race. Does it still happen if you apply\nthe attached patch in addition?\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 7 Aug 2023 16:08:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Andres Freund\n> Hm, that could just be a \"harmless\" race. Does it still happen if you apply\n> the attached patch in addition?\n\nPutting that patch on top of v8 made it pass 294 times before exiting\nlike this:\n\n[08:52:34.134](0.032s) ok 1 - buffer pin conflict: cursor with conflicting pin established\nWaiting for replication conn standby's replay_lsn to pass 0/3430000 on primary\ndone\ntimed out waiting for match: (?^:User was holding shared buffer pin for too long) at t/031_recovery_conflict.pl line 318.\n\nBut admittedly, this build machine is quite sluggish at times, likely\ndue to neighboring VMs on the same host. (Perhaps that even explains\nthe behavior from before this patch.) 
I'll still attach the logs since\nI frankly can't judge what errors are acceptable here and which\naren't.\n\nChristoph", "msg_date": "Tue, 8 Aug 2023 16:01:37 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Wed, Aug 9, 2023 at 2:01 AM Christoph Berg <myon@debian.org> wrote:\n> Putting that patch on top of v8 made it pass 294 times before exiting\n> like this:\n>\n> [08:52:34.134](0.032s) ok 1 - buffer pin conflict: cursor with conflicting pin established\n> Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary\n> done\n> timed out waiting for match: (?^:User was holding shared buffer pin for too long) at t/031_recovery_conflict.pl line 318.\n\nCan you reproduce that with logging like this added on top?\n\ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 5b663a2997..72f5274c95 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -629,6 +629,7 @@ SetStartupBufferPinWaitBufId(int bufid)\n /* use volatile pointer to prevent code rearrangement */\n volatile PROC_HDR *procglobal = ProcGlobal;\n\n+ elog(LOG, \"XXX SetStartupBufferPinWaitBufId(%d)\", bufid);\n procglobal->startupBufferPinWaitBufId = bufid;\n }\n\n@@ -640,8 +641,11 @@ GetStartupBufferPinWaitBufId(void)\n {\n /* use volatile pointer to prevent code rearrangement */\n volatile PROC_HDR *procglobal = ProcGlobal;\n+ int bufid;\n\n- return procglobal->startupBufferPinWaitBufId;\n+ bufid = procglobal->startupBufferPinWaitBufId;\n+ elog(LOG, \"XXX GetStartupBufferPinWaitBufId() -> %d\", bufid);\n+ return bufid;\n }\n\n\n", "msg_date": "Wed, 9 Aug 2023 05:23:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> On Wed, Aug 9, 2023 at 2:01 AM 
Christoph Berg <myon@debian.org> wrote:\n> > Putting that patch on top of v8 made it pass 294 times before exiting\n> > like this:\n> >\n> > [08:52:34.134](0.032s) ok 1 - buffer pin conflict: cursor with conflicting pin established\n> > Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary\n> > done\n> > timed out waiting for match: (?^:User was holding shared buffer pin for too long) at t/031_recovery_conflict.pl line 318.\n> \n> Can you reproduce that with logging like this added on top?\n\n603 iterations later it hit again, but didn't log anything. (I believe\nI did run \"make\" in the right directory.)\n\n[22:20:24.714](3.145s) # issuing query via background psql:\n# BEGIN;\n# DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n# FETCH FORWARD FROM test_recovery_conflict_cursor;\n[22:20:24.745](0.031s) ok 1 - buffer pin conflict: cursor with conflicting pin established\nWaiting for replication conn standby's replay_lsn to pass 0/3430000 on primary\ndone\ntimed out waiting for match: (?^:User was holding shared buffer pin for too long) at t/031_recovery_conflict.pl line 318.\n\nPerhaps this can simply be attributed to the machine being too busy.\n\nChristoph", "msg_date": "Wed, 9 Aug 2023 12:56:06 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: To Thomas Munro\n> 603 iterations later it hit again, but didn't log anything. (I believe\n> I did run \"make\" in the right directory.)\n\nSince that didn't seem right I'm running the tests again. 
There are\nXXX lines in the output, but it hasn't hit yet.\n\nChristoph\n\n\n", "msg_date": "Wed, 9 Aug 2023 13:38:07 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: To Thomas Munro\n> 603 iterations later it hit again, but didn't log anything. (I believe\n> I did run \"make\" in the right directory.)\n\nThis time it took 3086 iterations to hit the problem.\n\nRunning c27f8621eedf7 + Debian patches + v8 +\npgstat-report-conflicts-immediately.patch + the XXX logging.\n\n> [22:20:24.714](3.145s) # issuing query via background psql:\n> # BEGIN;\n> # DECLARE test_recovery_conflict_cursor CURSOR FOR SELECT b FROM test_recovery_conflict_table1;\n> # FETCH FORWARD FROM test_recovery_conflict_cursor;\n> [22:20:24.745](0.031s) ok 1 - buffer pin conflict: cursor with conflicting pin established\n> Waiting for replication conn standby's replay_lsn to pass 0/3430000 on primary\n> done\n> timed out waiting for match: (?^:User was holding shared buffer pin for too long) at t/031_recovery_conflict.pl line 318.\n\nNo XXX lines this time either, but I've seen then im logfiles that\nwent through successfully.\n\n> Perhaps this can simply be attributed to the machine being too busy.\n\nWith the patches, the problem of dying so often that builds targeting\nseveral distributions in parallel will usually fail is gone.\n\nChristoph", "msg_date": "Thu, 10 Aug 2023 11:15:11 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Thu, Aug 10, 2023 at 9:15 PM Christoph Berg <myon@debian.org> wrote:\n> No XXX lines this time either, but I've seen then im logfiles that\n> went through successfully.\n\nHmm. 
Well, I think this looks like a different kind of bug then.\nThat patch of mine is about fixing some unsafe coding on the receiving\nside of a signal. In this case it's apparently not being sent. So\neither the Heap2/PRUNE record was able to proceed (indicating that\nthat CURSOR was not holding a pin as expected), or VACUUM decided not\nto actually do anything to that block (conditional cleanup lock vs\ntransient pin changing behaviour?), or there's a bug somewhere in/near\nLockBufferForCleanup(), which should have emitted that XXX message\nbefore even calling ResolveRecoveryConflictWithBufferPin().\n\nDo you still have the data directories around from that run, so we can\nsee if the expected Heap2/PRUNE was actually logged? For example\n(using meson layout here, in the build directory) that'd be something\nlike:\n\n$ ./tmp_install/home/tmunro/install/bin/pg_waldump\ntestrun/recovery/031_recovery_conflict/data/t_031_recovery_conflict_standby_data/pgdata/pg_wal/000000010000000000000003\n\nIn there I see this:\n\nrmgr: Heap2 len (rec/tot): 57/ 57, tx: 0, lsn:\n0/0344BB90, prev 0/0344BB68, desc: PRUNE snapshotConflictHorizon: 0,\nnredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [21],\nunused: [], blkref #0: rel 1663/16385/16386 blk 0\n\nThat's the WAL record that's supposed to be causing\n031_recovery_conflict_standby.log to talk about a conflict, starting\nwith this:\n\n2023-08-10 22:47:04.564 NZST [57145] LOG: recovery still waiting\nafter 10.035 ms: recovery conflict on buffer pin\n2023-08-10 22:47:04.564 NZST [57145] CONTEXT: WAL redo at 0/344BB90\nfor Heap2/PRUNE: snapshotConflictHorizon: 0, nredirected: 0, ndead: 1,\n nunused: 0, redirected: [], dead: [21], unused: []; blkref #0: rel\n1663/16385/16386, blk 0\n\n\n", "msg_date": "Thu, 10 Aug 2023 22:55:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> 
On Thu, Aug 10, 2023 at 9:15 PM Christoph Berg <myon@debian.org> wrote:\n> > No XXX lines this time either, but I've seen then im logfiles that\n> > went through successfully.\n> \n> Do you still have the data directories around from that run, so we can\n> see if the expected Heap2/PRUNE was actually logged? For example\n> (using meson layout here, in the build directory) that'd be something\n> like:\n\nSorry for the late reply, getting the PG release out of the door was\nmore important.\n\n> $ ./tmp_install/home/tmunro/install/bin/pg_waldump\n> testrun/recovery/031_recovery_conflict/data/t_031_recovery_conflict_standby_data/pgdata/pg_wal/000000010000000000000003\n> \n> In there I see this:\n\nIt's been a long day and I can't wrap my mind around understanding\nthis now, so I'll just dump the output here.\n\n[0] 16:03 myon@sid-s390x.pgs390x:~/.../build/src/test/recovery $ LC_ALL=C ../../../tmp_install/usr/lib/postgresql/17/bin/pg_waldump tmp_check/t_031_recovery_conflict_standby_data/pgdata/pg_wal/000000010000000000000003 | grep -3 PRUNE > PRUNE.log\npg_waldump: error: error in WAL record at 0/347E6A8: invalid record length at 0/347E6E0: expected at least 24, got 0\n\nChristoph", "msg_date": "Fri, 11 Aug 2023 18:05:36 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Thanks. I realised that it's easy enough to test that theory about\ncleanup locks by hacking ConditionalLockBufferForCleanup() to return\nfalse randomly. Then the test occasionally fails as described. Seems\nlike we'll need to fix that test, but it's not evidence of a server\nbug, and my signal handler refactoring patch is in the clear. 
Thanks\nfor testing it!\n\n\n", "msg_date": "Sat, 12 Aug 2023 15:50:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Hi,\n\nOn 2023-08-12 15:50:24 +1200, Thomas Munro wrote:\n> Thanks. I realised that it's easy enough to test that theory about\n> cleanup locks by hacking ConditionalLockBufferForCleanup() to return\n> false randomly. Then the test occasionally fails as described. Seems\n> like we'll need to fix that test, but it's not evidence of a server\n> bug, and my signal handler refactoring patch is in the clear. Thanks\n> for testing it!\n\nWRT fixing the test: I think just using VACUUM FREEZE ought to do the job?\nAfter changing all the VACUUMs to VACUUM FREEZEs, 031_recovery_conflict.pl\npasses even after I make ConditionalLockBufferForCleanup() fail 100%.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Aug 2023 14:00:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Andres Freund\n> > Thanks. I realised that it's easy enough to test that theory about\n> > cleanup locks by hacking ConditionalLockBufferForCleanup() to return\n> > false randomly. Then the test occasionally fails as described. Seems\n> > like we'll need to fix that test, but it's not evidence of a server\n> > bug, and my signal handler refactoring patch is in the clear. Thanks\n> > for testing it!\n> \n> WRT fixing the test: I think just using VACUUM FREEZE ought to do the job?\n> After changing all the VACUUMs to VACUUM FREEZEs, 031_recovery_conflict.pl\n> passes even after I make ConditionalLockBufferForCleanup() fail 100%.\n\nI have now applied the last two patches to postgresql-17 to see if the\nbuild is more stable. 
(So far I had only tried in manual tests.)\n\nFwiw this is also causing pain on PostgreSQL 16:\n\nhttps://pgdgbuild.dus.dg-i.net/view/Snapshot/job/postgresql-16-binaries-snapshot/1011/architecture=s390x,distribution=sid/consoleText\n\nMost of the failing builds in\nhttps://pgdgbuild.dus.dg-i.net/view/Snapshot/job/postgresql-16-binaries-snapshot/\nare on s390x and likely due to this problem.\n\nThis should be fixed before the 16 release.\n\nChristoph\n\n\n", "msg_date": "Mon, 28 Aug 2023 15:58:01 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Tue, Aug 29, 2023 at 1:58 AM Christoph Berg <myon@debian.org> wrote:\n> This should be fixed before the 16 release.\n\nHere's what I was thinking of doing for this, given where we are in\nthe release schedule:\n\n* commit the signal-refactoring patch in master only\n* plan to back-patch it into 16 in a later point release (it's much\nharder to back-patch further than that)\n* look into what we can do to suppress the offending test for 16 in the meantime\n\nBut looking into that just now, I am curious about something. The\nwhole area of recovery conflicts has been buggy forever, and the tests\nwere only developed fairly recently by Melanie and Andres (April\n2022), and then backpatched to all releases. They were disabled again\nin release branches 10-14 (discussion at\nhttps://postgr.es/m/3447060.1652032749@sss.pgh.pa.us):\n\n+plan skip_all => \"disabled until after minor releases, due to instability\";\n\nNow your mainframe build bot is regularly failing for whatever timing\nreason, in 16 and master. That's quite useful because your tests have\nmade us or at least me a lot more confident that the fix is good (one\nof the reasons progress has been slow is that failures in CI and BF\nwere pretty rare and hard to analyse). But... I wonder why it isn't\nfailing for you in 15? 
Are you able to check if it ever has? I\nsuppose we could go and do the \"disabled due to instability\" thing in\n15 and 16, and then later we'll un-disable it in 16 when we back-patch\nthe fix into it.\n\n\n", "msg_date": "Tue, 29 Aug 2023 10:00:56 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> 2022), and then backpatched to all releases. They were disabled again\n> in release branches 10-14 (discussion at\n> https://postgr.es/m/3447060.1652032749@sss.pgh.pa.us):\n> \n> +plan skip_all => \"disabled until after minor releases, due to instability\";\n\nRight:\nhttps://pgdgbuild.dus.dg-i.net/view/Snapshot/job/postgresql-14-binaries-snapshot/366/architecture=s390x,distribution=sid/consoleText\n-> t/031_recovery_conflict.pl ........... skipped: disabled until after minor releases, due to instability\n\n> Now your mainframe build bot is regularly failing for whatever timing\n> reason, in 16 and master. That's quite useful because your tests have\n> made us or at least me a lot more confident that the fix is good (one\n> of the reasons progress has been slow is that failures in CI and BF\n> were pretty rare and hard to analyse). But... I wonder why it isn't\n> failing for you in 15? Are you able to check if it ever has? 
I\n> suppose we could go and do the \"disabled due to instability\" thing in\n> 15 and 16, and then later we'll un-disable it in 16 when we back-patch\n> the fix into it.\n\nThe current explanation is that builds are only running frequently for\n16 and devel, and the periodic extra test jobs only run \"make check\"\nand the postgresql-common testsuite, not the full server testsuite.\n\nI had \"snapshot\" jobs configured for basically everything (server and\nextensions) for some time, but ultimately realized that I don't have\nthe slightest time to look at everything, so I disabled them again\nabout 3 weeks ago. This removed enough noise so I could actually look\nat the failing 16 and devel jobs again.\n\nI don't see any failures of that kind for the 15 snapshot build while\nit was still running, the failures in there are all something else\n(mostly problems outside PG).\n\nhttps://pgdgbuild.dus.dg-i.net/view/Snapshot/job/postgresql-15-binaries-snapshot/\n\nI'll enable the server snapshot builds again, those shouldn't be too\nnoisy.\n\nChristoph\n\n\n", "msg_date": "Tue, 29 Aug 2023 15:39:38 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "I have now disabled the test in 15 and 16 (like the older branches).\nI'll see about getting the fixes into master today, and we can\ncontemplate back-patching later, after we've collected a convincing\nvolume of test results from the build farm, CI and hopefully your\ns390x master snapshot builds (if that is a thing).\n\n\n", "msg_date": "Thu, 7 Sep 2023 11:57:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Sun, Aug 13, 2023 at 9:00 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-08-12 15:50:24 +1200, Thomas Munro wrote:\n> > Thanks. 
I realised that it's easy enough to test that theory about\n> > cleanup locks by hacking ConditionalLockBufferForCleanup() to return\n> > false randomly. Then the test occasionally fails as described. Seems\n> > like we'll need to fix that test, but it's not evidence of a server\n> > bug, and my signal handler refactoring patch is in the clear. Thanks\n> > for testing it!\n>\n> WRT fixing the test: I think just using VACUUM FREEZE ought to do the job?\n> After changing all the VACUUMs to VACUUM FREEZEs, 031_recovery_conflict.pl\n> passes even after I make ConditionalLockBufferForCleanup() fail 100%.\n\nI pushed that change (master only for now, like the other change).\n\n\n", "msg_date": "Thu, 7 Sep 2023 14:41:08 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "On Tue, Aug 8, 2023 at 11:08 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-08-07 12:57:40 +0200, Christoph Berg wrote:\n> > v8 worked better. It succeeded a few times (at least 12, my screen\n> > scrollback didn't catch more) before erroring like this:\n>\n> > [10:21:58.410](0.151s) ok 15 - startup deadlock: logfile contains terminated connection due to recovery conflict\n> > [10:21:58.463](0.053s) not ok 16 - startup deadlock: stats show conflict on standby\n> > [10:21:58.463](0.000s)\n> > [10:21:58.463](0.000s) # Failed test 'startup deadlock: stats show conflict on standby'\n> > # at t/031_recovery_conflict.pl line 332.\n> > [10:21:58.463](0.000s) # got: '0'\n> > # expected: '1'\n>\n> Hm, that could just be a \"harmless\" race. Does it still happen if you apply\n> the attached patch in addition?\n\nDo you intend that as the fix, or just proof of what's wrong? It\nlooks pretty reasonable to me. I will go ahead and commit this unless\nyou want to change something/do it yourself. 
As far as I know so far,\nthis is the last thing preventing the mighty Debian/s390x build box\nfrom passing consistently, which would be nice.", "msg_date": "Thu, 7 Sep 2023 15:39:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Re: Thomas Munro\n> I have now disabled the test in 15 and 16 (like the older branches).\n> I'll see about getting the fixes into master today, and we can\n> contemplate back-patching later, after we've collected a convincing\n> volume of test results from the build farm, CI and hopefully your\n> s390x master snapshot builds (if that is a thing).\n\nI have now seen the problem on 15 as well, if I had not reported that\nyet.\n\nI think the 16/17 builds look good now with the patches applied, but\nthere's still a lot of noise from the build box having random other\nproblems (network disconnect and other silly things), so it still\nneeds a lot of hand-holding :-/.\n\nChristoph\n\n\n", "msg_date": "Thu, 7 Sep 2023 10:19:41 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "Hi,\n\n13.08.2023 00:00, Andres Freund wrote:\n> Hi,\n>\n> On 2023-08-12 15:50:24 +1200, Thomas Munro wrote:\n>> Thanks. I realised that it's easy enough to test that theory about\n>> cleanup locks by hacking ConditionalLockBufferForCleanup() to return\n>> false randomly. Then the test occasionally fails as described. Seems\n>> like we'll need to fix that test, but it's not evidence of a server\n>> bug, and my signal handler refactoring patch is in the clear. 
Thanks\n>> for testing it!\n> WRT fixing the test: I think just using VACUUM FREEZE ought to do the job?\n> After changing all the VACUUMs to VACUUM FREEZEs, 031_recovery_conflict.pl\n> passes even after I make ConditionalLockBufferForCleanup() fail 100%.\n>\n\nI happened to encounter another failure of 031_recovery_conflict, on\nrather slow ARMv7:\nt/031_recovery_conflict.pl ............ 13/? # poll_query_until timed out executing this query:\n#\n# SELECT 'waiting' FROM pg_locks WHERE locktype = 'relation' AND NOT granted;\n#\n# expecting this output:\n# waiting\n# last actual query output:\n#\n# with stderr:\nt/031_recovery_conflict.pl ............ 14/?\n#   Failed test 'startup deadlock: lock acquisition is waiting'\n#   at t/031_recovery_conflict.pl line 261.\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 255 just after 15.\nt/031_recovery_conflict.pl ............ Dubious, test returned 255 (wstat 65280, 0xff00)\nFailed 1/15 subtests\n\nI've managed to reproduce it with the attached test patch, on a regular\nx86_64 VM with CPU slowed down to 2%, so that the modified test runs for\n5+ minutes.\n...\nt/031_recovery_conflict.pl .. 102/? # iteration 31\nt/031_recovery_conflict.pl .. 104/? # iteration 32\nt/031_recovery_conflict.pl .. 106/?\n<... 
waiting for something ...>\n\n031_recovery_conflict_standby.log contains:\n2023-10-09 12:43:14.821 UTC [145519] 031_recovery_conflict.pl LOG: statement: BEGIN;\n2023-10-09 12:43:14.821 UTC [145519] 031_recovery_conflict.pl LOG: statement: DECLARE test_recovery_conflict_cursor \nCURSOR FOR SELECT a FROM test_recovery_conflict_table1;\n2023-10-09 12:43:14.822 UTC [145519] 031_recovery_conflict.pl LOG: statement: FETCH FORWARD FROM \ntest_recovery_conflict_cursor;\n2023-10-09 12:43:14.822 UTC [145519] 031_recovery_conflict.pl LOG: statement: SELECT * FROM test_recovery_conflict_table2;\n2023-10-09 12:43:15.039 UTC [145513] LOG:  recovery still waiting after 10.217 ms: recovery conflict on buffer pin\n2023-10-09 12:43:15.039 UTC [145513] CONTEXT:  WAL redo at 0/35ADD38 for Heap2/PRUNE: snapshotConflictHorizon: 0, \nnredirected: 0, ndead: 100, nunused: 0, redirected: [], dead: [2, 3, 4, ... , 99, 100, 101], unused: []; blkref #0: rel \n1663/16385/16489, blk 0\n2023-10-09 12:43:15.039 UTC [145519] 031_recovery_conflict.pl ERROR:  canceling statement due to conflict with recovery \nat character 15\n2023-10-09 12:43:15.039 UTC [145519] 031_recovery_conflict.pl DETAIL:  User transaction caused buffer deadlock with \nrecovery.\n2023-10-09 12:43:15.039 UTC [145519] 031_recovery_conflict.pl STATEMENT:  SELECT * FROM test_recovery_conflict_table2;\n2023-10-09 12:43:15.039 UTC [145513] LOG:  recovery finished waiting after 10.382 ms: recovery conflict on buffer pin\n2023-10-09 12:43:15.039 UTC [145513] CONTEXT:  WAL redo at 0/35ADD38 for Heap2/PRUNE: snapshotConflictHorizon: 0, \nnredirected: 0, ndead: 100, nunused: 0, redirected: [], dead: [2, 3, 4, ... 
, 99, 100, 101], unused: []; blkref #0: rel \n1663/16385/16489, blk 0\n2023-10-09 12:43:15.134 UTC [145527] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE \nlocktype = 'relation' AND NOT granted;\n2023-10-09 12:43:15.647 UTC [145530] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE \nlocktype = 'relation' AND NOT granted;\n2023-10-09 12:43:15.981 UTC [145532] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE \nlocktype = 'relation' AND NOT granted;\n...\n2023-10-09 12:51:24.729 UTC [149175] 031_recovery_conflict.pl LOG: statement: SELECT 'waiting' FROM pg_locks WHERE \nlocktype = 'relation' AND NOT granted;\n2023-10-09 12:51:25.969 UTC [145514] FATAL:  could not receive data from WAL stream: server closed the connection \nunexpectedly\n                 This probably means the server terminated abnormally\n                 before or while processing the request.\n\nIndeed, t_031_recovery_conflict_standby_data/pgdata/pg_wal/\n000000010000000000000003 contains:\n...\nrmgr: Heap        len (rec/tot):     59/    59, tx:        972, lsn: 0/035ADB18, prev 0/035ADAD8, desc: INSERT off: 100, \nflags: 0x00, blkref #0: rel 1663/16385/16489 blk 0\nrmgr: Heap        len (rec/tot):     59/    59, tx:        972, lsn: 0/035ADB58, prev 0/035ADB18, desc: INSERT off: 101, \nflags: 0x00, blkref #0: rel 1663/16385/16489 blk 0\nrmgr: Transaction len (rec/tot):     34/    34, tx:        972, lsn: 0/035ADB98, prev 0/035ADB58, desc: ABORT 2023-10-09 \n12:43:13.797513 UTC\nrmgr: Standby     len (rec/tot):     42/    42, tx:        973, lsn: 0/035ADBC0, prev 0/035ADB98, desc: LOCK xid 973 db \n16385 rel 16389\nrmgr: Transaction len (rec/tot):    178/   178, tx:        973, lsn: 0/035ADBF0, prev 0/035ADBC0, desc: PREPARE gid \nlock: 2023-10-09 12:43:13.798048 UTC\nrmgr: Heap        len (rec/tot):     59/    59, tx:        974, lsn: 0/035ADCA8, prev 0/035ADBF0, desc: INSERT off: 102, \nflags: 0x00, blkref 
#0: rel 1663/16385/16489 blk 0\nrmgr: Transaction len (rec/tot):     34/    34, tx:        974, lsn: 0/035ADCE8, prev 0/035ADCA8, desc: COMMIT \n2023-10-09 12:43:13.798434 UTC\nrmgr: Transaction len (rec/tot):     34/    34, tx:        975, lsn: 0/035ADD10, prev 0/035ADCE8, desc: COMMIT \n2023-10-09 12:43:13.900046 UTC\nrmgr: Heap2       len (rec/tot):    255/   255, tx:          0, lsn: 0/035ADD38, prev 0/035ADD10, desc: PRUNE \nsnapshotConflictHorizon: 0, nredirected: 0, ndead: 100, nunused: 0, redirected: [], dead: [2, 3, 4, ... , 99, 100, 101], \nunused: [], blkref #0: rel 1663/16385/16489 blk 0\nrmgr: Heap2       len (rec/tot):    248/   248, tx:          0, lsn: 0/035ADE38, prev 0/035ADD38, desc: VACUUM nunused: \n100, unused: [2, 3, 4, ... , 99, 100, 101], blkref #0: rel 1663/16385/16489 blk 0\n\nSo it looks like \"SELECT * FROM test_recovery_conflict_table2\" in this\ncase was interrupted due to conflict with replaying PRUNE, not VACUUM,\nand as a result, this query hadn't entered \"waiting for lock\" state.\n(I saw also other failures of the test, but probably they have the same\nroot cause).\n\nBest regards,\nAlexander", "msg_date": "Mon, 9 Oct 2023 17:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 031_recovery_conflict.pl on Debian/s390x" }, { "msg_contents": "13.08.2023 00:00, Andres Freund wrote:\n> On 2023-08-12 15:50:24 +1200, Thomas Munro wrote:\n>> Thanks. I realised that it's easy enough to test that theory about\n>> cleanup locks by hacking ConditionalLockBufferForCleanup() to return\n>> false randomly. Then the test occasionally fails as described. Seems\n>> like we'll need to fix that test, but it's not evidence of a server\n>> bug, and my signal handler refactoring patch is in the clear. 
Thanks\n>> for testing it!\n> WRT fixing the test: I think just using VACUUM FREEZE ought to do the job?\n> After changing all the VACUUMs to VACUUM FREEZEs, 031_recovery_conflict.pl\n> passes even after I make ConditionalLockBufferForCleanup() fail 100%.\n\nYesterday I wondered what is the reason of another failure on skink [1],\n[2], [3]:\n#   Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_authid'\n#   at t/035_standby_logical_decoding.pl line 224.\n\nI've reproduced the failure by running several test instances in parallel\n(with 15 jobs) under Valgrind and saw a difference in the primary node WAL:\nThe failed run:\nrmgr: Heap        len (rec/tot):     54/    54, tx:        752, lsn: 0/0403FB88, prev 0/0403FB48, desc: DELETE xmax: \n752, off: 16, infobits: [KEYS_UPDATED], flags: 0x00, blkref #0: rel 1664/0/1260 blk 0\nrmgr: Transaction len (rec/tot):     62/    62, tx:        752, lsn: 0/0403FBC0, prev 0/0403FB88, desc: INVALIDATION ; \ninval msgs: catcache 11 catcache 10\nrmgr: Transaction len (rec/tot):     82/    82, tx:        752, lsn: 0/0403FC00, prev 0/0403FBC0, desc: COMMIT \n2023-10-14 13:35:21.955043 MSK; inval msgs: catcache 11 catcache 10\nrmgr: XLOG        len (rec/tot):     49/  8241, tx:          0, lsn: 0/0403FC58, prev 0/0403FC00, desc: FPI_FOR_HINT , \nblkref #0: rel 1664/0/1260 fork fsm blk 2 FPW\nrmgr: XLOG        len (rec/tot):     49/  8241, tx:          0, lsn: 0/04041CA8, prev 0/0403FC58, desc: FPI_FOR_HINT , \nblkref #0: rel 1664/0/1260 fork fsm blk 1 FPW\nrmgr: XLOG        len (rec/tot):     49/  8241, tx:          0, lsn: 0/04043CF8, prev 0/04041CA8, desc: FPI_FOR_HINT , \nblkref #0: rel 1664/0/1260 fork fsm blk 0 FPW\nrmgr: Heap        len (rec/tot):    225/   225, tx:          0, lsn: 0/04045D48, prev 0/04043CF8, desc: INPLACE off: 26, \nblkref #0: rel 1663/16384/16406 blk 4\nrmgr: Transaction len (rec/tot):     78/    78, tx:          0, lsn: 0/04045E30, prev 0/04045D48, desc: INVALIDATION ; \ninval msgs: 
catcache 55 catcache 54 relcache 1260\nrmgr: Standby     len (rec/tot):     90/    90, tx:          0, lsn: 0/04045E80, prev 0/04045E30, desc: INVALIDATIONS ; \nrelcache init file inval dbid 16384 tsid 1663; inval msgs: catcache 55 catcache 54 relcache 1260\n\nA successful run:\nrmgr: Heap        len (rec/tot):     54/    54, tx:        752, lsn: 0/0403FB88, prev 0/0403FB48, desc: DELETE xmax: \n752, off: 16, infobits: [KEYS_UPDATED], flags: 0x00, blkref #0: rel 1664/0/1260 blk 0\nrmgr: Transaction len (rec/tot):     62/    62, tx:        752, lsn: 0/0403FBC0, prev 0/0403FB88, desc: INVALIDATION ; \ninval msgs: catcache 11 catcache 10\nrmgr: Transaction len (rec/tot):     82/    82, tx:        752, lsn: 0/0403FC00, prev 0/0403FBC0, desc: COMMIT \n2023-10-14 14:21:47.254556 MSK; inval msgs: catcache 11 catcache 10\nrmgr: Heap2       len (rec/tot):     57/    57, tx:          0, lsn: 0/0403FC58, prev 0/0403FC00, desc: PRUNE \nsnapshotConflictHorizon: 752, nredirected: 0, ndead: 1, nunused: 0, redirected: [], dead: [16], unused: [], blkref #0: \nrel 1664/0/1260 blk 0\nrmgr: Btree       len (rec/tot):     52/    52, tx:          0, lsn: 0/0403FC98, prev 0/0403FC58, desc: VACUUM ndeleted: \n1, nupdated: 0, deleted: [1], updated: [], blkref #0: rel 1664/0/2676 blk 1\nrmgr: Btree       len (rec/tot):     52/    52, tx:          0, lsn: 0/0403FCD0, prev 0/0403FC98, desc: VACUUM ndeleted: \n1, nupdated: 0, deleted: [16], updated: [], blkref #0: rel 1664/0/2677 blk 1\nrmgr: Heap2       len (rec/tot):     50/    50, tx:          0, lsn: 0/0403FD08, prev 0/0403FCD0, desc: VACUUM nunused: \n1, unused: [16], blkref #0: rel 1664/0/1260 blk 0\n\nI repeated the hack Thomas did before (added\n\"if (rand() % 10 == 0) return false;\" in ConditionalLockBufferForCleanup())\nand reproduced that failure without Valgrind and parallel.\nThe following changes:\n@@ -562 +562 @@\n-  VACUUM pg_class;\n+  VACUUM FREEZE pg_class;\n@@ -600 +600 @@\n-  VACUUM pg_authid;\n+  VACUUM FREEZE 
pg_authid;\n@@ -638 +638 @@\n-  VACUUM conflict_test;\n+  VACUUM FREEZE conflict_test;\nfixed the issue for me.\n\nThe second change should protect against a similar failure [4]:\n#   Failed test 'inactiveslot slot invalidation is logged with vacuum on pg_class'\n#   at t/035_standby_logical_decoding.pl line 224.\n\n(I decided to write about failures of 035_standby_logical_decoding in this\nthread because of the same VACUUM issue.)\n\nI also confirmed that all the other tests in src/test/recovery pass reliably\nwith the modified ConditionalLockBufferForCleanup (except for\n027_stream_regress, where plans changed).\n\nWhen I had overcome the failures with pg_authid/pg_class, I stumbled upon\nanother, even rarer one (which occurs without modifying\nConditionalLockBufferForCleanup):\nt/035_standby_logical_decoding.pl .. 42/?\n#   Failed test 'inactiveslot slot invalidation is logged with on-access pruning'\n#   at t/035_standby_logical_decoding.pl line 224.\nBut I think that's another case, which I'm going to investigate separately.\n\nBTW, it looks like the comment:\n# One way to produce recovery conflict is to create/drop a relation and\n# launch a vacuum on pg_class with hot_standby_feedback turned off on the standby.\nin scenario 3 is a copy-paste error.\nAlso, there are two \"Scenario 4\" in this test.\n\nLong story short, VACUUM FREEZE should make skink greener, but still there\nis another possibility for the 035 test to fail.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-09-05%2009%3A47%3A44\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-13%2012%3A16%3A52\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-09-15%2017%3A10%3A55\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-08-27%2008%3A30%3A28\n\n\n", "msg_date": "Sun, 15 Oct 2023 09:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A failure in 
031_recovery_conflict.pl on Debian/s390x" } ]
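The failure mode dissected in this thread can be summarized as: when ConditionalLockBufferForCleanup() occasionally returns false, a plain VACUUM simply skips the page, so the PRUNE/VACUUM WAL records the test waits for are never emitted and the standby never sees a buffer-pin conflict; VACUUM FREEZE must process every page, so it waits for the cleanup lock instead. The following toy Python simulation illustrates that dynamic; it is not PostgreSQL code, and the function names are stand-ins invented here that merely mirror the behavior of ConditionalLockBufferForCleanup()/LockBufferForCleanup() under the random-failure hack quoted above:

```python
import random

def conditional_cleanup_lock(rng):
    """Stand-in for ConditionalLockBufferForCleanup() with the fault-injection
    hack from the thread ("if (rand() % 10 == 0) return false;"): it refuses
    the cleanup lock roughly 10% of the time."""
    return rng.randrange(10) != 0

def plain_vacuum(pages, rng):
    """A plain VACUUM skips a page when the conditional lock fails, so no
    cleanup record is written for it and the standby sees no pin conflict.
    Returns the list of pages that were actually processed."""
    return [p for p in range(pages) if conditional_cleanup_lock(rng)]

def freeze_vacuum(pages, rng):
    """VACUUM FREEZE must process every page, so instead of skipping it
    waits for the cleanup lock (as LockBufferForCleanup() would)."""
    processed = []
    for p in range(pages):
        while not conditional_cleanup_lock(rng):
            pass  # the real code sleeps until the conflicting pin is released
        processed.append(p)
    return processed

if __name__ == "__main__":
    rng = random.Random(42)
    print(f"plain VACUUM processed {len(plain_vacuum(1000, rng))} of 1000 pages")
    rng = random.Random(42)
    print(f"VACUUM FREEZE processed {len(freeze_vacuum(1000, rng))} of 1000 pages")
```

With the injected ~10% failure rate, the plain pass reliably leaves some pages untouched, which is why the test's expected conflict sometimes never appeared, while the FREEZE-style pass always touches all of them.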
[ { "msg_contents": "Currently, we don't perform $SUBJECT at the time of shutdown of the\nserver. At present this has only a minor impact: after a restart,\nsubscribers will ask to start processing from before the\nXLOG_CHECKPOINT_SHUTDOWN record, or after a switchover the old\npublisher will have an extra WAL record. However, if we want to\nsupport the upgrade of the publisher node such that the existing slots\nare copied/created into a new cluster, we need to ensure that all the\nchanges generated on the publisher are sent and applied to the\nsubscriber. This is a hard requirement because after the upgrade we\nreset the WAL, and if some of the WAL has not been sent then it will\nbe lost. Now, even a clean shutdown of the publisher node can't ensure\nthat all the WAL has been sent, because it is quite possible that the\nsubscriber node is down, in which case no walsenders are available at\nshutdown time to send the data. Similarly, there could be some logical\nslots created via backends which have not processed all the data, and\nwe can't copy those slots as-is during the upgrade.\n\nTo ensure that all the data has been sent during the upgrade, we can\nensure that each logical slot's confirmed_flush_lsn (the position in the\nWAL up to which the subscriber has confirmed that it has applied the WAL)\nis the same as current_wal_insert_lsn. Now, because we don't send\nXLOG_CHECKPOINT_SHUTDOWN even on clean shutdown, confirmed_flush_lsn\nwill never be the same as current_wal_insert_lsn. 
The one idea being\ndiscussed in patch [1] (see 0003) is to ensure that each slot's LSN is\nexactly one XLOG_CHECKPOINT_SHUTDOWN record behind, but that has some\ndrawbacks: if we tomorrow add some other WAL record to the shutdown\ncheckpoint path, or the size of the record changes, we would need to\nmodify the corresponding code in upgrade.\n\nThe other possibility is to allow logical walsenders to process\nXLOG_CHECKPOINT_SHUTDOWN before shutdown, after which, during the\nupgrade, confirmed_flush_lsn will be the same as\ncurrent_wal_insert_lsn. AFAICU, the primary reason that we don't allow\nit is that we want to avoid writing any new WAL after the shutdown\ncheckpoint (to avoid any sort of PANIC, as discussed in the thread [2]),\nwhich is possible during decoding due to hint bits; but it doesn't seem\nthat decoding of XLOG_CHECKPOINT_SHUTDOWN can lead to any hint bit\nupdates. It seems we made these changes as part of commit c6c3334364 [3].\nNote that even if we can ensure that walsenders send all the WAL before\nshutdown and bring the corresponding logical slots up to date so that\nthere is no pending data, it would still be possible that logical slots\ncreated manually via backends won't consume all the WAL before\nshutdown. I think those will be the responsibility of users, as those\nare created by them.\n\nWe can also provide some guidelines to users, similar to what we have\nfor physical standbys in the pg_upgrade docs [4] (See: 9 Prepare for standby\nserver upgrades). 
Something like, before upgrading, verify that the\nsubscriber is caught up with the publisher by comparing the current\nWAL position on the publisher and pg_stat_subscription.received_lsn on\nthe subscriber.\n\nAny better ideas or thoughts on the above?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB586619721863B7FFDAC4369FF550A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAHGQGwEsttg9P9LOOavoc9d6VB1zVmYgfBk%3DLjsk-UL9cEf-eA%40mail.gmail.com\n[3] -\ncommit c6c333436491a292d56044ed6e167e2bdee015a2\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Mon Jun 5 18:53:41 2017 -0700\n\n Prevent possibility of panics during shutdown checkpoint.\n[4] - https://www.postgresql.org/docs/devel/pgupgrade.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 Jul 2023 14:31:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Logical walsenders don't process XLOG_CHECKPOINT_SHUTDOWN" }, { "msg_contents": "Hi,\n\nOn 2023-07-25 14:31:00 +0530, Amit Kapila wrote:\n> To ensure that all the data has been sent during the upgrade, we can\n> ensure that each logical slot's confirmed_flush_lsn (position in the\n> WAL till which subscriber has confirmed that it has applied the WAL)\n> is the same as current_wal_insert_lsn. Now, because we don't send\n> XLOG_CHECKPOINT_SHUTDOWN even on clean shutdown, confirmed_flush_lsn\n> will never be the same as current_wal_insert_lsn. The one idea being\n> discussed in patch [1] (see 0003) is to ensure that each slot's LSN is\n> exactly XLOG_CHECKPOINT_SHUTDOWN ago which probably has some drawbacks\n> like what if we tomorrow add some other WAL in the shutdown checkpoint\n> path or the size of record changes then we would need to modify the\n> corresponding code in upgrade.\n\nYea, that doesn't seem like a good path. 
But there is a variant that seems\nbetter: We could just scan the end of the WAL for records that should have\nbeen streamed out?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jul 2023 10:03:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Logical walsenders don't process XLOG_CHECKPOINT_SHUTDOWN" }, { "msg_contents": "On Tue, Jul 25, 2023 at 10:33 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-07-25 14:31:00 +0530, Amit Kapila wrote:\n> > To ensure that all the data has been sent during the upgrade, we can\n> > ensure that each logical slot's confirmed_flush_lsn (position in the\n> > WAL till which subscriber has confirmed that it has applied the WAL)\n> > is the same as current_wal_insert_lsn. Now, because we don't send\n> > XLOG_CHECKPOINT_SHUTDOWN even on clean shutdown, confirmed_flush_lsn\n> > will never be the same as current_wal_insert_lsn. The one idea being\n> > discussed in patch [1] (see 0003) is to ensure that each slot's LSN is\n> > exactly XLOG_CHECKPOINT_SHUTDOWN ago which probably has some drawbacks\n> > like what if we tomorrow add some other WAL in the shutdown checkpoint\n> > path or the size of record changes then we would need to modify the\n> > corresponding code in upgrade.\n>\n> Yea, that doesn't seem like a good path. But there is a variant that seems\n> better: We could just scan the end of the WAL for records that should have\n> been streamed out?\n>\n\nThis sounds like a better idea. 
So, one way to realize this is to\ngroup slots based on confirmed_flush_lsn and then scan based on that.\nOnce we ensure that the slot group with the highest\nconfirm_flush_location is up-to-date (doesn't have any pending WAL\nexcept for shutdown_checkpoint), any slot group having a lesser value\nof confirm_flush_location would be considered a group with pending\ndata.\n\nBTW, I think the main downside of not trying to send\nXLOG_CHECKPOINT_SHUTDOWN to logical walsenders is that even though today\nthere is no risk of any hint bit updates (or any other possibility of\ngenerating WAL) during decoding of XLOG_CHECKPOINT_SHUTDOWN, there\nis no future guarantee of the same. Is there anything I am missing\nhere?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 26 Jul 2023 09:43:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical walsenders don't process XLOG_CHECKPOINT_SHUTDOWN" } ]
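[Editor's note: the catch-up check discussed in the thread above boils down to a 64-bit LSN comparison. Below is a minimal illustrative sketch in Python, assuming only the documented "hi/lo" hexadecimal text format of an LSN; the function names are hypothetical and are not part of any proposed patch.]

```python
def lsn_to_int(lsn: str) -> int:
    # PostgreSQL prints an LSN as two hex numbers separated by a slash:
    # the high and low 32 bits of a 64-bit WAL position, e.g. "0/16B3748".
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)


def slot_caught_up(confirmed_flush_lsn: str, current_wal_insert_lsn: str) -> bool:
    # A logical slot has no pending data once its confirmed_flush_lsn has
    # reached the server's current WAL insert position.
    return lsn_to_int(confirmed_flush_lsn) >= lsn_to_int(current_wal_insert_lsn)
```

On a live publisher the two inputs would come from pg_replication_slots.confirmed_flush_lsn and pg_current_wal_insert_lsn().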
[ { "msg_contents": "Hi hackers,\n\nCurrently, most distributed tracing solutions only provide details from the\nclient driver's point of view. We don't have visibility into when the query\nreached the database and how the query was processed.\n\nThe goal of this project is to provide distributed tracing within the\ndatabase, as an extension, to provide more insights on a query's execution. The\nidea and architecture are heavily inspired by pg_stat_statements and\nauto_explain.\n\nI have a working prototype of a pg_tracing extension and wanted some\nfeedback on the design and architecture.\n\nSome definitions: (paraphrased from opentelemetry\nhttps://opentelemetry.io/docs/concepts/signals/traces/)\n\nA trace groups multiple spans and a span represents a single specific\noperation. It is defined by:\n\n- a trace_id\n\n- a spanid (randomly generated int64)\n\n- a name\n\n- a parent id (a span can have children)\n\n- a start timestamp\n\n- a duration\n\n- attributes (key/value metadata)\n\nWe will use attributes to propagate the query's instrumentation details:\nbuffer usage, jit, wal usage, first tuple, plan cost...\n\nTriggering a trace:\n\n===============\n\nWe rely on https://google.github.io/sqlcommenter/ to propagate trace\ninformation.\n\nA query with sqlcommenter will look like:\n/*dddbs='postgres.db',traceparent='00-00000000000000000000000000000009-0000000000000005-01'*/\nselect 1;\n\nThe traceparent fields are detailed in\nhttps://www.w3.org/TR/trace-context/#traceparent-header-field-values\n\nThe 3 fields are:\n\n00000000000000000000000000000009: trace id\n\n0000000000000005: parent id\n\n01: trace flags (01 == sampled)\n\nIf the query is sampled, we instrument the query and generate spans for the\nfollowing operations:\n\n- Query Span: The top span for a query. They are created after extracting\nthe traceid from traceparent or to represent a nested query. We always have\nat least one. 
We may have more if we have a nested query.\n\n- Planner: We track the time spent in the planner and report the planner\ncounters\n\n- Executor: We trace the different steps of the Executor: Start, Run,\nFinish and End\n\n- Node Span: Created from the planstate. The name is extracted from the node\ntype (IndexScan, SeqScan). For the operation name, we generate something\nsimilar to the explain output.\n\n\nStoring spans:\n\n===============\n\nSpans are stored in a shared memory buffer. The buffer size is fixed and if\nit is full, no further spans are generated until the buffer is freed.\n\nWe store several pieces of variable-sized text in the spans: names,\nparameter values...\n\nTo avoid keeping those in the shared memory, we store them in an external\nfile (similar to the pg_stat_statements query text).\n\nProcessing spans:\n\n===============\n\nTraces need to be sent to a trace collector. We offload this logic to an\nexternal application which will:\n\n- Collect the spans through a new pg_tracing_spans() function output\n\n- Build spans in any format specific to the targeted trace collector\n(opentelemetry, datadog...)\n\n- Send them to the targeted trace collector using the appropriate protocol\n(gRPC, http...)\n\nI have an example of a span forwarder that I've been using in my tests\navailable here:\nhttps://gist.github.com/bonnefoa/6ed24520bdac026d6a6a6992d308bd50.\n\nThis setup permits a lot of flexibility:\n\n- The postgres extension doesn't need any vendor specific knowledge\n\n- The trace forwarder can reuse existing libraries and any language to\nsend spans\n\nBuffer management:\n\n===============\n\nI've tried to keep the memory management simple. 
Creating spans will add\nnew elements to the shared_spans buffer.\n\nOnce a span is forwarded, there's no need to keep it in the shared memory.\n\nThus, pg_tracing_spans has a \"consume\" parameter which will completely\nempty the shared buffers when called by the forwarder.\n\nIf the buffers are regularly full, then we can:\n\n- increase the forwarder's frequency\n\n- increase the shared buffer size\n\n- decrease the amount of generated spans\n\nStatistics are available through pg_tracing_info to keep track of the\nnumber of generated spans and dropped events.\n\nOverhead:\n\n===============\n\nI haven't run benchmarks yet; however, we will have multiple ways to\ncontrol the overhead.\n\nTraces rely heavily on sampling to keep the overhead low: we want extensive\ninformation on representative samples of our queries.\n\nFor now, we leave the sampling decision to the caller (through the sampled\nflag) but I will add an additional parameter to allow additional sampling\nif the rate of traced queries is too high.\n\nStopping tracing when we have a full buffer is also a good safeguard. If an\napplication is misconfigured and starts sampling every query, this will\nstop all tracing and won't impact query performance.\n\nQuery Instrumentation:\n\n===============\n\nI have been able to rely on most of the existing query instrumentation except\nfor one missing piece of information: I needed to know the start of a node as we\ncurrently only have the first tuple and the duration. I added firsttime to\nInstrumentation. This is the only change done outside the extension's code.\n\nCurrent status:\n\n===============\n\nThe current code is able to generate spans; once forwarded to the\ntrace collector, they gave me traces with execution details with a\nsimilar amount of information to what an explain (analyze, buffers) would\nprovide. 
I have provided screenshots of the result.\n\nThere's a lot of work left in cleaning, commenting, handling edge cases and\nadding tests.\n\nThoughts and feedback are welcome.\n\nRegards,\n\nAnthonin Bonnefoy", "msg_date": "Tue, 25 Jul 2023 11:21:55 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi Anthonin,\n\n> I have a working prototype of a pg_tracing extension and wanted some feedback on the design and architecture.\n\nThe patch looks very interesting, thanks for working on it and for\nsharing. The facts that the patch doesn't change the core except for\ntwo lines in instrument.{c.h} and that it uses a pull-based model:\n\n> - Collect the spans through a new pg_tracing_spans() function output\n\n... IMO were the right design decisions. The patch lacks the\ndocumentation, but this is OK for a PoC.\n\nI added the patch to the nearest CF [1]. Let's see what the rest of\nthe community thinks.\n\n[1] https://commitfest.postgresql.org/44/4456/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 25 Jul 2023 13:16:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nThis patch looks very interesting, I'm working on the same subject too. But\nI've used\nanother approach - I'm using C wrapper for C++ API library from\nOpenTelemetry, and\nhandle span storage and output to this library. There are some nuances\nthough, but it\nis possible. 
Have you tried to use OpenTelemetry APIs instead of\nimplementing all\nfunctionality around spans?\n\nOn Tue, Jul 25, 2023 at 1:16 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Anthonin,\n>\n> > I have a working prototype of a pg_tracing extension and wanted some\n> feedback on the design and architecture.\n>\n> The patch looks very interesting, thanks for working on it and for\n> sharing. The facts that the patch doesn't change the core except for\n> two lines in instrument.{c.h} and that is uses pull-based model:\n>\n> > - Collect the spans through a new pg_tracing_spans() function output\n>\n> ... IMO were the right design decisions. The patch lacks the\n> documentation, but this is OK for a PoC.\n>\n> I added the patch to the nearest CF [1]. Let's see what the rest of\n> the community thinks.\n>\n> [1] https://commitfest.postgresql.org/44/4456/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n>\n>\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Wed, 26 Jul 2023 14:40:07 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Nikita,\n\n> This patch looks very interesting, I'm working on the same subject too. But I've used\n> another approach - I'm using C wrapper for C++ API library from OpenTelemetry, and\n> handle span storage and output to this library. There are some nuances though, but it\n> is possible. Have you tried to use OpenTelemetry APIs instead of implementing all\n> functionality around spans?\n\nI don't think that PostgreSQL accepts such kind of C++ code, not to\nmention the fact that the PostgreSQL license is not necessarily\ncompatible with Apache 2.0 (I'm not a lawyer; this is not a legal\nadvice). Such a design decision will probably require using separate\ncompile flags since the user doesn't necessarily have a corresponding\ndependency installed. Similarly to how we do with LLVM, OpenSSL, etc.\n\nSo -1 to the OpenTelemetry C++ library and +1 to the properly licensed\nC implementation without 3rd party dependencies from me. 
Especially\nconsidering the fact that the implementation seems to be rather\nsimple.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 26 Jul 2023 17:11:35 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "I've initially thought of sending the spans from PostgreSQL since this is\nthe usual behavior of tracing libraries.\n\nHowever, this created a lot of potential issues:\n\n- Protocol support and differences between trace collectors. OpenTelemetry\nseems to use gRPC, others are using http and those will require additional\nlibraries (plus gRPC support in C doesn't look good) and any change in\nprotobuf definition would require updating the extension.\n\n- Do we send the spans within the query hooks? This means that we could\nblock the process if the trace collector is slow to answer or we can’t\nconnect. Sending spans from a background process sounded rather complex and\nresource heavy.\n\nMoving to a pull model fixed those issues and felt more natural as this is\nthe way PostgreSQL exposes its metrics.\n\n\nOn Wed, Jul 26, 2023 at 4:11 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Nikita,\n>\n> > This patch looks very interesting, I'm working on the same subject too.\n> But I've used\n> > another approach - I'm using C wrapper for C++ API library from\n> OpenTelemetry, and\n> > handle span storage and output to this library. There are some nuances\n> though, but it\n> > is possible. Have you tried to use OpenTelemetry APIs instead of\n> implementing all\n> > functionality around spans?\n>\n> I don't think that PostgreSQL accepts such kind of C++ code, not to\n> mention the fact that the PostgreSQL license is not necessarily\n> compatible with Apache 2.0 (I'm not a lawyer; this is not a legal\n> advice). 
Such a design decision will probably require using separate\n> compile flags since the user doesn't necessarily have a corresponding\n> dependency installed. Similarly to how we do with LLVM, OpenSSL, etc.\n>\n> So -1 to the OpenTelemetry C++ library and +1 to the properly licensed\n> C implementation without 3rd party dependencies from me. Especially\n> considering the fact that the implementation seems to be rather\n> simple.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>
", "msg_date": "Wed, 26 Jul 2023 17:54:51 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nI've tried to test the extension, but got errors calling make check and\ncannot install the extension.\nI've applied the patch onto current master and configured it as:\n./configure --enable-debug --enable-cassert --enable-depend\n--enable-tap-tests\nCould you please advise if I'm doing something wrong?\nFor make check please see attached log.\nFor installing extension after setting shared preload library I've got an\nerror in Postgres logfile:\n2023-07-27 13:41:52.324 MSK [12738] LOG: database system is shut down\n2023-07-27 13:41:52.404 MSK [14126] FATAL: could not load library\n\"/usr/local/pgsql/lib/pg_tracing.so\": /usr/local/pgsql/lib/pg_tracing.so:\nundefined symbol: get_operation_name\nThank you!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Thu, 27 Jul 2023 13:56:42 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nAlso FYI, there are build warnings because functions\nconst char * get_span_name(const Span * span, const char *qbuffer)\nand\nconst char * get_operation_name(const Span * span, const char *qbuffer)\ndo not have default inside switch 
and no return outside of switch.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Thu, 27 Jul 2023 17:45:57 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\n> Also FYI, there are build warnings because functions\n> const char * get_span_name(const Span * span, const char *qbuffer)\n> and\n> const char * get_operation_name(const Span * span, const char *qbuffer)\n> do not have default inside switch and no return outside of switch.\n\nYou are right, there are a few warnings:\n\n```\n[1566/1887] Compiling C object contrib/pg_tracing/pg_tracing.so.p/span.c.o\n../contrib/pg_tracing/span.c: In function ‘get_span_name’:\n../contrib/pg_tracing/span.c:210:1: warning: control reaches end of\nnon-void function [-Wreturn-type]\n 210 | }\n | ^\n../contrib/pg_tracing/span.c: In function ‘get_operation_name’:\n../contrib/pg_tracing/span.c:249:1: warning: control reaches end of\nnon-void function [-Wreturn-type]\n 249 | }\n | ^\n```\n\nHere is the patch v2 with a quick fix.\n\n> but got errors calling make check and cannot install the extension\n\nAgree, something goes wrong when using Autotools (but not Meson) on\nboth Linux and MacOS. 
I didn't investigate the issue though.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 27 Jul 2023 18:38:52 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nI've fixed the Autotools build, please check patch below (v2).\n\nOn Thu, Jul 27, 2023 at 6:39 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi,\n>\n> > Also FYI, there are build warnings because functions\n> > const char * get_span_name(const Span * span, const char *qbuffer)\n> > and\n> > const char * get_operation_name(const Span * span, const char *qbuffer)\n> > do not have default inside switch and no return outside of switch.\n>\n> You are right, there are a few warnings:\n>\n> ```\n> [1566/1887] Compiling C object contrib/pg_tracing/pg_tracing.so.p/span.c.o\n> ../contrib/pg_tracing/span.c: In function ‘get_span_name’:\n> ../contrib/pg_tracing/span.c:210:1: warning: control reaches end of\n> non-void function [-Wreturn-type]\n> 210 | }\n> | ^\n> ../contrib/pg_tracing/span.c: In function ‘get_operation_name’:\n> ../contrib/pg_tracing/span.c:249:1: warning: control reaches end of\n> non-void function [-Wreturn-type]\n> 249 | }\n> | ^\n> ```\n>\n> Here is the patch v2 with a quick fix.\n>\n> > but got errors calling make check and cannot install the extension\n>\n> Agree, something goes wrong when using Autotools (but not Meson) on\n> both Linux and MacOS. 
I didn't investigate the issue though.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Fri, 28 Jul 2023 09:10:12 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "> Agree, something goes wrong when using Autotools (but not Meson) on\n> both Linux and MacOS. I didn't investigate the issue though.\nI was only using meson and forgot to keep Automake files up to date when\nI've split pg_tracing.c in multiple files (span.c, explain.c...).\n\nOn Fri, Jul 28, 2023 at 8:10 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n>\n> I've fixed the Autotools build, please check patch below (v2).\n>\n> On Thu, Jul 27, 2023 at 6:39 PM Aleksander Alekseev <\n> aleksander@timescale.com> wrote:\n>\n>> Hi,\n>>\n>> > Also FYI, there are build warnings because functions\n>> > const char * get_span_name(const Span * span, const char *qbuffer)\n>> > and\n>> > const char * get_operation_name(const Span * span, const char *qbuffer)\n>> > do not have default inside switch and no return outside of switch.\n>>\n>> You are right, there are a few warnings:\n>>\n>> ```\n>> [1566/1887] Compiling C object contrib/pg_tracing/pg_tracing.so.p/span.c.o\n>> ../contrib/pg_tracing/span.c: In function ‘get_span_name’:\n>> ../contrib/pg_tracing/span.c:210:1: warning: control reaches end of\n>> non-void function [-Wreturn-type]\n>> 210 | }\n>> | ^\n>> ../contrib/pg_tracing/span.c: In function ‘get_operation_name’:\n>> ../contrib/pg_tracing/span.c:249:1: warning: control reaches end of\n>> non-void function [-Wreturn-type]\n>> 249 | }\n>> | ^\n>> ```\n>>\n>> Here is the patch v2 with a quick fix.\n>>\n>> > but got errors calling make check and cannot install the extension\n>>\n>> Agree, something goes wrong when using Autotools (but not 
Meson) on\n>> both Linux and MacOS. I didn't investigate the issue though.\n>>\n>> --\n>> Best regards,\n>> Aleksander Alekseev\n>>\n>\n>\n> --\n> Regards,\n>\n> --\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n>
", "msg_date": "Fri, 28 Jul 2023 08:35:20 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nI'd keep Autotools build up to date, because Meson is very limited in terms\nof not very\nup-to-date OS versions. Anyway, it's OK now. I'm currently playing with the\npatch and\nreviewing sources, if you need any cooperation - please let us know. I'm +1\nfor committing\nthis patch after review.\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Fri, 28 Jul 2023 10:34:56 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi, \n\n> I'd keep Autotools build up to date\nDefinitely. The patch is still in a rough state and updating Autotools fell\nthrough the cracks.\n\n> I'm currently playing with the patch and\n> reviewing sources, if you need any cooperation - please let us know.\nThe goal for me was to get validation on the design and to check if there\nwas anything that would be a blocker before commiting more time on the\nproject.\n\nThere are already several things I was planning on improving:\n- The parse span is something I've added recently. 
I've realised that I\ncould rely on the stmtStartTimestamp to get the time spent in parsing in\nthe post parse hook so the code around this is still rough.\n- There are probably race conditions on the reads and writes of the query\nfile that need to be fixed\n- I'm using the same Span structure for everything while Executor spans\nonly need a small subset which is a bit wasteful. I've tried to use two\ndifferent shared buffers (base spans and spans with counters) but it was\nmaking things more confusing.\n- Finish commenting code and start writing the doc\n\nSo any comments, missing features and reviews on the current state of the\nproject is welcome.\n\nRegards,\nAnthonin\n\nOn Fri, Jul 28, 2023 at 9:35 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n>\n> I'd keep Autotools build up to date, because Meson is very limited in\n> terms of not very\n> up-to-date OS versions. Anyway, it's OK now. I'm currently playing with\n> the patch and\n> reviewing sources, if you need any cooperation - please let us know.\n> I'm +1 for committing\n> this patch after review.\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n>
", "msg_date": "Fri, 28 Jul 2023 13:45:24 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nWhat do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT\nand INSTR_TIME_GET_MILLISEC\nmacros for timing calculations?\n\nAlso, have you thought about a way to trace existing (running) queries\nwithout directly instrumenting them? It would\nbe much more usable for maintenance and support personnel, because in\nproduction environments you rarely could\nchange query text directly. 
For the current state the most simple solution\nis switch tracing on and off by calling SQL\nfunction, and possibly switch tracing for prepared statement the same way,\nbut not for any random query.\n\nI'll check the patch for the race conditions.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Fri, 28 Jul 2023 16:01:44 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "> What do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT\nand INSTR_TIME_GET_MILLISEC\n> macros for timing calculations?\nIf you're talking of the two instances where I'm modifying the instr_time's\nticks, it's because I can't use the macros there.\nThe first case is for the parse span. I only have the start timestamp\nusing GetCurrentStatementStartTimestamp and don't\nhave access to the start instr_time so I need to build the duration from 2\ntimestamps.\nThe second case is when building node spans from the planstate. 
I directly\nhave the duration from Instrumentation.\n\nI guess one fix would be to use an int64 for the span duration to directly\nstore nanoseconds instead of an instr_time\nbut I do use the instrumentation macros outside of those two cases to get\nthe duration of other spans.\n\n> Also, have you thought about a way to trace existing (running) queries\nwithout directly instrumenting them?\nThat's a good point. I was focusing on leaving the sampling decision to the\ncaller through the sampled flag and\nonly recently added the pg_tracing_sample_rate parameter to give more\ncontrol. It should be straightforward to\nadd an option to create standalone traces based on sample rate alone. This\nway, setting the sample rate to 1\nwould force the queries running in the session to be traced.\n\n\nOn Fri, Jul 28, 2023 at 3:02 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi!\n>\n> What do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT\n> and INSTR_TIME_GET_MILLISEC\n> macros for timing calculations?\n>\n> Also, have you thought about a way to trace existing (running) queries\n> without directly instrumenting them? It would\n> be much more usable for maintenance and support personnel, because in\n> production environments you rarely could\n> change query text directly. For the current state the most simple solution\n> is switch tracing on and off by calling SQL\n> function, and possibly switch tracing for prepared statement the same way,\n> but not for any random query.\n>\n> I'll check the patch for the race conditions.\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n>\n\n> What do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT and INSTR_TIME_GET_MILLISEC> macros for timing calculations?If you're talking of the two instances where I'm modifying the instr_time's ticks, it's because I can't use the macros there.The first case is for the parse span. 
I only have the start timestamp using GetCurrentStatementStartTimestamp and don't have access to the start instr_time so I need to build the duration from 2 timestamps.The second case is when building node spans from the planstate. I directly have the duration from Instrumentation.I guess one fix would be to use an int64 for the span duration to directly store nanoseconds instead of an instr_time but I do use the instrumentation macros outside of those two cases to get the duration of other spans.> Also, have you thought about a way to trace existing (running) queries without directly instrumenting them?That's a good point. I was focusing on leaving the sampling decision to the caller through the sampled flag and only recently added the pg_tracing_sample_rate parameter to give more control. It should be straightforward to add an option to create standalone traces based on sample rate alone. This way, setting the sample rate to 1 would force the queries running in the session to be traced.On Fri, Jul 28, 2023 at 3:02 PM Nikita Malakhov <hukutoc@gmail.com> wrote:Hi!What do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT and INSTR_TIME_GET_MILLISECmacros for timing calculations?Also, have you thought about a way to trace existing (running) queries without directly instrumenting them? It wouldbe much more usable for maintenance and support personnel, because in production environments you rarely couldchange query text directly. 
For the current state the most simple solution is switch tracing on and off by calling SQLfunction, and possibly switch tracing for prepared statement the same way, but not for any random query.I'll check the patch for the race conditions.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Fri, 28 Jul 2023 16:06:20 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nHere's a new patch with changes from the previous discussion:\n- I'm now directly storing nanoseconds duration in the span instead of the\ninstr_time. Using the instr_time macros was a bit awkward as the\ndurations I generate don't necessarily have a starting and ending\ninstr_time.\nMoving to straight nanoseconds made things clearer from my point of view.\n- I've added an additional sample rate pg_tracing.sample_rate (on top of\nthe pg_tracing.caller_sample_rate). This one will allow queries to be\nsampled even without trace information propagated from the caller.\nSetting this sample rate to 1 will basically trace everything. For now,\nthis will only work when going through the post parse hook. I will add\nsupport for prepared statements and cached plans for the next patch.\n- I've improved how parse spans are created. It's a bit challenging to get\nthe start of a parse as there's no pre parse hook or instrumentation around\nparse so it's only an estimation.\n\nRegards,\nAnthonin\n\nOn Fri, Jul 28, 2023 at 4:06 PM Anthonin Bonnefoy <\nanthonin.bonnefoy@datadoghq.com> wrote:\n\n> > What do you think about using INSTR_TIME_SET_CURRENT,\n> INSTR_TIME_SUBTRACT and INSTR_TIME_GET_MILLISEC\n> > macros for timing calculations?\n> If you're talking of the two instances where I'm modifying the\n> instr_time's ticks, it's because I can't use the macros there.\n> The first case is for the parse span. 
I only have the start timestamp\n> using GetCurrentStatementStartTimestamp and don't\n> have access to the start instr_time so I need to build the duration from 2\n> timestamps.\n> The second case is when building node spans from the planstate. I directly\n> have the duration from Instrumentation.\n>\n> I guess one fix would be to use an int64 for the span duration to directly\n> store nanoseconds instead of an instr_time\n> but I do use the instrumentation macros outside of those two cases to get\n> the duration of other spans.\n>\n> > Also, have you thought about a way to trace existing (running) queries\n> without directly instrumenting them?\n> That's a good point. I was focusing on leaving the sampling decision to\n> the caller through the sampled flag and\n> only recently added the pg_tracing_sample_rate parameter to give more\n> control. It should be straightforward to\n> add an option to create standalone traces based on sample rate alone. This\n> way, setting the sample rate to 1\n> would force the queries running in the session to be traced.\n>\n>\n> On Fri, Jul 28, 2023 at 3:02 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n>> Hi!\n>>\n>> What do you think about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT\n>> and INSTR_TIME_GET_MILLISEC\n>> macros for timing calculations?\n>>\n>> Also, have you thought about a way to trace existing (running) queries\n>> without directly instrumenting them? It would\n>> be much more usable for maintenance and support personnel, because in\n>> production environments you rarely could\n>> change query text directly. 
For the current state the most simple\n>> solution is switch tracing on and off by calling SQL\n>> function, and possibly switch tracing for prepared statement the same\n>> way, but not for any random query.\n>>\n>> I'll check the patch for the race conditions.\n>>\n>> --\n>> Regards,\n>> Nikita Malakhov\n>> Postgres Professional\n>> The Russian Postgres Company\n>> https://postgrespro.ru/\n>>\n>", "msg_date": "Tue, 1 Aug 2023 15:19:41 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nThanks for the improvements!\n>Here's a new patch with changes from the previous discussion:\n>- I'm now directly storing nanoseconds duration in the span instead of the\ninstr_time. Using the instr_time macros was a bit awkward as the\ndurations I generate don't necessarily have a starting and ending\ninstr_time.\n>Moving to straight nanoseconds made things clearer from my point of view.\nCool. It could be easily casted to ms for the user.\n\n>- I've added an additional sample rate pg_tracing.sample_rate (on top of\nthe pg_tracing.caller_sample_rate). This one will allow queries to be\nsampled even without trace information propagated from the caller.\n>Setting this sample rate to 1 will basically trace everything. For now,\nthis will only work when going through the post parse hook. I will add\nsupport for prepared statements and cached plans for the next patch.\nCool, I've just made the same improvement and wanted to send a patch a bit\nlater after tests.\n\n>- I've improved how parse spans are created. 
It's a bit challenging to get\nthe start of a parse as there's no pre parse hook or instrumentation around\nparse so it's only an estimation.\nI've also added a query id field to span and made a table and an sql\nfunction that flushes spans to this table instead of returning set or\nrecords - it is more convenient for the maintenance to query the table.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Thanks for the improvements!>Here's a new patch with changes from the previous discussion: >- I'm now directly storing nanoseconds duration in the span instead of the instr_time. Using the instr_time macros was a bit awkward as the durations I generate don't necessarily have a starting and ending instr_time. >Moving to straight nanoseconds made things clearer from my point of view.Cool. It could be easily casted to ms for the user.>- I've added an additional sample rate pg_tracing.sample_rate (on top of the pg_tracing.caller_sample_rate). This one will allow queries to be sampled even without trace information propagated from the caller. >Setting this sample rate to 1 will basically trace everything. For now, this will only work when going through the post parse hook. I will add support for prepared statements and cached plans for the next patch.Cool, I've just made the same improvement and wanted to send a patch a bit later after tests.>- I've improved how parse spans are created. 
It's a bit challenging to get the start of a parse as there's no pre parse hook or instrumentation around parse so it's only an estimation.I've also added a query id field to span and made a table and an sql function that flushes spans to this table instead of returning set or records - it is more convenient for the maintenance to query the table.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Tue, 1 Aug 2023 18:47:28 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nPlease check some suggested improvements -\n1) query_id added so span to be able to join it with pg_stat_activity and\npg_stat_statements;\n2) table for storing spans added, to flush spans buffer, for maintenance\nreasons - to keep track of spans,\nwith SQL function that flushes buffer into table instead of recordset;\n3) added setter function for sampling_rate GUC to tweak it on-the-fly\nwithout restart.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Thu, 3 Aug 2023 22:13:18 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\n> 1) query_id added so span to be able to join it with pg_stat_activity and\npg_stat_statements;\nSounds good, I've added your changes with my code.\n\n> 2) table for storing spans added, to flush spans buffer\nI'm not sure about this. It means that this is something that would only be\navailable on primary as replicas won't be able\nto write data in the table. 
It will also make version updates and\nmigrations much more complex and I haven't seen a similar\npattern on other extensions.\n\n> 3) added setter function for sampling_rate GUC to tweak it on-the-fly\nwithout restart\nok, I've added this in my branch.\n\nOn my side, I've made the following changes:\n1) All spans are now kept in palloced buffers and only added during\nend_tracing. This way, we limit the shared_spans lock.\n2) I've added a pg_tracing.drop_on_full_buffer parameter to drop all spans\nwhen the buffer is full. This could be useful to always keep\nthe latest spans when the consuming app is not fast enough. This is also\nuseful for testing.\n3) I'm testing more complex queries. Most of my previous tests were using\nsimple query protocol but extended protocol introduces\ndifferences that break some assumptions I did. For example, with multi\nstatement transaction like\nBEGIN;\nSELECT 1;\nSELECT 2;\nThe parse of SELECT 2 will happen before the ExecutorEnd (and the\nend_tracing) of SELECT 1. 
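To make that overlap concrete, here is a toy self-contained model of the hook ordering under the extended protocol. The hook names, the ongoing flag and the counters are all made up for illustration; this is not the extension's actual code.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: ongoing is true between ExecutorStart and ExecutorEnd of a
 * traced statement (hypothetical state, for illustration only). */
static bool ongoing = false;
static int  spans_started = 0;
static int  parses_skipped = 0;

static void post_parse_hook(void)
{
    if (ongoing)
    {
        /* The previous statement has not finished yet; starting a new
         * top-level span here would overlap two "root" spans. Skip. */
        parses_skipped++;
        return;
    }
    spans_started++;
}

static void executor_start_hook(void) { ongoing = true; }
static void executor_end_hook(void)   { ongoing = false; }

/* Extended-protocol ordering for BEGIN; SELECT 1; SELECT 2;
 * the parse of the next statement arrives before the previous statement's
 * portal is dropped, and dropping the portal is what calls ExecutorEnd. */
static void run_extended_protocol(void)
{
    post_parse_hook();        /* parse SELECT 1 */
    executor_start_hook();    /* execute SELECT 1 */
    post_parse_hook();        /* parse SELECT 2: ExecutorEnd not called yet */
    executor_end_hook();      /* bind of SELECT 2 drops SELECT 1's portal */
    executor_start_hook();    /* execute SELECT 2 */
    executor_end_hook();
}
```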
For now, I'm skipping the post parse\nhook if we still have an ongoing tracing.\nI've also started running https://github.com/anse1/sqlsmith on a db with\nfull sample and it's currently failing some assertions and I'm\nworking to fix those.\n\n\nOn Thu, Aug 3, 2023 at 9:13 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi!\n>\n> Please check some suggested improvements -\n> 1) query_id added so span to be able to join it with pg_stat_activity and\n> pg_stat_statements;\n> 2) table for storing spans added, to flush spans buffer, for maintenance\n> reasons - to keep track of spans,\n> with SQL function that flushes buffer into table instead of recordset;\n> 3) added setter function for sampling_rate GUC to tweak it on-the-fly\n> without restart.\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/\n>", "msg_date": "Wed, 9 Aug 2023 10:34:41 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nStoring spans is a tricky question. Monitoring systems often use pull model\nand use probes or SQL\nAPI for pulling data, so from this point of view it is much more convenient\nto keep spans in separate\ntable. But in this case we come to another issue - how to flush this data\ninto the table? Automatic\nflushing on a full buffer would randomly (to the user) significantly affect\nquery performance. I'd rather\nmake a GUC turned off by default to store spans into the table instead of\nbuffer.\n\n>3) I'm testing more complex queries. Most of my previous tests were using\nsimple query protocol but extended protocol introduces\n>differences that break some assumptions I did. For example, with multi\nstatement transaction like\n>BEGIN;\n>SELECT 1;\n>SELECT 2;\n>The parse of SELECT 2 will happen before the ExecutorEnd (and the\nend_tracing) of SELECT 1. 
For now, I'm skipping the post parse\n>hook if we still have an ongoing tracing.\n\nI've checked this behavior before and haven't noticed the case you\nmentioned, but for\nloops like\n for i in 1..2 loop\n StartTime := clock_timestamp();\n insert into t2 values (i, a_data);\n EndTime := clock_timestamp();\n Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from\nStartTime) );\n end loop;\n\nI've got the following call sequence:\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <1>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <1>\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <2>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <2>\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <StartTime := clock_timestamp()>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook\n<StartTime := clock_timestamp()>\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun\n <insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <EndTime := clock_timestamp()>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook\n<EndTime := clock_timestamp()>\npsql:/home/postgres/tests/trace.sql:52: NOTICE:\n pg_tracing_post_parse_analyze hook <Delta := 
1000 * ( extract(epoch from\nEndTime) - extract(epoch from StartTime) )>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook\n<Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from\nStartTime) )>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun\n <insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish\n<insert into t2 values (i, a_data)>\npsql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd\n<insert into t2 values (i, a_data)>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQLAPI for pulling data, so from this point of view it is much more convenient to keep spans in separatetable. But in this case we come to another issue - how to flush this data into the table? Automaticflushing on a full buffer would randomly (to the user) significantly affect query performance. I'd rathermake a GUC turned off by default to store spans into the table instead of buffer.>3) I'm testing more complex queries. Most of my previous tests were using simple query protocol but extended protocol introduces>differences that break some assumptions I did. For example, with multi statement transaction like>BEGIN;>SELECT 1;>SELECT 2;>The parse of SELECT 2 will happen before the ExecutorEnd (and the end_tracing) of SELECT 1. 
For now, I'm skipping the post parse >hook if we still have an ongoing tracing.I've checked this behavior before and haven't noticed the case you mentioned, but forloops like        for i in 1..2 loop                StartTime := clock_timestamp();        insert into t2 values (i, a_data);        EndTime := clock_timestamp();        Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );        end loop;I've got the following call sequence:psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <1>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <1>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <2>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <2>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <StartTime := clock_timestamp()>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <StartTime := clock_timestamp()>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorRun  <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <EndTime := clock_timestamp()>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <EndTime := clock_timestamp()>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_post_parse_analyze hook <Delta := 1000 * ( 
extract(epoch from EndTime) - extract(epoch from StartTime) )>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_planner_hook <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorRun  <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>psql:/home/postgres/tests/trace.sql:52: NOTICE:  pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Mon, 14 Aug 2023 12:17:04 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\n> Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> table. But in this case we come to another issue - how to flush this data into the table? Automatic\n> flushing on a full buffer would randomly (to the user) significantly affect query performance. 
I'd rather\n> make a GUC turned off by default to store spans into the table instead of buffer.\n\nMy main issue with having the extension flush spans in a table is that\nit will only work on primaries.\nReplicas won't be able to do this as they can't write data on tables\nand having this flush function is likely going to introduce more\nconfusion if it only works on primaries.\n\n From my point of view, processing the spans should be done by an\nexternal application.Similar to my initial example which forward spans\nto a trace collector\n(https://gist.github.com/bonnefoa/6ed24520bdac026d6a6a6992d308bd50#file-main-go),\nthis application could instead store spans in a dedicated table. This\nway, the creation of the table is outside pg_tracing's scope and the\nspan_store application will be able to store spans on replicas and\nprimaries.\nHow frequently this application should pull data to avoid full buffer\nand dropping spans is a tricky part. We have stats so we can monitor\nif drops are happening and adjust the spans buffer size or increase\nthe application's pull frequency. Another possibility (though I'm not\nfamiliar with that) could be to use notifications. The application\nwill listen to a channel and pg_tracing will notify when a\nconfigurable threshold is reached (i.e.: if the buffer reaches 50%,\nsend a notification).\n\n> I've checked this behavior before and haven't noticed the case you mentioned, but for\n> loops like\n> for i in 1..2 loop\n> StartTime := clock_timestamp();\n> insert into t2 values (i, a_data);\n> EndTime := clock_timestamp();\n> Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> end loop;\n\nWas this run through psql? 
In this case, you would be using simple\nprotocol which always drops the portal at the end of the statement and\nis not triggering the issue.\n\nWith extended protocol and a multi statement transactions, I have the\nfollowing backtrace:\n* frame #0: 0x0000000104581318\npostgres`ExecutorEnd(queryDesc=0x00000001420083b0) at execMain.c:471:6\n frame #1: 0x00000001044e1644\npostgres`PortalCleanup(portal=0x0000000152031d00) at\nportalcmds.c:299:4\n frame #2: 0x0000000104b0e77c\npostgres`PortalDrop(portal=0x0000000152031d00, isTopCommit=false) at\nportalmem.c:503:3\n frame #3: 0x0000000104b0e3e8 postgres`CreatePortal(name=\"\",\nallowDup=true, dupSilent=true) at portalmem.c:194:3\n frame #4: 0x000000010487a308\npostgres`exec_bind_message(input_message=0x000000016bb4b398) at\npostgres.c:1745:12\n frame #5: 0x000000010487846c\npostgres`PostgresMain(dbname=\"postgres\", username=\"postgres\") at\npostgres.c:4685:5\n frame #6: 0x0000000104773144\npostgres`BackendRun(port=0x0000000141704080) at postmaster.c:4433:2\n frame #7: 0x000000010477044c\npostgres`BackendStartup(port=0x0000000141704080) at\npostmaster.c:4161:3\n frame #8: 0x000000010476d6fc postgres`ServerLoop at postmaster.c:1778:6\n frame #9: 0x000000010476c260 postgres`PostmasterMain(argc=3,\nargv=0x0000600001cf72c0) at postmaster.c:1462:11\n frame #10: 0x0000000104625ca8 postgres`main(argc=3,\nargv=0x0000600001cf72c0) at main.c:198:3\n frame #11: 0x00000001a61dbf28 dyld`start + 2236\n\nAt this point, the new statement is already parsed and it's only\nduring the bind that the previous' statement portal is dropped and\nExecutorEnd is called. Thus, we have overlapping statements which are\ntricky to handle.\n\nOn my side, I've made the following changes:\nSkip nodes that are not executed\nFix query_id propagation, it should now be associated with all the\nspans of a query when available\nI've forgotten to add the spans buffer in the shmem hook so it would\ncrash when pg_tracing.max_span was too high. 
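As a side note on sizing, the shared spans buffer has to be reserved up front in the shmem hook, so its size is just arithmetic over the configured span count. A rough self-contained sketch (the 320-byte span size comes from this thread; the function name and the 128-byte header are made-up placeholders):

```c
#include <assert.h>
#include <stddef.h>

/* Back-of-envelope sizing for the shared spans buffer (illustrative only;
 * the real extension computes this in its shmem request hook). */
#define SPAN_SIZE 320           /* approximate sizeof(Span) quoted in the thread */

static size_t
spans_buffer_size(size_t max_spans)
{
    /* A fixed header (stats, bookkeeping) plus the span array itself;
     * 128 bytes for the header is a placeholder. */
    return 128 + max_spans * (size_t) SPAN_SIZE;
}
```

With these numbers, 50000 spans come out at roughly 15 MiB of shared memory.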
Now it's fixed and we can\nrequest more than 2000 spans. Currently, the size of Span is 320 bytes\nso 50000 will take ~15MB of memory.\nI've added the subplan/init plan spans and started experimenting\ndeparsing the plan to add more details on the nodes. Instead of\n'NestedLoop', we will have something 'NestedLoop|Join Filter : (oid =\nrelnamespace)'. One consequence is that it will create a high number\nof different operations which is something we want to avoid with trace\ncollectors. I'm probably gonna move this outside the operation name or\nfind a way to make this more stable.\n\nI've started initial benchmarks and profiling. On a default\ninstallation and running a default `pgbench -T15`, I have the\nfollowing:\nWithout pg_tracing: tps = 953.118759 (without initial connection time)\nWith pg_tracing: tps = 652.953858 (without initial connection time)\nMost of the time is spent process_query_desc. I haven't tried to\noptimize performances yet so there's probably a lot of room for\nimprovement.\n\n\nOn Mon, Aug 14, 2023 at 11:17 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n>\n> Hi!\n>\n> Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> table. But in this case we come to another issue - how to flush this data into the table? Automatic\n> flushing on a full buffer would randomly (to the user) significantly affect query performance. I'd rather\n> make a GUC turned off by default to store spans into the table instead of buffer.\n>\n> >3) I'm testing more complex queries. Most of my previous tests were using simple query protocol but extended protocol introduces\n> >differences that break some assumptions I did. For example, with multi statement transaction like\n> >BEGIN;\n> >SELECT 1;\n> >SELECT 2;\n> >The parse of SELECT 2 will happen before the ExecutorEnd (and the end_tracing) of SELECT 1. 
For now, I'm skipping the post parse\n> >hook if we still have an ongoing tracing.\n>\n> I've checked this behavior before and haven't noticed the case you mentioned, but for\n> loops like\n> for i in 1..2 loop\n> StartTime := clock_timestamp();\n> insert into t2 values (i, a_data);\n> EndTime := clock_timestamp();\n> Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> end loop;\n>\n> I've got the following call sequence:\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <1>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <1>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <2>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <2>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <StartTime := clock_timestamp()>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <StartTime := clock_timestamp()>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <EndTime := clock_timestamp()>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <EndTime := clock_timestamp()>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: 
pg_tracing_post_parse_analyze hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> The Russian Postgres Company\n> https://postgrespro.ru/", "msg_date": "Wed, 30 Aug 2023 11:09:32 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "And the latest version of the patch that I've forgotten in the previous message.\n\nOn Wed, Aug 30, 2023 at 11:09 AM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> Hi!\n>\n> > Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> > API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> > table. But in this case we come to another issue - how to flush this data into the table? Automatic\n> > flushing on a full buffer would randomly (to the user) significantly affect query performance. 
I'd rather\n> > make a GUC turned off by default to store spans into the table instead of buffer.\n>\n> My main issue with having the extension flush spans in a table is that\n> it will only work on primaries.\n> Replicas won't be able to do this as they can't write data on tables\n> and having this flush function is likely going to introduce more\n> confusion if it only works on primaries.\n>\n> From my point of view, processing the spans should be done by an\n> external application.Similar to my initial example which forward spans\n> to a trace collector\n> (https://gist.github.com/bonnefoa/6ed24520bdac026d6a6a6992d308bd50#file-main-go),\n> this application could instead store spans in a dedicated table. This\n> way, the creation of the table is outside pg_tracing's scope and the\n> span_store application will be able to store spans on replicas and\n> primaries.\n> How frequently this application should pull data to avoid full buffer\n> and dropping spans is a tricky part. We have stats so we can monitor\n> if drops are happening and adjust the spans buffer size or increase\n> the application's pull frequency. Another possibility (though I'm not\n> familiar with that) could be to use notifications. The application\n> will listen to a channel and pg_tracing will notify when a\n> configurable threshold is reached (i.e.: if the buffer reaches 50%,\n> send a notification).\n>\n> > I've checked this behavior before and haven't noticed the case you mentioned, but for\n> > loops like\n> > for i in 1..2 loop\n> > StartTime := clock_timestamp();\n> > insert into t2 values (i, a_data);\n> > EndTime := clock_timestamp();\n> > Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> > end loop;\n>\n> Was this run through psql? 
In this case, you would be using simple\n> protocol which always drops the portal at the end of the statement and\n> is not triggering the issue.\n>\n> With extended protocol and a multi statement transactions, I have the\n> following backtrace:\n> * frame #0: 0x0000000104581318\n> postgres`ExecutorEnd(queryDesc=0x00000001420083b0) at execMain.c:471:6\n> frame #1: 0x00000001044e1644\n> postgres`PortalCleanup(portal=0x0000000152031d00) at\n> portalcmds.c:299:4\n> frame #2: 0x0000000104b0e77c\n> postgres`PortalDrop(portal=0x0000000152031d00, isTopCommit=false) at\n> portalmem.c:503:3\n> frame #3: 0x0000000104b0e3e8 postgres`CreatePortal(name=\"\",\n> allowDup=true, dupSilent=true) at portalmem.c:194:3\n> frame #4: 0x000000010487a308\n> postgres`exec_bind_message(input_message=0x000000016bb4b398) at\n> postgres.c:1745:12\n> frame #5: 0x000000010487846c\n> postgres`PostgresMain(dbname=\"postgres\", username=\"postgres\") at\n> postgres.c:4685:5\n> frame #6: 0x0000000104773144\n> postgres`BackendRun(port=0x0000000141704080) at postmaster.c:4433:2\n> frame #7: 0x000000010477044c\n> postgres`BackendStartup(port=0x0000000141704080) at\n> postmaster.c:4161:3\n> frame #8: 0x000000010476d6fc postgres`ServerLoop at postmaster.c:1778:6\n> frame #9: 0x000000010476c260 postgres`PostmasterMain(argc=3,\n> argv=0x0000600001cf72c0) at postmaster.c:1462:11\n> frame #10: 0x0000000104625ca8 postgres`main(argc=3,\n> argv=0x0000600001cf72c0) at main.c:198:3\n> frame #11: 0x00000001a61dbf28 dyld`start + 2236\n>\n> At this point, the new statement is already parsed and it's only\n> during the bind that the previous' statement portal is dropped and\n> ExecutorEnd is called. 
Thus, we have overlapping statements which are\n> tricky to handle.\n>\n> On my side, I've made the following changes:\n> Skip nodes that are not executed\n> Fix query_id propagation, it should now be associated with all the\n> spans of a query when available\n> I've forgotten to add the spans buffer in the shmem hook so it would\n> crash when pg_tracing.max_span was too high. Now it's fixed and we can\n> request more than 2000 spans. Currently, the size of Span is 320 bytes\n> so 50000 will take ~15MB of memory.\n> I've added the subplan/init plan spans and started experimenting\n> deparsing the plan to add more details on the nodes. Instead of\n> 'NestedLoop', we will have something 'NestedLoop|Join Filter : (oid =\n> relnamespace)'. One consequence is that it will create a high number\n> of different operations which is something we want to avoid with trace\n> collectors. I'm probably gonna move this outside the operation name or\n> find a way to make this more stable.\n>\n> I've started initial benchmarks and profiling. On a default\n> installation and running a default `pgbench -T15`, I have the\n> following:\n> Without pg_tracing: tps = 953.118759 (without initial connection time)\n> With pg_tracing: tps = 652.953858 (without initial connection time)\n> Most of the time is spent process_query_desc. I haven't tried to\n> optimize performances yet so there's probably a lot of room for\n> improvement.\n>\n>\n> On Mon, Aug 14, 2023 at 11:17 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> >\n> > Hi!\n> >\n> > Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> > API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> > table. But in this case we come to another issue - how to flush this data into the table? Automatic\n> > flushing on a full buffer would randomly (to the user) significantly affect query performance. 
I'd rather\n> > make a GUC turned off by default to store spans into the table instead of buffer.\n> >\n> > >3) I'm testing more complex queries. Most of my previous tests were using simple query protocol but extended protocol introduces\n> > >differences that break some assumptions I did. For example, with multi statement transaction like\n> > >BEGIN;\n> > >SELECT 1;\n> > >SELECT 2;\n> > >The parse of SELECT 2 will happen before the ExecutorEnd (and the end_tracing) of SELECT 1. For now, I'm skipping the post parse\n> > >hook if we still have an ongoing tracing.\n> >\n> > I've checked this behavior before and haven't noticed the case you mentioned, but for\n> > loops like\n> > for i in 1..2 loop\n> > StartTime := clock_timestamp();\n> > insert into t2 values (i, a_data);\n> > EndTime := clock_timestamp();\n> > Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> > end loop;\n> >\n> > I've got the following call sequence:\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <1>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <1>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <2>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <2>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <StartTime := clock_timestamp()>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <StartTime := clock_timestamp()>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, 
a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <EndTime := clock_timestamp()>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <EndTime := clock_timestamp()>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n> >\n> > --\n> > Regards,\n> > Nikita Malakhov\n> > Postgres Professional\n> > The Russian Postgres Company\n> > https://postgrespro.ru/", "msg_date": "Wed, 30 Aug 2023 11:15:31 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nI've been working on performance improvements and features.\nPerformance improvements:\n- I was calling select_rtable_names_for_explain on every\nExplainTargetRel which was rather costly and unnecessary. 
It's now\nonly done once.\n- File writes are now batched and only a single file write is done at\nthe end of tracing.\n\nWith those improvements, pgbench now yields the following results:\n0% tracing:\npgbench -T60 \"options=-cpg_tracing.sample_rate=0\"\nnumber of transactions actually processed: 57293\ntps = 954.915403 (without initial connection time)\n\n100% tracing:\npgbench -T60 \"options=-cpg_tracing.sample_rate=1\"\nnumber of transactions actually processed: 45963\ntps = 766.068564 (without initial connection time)\n\nI’ve also added tracing of parallel workers. If a query needs parallel\nworkers, the trace context is pushed in a specific shared buffer so\nworkers can pull trace context and generate spans with the correct\ntraceId and parentId. We are now able to see the start, run and finish\nof workers (see image).\nI'm currently focusing on adding more tests, comments, fixing TODOs\nand known bugs.\n\nRegards,\nAnthonin\n\nOn Wed, Aug 30, 2023 at 11:15 AM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> And the latest version of the patch that I've forgotten in the previous message.\n>\n> On Wed, Aug 30, 2023 at 11:09 AM Anthonin Bonnefoy\n> <anthonin.bonnefoy@datadoghq.com> wrote:\n> >\n> > Hi!\n> >\n> > > Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> > > API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> > > table. But in this case we come to another issue - how to flush this data into the table? Automatic\n> > > flushing on a full buffer would randomly (to the user) significantly affect query performance. 
I'd rather\n> > > make a GUC turned off by default to store spans into the table instead of buffer.\n> >\n> > My main issue with having the extension flush spans in a table is that\n> > it will only work on primaries.\n> > Replicas won't be able to do this as they can't write data on tables\n> > and having this flush function is likely going to introduce more\n> > confusion if it only works on primaries.\n> >\n> > From my point of view, processing the spans should be done by an\n> > external application.Similar to my initial example which forward spans\n> > to a trace collector\n> > (https://gist.github.com/bonnefoa/6ed24520bdac026d6a6a6992d308bd50#file-main-go),\n> > this application could instead store spans in a dedicated table. This\n> > way, the creation of the table is outside pg_tracing's scope and the\n> > span_store application will be able to store spans on replicas and\n> > primaries.\n> > How frequently this application should pull data to avoid full buffer\n> > and dropping spans is a tricky part. We have stats so we can monitor\n> > if drops are happening and adjust the spans buffer size or increase\n> > the application's pull frequency. Another possibility (though I'm not\n> > familiar with that) could be to use notifications. The application\n> > will listen to a channel and pg_tracing will notify when a\n> > configurable threshold is reached (i.e.: if the buffer reaches 50%,\n> > send a notification).\n> >\n> > > I've checked this behavior before and haven't noticed the case you mentioned, but for\n> > > loops like\n> > > for i in 1..2 loop\n> > > StartTime := clock_timestamp();\n> > > insert into t2 values (i, a_data);\n> > > EndTime := clock_timestamp();\n> > > Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> > > end loop;\n> >\n> > Was this run through psql? 
In this case, you would be using simple\n> > protocol which always drops the portal at the end of the statement and\n> > is not triggering the issue.\n> >\n> > With extended protocol and a multi statement transactions, I have the\n> > following backtrace:\n> > * frame #0: 0x0000000104581318\n> > postgres`ExecutorEnd(queryDesc=0x00000001420083b0) at execMain.c:471:6\n> > frame #1: 0x00000001044e1644\n> > postgres`PortalCleanup(portal=0x0000000152031d00) at\n> > portalcmds.c:299:4\n> > frame #2: 0x0000000104b0e77c\n> > postgres`PortalDrop(portal=0x0000000152031d00, isTopCommit=false) at\n> > portalmem.c:503:3\n> > frame #3: 0x0000000104b0e3e8 postgres`CreatePortal(name=\"\",\n> > allowDup=true, dupSilent=true) at portalmem.c:194:3\n> > frame #4: 0x000000010487a308\n> > postgres`exec_bind_message(input_message=0x000000016bb4b398) at\n> > postgres.c:1745:12\n> > frame #5: 0x000000010487846c\n> > postgres`PostgresMain(dbname=\"postgres\", username=\"postgres\") at\n> > postgres.c:4685:5\n> > frame #6: 0x0000000104773144\n> > postgres`BackendRun(port=0x0000000141704080) at postmaster.c:4433:2\n> > frame #7: 0x000000010477044c\n> > postgres`BackendStartup(port=0x0000000141704080) at\n> > postmaster.c:4161:3\n> > frame #8: 0x000000010476d6fc postgres`ServerLoop at postmaster.c:1778:6\n> > frame #9: 0x000000010476c260 postgres`PostmasterMain(argc=3,\n> > argv=0x0000600001cf72c0) at postmaster.c:1462:11\n> > frame #10: 0x0000000104625ca8 postgres`main(argc=3,\n> > argv=0x0000600001cf72c0) at main.c:198:3\n> > frame #11: 0x00000001a61dbf28 dyld`start + 2236\n> >\n> > At this point, the new statement is already parsed and it's only\n> > during the bind that the previous' statement portal is dropped and\n> > ExecutorEnd is called. 
Thus, we have overlapping statements which are\n> > tricky to handle.\n> >\n> > On my side, I've made the following changes:\n> > Skip nodes that are not executed\n> > Fix query_id propagation, it should now be associated with all the\n> > spans of a query when available\n> > I've forgotten to add the spans buffer in the shmem hook so it would\n> > crash when pg_tracing.max_span was too high. Now it's fixed and we can\n> > request more than 2000 spans. Currently, the size of Span is 320 bytes\n> > so 50000 will take ~15MB of memory.\n> > I've added the subplan/init plan spans and started experimenting\n> > deparsing the plan to add more details on the nodes. Instead of\n> > 'NestedLoop', we will have something 'NestedLoop|Join Filter : (oid =\n> > relnamespace)'. One consequence is that it will create a high number\n> > of different operations which is something we want to avoid with trace\n> > collectors. I'm probably gonna move this outside the operation name or\n> > find a way to make this more stable.\n> >\n> > I've started initial benchmarks and profiling. On a default\n> > installation and running a default `pgbench -T15`, I have the\n> > following:\n> > Without pg_tracing: tps = 953.118759 (without initial connection time)\n> > With pg_tracing: tps = 652.953858 (without initial connection time)\n> > Most of the time is spent process_query_desc. I haven't tried to\n> > optimize performances yet so there's probably a lot of room for\n> > improvement.\n> >\n> >\n> > On Mon, Aug 14, 2023 at 11:17 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> > >\n> > > Hi!\n> > >\n> > > Storing spans is a tricky question. Monitoring systems often use pull model and use probes or SQL\n> > > API for pulling data, so from this point of view it is much more convenient to keep spans in separate\n> > > table. But in this case we come to another issue - how to flush this data into the table? 
Automatic\n> > > flushing on a full buffer would randomly (to the user) significantly affect query performance. I'd rather\n> > > make a GUC turned off by default to store spans into the table instead of buffer.\n> > >\n> > > >3) I'm testing more complex queries. Most of my previous tests were using simple query protocol but extended protocol introduces\n> > > >differences that break some assumptions I did. For example, with multi statement transaction like\n> > > >BEGIN;\n> > > >SELECT 1;\n> > > >SELECT 2;\n> > > >The parse of SELECT 2 will happen before the ExecutorEnd (and the end_tracing) of SELECT 1. For now, I'm skipping the post parse\n> > > >hook if we still have an ongoing tracing.\n> > >\n> > > I've checked this behavior before and haven't noticed the case you mentioned, but for\n> > > loops like\n> > > for i in 1..2 loop\n> > > StartTime := clock_timestamp();\n> > > insert into t2 values (i, a_data);\n> > > EndTime := clock_timestamp();\n> > > Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );\n> > > end loop;\n> > >\n> > > I've got the following call sequence:\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <1>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <1>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <2>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <2>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <StartTime := clock_timestamp()>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <StartTime := clock_timestamp()>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: 
NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <EndTime := clock_timestamp()>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <EndTime := clock_timestamp()>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_post_parse_analyze hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) )>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_planner_hook <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorStart <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorRun <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorFinish <insert into t2 values (i, a_data)>\n> > > psql:/home/postgres/tests/trace.sql:52: NOTICE: pg_tracing_ExecutorEnd <insert into t2 values (i, a_data)>\n> > >\n> > > --\n> > > Regards,\n> > > Nikita Malakhov\n> > > Postgres Professional\n> > > The Russian Postgres Company\n> > > https://postgrespro.ru/", "msg_date": "Wed, 6 Sep 2023 15:43:36 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nHere's a new version with a good batch of 
changes.\n\nRenaming/Refactoring:\n- All spans are now tracked in the palloced current_trace_spans buffer\ncompared to top_span and parse_span being kept in a static variable\nbefore.\n- I've renamed query_spans to top_span. A top_span serves as the\nparent for all spans in a specific nested level.\n- More debugging information and assertions. Spans now track their\nnested level, if they've been ended and if they are a top_span.\n\nChanges:\n- I've added the subxact_count to the span's metadata. This can help\nidentify the moment a subtransaction was started.\n- I've reworked nested queries and utility statements. Previously, I\nmade the assumption that we could only have one top_span per nested\nlevel, which is not the case. Some utility statements can execute\nmultiple queries in the same nested level. Tracing utility statements\nnow works better (see image of tracing a create extension).\n\nRegards,\nAnthonin", "msg_date": "Fri, 15 Sep 2023 10:04:45 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\n> Renaming/Refactoring:\n> - All spans are now tracked in the palloced current_trace_spans buffer\n> compared to top_span and parse_span being kept in a static variable\n> before.\n> - I've renamed query_spans to top_span. A top_span serves as the\n> parent for all spans in a specific nested level.\n> - More debugging information and assertions. Spans now track their\n> nested level, if they've been ended and if they are a top_span.\n>\n> Changes:\n> - I've added the subxact_count to the span's metadata. This can help\n> identify the moment a subtransaction was started.\n> - I've reworked nested queries and utility statements. Previously, I\n> made the assumptions that we could only have one top_span per nested\n> level which is not the case. 
Some utility statements can execute\n> multiple queries in the same nested level. Tracing utility statement\n> now works better (see image of tracing a create extension).\n\nMany thanks for the updated patch.\n\nIf it's not too much trouble, please use `git format-patch`. This\nmakes applying and testing the patch much easier. Also you can provide\na commit message which 1. will simplify the work for the committer and\n2. can be reviewed as well.\n\nThe tests fail on CI [1]. I tried to run them locally and got the same\nresults. I'm using full-build.sh from this repository [2] for\nAutotools and the following one-liner for Meson:\n\n```\ntime CPPFLAGS=\"-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\" sh -c 'git\nclean -dfx && meson setup --buildtype debug -Dcassert=true\n-DPG_TEST_EXTRA=\"kerberos ldap ssl load_balance\" -Dldap=disabled\n-Dssl=openssl -Dtap_tests=enabled\n-Dprefix=/home/eax/projects/pginstall build && ninja -C build && ninja\n-C build docs && PG_TEST_EXTRA=1 meson test -C build'\n```\n\nThe comments for pg_tracing.c don't seem to be formatted properly.\nPlease make sure to run pg_indent. See src/tools/pgindent/README\n\nThe interface pg_tracing_spans(true | false) doesn't strike me as\nparticularly readable. Maybe this function should be private and the\ninterface be like pg_tracing_spans() and pg_tracing_consume_spans().\nAlternatively you could use named arguments in the tests, but I don't\nthink this is a preferred solution.\n\nSpeaking of the tests I suggest adding a bit more comments before\nevery (or most) of the queries. 
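Coming back to the interface question above, here is a rough sketch of the peek/consume semantics I have in mind (a toy Python model for brevity — the real interface would of course be C functions exposed through SQL, and the names are only a suggestion):

```python
# Toy model of the suggested span-buffer interface: peek returns the
# buffered spans without removing them, consume returns them and
# empties the buffer in the same call.
class SpanBuffer:
    def __init__(self):
        self._spans = []

    def add(self, span):
        self._spans.append(span)

    def peek_spans(self):
        # Non-destructive read: repeated calls see the same spans.
        return list(self._spans)

    def consume_spans(self):
        # Destructive read: return everything and clear the buffer.
        spans, self._spans = self._spans, []
        return spans

buf = SpanBuffer()
buf.add("Parse")
buf.add("ExecutorRun")
assert buf.peek_spans() == ["Parse", "ExecutorRun"]  # spans still buffered
assert buf.consume_spans() == ["Parse", "ExecutorRun"]
assert buf.peek_spans() == []  # buffer drained
```

The point is simply that tests can call peek_spans() repeatedly without side effects, while consume_spans() models what an external collector would do.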
Figuring out what they test may not be\nparticularly straightforward for somebody who makes changes\nafter the patch is accepted.\n\n[1]: http://cfbot.cputube.org/\n[2]: https://github.com/afiskon/pgscripts/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 15 Sep 2023 14:32:44 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\nGreat that you continue to work on this patch!\nI'm checking out the newest changes.\n\n\nOn Fri, Sep 15, 2023 at 2:32 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi,\n>\n> > Renaming/Refactoring:\n> > - All spans are now tracked in the palloced current_trace_spans buffer\n> > compared to top_span and parse_span being kept in a static variable\n> > before.\n> > - I've renamed query_spans to top_span. A top_span serves as the\n> > parent for all spans in a specific nested level.\n> > - More debugging information and assertions. Spans now track their\n> > nested level, if they've been ended and if they are a top_span.\n> >\n> > Changes:\n> > - I've added the subxact_count to the span's metadata. This can help\n> > identify the moment a subtransaction was started.\n> > - I've reworked nested queries and utility statements. Previously, I\n> > made the assumptions that we could only have one top_span per nested\n> > level which is not the case. Some utility statements can execute\n> > multiple queries in the same nested level. Tracing utility statement\n> > now works better (see image of tracing a create extension).\n>\n> Many thanks for the updated patch.\n>\n> If it's not too much trouble, please use `git format-patch`. This\n> makes applying and testing the patch much easier. Also you can provide\n> a commit message which 1. will simplify the work for the committer and\n> 2. can be reviewed as well.\n>\n> The tests fail on CI [1]. 
I tried to run them locally and got the same\n> results. I'm using full-build.sh from this repository [2] for\n> Autotools and the following one-liner for Meson:\n>\n> ```\n> time CPPFLAGS=\"-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\" sh -c 'git\n> clean -dfx && meson setup --buildtype debug -Dcassert=true\n> -DPG_TEST_EXTRA=\"kerberos ldap ssl load_balance\" -Dldap=disabled\n> -Dssl=openssl -Dtap_tests=enabled\n> -Dprefix=/home/eax/projects/pginstall build && ninja -C build && ninja\n> -C build docs && PG_TEST_EXTRA=1 meson test -C build'\n> ```\n>\n> The comments for pg_tracing.c don't seem to be formatted properly.\n> Please make sure to run pg_indent. See src/tools/pgindent/README\n>\n> The interface pg_tracing_spans(true | false) doesn't strike me as\n> particularly readable. Maybe this function should be private and the\n> interface be like pg_tracing_spans() and pg_trancing_consume_spans().\n> Alternatively you could use named arguments in the tests, but I don't\n> think this is a preferred solution.\n>\n> Speaking of the tests I suggest adding a bit more comments before\n> every (or most) of the queries. Figuring out what they test could be\n> not particularly straightforward for somebody who will make changes\n> after the patch will be accepted.\n>\n> [1]: http://cfbot.cputube.org/\n> [2]: https://github.com/afiskon/pgscripts/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Fri, 15 Sep 2023 15:13:10 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\n> If it's not too much trouble, please use `git format-patch`. This\n> makes applying and testing the patch much easier. Also you can provide\n> a commit message which 1. will simplify the work for the committer and\n> 2. can be reviewed as well.\nNo problem, I will do that from now on.\n\n> The tests fail on CI [1]. I tried to run them locally and got the same\n> results.\nI've fixed the compilation issue, my working laptop is a mac so I've missed the\nlinux errors. I've set up the CI on my side to avoid those issues in the future.\nI've also fixed most of the test issues.\nOne test is still flaky: the parallel worker test can have one less\nworker reporting\nfrom time to time, I will try to find a way to make it more stable.\n\n> The comments for pg_tracing.c don't seem to be formatted properly.\n> Please make sure to run pg_indent. See src/tools/pgindent/README\nDone.\n\n> The interface pg_tracing_spans(true | false) doesn't strike me as\n> particularly readable. 
Maybe this function should be private and the\n> interface be like pg_tracing_spans() and pg_trancing_consume_spans().\nAgree. I wasn't super happy about the format. I've created\npg_tracing_peek_spans() and pg_tracing_consume_spans() to make it\nmore explicit.\n\n> Speaking of the tests I suggest adding a bit more comments before\n> every (or most) of the queries. Figuring out what they test could be\n> not particularly straightforward for somebody who will make changes\n> after the patch will be accepted.\nI've added more comments and a representation of the expected span\nhierarchy for the tests involving nested queries. It helps me a lot to keep\ntrack of the spans to check.\n\nI've done some additional changes:\n- I've added a start_ns to spans which is the nanosecond remainder of the\ntime since the start of the trace. Since some spans can have a very short\nduration, some spans would have the same start if we use a microsecond\nprecision. Having a start with a nanosecond precision allowed me to remove\nsome hacks I was using and makes tests more stable.\n- I've also added a start_ns_time to span which is the value of the monotonic\nclock at the start of the span. I used to use duration_ns as a\ntemporary storage\nfor this value but it made things more confusing and hard to follow.\n- Fix error handling with nested queries created with ProcessUtility (like\ncalling \"create extension pg_tracing\" twice).\n\nRegards,\nAnthonin", "msg_date": "Fri, 22 Sep 2023 17:45:27 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nI've made a new batch of changes and improvements.\nNew features:\n- Triggers are now correctly supported. They were not correctly\nattached to the ExecutorFinish Span before.\n- Additional configuration: exporting parameter values in span\nmetadata can be disabled. 
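As a quick illustration of the collision (toy Python arithmetic, not the patch's actual code): two spans that start 300 ns apart collapse to the same microsecond start, while the nanosecond remainder still distinguishes them.

```python
# Two span starts expressed as nanoseconds since the start of the trace.
span_a_start = 1_000_000_100  # first span
span_b_start = 1_000_000_400  # second span, 300 ns later

# With microsecond precision only, both spans get the same start value.
start_us_a = span_a_start // 1_000
start_us_b = span_b_start // 1_000
assert start_us_a == start_us_b == 1_000_000

# Keeping the nanosecond remainder (start_ns) makes the two starts
# distinct again, so tests can order the spans deterministically.
start_ns_a = span_a_start % 1_000
start_ns_b = span_b_start % 1_000
assert (start_ns_a, start_ns_b) == (100, 400)
```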
Deparsing can also be disabled now.\n- Dbid and userid are now exported in span's metadata\n- New Notify channel and threshold parameters. This channel will\nreceive a notification when the span buffer usage crosses the\nthreshold.\n- Explicit transaction with extended protocol is now correctly\nhandled. This is done by keeping 2 different trace contexts, one for\nparser/planner trace context at the root level and the other for\nnested queries and executors. The spans now correctly show the parse\nof the next statement happening before the ExecutorEnd of the previous\nstatement (see screenshot).\n\nChanges:\n- Parse span is now outside of the top span. When multiple statements\nare processed, they are parsed together so it didn't make sense to\nattach Parse to a specific statement.\n- Deparse info is now exported in its own column. This will leave the\npossibility to the trace forwarder to either use it as metadata or put\nit in the span name.\n- Renaming: Spans now have a span_type (Select, Executor, Parser...)\nand a span_operation (ExecutorRun, Select $1...)\n- For spans created without propagated trace context, the same traceid\nwill be used for statements within the same transaction.\n- pg_tracing_consume_spans and pg_tracing_peek_spans are now\nrestricted to users with pg_read_all_stats role\n- In instrument.h, instr->firsttime is only set if timer was requested\n\nThe code should be more stable from now on. Most of the features I had\nplanned are implemented.\nI've also started writing the module's documentation. It's still a bit\nbare but I will be working on completing this.\n\nRegards,\nAnthonin\n\nOn Fri, Sep 22, 2023 at 5:45 PM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> Hi!\n>\n> > If it's not too much trouble, please use `git format-patch`. This\n> > makes applying and testing the patch much easier. Also you can provide\n> > a commit message which 1. will simplify the work for the committer and\n> > 2. 
can be reviewed as well.\n> No problem, I will do that from now on.\n>\n> > The tests fail on CI [1]. I tried to run them locally and got the same\n> > results.\n> I've fixed the compilation issue, my working laptop is a mac so I've missed the\n> linux errors. I've set up the CI on my side to avoid those issues in the future.\n> I've also fixed most of the test issues.\n> One test is still flaky: the parallel worker test can have one less\n> worker reporting\n> from time to time, I will try to find a way to make it more stable.\n>\n> > The comments for pg_tracing.c don't seem to be formatted properly.\n> > Please make sure to run pg_indent. See src/tools/pgindent/README\n> Done.\n>\n> > The interface pg_tracing_spans(true | false) doesn't strike me as\n> > particularly readable. Maybe this function should be private and the\n> > interface be like pg_tracing_spans() and pg_trancing_consume_spans().\n> Agree. I wasn't super happy about the format. I've created\n> pg_tracing_peek_spans() and pg_tracing_consume_spans() to make it\n> more explicit.\n>\n> > Speaking of the tests I suggest adding a bit more comments before\n> > every (or most) of the queries. Figuring out what they test could be\n> > not particularly straightforward for somebody who will make changes\n> > after the patch will be accepted.\n> I've added more comments and a representation of the expected span\n> hierarchy for the tests involving nested queries. It helps me a lot to keep\n> track of the spans to check.\n>\n> I've done some additional changes:\n> - I've added a start_ns to spans which is the nanosecond remainder of the\n> time since the start of the trace. Since some spans can have a very short\n> duration, some spans would have the same start if we use a microsecond\n> precision. 
Having a start with a nanosecond precision allowed me to remove\n> some hacks I was using and makes tests more stable.\n> - I've also added a start_ns_time to span which is the value of the monotonic\n> clock at the start of the span. I used to use duration_ns as a\n> temporary storage\n> for this value but it made things more confusing and hard to follow.\n> - Fix error handling with nested queries created with ProcessUtility (like\n> calling \"create extension pg_tracing\" twice).\n>\n> Regards,\n> Anthonin", "msg_date": "Thu, 12 Oct 2023 10:54:45 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nOn Thu, 12 Oct 2023 at 14:32, Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> Hi!\n>\n> I've made a new batch of changes and improvements.\n> New features:\n> - Triggers are now correctly supported. They were not correctly\n> attached to the ExecutorFinish Span before.\n> - Additional configuration: exporting parameter values in span\n> metadata can be disabled. Deparsing can also be disabled now.\n> - Dbid and userid are now exported in span's metadata\n> - New Notify channel and threshold parameters. This channel will\n> receive a notification when the span buffer usage crosses the\n> threshold.\n> - Explicit transaction with extended protocol is now correctly\n> handled. This is done by keeping 2 different trace contexts, one for\n> parser/planner trace context at the root level and the other for\n> nested queries and executors. The spans now correctly show the parse\n> of the next statement happening before the ExecutorEnd of the previous\n> statement (see screenshot).\n>\n> Changes:\n> - Parse span is now outside of the top span. 
> When multiple statements
> are processed, they are parsed together so it didn't make sense to
> attach Parse to a specific statement.

In CFbot, I see that build is failing for this patch (link:
https://cirrus-ci.com/task/5378223450619904?logs=build#L1532) with
the following error:

[07:58:05.037] In file included from ../src/include/executor/instrument.h:16,
[07:58:05.037]                  from ../src/include/jit/jit.h:14,
[07:58:05.037]                  from ../contrib/pg_tracing/span.h:16,
[07:58:05.037]                  from ../contrib/pg_tracing/pg_tracing_explain.h:4,
[07:58:05.037]                  from ../contrib/pg_tracing/pg_tracing.c:43:
[07:58:05.037] ../contrib/pg_tracing/pg_tracing.c: In function ‘add_node_counters’:
[07:58:05.037] ../contrib/pg_tracing/pg_tracing.c:2330:70: error: ‘BufferUsage’ has no member named ‘blk_read_time’; did you mean ‘temp_blk_read_time’?
[07:58:05.037] 2330 | blk_read_time = INSTR_TIME_GET_MILLISEC(node_counters->buffer_usage.blk_read_time);
[07:58:05.037]      |                                                        ^~~~~~~~~~~~~
[07:58:05.037] ../src/include/portability/instr_time.h:126:12: note: in definition of macro ‘INSTR_TIME_GET_NANOSEC’
[07:58:05.037] 126 | ((int64) (t).ticks)
[07:58:05.037]     |          ^
[07:58:05.037] ../contrib/pg_tracing/pg_tracing.c:2330:18: note: in expansion of macro ‘INSTR_TIME_GET_MILLISEC’
[07:58:05.037] 2330 | blk_read_time = INSTR_TIME_GET_MILLISEC(node_counters->buffer_usage.blk_read_time);
[07:58:05.037]      |                 ^~~~~~~~~~~~~~~~~~~~~~~
[07:58:05.037] ../contrib/pg_tracing/pg_tracing.c:2331:71: error: ‘BufferUsage’ has no member named ‘blk_write_time’; did you mean ‘temp_blk_write_time’?
[07:58:05.037] 2331 | blk_write_time = INSTR_TIME_GET_MILLISEC(node_counters->buffer_usage.blk_write_time);
[07:58:05.037]      |                                                         ^~~~~~~~~~~~~~
[07:58:05.037] ../src/include/portability/instr_time.h:126:12: note: in definition of macro ‘INSTR_TIME_GET_NANOSEC’
[07:58:05.037] 126 | ((int64) (t).ticks)
[07:58:05.037]     |          ^
[07:58:05.037] ../contrib/pg_tracing/pg_tracing.c:2331:19: note: in expansion of macro ‘INSTR_TIME_GET_MILLISEC’
[07:58:05.037] 2331 | blk_write_time = INSTR_TIME_GET_MILLISEC(node_counters->buffer_usage.blk_write_time);
[07:58:05.037]      |                  ^~~~~~~~~~~~~~~~~~~~~~~

I also tried to build this patch locally and the build is failing with
the same error.

Thanks
Shlok Kumar Kyal

msg_date: Mon, 6 Nov 2023 11:49:13 +0530
msg_from: Shlok Kyal <shlok.kyal.oss@gmail.com>
msg_subject: Re: POC: Extension for adding distributed tracing - pg_tracing

Hi!

The commit 13d00729d422c84b1764c24251abcc785ea4adb1 renamed some
fields in the BufferUsage struct. I've updated those, which should fix
the compilation errors.

I've also made some changes, mainly around how lazy functions are managed.
Some queries, like 'select character_maximum_length from
information_schema.element_types limit 1 offset 3;', could generate 50K
spans due to lazy calls of generate_series. Tracing of lazy functions is
now disabled.

I've also written an otel version of the trace forwarder, available at
https://github.com/bonnefoa/pg-tracing-otel-forwarder.
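Since the forwarder came up: at its core, such a forwarder just maps rows consumed from pg_tracing_consume_spans() onto OTLP-style spans. A rough, illustrative sketch in Python — the input field names (trace_id, span_id, span_start_us, ...) are assumptions for the example, not the extension's actual schema:

```python
import json

U64 = 0xFFFFFFFFFFFFFFFF

def row_to_otlp_span(row):
    """Map one consumed span row onto an OTLP-style span dict."""
    return {
        # OTLP expects a 128-bit trace id; the 8-byte id is zero-padded.
        "traceId": f"{row['trace_id'] & U64:032x}",
        # Masking handles the signed-bigint form of the 8-byte ids.
        "spanId": f"{row['span_id'] & U64:016x}",
        "parentSpanId": f"{row['parent_id'] & U64:016x}",
        "name": row["span_operation"],
        # OTLP uses unix nanoseconds; microsecond timestamps assumed here.
        "startTimeUnixNano": row["span_start_us"] * 1_000,
        "endTimeUnixNano": row["span_end_us"] * 1_000,
        "attributes": [
            {"key": "span.type", "value": {"stringValue": row["span_type"]}},
        ],
    }

span = row_to_otlp_span({
    "trace_id": 479062278978111363,
    "span_id": -7732425595282414585,  # signed bigint as returned by SQL
    "parent_id": 479062278978111363,
    "span_operation": "ExecutorRun",
    "span_type": "Executor",
    "span_start_us": 1_699_000_000_000_000,
    "span_end_us": 1_699_000_000_000_250,
})
print(json.dumps(span, indent=2))
```

The real forwarder additionally has to batch the spans and push them to a collector endpoint, but the row-to-span mapping is the interesting part.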
The repository
contains a docker compose to start a local otel collector and a Jaeger
interface, and it can consume the spans from a test PostgreSQL
instance with pg_tracing.

Regards,
Anthonin

msg_date: Mon, 6 Nov 2023 16:08:49 +0100
msg_from: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
msg_subject: Re: POC: Extension for adding distributed tracing - pg_tracing

Hi,

Some small changes, mostly around making the tests less flaky:
- Removed the top_span and nested_level in the span output; those were
mostly used for debugging.
- More tests around the Parse span in nested queries.
- Removed the explicit queryId in the tests.

Regards,
Anthonin

On Mon, Nov 6, 2023 at 4:08 PM Anthonin Bonnefoy
<anthonin.bonnefoy@datadoghq.com> wrote:
>
> Hi!
>
> The commit 13d00729d422c84b1764c24251abcc785ea4adb1 renamed some
> fields in the BufferUsage struct.
> I've updated those which should fix the
> compilation errors.

msg_date: Wed, 15 Nov 2023 16:20:04 +0100
msg_from: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
msg_subject: Re: POC: Extension for adding distributed tracing - pg_tracing

Hi,

Thanks for the updated patch!

> Some small changes, mostly around making tests less flaky:
> - Removed the top_span and nested_level in the span output, those were
> mostly used for debugging
> - More tests around Parse span in nested queries
> - Remove explicit queryId in the tests

```
+-- Worker can take some additional time to
end and report their spans
+SELECT pg_sleep(0.2);
+ pg_sleep
+----------
+
+(1 row)
```

Pretty sure this will fail on the buildfarm from time to time. Maybe this
is a case where it is worth using TAP tests? Especially considering the
support of pg_tracing.notify_channel.

```
+ Only superusers can change this setting.
+ This parameter can only be set at server start.
```

I find this confusing. If the parameter can only be set at server
start, doesn't it mean that whoever has access to postgresql.conf can
change it?

```
+ <varname>pg_tracing.track_utility</varname> controls whether spans
+ should be generated for utility statements. Utility commands are
+ all those other than <command>SELECT</command>, <command>INSERT</command>,
+ <command>UPDATE</command>, <command>DELETE</command>, and <command>MERGE</command>.
+ The default value is <literal>on</literal>.
```

Perhaps the TABLE command should be added to the list, since it's
basically syntax sugar for SELECT:

```
=# create table t(x int);
=# insert into t values (1), (2), (3);
=# table t;
 x
---
 1
 2
 3
(3 rows)
```

This part worries me a bit:

```
+ <listitem>
+ <para>
+ <varname>pg_tracing.drop_on_full_buffer</varname> controls whether
+ span buffer should be emptied when <varname>pg_tracing.max_span</varname>
+ spans is reached. If <literal>off</literal>, new spans are dropped.
+ The default value is <literal>on</literal>.
+ </para>
+ </listitem>
+ </varlistentry>
```

Obviously pg_tracing.drop_on_full_buffer = on will not work properly
with pg_tracing.notify_threshold, because a race condition is possible:
by the time one receives and processes the notification, the buffer can
already have been cleaned. This makes me think that 1)
pg_tracing.drop_on_full_buffer = on might not be the best default
value, and 2) as a user I would like support for a ring buffer mode. Even if
implementing this mode is out of scope for this particular patch, I
would argue that instead of a drop_on_full_buffer boolean parameter we
need something like:

pg_tracing.buffer_mode = keep_on_full | drop_on_full | ring_buffer |
perhaps_other_future_modes

```
+ <para>
+ The module requires additional shared memory proportional to
+ <varname>pg_tracing.max_span</varname>. Note that this
+ memory is consumed whenever the module is loaded, even if
+ no spans are generated.
+ </para>
```

Do you think it is possible to document how much shared memory the
extension consumes depending on the parameters? Alternatively, maybe
it's worth providing a SQL function that returns this value? As a user
I would like to know whether 10 MB, 500 MB or 1.5 GB of shared memory
are consumed.

```
+ The start time of a span has two parts: <literal>span_start</literal> and
+ <literal>span_start_ns</literal>. <literal>span_start</literal> is a Timestamp
+ with time zone providing up a microsecond precision. <literal>span_start_ns</literal>
+ provides the nanosecond part of the span start. <literal>span_start + span_start_ns</literal>
+ can be used to get the start with nanosecond precision.
```

I would argue that ns ( 1 / 1_000_000_000 sec! ) precision is
redundant and only complicates the interface. This is way below the
noise floor formed from network delays, process switching by the OS,
noisy neighbours under the same supervisor, time slowdown by ntpd,
etc. I don't recall ever needing spans with better than 1 us
precision.

On top of that, if you have `span_start_ns` you should probably rename
`duration` to `duration_ns` for consistency. I believe it would be
more convenient to have `duration` in microseconds. Nothing prevents
us from using fractions of microseconds (e.g. 12.345 us) if necessary.

```
+=# select trace_id, parent_id, span_id, span_start, duration, span_type, ...
+       trace_id       |      parent_id       |       span_id        |  ...
+----------------------+----------------------+----------------------+----
+   479062278978111363 |   479062278978111363 |   479062278978111363 ...
+   479062278978111363 |   479062278978111363 |   479062278978111363 ...
+   479062278978111363 |   479062278978111363 | -7732425595282414585 ...
```

Negative span ids, for sure, are possible, but I don't recall seeing
many negative ids in my life. I suggest not surprising the user
unless this is really necessary.

--
Best regards,
Aleksander Alekseev

msg_date: Fri, 17 Nov 2023 17:34:37 +0300
msg_from: Aleksander Alekseev <aleksander@timescale.com>
msg_subject: Re: POC: Extension for adding distributed tracing - pg_tracing

Hi,

Thanks for the review!

> Pretty sure this will fail on buildfarm from time to time. Maybe this
> is a case when it is worth using TAP tests? Especially considering the
> support of pg_tracing.notify_channel.

The sleep was unnecessary, as the leader waits for all workers to finish
and report. All workers should have added their spans to the shared buffer
before the leader, so I removed the sleep from the test. I haven't seen a
recent failure on this test since then.

I did add a TAP test for the notify part, since it seems to be the only way
to test it.

> I find this confusing. If the parameter can only be set at server
> start doesn't it mean that whoever has access to postgresql.conf can
> change it?

The 'only superusers...' part was incorrect.
This setting was needed to compute
the size of the shmem request, so modifying it would require a server
restart. Though for this parameter, I've realised that I could use the
global max_parallel_workers instead, removing the need for a
max_traced_parallel_workers parameter.

> Perhaps TABLE command should be added to the list, since it's
> basically a syntax sugar for SELECT.

Added.

> Obviously pg_tracing.drop_on_full_buffer = on will not work properly
> with pg_tracing.notify_threshold because there is a race condition
> possible. By the time one receives and processes the notification the
> buffer can be already cleaned.

That's definitely a possibility. Lowering the notification threshold can
lower the chance of the buffer being cleared while handling the
notification, but the race can still happen.

For my use cases, spans should be sent as soon as possible and old spans
are less valuable than new spans, but other usages would want the opposite,
thus the parameters to control the behavior.

> This makes me think that 1) pg_tracing.drop_on_full_buffer = on could
> be not the best default value

I don't have a strong opinion on the default behaviour; I will switch to
keeping the buffer on full.

> 2) As a user I would like a support of ring buffer mode. Even if
> implementing this mode is out of scope of this particular patch, I
> would argue that instead of drop_on_full_buffer boolean parameter we
> need something like:
>
> pg_tracing.buffer_mode = keep_on_full | drop_on_full | ring_buffer |
> perhaps_other_future_modes

Having a ring buffer was also my first approach, but it adds some
complexity, so I've switched to the full drop + truncate of the query file
for the first iteration. A ring buffer would definitely be a better
experience; I've switched the parameter to an enum with only keep_on_full
and drop_on_full for the time being.

> Do you think it possible to document how much shared memory does the
> extension consume depending on the parameters? Alternatively maybe
> it's worth providing a SQL function that returns this value? As a user
> I would like to know whether 10 MB, 500 MB or 1.5 GB of shared memory
> are consumed.

Good point. I've updated the doc to give some approximations, with a query
to detail the memory usage.

> I would argue that ns ( 1 / 1_000_000_000 sec! ) precision is
> redundant and only complicates the interface. [...] I don't recall
> ever needing spans with better than 1 us precision.
>
> On top of that if you have `span_start_ns` you should probably rename
> `duration` to `duration_ns` for consistency. I believe it would be
> more convenient to have `duration` in microseconds. Nothing prevents
> us from using fractions of microseconds (e.g. 12.345 us) if necessary.

I was definitely on the fence on this. I initially tried to match the
precision provided by query instrumentation, but that introduced a lot of
complexity. Also, while trace standards specify that start and end
should be in nanoseconds, it turns out that the nanosecond precision is
generally ignored.

I've switched to a start+end in TimestampTz, dropping the ns precision.
Tests had to be rewritten to have a stable order for spans that can
have the same start/end.

I've also removed some spans: ExecutorStart, ExecutorEnd, PostParse.
Those spans generally had a sub-microsecond duration and didn't bring much
value except for debugging pg_tracing itself. To avoid the confusion of
having spans with a 0us duration, removing them altogether seemed better.
ExecutorFinish also has a sub-microsecond duration unless it has a nested
query (triggers), so I'm only emitting it in those cases.

> Negative span ids, for sure, are possible, but I don't recall seeing
> many negative ids in my life. I suggest not to surprise the user
> unless this is really necessary.

That one is gonna be difficult. Trace ids and span ids are 8 bytes, so I'm
exporting them using bigint. PostgreSQL doesn't support unsigned bigint,
thus the negative span ids.
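To make the signed-bigint point concrete, here is a small sketch of the round trip on the consumer side (nothing from the patch itself): a 64-bit id with the high bit set shows up as a negative bigint, and the mapping back is plain two's complement.

```python
# pg_tracing-style 64-bit trace/span ids exposed through PostgreSQL's
# signed bigint: ids with the high bit set display as negative numbers.

U64_MASK = 0xFFFFFFFFFFFFFFFF

def bigint_to_u64(v: int) -> int:
    """Signed bigint as displayed by SQL -> raw unsigned 64-bit id."""
    return v & U64_MASK

def u64_to_bigint(v: int) -> int:
    """Raw unsigned 64-bit id -> the signed value SQL will display."""
    return v - (1 << 64) if v >= (1 << 63) else v

# The negative span id from the sample output in the review:
raw = bigint_to_u64(-7732425595282414585)
print(f"{raw:016x}")       # the id's hex form
print(u64_to_bigint(raw))  # round-trips back to the negative bigint
```

If I recall correctly, `to_hex(span_id)` on the SQL side shows the same unsigned-looking hex form, which may be enough for matching ids against an external tracing backend.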
An alternative would be to use a bytea but bigint are easier to read\nand interact with.\n\nThe latest patch contains changes around the nanosecond precision drop, test\nupdates to make them more stable and simplification around span management.\n\nRegards,\nAnthonin\n\nOn Fri, Nov 17, 2023 at 3:34 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> Thanks for the updated patch!\n>\n> > Some small changes, mostly around making tests less flaky:\n> > - Removed the top_span and nested_level in the span output, those were\n> > mostly used for debugging\n> > - More tests around Parse span in nested queries\n> > - Remove explicit queryId in the tests\n>\n> ```\n> +-- Worker can take some additional time to end and report their spans\n> +SELECT pg_sleep(0.2);\n> + pg_sleep\n> +----------\n> +\n> +(1 row)\n> ```\n>\n> Pretty sure this will fail on buildfarm from time to time. Maybe this\n> is a case when it is worth using TAP tests? Especially considering the\n> support of pg_tracing.notify_channel.\n>\n> ```\n> + Only superusers can change this setting.\n> + This parameter can only be set at server start.\n> ```\n>\n> I find this confusing. If the parameter can only be set at server\n> start doesn't it mean that whoever has access to postgresql.conf can\n> change it?\n>\n> ```\n> + <varname>pg_tracing.track_utility</varname> controls whether spans\n> + should be generated for utility statements. 
Utility commands are\n> + all those other than <command>SELECT</command>,\n> <command>INSERT</command>,\n> + <command>UPDATE</command>, <command>DELETE</command>, and\n> <command>MERGE</command>.\n> + The default value is <literal>on</literal>.\n> ```\n>\n> Perhaps TABLE command should be added to the list, since it's\n> basically a syntax sugar for SELECT:\n>\n> ```\n> =# create table t(x int);\n> =# insert into t values (1), (2), (3);\n> =# table t;\n> x\n> ---\n> 1\n> 2\n> 3\n> (3 rows)\n> ```\n>\n> This part worries me a bit:\n>\n> ```\n> + <listitem>\n> + <para>\n> + <varname>pg_tracing.drop_on_full_buffer</varname> controls whether\n> + span buffer should be emptied when <varname>pg_tracing.max_span</varname>\n> + spans is reached. If <literal>off</literal>, new spans are dropped.\n> + The default value is <literal>on</literal>.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> ```\n>\n> Obviously pg_tracing.drop_on_full_buffer = on will not work properly\n> with pg_tracing.notify_threshold because there is a race condition\n> possible. By the time one receives and processes the notification the\n> buffer can be already cleaned. This makes me think that 1)\n> pg_tracing.drop_on_full_buffer = on could be not the best default\n> value 2) As a user I would like a support of ring buffer mode. Even if\n> implementing this mode is out of scope of this particular patch, I\n> would argue that instead of drop_on_full_buffer boolean parameter we\n> need something like:\n>\n> pg_tracing.buffer_mode = keep_on_full | drop_on_full | ring_buffer |\n> perhaps_other_future_modes\n>\n> ```\n> + <para>\n> + The module requires additional shared memory proportional to\n> + <varname>pg_tracing.max_span</varname>. Note that this\n> + memory is consumed whenever the module is loaded, even if\n> + no spans are generated.\n> + </para>\n> ```\n>\n> Do you think it possible to document how much shared memory does the\n> extension consume depending on the parameters? 
Alternatively maybe\n> it's worth providing a SQL function that returns this value? As a user\n> I would like to know whether 10 MB, 500 MB or 1.5 GB of shared memory\n> are consumed.\n>\n> ```\n> + The start time of a span has two parts: <literal>span_start</literal> and\n> + <literal>span_start_ns</literal>. <literal>span_start</literal> is\n> a Timestamp\n> + with time zone providing up a microsecond precision.\n> <literal>span_start_ns</literal>\n> + provides the nanosecond part of the span start.\n> <literal>span_start + span_start_ns</literal>\n> + can be used to get the start with nanosecond precision.\n> ```\n>\n> I would argue that ns ( 1 / 1_000_000_000 sec! ) precision is\n> redundant and only complicates the interface. This is way below the\n> noise floor formed from network delays, process switching by the OS,\n> noisy neighbours under the same supervisor, time slowdown by ntpd,\n> etc. I don't recall ever needing spans with better than 1 us\n> precision.\n>\n> On top of that if you have `span_start_ns` you should probably rename\n> `duration` to `duration_ns` for consistency. I believe it would be\n> more convenient to have `duration` in microseconds. Nothing prevents\n> us from using fractions of microseconds (e.g. 12.345 us) if necessary.\n>\n> ```\n> +=# select trace_id, parent_id, span_id, span_start, duration, span_type, ...\n> + trace_id | parent_id | span_id |\n> ...\n> +----------------------+----------------------+----------------------+----\n> + 479062278978111363 | 479062278978111363 | 479062278978111363 ...\n> + 479062278978111363 | 479062278978111363 | 479062278978111363 ...\n> + 479062278978111363 | 479062278978111363 | -7732425595282414585 ...\n> ```\n>\n> Negative span ids, for sure, are possible, but I don't recall seeing\n> many negative ids in my life. 
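For illustration, the negative values simply come from reinterpreting a random unsigned 64-bit id as PostgreSQL's signed bigint, so roughly half of all ids show up negative. A minimal Python sketch (purely illustrative, not code from the patch):

```python
import struct

def as_signed_bigint(span_id: int) -> int:
    # Reinterpret an unsigned 64-bit span id as a signed 64-bit integer,
    # which is how a bigint column would display it.
    return struct.unpack("<q", struct.pack("<Q", span_id))[0]

def as_hex(span_id: int) -> str:
    # Render the same id as 16 lowercase hex characters, the form used
    # by the W3C trace context traceparent header.
    return f"{span_id:016x}"

span_id = 0x94B01F5C1A2B3C4D  # high bit set, so the bigint view is negative
assert as_signed_bigint(span_id) < 0
assert as_hex(span_id) == "94b01f5c1a2b3c4d"
```

Rendering ids as 16 hex characters, as the trace context format does, sidesteps the signedness question at the cost of a small conversion on output.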
I suggest not to surprise the user\n> unless this is really necessary.\n>\n> --\n> Best regards,\n> Aleksander Alekseev", "msg_date": "Thu, 7 Dec 2023 15:36:12 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi all\n\nJust saw this thread.\n\nI cooked up a PoC distributed tracing support in Pg years ago as part of\nthe 2ndQuadrant BDR project.\n\nI used GUCs to set the `opentelemtery_trace_id` and\n`opentelemetry_span_id`. These can be sent as protocol-level messages by\nthe client driver if the client driver has native trace integration\nenabled, in which case the user doesn't need to see or care. And for other\ndrivers, the application can use `SET` or `SET LOCAL` to assign them.\n\nThis approach avoids the need to rewrite SQL or give special meaning to SQL\ncomments.\n\nWithin the server IIRC I used the existing postgres resource owner and\nmemory context stacks to maintain some internal nested traces.\n\nMy implementation used the C++ opentelemetry client library to emit traces,\nso it was never going to be able to land in core. But IIRC the current\nBDR/PGD folks are now using an OpenTelemetry sidecar process instead. I\nthink they send it UDP packets to emit metrics and events. Petr or Markus\nmight be able to tell you more about how they're doing that.\n\nI'd love to see OpenTelemetry integration in Pg.
", "msg_date": "Tue, 12 Dec 2023 10:51:17 +1300", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nOverall solution looks good for me except SQL Commenter - query\ninstrumentation\nwith SQL comments is often not possible on production systems. Instead\nthe very often requested feature is to enable tracing for a given single\nquery ID,\nor several (very limited number of) queries by IDs. It could be done by\nadding\nSQL function to add and remove query ID into a list (even array would do)\nstored in top tracing context.\n\nGreat job, thank you!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Fri, 15 Dec 2023 22:05:38 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\n> Overall solution looks good for me except SQL Commenter - query instrumentation\n> with SQL comments is often not possible on production systems. Instead\n> the very often requested feature is to enable tracing for a given single query ID,\n> or several (very limited number of) queries by IDs. It could be done by adding\n> SQL function to add and remove query ID into a list (even array would do)\n> stored in top tracing context.\n\nNot 100% sure if I follow.\n\nBy queries you mean particular queries, not transactions? And since\nthey have an assigned ID it means that the query is already executing\nand we want to enable the tracking in another session, right? If this\nis the case I would argue that enabling/disabling tracing for an\n_already_ running query (or transaction) would be way too complicated.\n\nI wouldn't mind enabling/disabling tracing for the current session\nand/or a given session ID. In the latter case it can have an effect\nonly for the new transactions. This however could be implemented\nseparately from the proposed patchset. 
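To illustrate, the query-id allow-list floated here could behave like the following sketch (hypothetical parsing and function names, nothing from the actual patch):

```python
import random

def parse_query_id_list(guc_value: str) -> set[int]:
    # Parse a comma-separated GUC value such as '1234, 5678' into query ids.
    return {int(item) for item in guc_value.split(",") if item.strip()}

def should_sample(query_id: int, filtered_ids: set[int], sample_rate: float) -> bool:
    # With a filter configured, trace only the listed query ids;
    # otherwise fall back to the global sample rate.
    if filtered_ids:
        return query_id in filtered_ids
    return random.random() < sample_rate

filtered = parse_query_id_list("1234, 5678")
assert should_sample(1234, filtered, 0.0) is True
assert should_sample(9999, filtered, 1.0) is False
```

Whether a listed query should additionally be subject to the sample rate is a policy choice; the sketch simply traces every match.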
I suggest keeping the scope\nsmall.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 2 Jan 2024 15:14:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\n> This approach avoids the need to rewrite SQL or give special meaning to SQL comments.\n\nSQLCommenter already has a good amount of support and is referenced in\nopentelemetry https://github.com/open-telemetry/opentelemetry-sqlcommenter\nSo the goal was more to leverage the existing trace propagation\nsystems as much as possible.\n\n> I used GUCs to set the `opentelemtery_trace_id` and `opentelemetry_span_id`.\n> These can be sent as protocol-level messages by the client driver if the\n> client driver has native trace integration enabled, in which case the user\n> doesn't need to see or care. And for other drivers, the application can use\n> `SET` or `SET LOCAL` to assign them.\n\nEmitting `SET` and `SET LOCAL` for every traced statement sounds more\ndisruptive and expensive than relying on SQLCommenter. When not using\n`SET LOCAL`, the variables also need to be cleared.\nThis will also introduce a lot of headaches as the `SET` themselves\ncould be traced and generate spans when tracing utility statements is\nalready tricky enough.\n\n> But IIRC the current BDR/PGD folks are now using an OpenTelemetry\n> sidecar process instead. I think they send it UDP packets to emit\n> metrics and events.\n\nI would be curious to hear more about this. I didn't want to be tied\nto a specific protocol (I've only seen gRPC and http support on latest\nopentelemetry collectors) and I fear that relying on Postgres sending\ntraces would lower the chance of having this supported on managed\nPostgres solutions (like RDS and CloudSQL).\n\n> By queries you mean particular queries, not transactions? 
And since\n> they have an assigned ID it means that the query is already executing\n> and we want to enable the tracking in another session, right?\n\nI think that was the idea. The query identifier generated for a\nspecific statement is stable so we could have a GUC variable with a\nlist of query id and only queries matching the provided query ids\nwould be sampled. This would make it easier to generate traces for a\nspecific query within a session.\n\n\nOn Tue, Jan 2, 2024 at 1:14 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > Overall solution looks good for me except SQL Commenter - query instrumentation\n> > with SQL comments is often not possible on production systems. Instead\n> > the very often requested feature is to enable tracing for a given single query ID,\n> > or several (very limited number of) queries by IDs. It could be done by adding\n> > SQL function to add and remove query ID into a list (even array would do)\n> > stored in top tracing context.\n>\n> Not 100% sure if I follow.\n>\n> By queries you mean particular queries, not transactions? And since\n> they have an assigned ID it means that the query is already executing\n> and we want to enable the tracking in another session, right? If this\n> is the case I would argue that enabling/disabling tracing for an\n> _already_ running query (or transaction) would be way too complicated.\n>\n> I wouldn't mind enabling/disabling tracing for the current session\n> and/or a given session ID. In the latter case it can have an effect\n> only for the new transactions. This however could be implemented\n> separately from the proposed patchset. 
I suggest keeping the scope\n> small.\n>\n> --\n> Best regards,\n> Aleksander Alekseev", "msg_date": "Thu, 4 Jan 2024 16:36:02 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nI've meant exactly the thing you mentioned -\n\n>\n> > By queries you mean particular queries, not transactions? And since\n> > they have an assigned ID it means that the query is already executing\n> > and we want to enable the tracking in another session, right?\n>\n> I think that was the idea. The query identifier generated for a\n> specific statement is stable so we could have a GUC variable with a\n> list of query id and only queries matching the provided query ids\n> would be sampled. This would make it easier to generate traces for a\n> specific query within a session.\n>\n> This one. When maintaining production systems with high load rate\nwe often encountered necessity to trace a number of queries with\nknown IDs (that was the case for Oracle environments, but Postgres\nis not very different from this point of view), because enabling tracing\nfor the whole session would immediately result in thousands of trace\nspans in the system with throughput like hundreds or even thousands\nof tps, when we need, say, to trace a single problematic query.\n\nThank you!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Fri, 5 Jan 2024 18:48:03 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "On Thu, 7 Dec 2023 at 20:06, Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> Hi,\n>\n> Thanks for the review!\n>\n> > ```\n> > +-- Worker can take some additional time to end and report their spans\n> > +SELECT pg_sleep(0.2);\n> > + pg_sleep\n> > +----------\n> > +\n> > +(1 row)\n> > ```\n> >\n> > Pretty sure this will fail on buildfarm from time to time. Maybe this\n> > is a case when it is worth using TAP tests? Especially considering the\n> > support of pg_tracing.notify_channel.\n>\n> The sleep was unnecessary as the leader waits for all workers to finish\n> and report. All workers should have added their spans in the shared buffer\n> before the leader and I removed the sleep from the test. 
I haven't seen a recent\n> failure on this test since then.\n>\n> I did add a TAP test for the notify part since it seems to be the only way\n> to test it.\n>\n> > ```\n> > + Only superusers can change this setting.\n> > + This parameter can only be set at server start.\n> > ```\n> >\n> > I find this confusing. If the parameter can only be set at server\n> > start doesn't it mean that whoever has access to postgresql.conf can\n> > change it?\n>\n> The 'only superusers...' part was incorrect. This setting was needed to compute\n> the size of the shmem request so modifying it would require a server start.\n> Though for this parameter, I've realised that I could use the global's\n> max_parallel_workers\n> instead, removing the need to have a max_traced_parallel_workers parameter.\n>\n> > Perhaps TABLE command should be added to the list, since it's\n> > basically a syntax sugar for SELECT:\n>\n> Added.\n>\n> > Obviously pg_tracing.drop_on_full_buffer = on will not work properly\n> > with pg_tracing.notify_threshold because there is a race condition\n> > possible. By the time one receives and processes the notification the\n> > buffer can be already cleaned.\n>\n> That's definitely a possibility. Lowering the notification threshold can\n> lower the chance of the buffer being cleared when handling the notification\n> but the race can still happen.\n>\n> For my use cases, spans should be sent as soon as possible and old spans\n> are less valuable than new spans but other usage would want the opposite,\n> thus the parameters to control the behavior.\n>\n> > This makes me think that 1) pg_tracing.drop_on_full_buffer = on could\n> > be not the best default value\n>\n> I don't have a strong opinion on the default behaviour, I will switch to\n> keep buffer on full.\n>\n> > 2) As a user I would like a support of ring buffer mode. 
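For what it's worth, a ring-buffer mode of the kind wished for here boils down to evicting the oldest spans one at a time instead of dropping a whole buffer at once. A toy sketch (Python, purely illustrative):

```python
from collections import deque

class SpanRingBuffer:
    # Fixed-capacity span store that overwrites the oldest spans when
    # full, instead of dropping either old or new spans wholesale.
    def __init__(self, max_span: int):
        self.spans = deque(maxlen=max_span)
        self.dropped = 0

    def add(self, span):
        if len(self.spans) == self.spans.maxlen:
            self.dropped += 1  # the oldest span is evicted by the deque
        self.spans.append(span)

    def consume(self):
        # Drain the buffer, as pg_tracing_consume_spans would.
        out = list(self.spans)
        self.spans.clear()
        return out

buf = SpanRingBuffer(max_span=3)
for s in ["s1", "s2", "s3", "s4"]:
    buf.add(s)
assert buf.consume() == ["s2", "s3", "s4"]
assert buf.dropped == 1
```

A shared-memory implementation would of course need locking and a read cursor on top of this, but the eviction policy itself stays this simple.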
Even if\n> > implementing this mode is out of scope of this particular patch, I\n> > would argue that instead of drop_on_full_buffer boolean parameter we\n> > need something like:\n> >\n> > pg_tracing.buffer_mode = keep_on_full | drop_on_full | ring_buffer |\n> > perhaps_other_future_modes\n>\n> Having a buffer ring was also my first approach but it adds some complexity\n> and I've switched to the full drop + truncate query file for the first\n> iteration.\n> Ring buffer would definitely be a better experience, I've switched the\n> parameter to an enum with only keep_on_full and drop_on_full for the time being.\n>\n> > Do you think it possible to document how much shared memory does the\n> > extension consume depending on the parameters? Alternatively maybe\n> > it's worth providing a SQL function that returns this value? As a user\n> > I would like to know whether 10 MB, 500 MB or 1.5 GB of shared memory\n> > are consumed.\n>\n> Good point. I've updated the doc to give some approximations with a query\n> to detail the memory usage.\n>\n> > I would argue that ns ( 1 / 1_000_000_000 sec! ) precision is\n> > redundant and only complicates the interface. This is way below the\n> > noise floor formed from network delays, process switching by the OS,\n> > noisy neighbours under the same supervisor, time slowdown by ntpd,\n> > etc. I don't recall ever needing spans with better than 1 us\n> > precision.\n> >\n> > On top of that if you have `span_start_ns` you should probably rename\n> > `duration` to `duration_ns` for consistency. I believe it would be\n> > more convenient to have `duration` in microseconds. Nothing prevents\n> > us from using fractions of microseconds (e.g. 12.345 us) if necessary.\n>\n> I was definitely on the fence on this. I've initially tried to match the\n> precision provided by query instrumentation but that definitely introduced\n> a lot of complexity. 
Also, while trace standard specify that start and end\n> should be in nanoseconds, it turns out that the nanosecond precision is\n> generally ignored.\n>\n> I've switched to a start+end in TimestampTz, dropping the ns precision.\n> Tests had to be rewritten to have a stable order with spans having that can\n> have the same start/end.\n>\n> I've also removed some spans: ExecutorStart, ExecutorEnd, PostParse.\n> Those spans generally had a sub microsecond duration and didn't bring much\n> value except for debugging pg_tracing itself. To avoid the confusion of\n> having spans with a 0us duration, removing them altogether seemed better.\n> ExecutorFinish also has a sub microsecond duration unless it has a nested\n> query (triggers) so I'm only emitting them in those cases.\n>\n> > Negative span ids, for sure, are possible, but I don't recall seeing\n> > many negative ids in my life. I suggest not to surprise the user\n> > unless this is really necessary.\n>\n> That one is gonna be difficult. Traceid and spanid are 8 bytes so I'm exporting\n> them using bigint. PostgreSQL doesn't support unsigned bigint thus the negative\n> span ids. 
An alternative would be to use a bytea but bigint are easier to read\n> and interact with.\n>\n> The latest patch contains changes around the nanosecond precision drop, test\n> updates to make them more stable and simplification around span management.\n\nOne of the test has failed in CFBOT [1] with:\n-select count(span_id) from pg_tracing_consume_spans group by span_id;\n- count\n--------\n- 1\n- 1\n- 1\n- 1\n- 1\n-(5 rows)\n-\n--- Cleanup\n-SET pg_tracing.sample_rate = 0.0;\n+server closed the connection unexpectedly\n+ This probably means the server terminated abnormally\n+ before or while processing the request.\n+connection to server was lost\n\nMore details are available at [2]\n\n[1] - https://cirrus-ci.com/task/5876369167482880\n[2] - https://api.cirrus-ci.com/v1/artifact/task/5876369167482880/testrun/build/testrun/pg_tracing/regress/regression.diffs\n\n\n", "msg_date": "Sat, 6 Jan 2024 21:00:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "On Thu, Jan 4, 2024, 16:36 Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n> > I used GUCs to set the `opentelemtery_trace_id` and `opentelemetry_span_id`.\n> > These can be sent as protocol-level messages by the client driver if the\n> > client driver has native trace integration enabled, in which case the user\n> > doesn't need to see or care. And for other drivers, the application can use\n> > `SET` or `SET LOCAL` to assign them.\n>\n> Emitting `SET` and `SET LOCAL` for every traced statement sounds more\n> disruptive and expensive than relying on SQLCommenter. 
When not using\n> `SET LOCAL`, the variables also need to be cleared.\n> This will also introduce a lot of headaches as the `SET` themselves\n> could be traced and generate spans when tracing utility statements is\n> already tricky enough.\n\nI think one hugely important benefit of using GUCs over sql comments\nis, that comments and named protocol-level prepared statements cannot\nbe combined afaict. Since with named protocol level prepared\nstatements no query is sent, only arguments. So there's no place to\nattach a comment too.\n\n\n", "msg_date": "Sun, 7 Jan 2024 00:48:30 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\n> I think one hugely important benefit of using GUCs over sql comments\n> is, that comments and named protocol-level prepared statements cannot\n> be combined afaict. Since with named protocol level prepared\n> statements no query is sent, only arguments. So there's no place to\n> attach a comment too.\n\nHandling named prepared statements has definitely been a pain point.\nI've been experimenting and it is possible to combine comments with\nprepared statements by passing the comment's content as a parameter.\n\nPREPARE test_prepared (text, integer) AS /*$1*/ SELECT $2;\nEXECUTE test_prepared('dddbs=''postgres.db'',traceparent=''00-00000000000000000000000000000009-0000000000000009-01''',\n1);\n\nThough adding a parameter is definitely more intrusive than just\nadding a comment and this is not defined and supported anywhere. I\nwill try to see how much work and complexity it would involve to add\nthe trace context propagation through GUC variables.\nOn one hand, having more options would allow more flexibility and ease\nto implement the trace propagation. 
On the other hand, I would like to\nkeep things simple for a first iteration.\n\n> One of the test has failed in CFBOT [1] with\n\nI've identified and fixed the issue. An assertion checking that spans\ngenerated from planstate didn't end before the parent's end could be\ntriggered.\n\nThe new patch also contains the following changes:\n- Support for 128 bits trace ids. While w3c defines exceptions to\nallow compatibility with systems that only support 64 bits trace ids,\nsupporting 128 bits trace ids early should remove the possible pain of\nmigrating or interoperating issues.\n- Trace ids, span ids and parent ids outputs are now hexadecimal\ncharacters instead of bigints. This removes the negative ids issue due\nto not having unsigned bigint and was necessary for the 128 bits trace\nid support. It also matches the format that's propagated through the\ntraceparent value. They are still stored as uint64 to keep memory\nfootprint low.\n- Added support for before trigger.\n- Refactored all headers to a single header file.\n- Added Query id filtering. Adding this turned out to be rather straightforward.\n- Additional edge cases found and fixed (planner error could trigger\nan assertion failure).\n\nRegards,\nAnthonin\n\nOn Sun, Jan 7, 2024 at 12:48 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> On Thu, Jan 4, 2024, 16:36 Anthonin Bonnefoy\n> <anthonin.bonnefoy@datadoghq.com> wrote:\n> > > I used GUCs to set the `opentelemtery_trace_id` and `opentelemetry_span_id`.\n> > > These can be sent as protocol-level messages by the client driver if the\n> > > client driver has native trace integration enabled, in which case the user\n> > > doesn't need to see or care. And for other drivers, the application can use\n> > > `SET` or `SET LOCAL` to assign them.\n> >\n> > Emitting `SET` and `SET LOCAL` for every traced statement sounds more\n> > disruptive and expensive than relying on SQLCommenter. 
When not using\n> > `SET LOCAL`, the variables also need to be cleared.\n> > This will also introduce a lot of headaches as the `SET` themselves\n> > could be traced and generate spans when tracing utility statements is\n> > already tricky enough.\n>\n> I think one hugely important benefit of using GUCs over sql comments\n> is, that comments and named protocol-level prepared statements cannot\n> be combined afaict. Since with named protocol level prepared\n> statements no query is sent, only arguments. So there's no place to\n> attach a comment too.", "msg_date": "Wed, 10 Jan 2024 15:54:54 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "On Wed, 10 Jan 2024 at 15:55, Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n> PREPARE test_prepared (text, integer) AS /*$1*/ SELECT $2;\n> EXECUTE test_prepared('dddbs=''postgres.db'',traceparent=''00-00000000000000000000000000000009-0000000000000009-01''',\n> 1);\n\nWow, I had no idea that parameters could be expanded in comments. I'm\nhonestly wondering if that is supported on purpose.\n\n\n", "msg_date": "Wed, 10 Jan 2024 16:03:08 +0100", "msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. 
Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4456/\n[2] https://cirrus-ci.com/task/5581154296791040\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:45:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nThe last CFbot failure was on pg_upgrade/002_pg_upgrade and doesn't\nseem related to the patch.\n\n# Failed test 'regression tests pass'\n# at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 212.\n# got: '256'\n# expected: '0'\n# Looks like you failed 1 test of 18.\n\nGiven that the patch grew a bit in complexity and features, I've tried\nto reduce it to a minimum to make reviewing easier. The goal is to\nkeep the scope focused for a first version while additional features\nand changes could be handled afterwards.\n\nFor this version:\n- Trace context can be propagated through SQLCommenter\n- Sampling decision uses either a global sampling rate or on\nSQLCommenter's sampled flag\n- We generate spans for Planner, ExecutorRun and ExecutorFinish\n\nWhat was put on hold for now:\n- Generate spans from planstate. 
This means we don't need the change\nin instrument.h to track the start of a node for the first version.\n- Generate spans for parallel processes\n- Using a GUC variable to propagate trace context\n- Filtering sampling on queryId\n\nWith this, the current patch provides the core to:\n- Generate, store and export spans with basic buffer management (drop\nold spans when full or drop new spans when full)\n- Keep variable strings in a separate file (a la pg_stat_statements)\n- Track tracing state and top spans across nested execution levels\n\nThoughts on this approach?\n\nOn Mon, Jan 22, 2024 at 7:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4456/\n> [2] https://cirrus-ci.com/task/5581154296791040\n>\n> Kind Regards,\n> Peter Smith.", "msg_date": "Fri, 26 Jan 2024 11:17:39 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nIt's a good idea to split a big patch into several smaller ones.\nBut you've already implemented these features - you could provide them\nas sequential small patches (i.e. v13-0002-guc-context-propagation.patch\nand so on)\n\nGreat job! 
I'm both hands for committing your patch set.\n\nOn Fri, Jan 26, 2024 at 1:17 PM Anthonin Bonnefoy <\nanthonin.bonnefoy@datadoghq.com> wrote:\n\n> Hi,\n>\n> The last CFbot failure was on pg_upgrade/002_pg_upgrade and doesn't\n> seem related to the patch.\n>\n> # Failed test 'regression tests pass'\n> # at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 212.\n> # got: '256'\n> # expected: '0'\n> # Looks like you failed 1 test of 18.\n>\n> Given that the patch grew a bit in complexity and features, I've tried\n> to reduce it to a minimum to make reviewing easier. The goal is to\n> keep the scope focused for a first version while additional features\n> and changes could be handled afterwards.\n>\n> For this version:\n> - Trace context can be propagated through SQLCommenter\n> - Sampling decision uses either a global sampling rate or on\n> SQLCommenter's sampled flag\n> - We generate spans for Planner, ExecutorRun and ExecutorFinish\n>\n> What was put on hold for now:\n> - Generate spans from planstate. This means we don't need the change\n> in instrument.h to track the start of a node for the first version.\n> - Generate spans for parallel processes\n> - Using a GUC variable to propagate trace context\n> - Filtering sampling on queryId\n>\n> With this, the current patch provides the core to:\n> - Generate, store and export spans with basic buffer management (drop\n> old spans when full or drop new spans when full)\n> - Keep variable strings in a separate file (a la pg_stat_statements)\n> - Track tracing state and top spans across nested execution levels\n>\n> Thoughts on this approach?\n>\n> On Mon, Jan 22, 2024 at 7:45 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > 2024-01 Commitfest.\n> >\n> > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > there were CFbot test failures last time it was run [2]. 
Please have a\n> > look and post an updated version if necessary.\n> >\n> > ======\n> > [1] https://commitfest.postgresql.org/46/4456/\n> > [2] https://cirrus-ci.com/task/5581154296791040\n> >\n> > Kind Regards,\n> > Peter Smith.\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Fri, 26 Jan 2024 15:42:50 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi Anthonin,\n\nI'm only now catching up on this thread. Very exciting feature!\n\nMy first observation is that you were able to implement this purely as \nan extension, without any core changes. That's very cool, because:\n\n- You don't need buy-in or blessing from anyone else. You can just put \nthis on github and people can immediately start using it.\n- As an extension, it's not tied to the PostgreSQL release cycle. Users \ncan immediately start using it with existing PostgreSQL versions.\n\nThere are benefits to being in contrib/ vs. an extension outside the \nPostgreSQL source tree, but there are significant downsides too. 
\nOut-of-tree, you get more flexibility, and you can develop much faster. \nI think this should live out-of-tree, at least until it's relatively \nstable. By stable, I mean \"not changing much\", not that it's bug-free.\n\n# Core changes\n\nThat said, there are a lot of things that would make sense to integrate \nin PostgreSQL itself. For example:\n\n- You're relying on various existing hooks, but it might be better to \nhave the 'begin_span'/'end_span' calls directly in PostgreSQL source \ncode in the right places. There are more places where spans might be \nnice, like when performing a large sort in tuplesort.c to name one \nexample. Or even across every I/O; not sure how much overhead the spans \nincur and how much we'd be willing to accept. 'begin_span'/'end_span' \ncould be new hooks that normally do nothing, but your extension would \nimplement them.\n\n- Other extensions could include begin_span/end_span calls too. (It's \ncomplicated for an extension to call functions in another extension.)\n\n- Extensions like postgres_fdw could get the tracing context and \npropagate it to the remote system too.\n\n- How to pass the tracing context from the client? You went with \nSQLCommenter and others proposed a GUC. A protocol extension would make \nsense too. I can see merits to all of those. It probably would be good \nto support multiple mechanisms, but some might need core changes. It \nmight be good to implement the mechanism for accepting trace context in \ncore. Without the extension, we could still include the trace ID in the \nlogs, for example.\n\n# Further ideas\n\nSome ideas on cool extra features on top of this:\n\n- SQL functions to begin/end a span. Could be handy if you have \ncomplicated PL/pgSQL functions, for example.\n- Have spans over subtransactions.\n- Include elog() messages in the traces. 
You might want to have a lower \nelevel for what's included in traces; something like the \nclient_min_messages and log_min_messages settings.\n- Include EXPLAIN plans in the traces.\n\n# Comments on the implementation\n\nThere was discussion already on push vs pull model. Currently, you \ncollect the traces in memory / files, and require a client to poll them. \nA push model is more common in applications that support tracing. If \nPostgres could connect directly to the OTEL collector, you'd need one \nfewer running component. You could keep the traces in backend-private \nmemory, no need to deal with shared memory and spilling to files.\n\nAdmittedly supporting the OTEL wire protocol is complicated unless you \nuse an existing library. Nevertheless, it might be a better tradeoff. \nOTEL/HTTP with JSON format seems just about feasible to implement from \nscratch. Or bite the bullet and use some external library. If this lives \nas an extension, you have more freedom to rely on 3rd party libraries.\n\n(If you publish this as an out-of-tree extension, then this is up to \nyou, of course, and doesn't need consensus here on pgsql-hackers)\n\n# Suggested next steps with this\n\nHere's how I'd suggest to proceed with this:\n\n1. Publish this as an extension on github, as it is. I think you'll get \na lot of immediate adoption.\n\n2. Start a new pgsql-hackers thread on in-core changes that would \nbenefit the extension. Write patches for them. I'm thinking of the \nthings I listed in the Core changes section above, but maybe there are \nothers.\n\n\nPS. Does any other DBMS implement this? 
Any lessons to be learned from them?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 2 Feb 2024 14:41:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nOn Fri, 2 Feb 2024 at 13:41, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> PS. Does any other DBMS implement this? Any lessons to be learned from\n> them?\n>\n\nMysql 8.3 has open telemetrie component, so very fresh:\n\nhttps://dev.mysql.com/doc/refman/8.3/en/telemetry-trace-install.html\nhttps://blogs.oracle.com/mysql/post/mysql-telemetry-tracing-with-oci-apm\n\nMysql ppl are already waiting for PG to catch up (in German):\nhttps://chaos.social/@isotopp/111421160719915296 :-)\n\nBest regards,\n\nJan", "msg_date": "Fri, 2 Feb 2024 17:42:03 +0100", "msg_from": "Jan Katins <jasc@gmx.net>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nOn Fri, Jan 26, 2024 at 1:43 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> It's a good idea to split a big patch into several smaller ones.\n> But you've already implemented these features - you could provide them\n> as sequential small patches (i.e. v13-0002-guc-context-propagation.patch and so on)\n\nKeeping a consistent set of patches is going to be hard as any change\nin the first patch would require rebasing that are likely going to\nconflict. 
I haven't discarded the features but keeping them separated\ncould help with the discussion.\n\nOn Fri, Feb 2, 2024 at 1:41 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Hi Anthonin,\n>\n> I'm only now catching up on this thread. Very exciting feature!\nThanks! And thanks for the comments.\n\n> - You're relying on various existing hooks, but it might be better to\n> have the 'begin_span'/'end_span' calls directly in PostgreSQL source\n> code in the right places. There are more places where spans might be\n> nice, like when performing a large sort in tuplesort.c to name one\n> example. Or even across every I/O; not sure how much overhead the spans\n> incur and how much we'd be willing to accept. 'begin_span'/'end_span'\n> could be new hooks that normally do nothing, but your extension would\n> implement them.\n\nHaving information that would only be available to tracing doesn't\nsound great. It probably should be added to the core instrumentation\nwhich could be then leveraged by tracing and other systems.\nOne of the features I initially implemented was to track parse start\nand end. However, since only the post parse hook is available, I could\nonly infer when the parse started. Changing the core to track parse\nstart would make this more easier and it could be used by EXPLAIN and\npg_stat_statements.\nIn the end, I've tried to match what's done in pg_stat_statements\nsince the core idea is the same, except pg_tracing outputs spans\ninstead of statistics.\n\n> - How to pass the tracing context from the client? You went with\n> SQLCommenter and others proposed a GUC. A protocol extension would make\n> sense too. I can see merits to all of those. It probably would be good\n> to support multiple mechanisms, but some might need core changes. It\n> might be good to implement the mechanism for accepting trace context in\n> core. 
Without the extension, we could still include the trace ID in the\n> logs, for example.\n\nI went with SQLCommenter for the initial implementation since it has\nthe benefit of already being implemented and working out of the box.\nJelte has an ongoing patch to add the possibility to modify GUC values\nthrough a protocol message [1]. I've tested it and it worked well\nwithout adding too much complexity in pg_tracing. I imagine having a\ndedicated traceparent GUC variable could be also used in the core to\npropagate in the logs and other places.\n\n> - Include EXPLAIN plans in the traces.\nThis was already implemented in the first versions (or rather, the\nplanstate was translated to spans) of the patch. I've removed it in\nthe latest patch as it was contributing to 50% of the code and also\nrequired a core change, namely a change in instrument.c to track the\nfirst start of a node. Though if you were thinking of the raw EXPLAIN\nANALYZE text dumped as a span metadata, that could be doable.\n\n> # Comments on the implementation\n>\n> There was discussion already on push vs pull model. Currently, you\n> collect the traces in memory / files, and require a client to poll them.\n> A push model is more common in applications that support tracing. If\n> Postgres could connect directly to the OTEL collector, you'd need one\n> fewer running component. You could keep the traces in backend-private\n> memory, no need to deal with shared memory and spilling to files.\n\nPostgreSQL metrics are currently exposed through tables like pg_stat_*\nand users likely have collectors pulling those metrics (like OTEL\nPostgreSQL Receiver [2]). Having this collector also pulling traces\nwould keep the logic in a single place and would avoid a situation\nwhere both pull and push is used.\n\n> 2. Start a new pgsql-hackers thread on in-core changes that would\n> benefit the extension. Write patches for them. 
I'm thinking of the\n> things I listed in the Core changes section above, but maybe there are\n> others.\nI already have an ongoing patch (more support for extended protocol in\npsql [3] needed for tests) and plan to do others (the parse changes\nI've mentioned). Though I also don't want to spread myself too thin.\n\n\n[1]: https://commitfest.postgresql.org/46/4736/\n[2]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/postgresqlreceiver/README.md\n[3]: https://commitfest.postgresql.org/46/4650/\n\nOn Fri, Feb 2, 2024 at 5:42 PM Jan Katins <jasc@gmx.net> wrote:\n>\n> Hi!\n>\n> On Fri, 2 Feb 2024 at 13:41, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> PS. Does any other DBMS implement this? Any lessons to be learned from them?\n>\n>\n> Mysql 8.3 has open telemetrie component, so very fresh:\n>\n> https://dev.mysql.com/doc/refman/8.3/en/telemetry-trace-install.html\n> https://blogs.oracle.com/mysql/post/mysql-telemetry-tracing-with-oci-apm\n>\n> Mysql ppl are already waiting for PG to catch up (in German): https://chaos.social/@isotopp/111421160719915296 :-)\n>\n> Best regards,\n>\n> Jan\n\n\n", "msg_date": "Tue, 6 Feb 2024 11:13:09 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi!\n\nI agree with Heikki on most topics and especially the one he recommended\nto publish your extension on GitHub, this tool is very promising for highly\nloaded\nsystems, you will get a lot of feedback very soon.\n\nI'm curious about SpinLock - it is recommended for very short operations,\ntaking up to several instructions, and docs say that for longer ones it\nwill be\ntoo expensive, and recommends using LWLock. 
Why have you chosen SpinLock?\nDoes it have some benefits here?\n\nThank you!\n\nPS: process_query_desc function has trace_context argument that is neither\nused nor passed anywhere.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Fri, 9 Feb 2024 19:46:58 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nOn 2024-02-09 19:46:58 +0300, Nikita Malakhov wrote:\n> I agree with Heikki on most topics and especially the one he recommended\n> to publish your extension on GitHub, this tool is very promising for highly\n> loaded\n> systems, you will get a lot of feedback very soon.\n> \n> I'm curious about SpinLock - it is recommended for very short operations,\n> taking up to several instructions, and docs say that for longer ones it\n> will be\n> too expensive, and recommends using LWLock. Why have you chosen SpinLock?\n> Does it have some benefits here?\n\nIndeed - e.g. end_tracing() looks to hold the spinlock for far too long for\nspinlocks to be appropriate. 
Use an lwlock.\n\nRandom stuff I noticed while skimming:\n- pg_tracing.c should include postgres.h as the first thing\n\n- afaict none of the use of volatile is required, spinlocks have been barriers\n for a long time now\n\n- you acquire the spinlock for single increments of n_writers, perhaps that\n could become an atomic, to reduce contention?\n\n- I don't think it's a good idea to do memory allocations in the middle of a\n PG_CATCH. If the error was due to out-of-memory, you'll throw another error.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 10:50:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi all!\n\nThanks for the reviews and comments.\n\n> - pg_tracing.c should include postgres.h as the first thing\nWill do.\n\n> afaict none of the use of volatile is required, spinlocks have been barriers\n> for a long time now\nGot it, I will remove them. I've been mimicking what was done in\npg_stat_statements and all spinlocks are done on volatile variables\n[1].\n\n> - I don't think it's a good idea to do memory allocations in the middle of a\n> PG_CATCH. If the error was due to out-of-memory, you'll throw another error.\nGood point. I was wondering what were the risks of generating spans\nfor errors. I will try to find a better way to handle this.\n\nTo give a quick update: following Heikki's suggestion, I'm currently\nworking on getting pg_tracing as a separate extension on github. 
I\nwill send an update when it's ready.\n\n[1]: https://github.com/postgres/postgres/blob/master/contrib/pg_stat_statements/pg_stat_statements.c#L2000\n\nOn Fri, Feb 9, 2024 at 7:50 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2024-02-09 19:46:58 +0300, Nikita Malakhov wrote:\n> > I agree with Heikki on most topics and especially the one he recommended\n> > to publish your extension on GitHub, this tool is very promising for highly\n> > loaded\n> > systems, you will get a lot of feedback very soon.\n> >\n> > I'm curious about SpinLock - it is recommended for very short operations,\n> > taking up to several instructions, and docs say that for longer ones it\n> > will be\n> > too expensive, and recommends using LWLock. Why have you chosen SpinLock?\n> > Does it have some benefits here?\n>\n> Indeed - e.g. end_tracing() looks to hold the spinlock for far too long for\n> spinlocks to be appropriate. Use an lwlock.\n>\n> Random stuff I noticed while skimming:\n> - pg_tracing.c should include postgres.h as the first thing\n>\n> - afaict none of the use of volatile is required, spinlocks have been barriers\n> for a long time now\n>\n> - you acquire the spinlock for single increments of n_writers, perhaps that\n> could become an atomic, to reduce contention?\n>\n> - I don't think it's a good idea to do memory allocations in the middle of a\n> PG_CATCH. If the error was due to out-of-memory, you'll throw another error.\n>\n>\n> Greetings,\n>\n> Andres Freund\n\n\n", "msg_date": "Tue, 12 Mar 2024 11:45:20 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "On Tue, 12 Mar 2024, 23:45 Anthonin Bonnefoy, <\nanthonin.bonnefoy@datadoghq.com> wrote:\n\n>\n> > - I don't think it's a good idea to do memory allocations in the middle\n> of a\n> > PG_CATCH. 
If the error was due to out-of-memory, you'll throw another\n> error.\n> Good point. I was wondering what were the risks of generating spans\n> for errors. I will try to find a better way to handle this.\n>\n\nThe usual approach is to have pre-allocated memory. This must actually be\nwritten (zeroed usually) or it might be lazily allocated only on page\nfault. And it can't be copy on write memory allocated once in postmaster\nstartup then inherited by fork.\n\nThat imposes an overhead for every single postgres backend. So maybe\nthere's a better solution.", "msg_date": "Wed, 13 Mar 2024 17:28:55 +1300", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi Anthonin,\n\n> [...]\n>\n> The usual approach is to have pre-allocated memory. This must actually be written (zeroed usually) or it might be lazily allocated only on page fault. And it can't be copy on write memory allocated once in postmaster startup then inherited by fork.\n>\n> That imposes an overhead for every single postgres backend. So maybe there's a better solution.\n\nIt looks like the patch rotted a bit, see cfbot. 
Could you please\nsubmit an updated version?\n\nBest regards,\nAleksander Alekseev (wearing co-CFM hat)\n\n\n", "msg_date": "Fri, 15 Mar 2024 17:10:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" }, { "msg_contents": "Hi,\n\nI've rebased and updated the patch with some fixes:\n- Removed volatile\n- Switched from spinlock to LWLock and removed n_writer\n- Moved postgres.h as first header on all files\n- Removed possible allocations in PG_CATCH. Though I would probably need to\nrun some more thorough tests on this\n\nI should also have the GitHub repository version of the extension ready\nthis week.\n\nRegards,\nAnthonin\n\nOn Fri, Mar 15, 2024 at 3:10 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Anthonin,\n>\n> > [...]\n> >\n> > The usual approach is to have pre-allocated memory. This must actually\n> be written (zeroed usually) or it might be lazily allocated only on page\n> fault. And it can't be copy on write memory allocated once in postmaster\n> startup then inherited by fork.\n> >\n> > That imposes an overhead for every single postgres backend. So maybe\n> there's a better solution.\n>\n> It looks like the patch rotted a bit, see cfbot. Could you please\n> submit an updated version?\n>\n> Best regards,\n> Aleksander Alekseev (wearing co-CFM hat)\n>", "msg_date": "Mon, 18 Mar 2024 15:00:39 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: POC: Extension for adding distributed tracing - pg_tracing" } ]
[ { "msg_contents": "Hi,\n\nThe proposed patch is a small refactoring of inval.{c,h}:\n\n\"\"\"\n The \"public functions\" separator comment doesn't reflect reality anymore.\n We could rearrange the order of the functions. However, this would\ncomplicate\n back-porting of the patches, thus removing the comment instead.\n\n InvalidateSystemCachesExtended() is an internal function of inval.c. Make it\n static.\n\"\"\"\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 25 Jul 2023 16:24:34 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Small refactoring of inval.c and inval.h" }, { "msg_contents": "On 2023-Jul-25, Aleksander Alekseev wrote:\n\n> Hi,\n> \n> The proposed patch is a small refactoring of inval.{c,h}:\n> \n> \"\"\"\n> The \"public functions\" separator comment doesn't reflect reality anymore.\n> We could rearrange the order of the functions. However, this would\n> complicate\n> back-porting of the patches, thus removing the comment instead.\n> \n> InvalidateSystemCachesExtended() is an internal function of inval.c. Make it\n> static.\n> \"\"\"\n\nHmm, this *Extended function looks a bit funny, and I think it's because\nit's part of a backpatched bugfix that didn't want to modify ABI. 
If\nwe're modifying this code, maybe we should get rid of the shim, that is,\nmove the boolean argument to InvalidateSystemCaches() and get rid of the\n*Extended() version altogether.\n\nAs for the /*--- public functions ---*/ comment, that one was just not\nmoved by b89e151054a0, which should have done so; but even at that\npoint, it had already been somewhat broken by the addition of\nPrepareInvalidationState() (a static function) in 6cb4afff33ba below it.\nIf we really want to clean this up, we could move PrepareInvalidationState()\nto just below RegisterSnapshotInvalidation() and move the \"public\nfunctions\" header to just below it (so that it appears just before\nInvalidateSystemCaches).\n\nIf we just remove the \"public functions\" comment, then we're still\ninside a section that starts with the \"private support functions\"\ncomment, which seems even worse to me than the current situation.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n", "msg_date": "Tue, 25 Jul 2023 16:35:09 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small refactoring of inval.c and inval.h" }, { "msg_contents": "Hi Alvaro,\n\nThanks for your feedback.\n\n> Hmm, this *Extended function looks a bit funny, and I think it's because\n> it's part of a backpatched bugfix that didn't want to modify ABI. If\n> we're modifying this code, maybe we should get rid of the shim, that is,\n> move the boolean argument to InvalidateSystemCaches() and get rid of the\n> *Extended() version altogether.\n\nThis would affect quite a few places where InvalidateSystemCaches() is\ncalled, and we will end-up with some sort of wrapper anyway. 
The\nreason is that InvalidateSystemCaches() is used as a callback passed\nto ReceiveSharedInvalidMessages():\n\n```\n ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,\n InvalidateSystemCaches);\n```\n\nUnless of course we want to change its signature too. I don't think\nthis is going to be a good API change.\n\n> As for the /*--- public functions ---*/ comment, that one was just not\n> moved by b89e151054a0, which should have done so; but even at that\n> point, it had already been somewhat broken by the addition of\n> PrepareInvalidationState() (a static function) in 6cb4afff33ba below it.\n> If we really want to clean this up, we could move PrepareInvalidationState()\n> to just below RegisterSnapshotInvalidation() and move the \"public\n> functions\" header to just below it (so that it appears just before\n> InvalidateSystemCaches).\n>\n> If we just remove the \"public functions\" comment, then we're still\n> inside a section that starts with the \"private support functions\"\n> comment, which seems even worse to me than the current situation.\n\nFixed.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 25 Jul 2023 18:38:46 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small refactoring of inval.c and inval.h" }, { "msg_contents": "On Tue, Jul 25, 2023 at 06:38:46PM +0300, Aleksander Alekseev wrote:\n> Unless of course we want to change its signature too. I don't think\n> this is going to be a good API change.\n\n extern void InvalidateSystemCaches(void);\n-extern void InvalidateSystemCachesExtended(bool debug_discard);\n\nIndeed, that looks a bit strange, but is there a strong need in\nremoving it, as you are proposing? There is always a risk that this\ncould be called by some out-of-core code. 
This is one of these\nthings where we could break something just for the sake of breaking\nit, so there is no real benefit IMO.\n\n>> As for the /*--- public functions ---*/ comment, that one was just not\n>> moved by b89e151054a0, which should have done so; but even at that\n>> point, it had already been somewhat broken by the addition of\n>> PrepareInvalidationState() (a static function) in 6cb4afff33ba below it.\n>> If we really want to clean this up, we could move PrepareInvalidationState()\n>> to just below RegisterSnapshotInvalidation() and move the \"public\n>> functions\" header to just below it (so that it appears just before\n>> InvalidateSystemCaches).\n>>\n>> If we just remove the \"public functions\" comment, then we're still\n>> inside a section that starts with the \"private support functions\"\n>> comment, which seems even worse to me than the current situation.\n> \n> Fixed.\n\nThis part looks OK to me, so I'll see about applying this part of the\npatch.\n--\nMichael", "msg_date": "Mon, 6 Nov 2023 16:50:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small refactoring of inval.c and inval.h" }, { "msg_contents": "Hi,\n\n> extern void InvalidateSystemCaches(void);\n> -extern void InvalidateSystemCachesExtended(bool debug_discard);\n>\n> Indeed, that looks a bit strange, but is there a strong need in\n> removing it, as you are proposing? There is always a risk that this\n> could be called by some out-of-core code. 
This is one of these\n> things where we could break something just for the sake of breaking\n> it, so there is no real benefit IMO.\n\nFair enough, here is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 6 Nov 2023 13:17:12 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Small refactoring of inval.c and inval.h" }, { "msg_contents": "On Mon, Nov 06, 2023 at 01:17:12PM +0300, Aleksander Alekseev wrote:\n> Fair enough, here is the corrected patch.\n\nOkay for me, so applied. Thanks!\n--\nMichael", "msg_date": "Tue, 7 Nov 2023 11:57:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Small refactoring of inval.c and inval.h" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nrmtree function can leak 64 bytes per call,\nwhen it can't open a directory.\n\npatch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 25 Jul 2023 11:31:05 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid possible memory leak (src/common/rmtree.c)" }, { "msg_contents": "> On 25 Jul 2023, at 16:31, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> rmtree function can leak 64 bytes per call, \n> when it can't open a directory.\n\nSkimming the tree there doesn't seem to be any callers which aren't exiting or\nereporting on failure so the real-world impact seems low. That being said,\nsilencing static analyzers could be reason enough to delay allocation.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 25 Jul 2023 16:45:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Avoid possible memory leak (src/common/rmtree.c)" }, { "msg_contents": "On Tue, Jul 25, 2023 at 04:45:22PM +0200, Daniel Gustafsson wrote:\n> Skimming the tree there doesn't seem to be any callers which aren't exiting or\n> ereporting on failure so the real-world impact seems low. That being said,\n> silencing static analyzers could be reason enough to delay allocation.\n\nA different reason would be out-of-core code that uses rmtree() in a\nmemory context where the leak would be an issue if facing a failure\ncontinuously? 
Delaying the allocation after the OPENDIR() seems like\na good practice anyway.\n--\nMichael", "msg_date": "Sat, 29 Jul 2023 11:54:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid possible memory leak (src/common/rmtree.c)" }, { "msg_contents": "On Fri, Jul 28, 2023 at 11:54 PM, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Tue, Jul 25, 2023 at 04:45:22PM +0200, Daniel Gustafsson wrote:\n> > Skimming the tree there doesn't seem to be any callers which aren't\n> exiting or\n> > ereporting on failure so the real-world impact seems low. That being\n> said,\n> > silencing static analyzers could be reason enough to delay allocation.\n>\n> A different reason would be out-of-core code that uses rmtree() in a\n> memory context where the leak would be an issue if facing a failure\n> continuously? Delaying the allocation after the OPENDIR() seems like\n> a good practice anyway.\n>\nThanks for the commit, Michael.\n\nbest regards,\nRanier Vilela
", "msg_date": "Mon, 31 Jul 2023 20:10:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid possible memory leak (src/common/rmtree.c)" }, { "msg_contents": "On Mon, Jul 31, 2023 at 08:10:55PM -0300, Ranier Vilela wrote:\n> Thanks for the commit, Michael.\n\nSorry for the lack of update here. For the sake of the archives, this\nis f1e9f6b.\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 08:56:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid possible memory leak (src/common/rmtree.c)" } ]
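The fix the thread converged on (committed as f1e9f6b) can be sketched in plain C. This is a hypothetical, simplified stand-in for `src/common/rmtree.c`, not the real PostgreSQL code; the point is only the ordering: the scratch buffer is allocated after `opendir()` has succeeded, so the could-not-open error path allocates nothing and therefore cannot leak.

```c
#include <assert.h>
#include <dirent.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical, simplified sketch of rmtree() after the fix: open the
 * directory FIRST, and allocate scratch memory only once opendir() has
 * succeeded.  On the failure path nothing has been allocated yet.
 */
static bool
rmtree_sketch(const char *path)
{
    DIR        *dir = opendir(path);

    if (dir == NULL)
    {
        fprintf(stderr, "could not open directory \"%s\"\n", path);
        return false;           /* no allocation has happened: no leak */
    }

    /* allocation delayed until after opendir() succeeded */
    char       *fullpath = malloc(strlen(path) + 256); /* room for one entry name (simplified) */

    if (fullpath == NULL)
    {
        closedir(dir);
        return false;
    }

    /* ... a readdir() loop unlinking entries would go here ... */

    free(fullpath);
    closedir(dir);
    return true;
}
```

The earlier code allocated before calling `opendir()`, which is harmless for callers that exit on failure, but reordering like this silences the static-analyzer finding and is safe for long-lived callers.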
[ { "msg_contents": "grassquit has been failing to run the regression tests for the last\nfew days, since [1]:\n\n# +++ regress check in src/test/regress +++\n# using temp instance on port 6880 with PID 766305\nERROR: stack depth limit exceeded\nHINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\nERROR: stack depth limit exceeded\nHINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\n# command failed: \"psql\" -X -q -c \"CREATE DATABASE \\\\\"regression\\\\\" TEMPLATE=template0 LOCALE='C'\" -c \"ALTER DATABASE \\\\\"regression\\\\\" SET lc_messages TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_monetary TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_numeric TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_time TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET bytea_output TO 'hex';ALTER DATABASE \\\\\"regression\\\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n\nThis seems to be happening in all branches, so I doubt it\nhas anything to do with recent PG commits. 
Instead, I notice\nthat grassquit seems to have been updated to a newer Debian\nrelease: 6.1.0-6-amd64 works, 6.3.0-2-amd64 doesn't.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2023-07-23%2011%3A05%3A50\n\n\n", "msg_date": "Tue, 25 Jul 2023 20:10:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Buildfarm animal grassquit is failing" }, { "msg_contents": "Hi,\n\nOn 2023-07-25 20:10:06 -0400, Tom Lane wrote:\n> grassquit has been failing to run the regression tests for the last\n> few days, since [1]:\n> \n> # +++ regress check in src/test/regress +++\n> # using temp instance on port 6880 with PID 766305\n> ERROR: stack depth limit exceeded\n> HINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\n> ERROR: stack depth limit exceeded\n> HINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\n> # command failed: \"psql\" -X -q -c \"CREATE DATABASE \\\\\"regression\\\\\" TEMPLATE=template0 LOCALE='C'\" -c \"ALTER DATABASE \\\\\"regression\\\\\" SET lc_messages TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_monetary TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_numeric TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET lc_time TO 'C';ALTER DATABASE \\\\\"regression\\\\\" SET bytea_output TO 'hex';ALTER DATABASE \\\\\"regression\\\\\" SET timezone_abbreviations TO 'Default';\" \"postgres\"\n> \n> This seems to be happening in all branches, so I doubt it\n> has anything to do with recent PG commits. Instead, I notice\n> that grassquit seems to have been updated to a newer Debian\n> release: 6.1.0-6-amd64 works, 6.3.0-2-amd64 doesn't.\n\nIndeed, the automated updates had been stuck, and I fixed that a few days\nago. 
I missed grassquit's failures unfortunately.\n\nI think I know what the issue is - I have seen them locally. Newer versions of\ngcc and clang (or libasan?) default to enabling \"stack use after return\"\nchecks - for some reason that implies using a shadow stack *sometimes*. Which\ncan confuse our stack depth checking terribly, not so surprisingly.\n\nI've been meaning to look into what we could do to fix that, but I haven't\nfound the cycles...\n\nFor now I added :detect_stack_use_after_return=0 to ASAN_OPTIONS, which I\nthink should fix the issue.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Jul 2023 17:20:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Buildfarm animal grassquit is failing" } ]
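The stack-depth check that ASan's fake stacks confuse works by subtracting addresses of on-stack locals. Below is a minimal standalone sketch of that idea — not PostgreSQL's actual `check_stack_depth()`, which also enforces `max_stack_depth` and reports errors — to show why a heap-allocated "fake stack" (used by `detect_stack_use_after_return`) breaks it: once locals can live outside the contiguous stack region, the subtraction is meaningless.

```c
#include <assert.h>

/*
 * Minimal sketch of address-based stack depth estimation, in the spirit
 * of PostgreSQL's check_stack_depth(): record the address of a local
 * near process entry, then estimate depth as the distance to a local in
 * the current frame.  This assumes all frames live in one contiguous
 * stack region -- exactly the assumption that ASan's fake stack
 * violates, producing spurious "stack depth limit exceeded" errors.
 * (Formally, subtracting pointers into different objects is undefined;
 * the real implementation relies on the same platform behavior.)
 */
static char *stack_base_ptr;    /* set once, near the top of main() */

static long
stack_depth_estimate(void)
{
    char        top;            /* any local in the current frame */
    long        depth = (long) (stack_base_ptr - &top);

    /* stack growth direction is platform-dependent; normalize */
    return depth < 0 ? -depth : depth;
}
```

With `ASAN_OPTIONS` including `detect_stack_use_after_return=0`, locals stay on the real stack and the estimate behaves as intended, which is why that setting unbroke grassquit.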
[ { "msg_contents": "Hi,\n\nReorderBufferTupleBuf is defined as follow:\n\n/* an individual tuple, stored in one chunk of memory */\ntypedef struct ReorderBufferTupleBuf\n{\n /* position in preallocated list */\n slist_node node;\n\n /* tuple header, the interesting bit for users of logical decoding */\n HeapTupleData tuple;\n\n /* pre-allocated size of tuple buffer, different from tuple size */\n Size alloc_tuple_size;\n\n /* actual tuple data follows */\n} ReorderBufferTupleBuf;\n\nHowever, node and alloc_tuple_size are not used at all. It seems an\noversight in commit a4ccc1cef5a, which introduced the generation\ncontext and used it in logical decoding. I think we can remove them\n(only on HEAD). I've attached the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 26 Jul 2023 14:58:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\nThanks for the patch.\n\n> However, node and alloc_tuple_size are not used at all. It seems an\n> oversight in commit a4ccc1cef5a, which introduced the generation\n> context and used it in logical decoding. I think we can remove them\n> (only on HEAD). I've attached the patch.\n\nI'm afraid it's a bit incomplete:\n\n```\n../src/backend/replication/logical/reorderbuffer.c: In function\n‘ReorderBufferGetTupleBuf’:\n../src/backend/replication/logical/reorderbuffer.c:565:14: error:\n‘ReorderBufferTupleBuf’ has no member named ‘alloc_tuple_size’\n 565 | tuple->alloc_tuple_size = alloc_len;\n | ^~\n[829/1882] Compiling C object contrib/xml2/pgxml.so.p/xpath.c.o\n```\n\nOn top of that IMO it doesn't make much sense to keep a one-field\nwrapper structure. 
Perhaps we should get rid of it entirely and just\nuse HeapTupleData instead.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 26 Jul 2023 16:02:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> I'm afraid it's a bit incomplete:\n>\n> ```\n> ../src/backend/replication/logical/reorderbuffer.c: In function\n> ‘ReorderBufferGetTupleBuf’:\n> ../src/backend/replication/logical/reorderbuffer.c:565:14: error:\n> ‘ReorderBufferTupleBuf’ has no member named ‘alloc_tuple_size’\n> 565 | tuple->alloc_tuple_size = alloc_len;\n> | ^~\n> [829/1882] Compiling C object contrib/xml2/pgxml.so.p/xpath.c.o\n> ```\n\nHere is the corrected patch. I added it to the nearest CF [1].\n\n> On top of that IMO it doesn't make much sense to keep a one-field\n> wrapper structure. Perhaps we should get rid of it entirely and just\n> use HeapTupleData instead.\n>\n> Thoughts?\n\nAlternatively we could convert ReorderBufferTupleBufData macro to an\ninlined function. At least it will add some type safety.\n\nI didn't change anything in this respect in v2. Feedback from the\ncommunity would be much appreciated.\n\n[1]: https://commitfest.postgresql.org/44/4461/\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 26 Jul 2023 16:20:49 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> Here is the corrected patch. I added it to the nearest CF [1].\n\nI played a bit more with the patch. 
There was an idea to make\nReorderBufferTupleBufData an opaque structure known only within\nreorderbyffer.c but it turned out that replication/logical/decode.c\naccesses it directly so I abandoned that idea for now.\n\n> Alternatively we could convert ReorderBufferTupleBufData macro to an\n> inlined function. At least it will add some type safety.\n\nHere is v3 that implements it too as a separate patch.\n\nApologies for the noise.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 26 Jul 2023 16:52:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\" [1], but there has\nbeen no activity on this thread for 5+ months.\n\nDo you wish to keep this open, or can you post something to elicit\nmore interest in reviews for the latest patch set? Otherwise, if\nnothing happens then the CF entry will be closed (\"Returned with\nfeedback\") at the end of this CF.\n\n======\n[1] https://commitfest.postgresql.org/46/4461/\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:33:14 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> been no activity on this thread for 5+ months.\n>\n> Do you wish to keep this open, or can you post something to elicit\n> more interest in reviews for the latest patch set? Otherwise, if\n> nothing happens then the CF entry will be closed (\"Returned with\n> feedback\") at the end of this CF.\n\nI don't think that closing CF entries only due to inactivity is a good\npractice, nor something we typically do. 
When someone will have spare\ntime this person will (hopefully) review the code.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:47:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Wed, Jul 26, 2023 at 7:22 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > Here is the corrected patch. I added it to the nearest CF [1].\n>\n> I played a bit more with the patch. There was an idea to make\n> ReorderBufferTupleBufData an opaque structure known only within\n> reorderbyffer.c but it turned out that replication/logical/decode.c\n> accesses it directly so I abandoned that idea for now.\n>\n> > Alternatively we could convert ReorderBufferTupleBufData macro to an\n> > inlined function. At least it will add some type safety.\n>\n> Here is v3 that implements it too as a separate patch.\n>\n\nBut why didn't you pursue your idea of getting rid of the wrapper\nstructure ReorderBufferTupleBuf which after this patch will have just\none member? I think there could be hassles in backpatching bug-fixes\nin some cases but in the longer run it would make the code look clean.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:32:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Mon, Jan 22, 2024 at 4:17 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> > been no activity on this thread for 5+ months.\n> >\n> > Do you wish to keep this open, or can you post something to elicit\n> > more interest in reviews for the latest patch set? 
Otherwise, if\n> > nothing happens then the CF entry will be closed (\"Returned with\n> > feedback\") at the end of this CF.\n>\n> I don't think that closing CF entries only due to inactivity is a good\n> practice, nor something we typically do. When someone will have spare\n> time this person will (hopefully) review the code.\n\nAgree. IMHO, a patch not picked up by anyone doesn't mean it's the\nauthor's problem to mark it \"Returned with feedback\". And, the timing\nto mark things this way is not quite right as we are getting close to\nthe PG17 release. The patch may be fixing a problem (like the one\nthat's closed due to inactivity\nhttps://commitfest.postgresql.org/46/3503/) or may be a feature or an\nimprovement that no reviewer/committer has had a chance to look at.\n\nI think a new label such as \"Returned due to lack of interest\" or\n\"Returned due to disinterest\" (or some better wording) helps\nreviewers/committers to pick things up. This new label can be\nattributed to the class of patches for which initially there's some\npositive feedback on the idea, the patch is being kept latest, but\nfinds no one to get it reviewed for say X number of months.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:53:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> > I played a bit more with the patch. There was an idea to make\n> > ReorderBufferTupleBufData an opaque structure known only within\n> > reorderbyffer.c but it turned out that replication/logical/decode.c\n> > accesses it directly so I abandoned that idea for now.\n> >\n> > > Alternatively we could convert ReorderBufferTupleBufData macro to an\n> > > inlined function. 
At least it will add some type safety.\n> >\n> > Here is v3 that implements it too as a separate patch.\n> >\n>\n> But why didn't you pursue your idea of getting rid of the wrapper\n> structure ReorderBufferTupleBuf which after this patch will have just\n> one member? I think there could be hassles in backpatching bug-fixes\n> in some cases but in the longer run it would make the code look clean.\n\nIndeed. In fact turned out that I suggested the same above but\napparently forgot:\n\n> On top of that IMO it doesn't make much sense to keep a one-field\n> wrapper structure. Perhaps we should get rid of it entirely and just\n> use HeapTupleData instead.\n\nAfter actually trying the refactoring I agree that the code becomes\ncleaner and it's going to be beneficial in the long run. Here is the\npatch.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 22 Jan 2024 17:02:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> > But why didn't you pursue your idea of getting rid of the wrapper\n> > structure ReorderBufferTupleBuf which after this patch will have just\n> > one member? I think there could be hassles in backpatching bug-fixes\n> > in some cases but in the longer run it would make the code look clean.\n>\n> Indeed. In fact turned out that I suggested the same above but\n> apparently forgot:\n>\n> > On top of that IMO it doesn't make much sense to keep a one-field\n> > wrapper structure. Perhaps we should get rid of it entirely and just\n> > use HeapTupleData instead.\n>\n> After actually trying the refactoring I agree that the code becomes\n> cleaner and it's going to be beneficial in the long run. 
Here is the\n> patch.\n\nI did a mistake in v4:\n\n```\n- alloc_len = tuple_len + SizeofHeapTupleHeader;\n+ alloc_len = tuple_len + HEAPTUPLESIZE;\n```\n\nHere is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 22 Jan 2024 17:18:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Sorry for being absent for a while.\n\nOn Mon, Jan 22, 2024 at 11:19 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > > But why didn't you pursue your idea of getting rid of the wrapper\n> > > structure ReorderBufferTupleBuf which after this patch will have just\n> > > one member? I think there could be hassles in backpatching bug-fixes\n> > > in some cases but in the longer run it would make the code look clean.\n\n+1\n\n> >\n> > Indeed. In fact turned out that I suggested the same above but\n> > apparently forgot:\n> >\n> > > On top of that IMO it doesn't make much sense to keep a one-field\n> > > wrapper structure. Perhaps we should get rid of it entirely and just\n> > > use HeapTupleData instead.\n> >\n> > After actually trying the refactoring I agree that the code becomes\n> > cleaner and it's going to be beneficial in the long run. Here is the\n> > patch.\n>\n> I did a mistake in v4:\n>\n> ```\n> - alloc_len = tuple_len + SizeofHeapTupleHeader;\n> + alloc_len = tuple_len + HEAPTUPLESIZE;\n> ```\n>\n> Here is the corrected patch.\n\nThank you for updating the patch! I have some comments:\n\n- tuple = (ReorderBufferTupleBuf *)\n+ tuple = (HeapTuple)\n MemoryContextAlloc(rb->tup_context,\n-\nsizeof(ReorderBufferTupleBuf) +\n+ HEAPTUPLESIZE +\n MAXIMUM_ALIGNOF +\nalloc_len);\n- tuple->alloc_tuple_size = alloc_len;\n- tuple->tuple.t_data = ReorderBufferTupleBufData(tuple);\n+ tuple->t_data = (HeapTupleHeader)((char *)tuple + HEAPTUPLESIZE);\n\nWhy do we need to add MAXIMUM_ALIGNOF? 
since HEAPTUPLESIZE is the\nMAXALIGN'd size, it seems we don't need it. heap_form_tuple() does a\nsimilar thing but it doesn't add it.\n\n---\n xl_parameter_change *xlrec =\n- (xl_parameter_change *)\nXLogRecGetData(buf->record);\n+ (xl_parameter_change *)\nXLogRecGetData(buf->record);\n\nUnnecessary change.\n\n---\n-/*\n- * Free a ReorderBufferTupleBuf.\n- */\n-void\n-ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)\n-{\n- pfree(tuple);\n-}\n-\n\nWhy does ReorderBufferReturnTupleBuf need to be moved from\nreorderbuffer.c to reorderbuffer.h? It seems not related to this\nrefactoring patch so I think we should do it in a separate patch if we\nreally want it. I'm not sure it's necessary, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 17:36:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "Hi,\n\n> > Here is the corrected patch.\n>\n> Thank you for updating the patch! I have some comments:\n\nThanks for the review.\n\n> - tuple = (ReorderBufferTupleBuf *)\n> + tuple = (HeapTuple)\n> MemoryContextAlloc(rb->tup_context,\n> -\n> sizeof(ReorderBufferTupleBuf) +\n> + HEAPTUPLESIZE +\n> MAXIMUM_ALIGNOF +\n> alloc_len);\n> - tuple->alloc_tuple_size = alloc_len;\n> - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple);\n> + tuple->t_data = (HeapTupleHeader)((char *)tuple + HEAPTUPLESIZE);\n>\n> Why do we need to add MAXIMUM_ALIGNOF? since HEAPTUPLESIZE is the\n> MAXALIGN'd size, it seems we don't need it. heap_form_tuple() does a\n> similar thing but it doesn't add it.\n\nIndeed. 
I gave it a try and nothing crashed, so it appears that\nMAXIMUM_ALIGNOF is not needed.\n\n> ---\n> xl_parameter_change *xlrec =\n> - (xl_parameter_change *)\n> XLogRecGetData(buf->record);\n> + (xl_parameter_change *)\n> XLogRecGetData(buf->record);\n>\n> Unnecessary change.\n\nThat's pgindent. Fixed.\n\n> ---\n> -/*\n> - * Free a ReorderBufferTupleBuf.\n> - */\n> -void\n> -ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)\n> -{\n> - pfree(tuple);\n> -}\n> -\n>\n> Why does ReorderBufferReturnTupleBuf need to be moved from\n> reorderbuffer.c to reorderbuffer.h? It seems not related to this\n> refactoring patch so I think we should do it in a separate patch if we\n> really want it. I'm not sure it's necessary, though.\n\nOK, fixed.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 25 Jan 2024 16:16:55 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Thu, 2024-01-25 at 16:16 +0300, Aleksander Alekseev wrote:\n> Hi,\n> \n> > > Here is the corrected patch.\n> > \n> > Thank you for updating the patch! I have some comments:\n> \n> Thanks for the review.\n> \n> > - tuple = (ReorderBufferTupleBuf *)\n> > + tuple = (HeapTuple)\n> > MemoryContextAlloc(rb->tup_context,\n> > -\n> > sizeof(ReorderBufferTupleBuf) +\n> > + HEAPTUPLESIZE +\n> > MAXIMUM_ALIGNOF +\n> > alloc_len);\n> > - tuple->alloc_tuple_size = alloc_len;\n> > - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple);\n> > + tuple->t_data = (HeapTupleHeader)((char *)tuple + HEAPTUPLESIZE);\n> > \n> > Why do we need to add MAXIMUM_ALIGNOF? since HEAPTUPLESIZE is the\n> > MAXALIGN'd size, it seems we don't need it. heap_form_tuple() does a\n> > similar thing but it doesn't add it.\n> \n> Indeed. 
I gave it a try and nothing crashed, so it appears that\n> MAXIMUM_ALIGNOF is not needed.\n> \n> > ---\n> > xl_parameter_change *xlrec =\n> > - (xl_parameter_change *)\n> > XLogRecGetData(buf->record);\n> > + (xl_parameter_change *)\n> > XLogRecGetData(buf->record);\n> > \n> > Unnecessary change.\n> \n> That's pgindent. Fixed.\n> \n> > ---\n> > -/*\n> > - * Free a ReorderBufferTupleBuf.\n> > - */\n> > -void\n> > -ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)\n> > -{\n> > - pfree(tuple);\n> > -}\n> > -\n> > \n> > Why does ReorderBufferReturnTupleBuf need to be moved from\n> > reorderbuffer.c to reorderbuffer.h? It seems not related to this\n> > refactoring patch so I think we should do it in a separate patch if we\n> > really want it. I'm not sure it's necessary, though.\n> \n> OK, fixed.\n> \n\nI walked through v6 and didn't note any issues.\n\nI do want to ask, the patch alters ReorderBufferReturnTupleBuf() and\ndrops the unused parameter ReorderBuffer *rb. It seems that\nReorderBufferFreeSnap(), ReorderBufferReturnRelids(),\nReorderBufferImmediateInvalidation(), and ReorderBufferRestoreCleanup()\nalso pass ReorderBuffer *rb but do not use it. Would it be beneficial\nto implement a separate patch to remove this parameter from these\nfunctions also?\n\nThanks,\nReid\n\n\n\n", "msg_date": "Thu, 25 Jan 2024 16:11:17 -0500", "msg_from": "reid.thompson@crunchydata.com", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Thu, 2024-01-25 at 16:11 -0500, reid.thompson@crunchydata.com wrote:\n> \n> I walked through v6 and didn't note any issues.\n> \n> I do want to ask, the patch alters ReorderBufferReturnTupleBuf() and\n> drops the unused parameter ReorderBuffer *rb. It seems that\n> ReorderBufferFreeSnap(), ReorderBufferReturnRelids(),\n> ReorderBufferImmediateInvalidation(), and ReorderBufferRestoreCleanup()\n> also pass ReorderBuffer *rb but do not use it. 
Would it be beneficial\n> to implement a separate patch to remove this parameter from these\n> functions also?\n> \n> Thanks,\n> Reid\n> \n\nIt also appears that ReorderBufferSerializeChange() has 5 instances\nwhere it increments the local variables \"data\" but then they're never\nread.\nLines 3806, 3836, 3854, 3889, 3910\n\nI can create patch and post it to this thread or a new one if deemed\nworthwhile.\n\nThanks,\nReid\n\n\n", "msg_date": "Thu, 25 Jan 2024 23:23:51 -0500", "msg_from": "reid.thompson@crunchydata.com", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Fri, Jan 26, 2024 at 1:24 PM <reid.thompson@crunchydata.com> wrote:\n>\n> On Thu, 2024-01-25 at 16:11 -0500, reid.thompson@crunchydata.com wrote:\n> >\n> > I walked through v6 and didn't note any issues.\n\nThank you for reviewing the patch!\n\n> >\n> > I do want to ask, the patch alters ReorderBufferReturnTupleBuf() and\n> > drops the unused parameter ReorderBuffer *rb. It seems that\n> > ReorderBufferFreeSnap(), ReorderBufferReturnRelids(),\n> > ReorderBufferImmediateInvalidation(), and ReorderBufferRestoreCleanup()\n> > also pass ReorderBuffer *rb but do not use it. Would it be beneficial\n> > to implement a separate patch to remove this parameter from these\n> > functions also?\n> >\n> >\n>\n> It also appears that ReorderBufferSerializeChange() has 5 instances\n> where it increments the local variables \"data\" but then they're never\n> read.\n> Lines 3806, 3836, 3854, 3889, 3910\n>\n> I can create patch and post it to this thread or a new one if deemed\n> worthwhile.\n\nI'm not sure these changes are really beneficial. 
They contribute to\nimproving neither readability and performance IMO.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jan 2024 13:51:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Thu, Jan 25, 2024 at 10:17 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > > Here is the corrected patch.\n> >\n> > Thank you for updating the patch! I have some comments:\n>\n> Thanks for the review.\n>\n> > - tuple = (ReorderBufferTupleBuf *)\n> > + tuple = (HeapTuple)\n> > MemoryContextAlloc(rb->tup_context,\n> > -\n> > sizeof(ReorderBufferTupleBuf) +\n> > + HEAPTUPLESIZE +\n> > MAXIMUM_ALIGNOF +\n> > alloc_len);\n> > - tuple->alloc_tuple_size = alloc_len;\n> > - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple);\n> > + tuple->t_data = (HeapTupleHeader)((char *)tuple + HEAPTUPLESIZE);\n> >\n> > Why do we need to add MAXIMUM_ALIGNOF? since HEAPTUPLESIZE is the\n> > MAXALIGN'd size, it seems we don't need it. heap_form_tuple() does a\n> > similar thing but it doesn't add it.\n>\n> Indeed. I gave it a try and nothing crashed, so it appears that\n> MAXIMUM_ALIGNOF is not needed.\n>\n> > ---\n> > xl_parameter_change *xlrec =\n> > - (xl_parameter_change *)\n> > XLogRecGetData(buf->record);\n> > + (xl_parameter_change *)\n> > XLogRecGetData(buf->record);\n> >\n> > Unnecessary change.\n>\n> That's pgindent. Fixed.\n>\n> > ---\n> > -/*\n> > - * Free a ReorderBufferTupleBuf.\n> > - */\n> > -void\n> > -ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)\n> > -{\n> > - pfree(tuple);\n> > -}\n> > -\n> >\n> > Why does ReorderBufferReturnTupleBuf need to be moved from\n> > reorderbuffer.c to reorderbuffer.h? It seems not related to this\n> > refactoring patch so I think we should do it in a separate patch if we\n> > really want it. 
I'm not sure it's necessary, though.\n>\n> OK, fixed.\n\nThank you for updating the patch. It looks good to me. I'm going to\npush it next week, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jan 2024 16:04:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Fri, 2024-01-26 at 13:51 +0900, Masahiko Sawada wrote:\n> On Fri, Jan 26, 2024 at 1:24 PM <reid.thompson@crunchydata.com> wrote:\n> > \n> > On Thu, 2024-01-25 at 16:11 -0500, reid.thompson@crunchydata.com wrote:\n> > > \n> > > I walked through v6 and didn't note any issues.\n> \n> Thank you for reviewing the patch!\n> \nHappy to.\n> \n> I'm not sure these changes are really beneficial. They contribute to\n> improving neither readability and performance IMO.\n> \n> Regards,\n> \nI thought that may be the case, but wanted to ask to be sure. Thank you\nfor the followup.\n\nReid\n\n\n", "msg_date": "Fri, 26 Jan 2024 09:54:12 -0500", "msg_from": "reid.thompson@crunchydata.com", "msg_from_op": false, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" }, { "msg_contents": "On Fri, Jan 26, 2024 at 4:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jan 25, 2024 at 10:17 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi,\n> >\n> > > > Here is the corrected patch.\n> > >\n> > > Thank you for updating the patch! 
I have some comments:\n> >\n> > Thanks for the review.\n> >\n> > > - tuple = (ReorderBufferTupleBuf *)\n> > > + tuple = (HeapTuple)\n> > > MemoryContextAlloc(rb->tup_context,\n> > > -\n> > > sizeof(ReorderBufferTupleBuf) +\n> > > + HEAPTUPLESIZE +\n> > > MAXIMUM_ALIGNOF +\n> > > alloc_len);\n> > > - tuple->alloc_tuple_size = alloc_len;\n> > > - tuple->tuple.t_data = ReorderBufferTupleBufData(tuple);\n> > > + tuple->t_data = (HeapTupleHeader)((char *)tuple + HEAPTUPLESIZE);\n> > >\n> > > Why do we need to add MAXIMUM_ALIGNOF? since HEAPTUPLESIZE is the\n> > > MAXALIGN'd size, it seems we don't need it. heap_form_tuple() does a\n> > > similar thing but it doesn't add it.\n> >\n> > Indeed. I gave it a try and nothing crashed, so it appears that\n> > MAXIMUM_ALIGNOF is not needed.\n> >\n> > > ---\n> > > xl_parameter_change *xlrec =\n> > > - (xl_parameter_change *)\n> > > XLogRecGetData(buf->record);\n> > > + (xl_parameter_change *)\n> > > XLogRecGetData(buf->record);\n> > >\n> > > Unnecessary change.\n> >\n> > That's pgindent. Fixed.\n> >\n> > > ---\n> > > -/*\n> > > - * Free a ReorderBufferTupleBuf.\n> > > - */\n> > > -void\n> > > -ReorderBufferReturnTupleBuf(ReorderBuffer *rb, ReorderBufferTupleBuf *tuple)\n> > > -{\n> > > - pfree(tuple);\n> > > -}\n> > > -\n> > >\n> > > Why does ReorderBufferReturnTupleBuf need to be moved from\n> > > reorderbuffer.c to reorderbuffer.h? It seems not related to this\n> > > refactoring patch so I think we should do it in a separate patch if we\n> > > really want it. I'm not sure it's necessary, though.\n> >\n> > OK, fixed.\n>\n> Thank you for updating the patch. It looks good to me. 
I'm going to\n> push it next week, barring any objections.\n\nPushed (08e6344fd642).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 11:16:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unused fields in ReorderBufferTupleBuf" } ]
[ { "msg_contents": "Hi hackers,\n\nI have a question about `use_physical_tlist()` which is applied in `create_scan_plan()`:\n```\nif (flags == CP_IGNORE_TLIST)\n{\n tlist = NULL;\n}\nelse if (use_physical_tlist(root, best_path, flags))\n{\n if (best_path->pathtype == T_IndexOnlyScan)\n {\n /* For index-only scan, the preferred tlist is the index's */\n tlist = copyObject(((IndexPath *) best_path)->indexinfo->indextlist);\n\n /*\n * Transfer sortgroupref data to the replacement tlist, if\n * requested (use_physical_tlist checked that this will work).\n */\n if (flags & CP_LABEL_TLIST)\n apply_pathtarget_labeling_to_tlist(tlist, best_path->pathtarget);\n }\n else\n {\ntlist = build_physical_tlist(root, rel);\n……\n```\nAnd the comment above the code block says:\n\n```\n/*\n* For table scans, rather than using the relation targetlist (which is\n* only those Vars actually needed by the query), we prefer to generate a\n* tlist containing all Vars in order. This will allow the executor to\n* optimize away projection of the table tuples, if possible.\n*\n* But if the caller is going to ignore our tlist anyway, then don't\n* bother generating one at all. We use an exact equality test here, so\n* that this only applies when CP_IGNORE_TLIST is the only flag set.\n*/\n```\n\nBut for some column-oriented database based on Postgres, it may help a lot in case of projection of the table tuples in execution? And is there any other optimization considerations behind this design?\n\ne.g. 
If we have such table definition and a query:\n\n```\nCREATE TABLE partsupp\n(PS_PARTKEY INT,\nPS_SUPPKEY INT,\nPS_AVAILQTY INTEGER,\nPS_SUPPLYCOST DECIMAL(15,2),\nPS_COMMENT VARCHAR(199),\ndummy text);\n\nexplain analyze verbose select sum(ps_supplycost * ps_availqty) from partsupp;\n```\n\nAnd the planner would give such plan:\n\n```\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\nAggregate (cost=12.80..12.81 rows=1 width=32) (actual time=0.013..0.015 rows=1 loops=1)\n Output: sum((ps_supplycost * (ps_availqty)::numeric))\n -> Seq Scan on public.partsupp (cost=0.00..11.60 rows=160 width=22) (actual time=0.005..0.005 rows=0 loops=1)\n Output: ps_partkey, ps_suppkey, ps_availqty, ps_supplycost, ps_comment, dummy\nPlanning Time: 0.408 ms\nExecution Time: 0.058 ms\n(6 rows)\n```\nIt looks the columns besides `ps_supplycost` and `ps_availqty` are not necessary, but fetched from tuples all at once. For the row-based storage such as heap, it looks fine, but for column-based storage, it would result into unnecessary overhead and impact performance. 
Is there any plan to optimize here?\n\nThanks.", "msg_date": "Wed, 26 Jul 2023 06:40:28 +0000", "msg_from": "Jian Guo <gjian@vmware.com>", "msg_from_op": true, "msg_subject": "Question about use_physical_tlist() which is applied on Scan path" }, { "msg_contents": "On 2023-Jul-26, Jian Guo wrote:\n\n> It looks the columns besides `ps_supplycost` and `ps_availqty` are not\n> necessary, but fetched from tuples all at once. For the row-based\n> storage such as heap, it looks fine, but for column-based storage, it\n> would result into unnecessary overhead and impact performance. Is\n> there any plan to optimize here?\n\nI suppose that, at some point, it is going to have to be the table AM\nthe one that makes the decision. 
That is, use_physical_tlist would have\nto involve some new flag in path->parent->amflags to determine whether\nto skip using a physical tlist. Right now, we don't have any columnar\nstores, so there's no way to verify an implementation. If you do have a\ncolumnar store implementation, you're welcome to share it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n", "msg_date": "Wed, 26 Jul 2023 12:16:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Question about use_physical_tlist() which is applied on Scan path" }, { "msg_contents": "I had the same question recently. In addition, I looked at the results of\nTPC-H at scale factor 1, run on postgres REL_15_STABLE, and observed no\nperformance improvement from physical tlist. To be specific, I ran two\nversions of TPC-H, one with physical tlist enabled and one with physical\ntlist disabled. The performance improvement of some queries in the former\nwas less than 5%, and some queries performed worse than in the latter; I\nthink this is a normal range of performance fluctuation that was not caused\nby physical tlist. I have read the relevant commits, but, maybe because of\nmy carelessness, I did not find the test queries corresponding to physical\ntlist. Are there any test queries for physical tlist?\nThanks\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote on Wed, Jul 26, 2023 at 18:16:\n\n> On 2023-Jul-26, Jian Guo wrote:\n>\n> > It looks the columns besides `ps_supplycost` and `ps_availqty` are not\n> > necessary, but fetched from tuples all at once. For the row-based\n> > storage such as heap, it looks fine, but for column-based storage, it\n> > would result into unnecessary overhead and impact performance. 
Is\n> > there any plan to optimize here?\n>\n> I suppose that, at some point, it is going to have to be the table AM\n> the one that makes the decision. That is, use_physical_tlist would have\n> to involve some new flag in path->parent->amflags to determine whether\n> to skip using a physical tlist. Right now, we don't have any columnar\n> stores, so there's no way to verify an implementation. If you do have a\n> columnar store implementation, you're welcome to share it.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer\n> \"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\n> lack of hesitasion in answering a lost soul's question, I just wished the\n> rest\n> of the mailing list could be like this.\"\n> (Fotis)\n> (\n> http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n>\n>\n>\n\nI had the same question recently. In addition, I looked at the results of tpch which scale factor is 1 ran on postgres REL_15_STABLE and observed no performance improvement from physical tlist. To be specific, I run two versions of tpch, one with physical tlist enabled and one with physical tlist disabled. The performance improvement of some queries in the former was less than 5%, some queries performed worse than latter, and I think this is a normal range of performance fluctuations which was not caused by physical tlist. I have read the relevant commits, maybe because of my carelessness, I did not find the test queries corresponding to physical tlist. Are there any test queries of physical tlist?ThanksAlvaro Herrera <alvherre@alvh.no-ip.org> 于2023年7月26日周三 18:16写道:On 2023-Jul-26, Jian Guo wrote:\n\n> It looks the columns besides `ps_supplycost` and `ps_availqty` are not\n> necessary, but fetched from tuples all at once. For the row-based\n> storage such as heap, it looks fine, but for column-based storage, it\n> would result into unnecessary overhead and impact performance. 
Is\n> there any plan to optimize here?\n\nI suppose that, at some point, it is going to have to be the table AM\nthe one that makes the decision.  That is, use_physical_tlist would have\nto involve some new flag in path->parent->amflags to determine whether\nto skip using a physical tlist.  Right now, we don't have any columnar\nstores, so there's no way to verify an implementation.  If you do have a\ncolumnar store implementation, you're welcome to share it.\n\n-- \nÁlvaro Herrera                                         PostgreSQL Developer\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\"                               (Fotis)\n               (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)", "msg_date": "Wed, 26 Jul 2023 22:25:48 +0800", "msg_from": "=?UTF-8?B?5rK55bGL?= <yw987194828@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question about use_physical_tlist() which is applied on Scan path" } ]
[ { "msg_contents": "Hi Hackers: I found this page: https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints . PostgreSQL has implemented no incremental checkpoints so far. When a checkpoint is triggered, the performance jitter of PostgreSQL is very noticeable. I think incremental checkpoints should be implemented as soon as possible.\r\n\r\nBest wishes,\r\n\r\nThomas wen", "msg_date": "Wed, 26 Jul 2023 07:21:57 +0000", "msg_from": "Thomas wen <Thomas_valentine_365@outlook.com>", "msg_from_op": true, "msg_subject": "incremental-checkopints" }, { "msg_contents": "On 7/26/23 09:21, Thomas wen wrote:\n> Hi Hackes:  I found this page :\n> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL <https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL> no incremental checkpoints have been implemented so far. When a checkpoint is triggered, the performance jitter of PostgreSQL is very noticeable. I think incremental checkpoints should be implemented as soon as possible\n> \n\nWell, that thread is 12 years old, and no one followed up on that proposal.\nSo it seems people have different priorities, working on other stuff\nthat they consider is more valuable ...\n\nYou can either work on this yourself and write a patch, or try to\nconvince others it's worth working on. 
But you didn't provide any\ninformation that'd demonstrate the jitter and that incremental\ncheckpoints would improve that.\n\n\nFor the record, the thread in our archives is:\n\nhttps://www.postgresql.org/message-id/8a867f1ffea72091bf3cd6a49ba68a97.squirrel%40mail.go-link.net\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 26 Jul 2023 14:35:48 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "Hello\n\nOn 2023-Jul-26, Thomas wen wrote:\n\n> Hi Hackes: I found this page :\n> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n> no incremental checkpoints have been implemented so far. When a\n> checkpoint is triggered, the performance jitter of PostgreSQL is very\n> noticeable. I think incremental checkpoints should be implemented as\n> soon as possible\n\nI think my first question is why do you think that is necessary; there\nare probably other tools to achieve better performance. For example,\nyou may want to try making checkpoint_completion_target closer to 1, and\nthe checkpoint interval longer (both checkpoint_timeout and\nmax_wal_size). Also, changing shared_buffers may improve things. You\ncan try adding more RAM to the machine.\n\nTuning the overall performance of a Postgres server is still black magic\nto some extent, but there are a few well-known things to play with,\nwithout having to write any patches.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. 
I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n", "msg_date": "Wed, 26 Jul 2023 14:41:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "On Wed, 26 Jul 2023 at 14:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello\n>\n> On 2023-Jul-26, Thomas wen wrote:\n>\n> > Hi Hackes: I found this page :\n> > https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n> > no incremental checkpoints have been implemented so far. When a\n> > checkpoint is triggered, the performance jitter of PostgreSQL is very\n> > noticeable. I think incremental checkpoints should be implemented as\n> > soon as possible\n>\n> I think my first question is why do you think that is necessary; there\n> are probably other tools to achieve better performance. For example,\n> you may want to try making checkpoint_completion_target closer to 1, and\n> the checkpoint interval longer (both checkpoint_timeout and\n> max_wal_size). Also, changing shared_buffers may improve things. 
You\n> can try adding more RAM to the machine.\n\nEven with all those tuning options, a significant portion of a\ncheckpoint's IO (up to 50%) originates from FPIs in the WAL, which (in\ngeneral) will most often appear at the start of each checkpoint due to\neach first update to a page after a checkpoint needing an FPI.\nIf instead we WAL-logged only the pages we are about to write to disk\n(like MySQL's double-write buffer, but in WAL instead of a separate\ncyclical buffer file), then a checkpoint_completion_target close to 1\nwould probably solve the issue, but with \"WAL-logged torn page\nprotection at first update after checkpoint\" we'll probably always\nhave higher-than-average FPI load just after a new checkpoint.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Wed, 26 Jul 2023 15:16:02 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "\n\nOn 7/26/23 15:16, Matthias van de Meent wrote:\n> On Wed, 26 Jul 2023 at 14:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> Hello\n>>\n>> On 2023-Jul-26, Thomas wen wrote:\n>>\n>>> Hi Hackes: I found this page :\n>>> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n>>> no incremental checkpoints have been implemented so far. When a\n>>> checkpoint is triggered, the performance jitter of PostgreSQL is very\n>>> noticeable. I think incremental checkpoints should be implemented as\n>>> soon as possible\n>>\n>> I think my first question is why do you think that is necessary; there\n>> are probably other tools to achieve better performance. For example,\n>> you may want to try making checkpoint_completion_target closer to 1, and\n>> the checkpoint interval longer (both checkpoint_timeout and\n>> max_wal_size). Also, changing shared_buffers may improve things. 
You\n>> can try adding more RAM to the machine.\n> \n> Even with all those tuning options, a significant portion of a\n> checkpoint's IO (up to 50%) originates from FPIs in the WAL, which (in\n> general) will most often appear at the start of each checkpoint due to\n> each first update to a page after a checkpoint needing an FPI.\n\nYeah, FPIs are certainly expensive and can represent huge part of the\nWAL produced. But how would incremental checkpoints make that step\nunnecessary?\n\n> If instead we WAL-logged only the pages we are about to write to disk\n> (like MySQL's double-write buffer, but in WAL instead of a separate\n> cyclical buffer file), then a checkpoint_completion_target close to 1\n> would probably solve the issue, but with \"WAL-logged torn page\n> protection at first update after checkpoint\" we'll probably always\n> have higher-than-average FPI load just after a new checkpoint.\n> \n\nSo essentially instead of WAL-logging the FPI on the first change, we'd\nonly do that later when actually writing-out the page (either during a\ncheckpoint or because of memory pressure)? How would you make sure\nthere's enough WAL space until the next checkpoint? I mean, FPIs are a\nhuge write amplification source ...\n\nImagine the system has max_wal_size set to 1GB, and does 1M updates\nbefore writing 512MB of WAL and thus triggering a checkpoint. Now it\nneeds to write FPIs for 1M updates - easily 8GB of WAL, maybe more with\nindexes. 
What then?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 26 Jul 2023 20:58:29 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "Starting from increments checkpoint is approaching the problem from\nthe wrong end.\n\nWhat you actually want is Atomic Disk Writes which will allow turning\noff full_page_writes .\n\nWithout this you really can not do incremental checkpoints efficiently\nas checkpoints are currently what is used to determine when is \"the\nfirst write to a page after checkpoint\" and thereby when the full page\nwrite is needed.\n\n\n\n\nOn Wed, Jul 26, 2023 at 8:58 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 7/26/23 15:16, Matthias van de Meent wrote:\n> > On Wed, 26 Jul 2023 at 14:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> Hello\n> >>\n> >> On 2023-Jul-26, Thomas wen wrote:\n> >>\n> >>> Hi Hackes: I found this page :\n> >>> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n> >>> no incremental checkpoints have been implemented so far. When a\n> >>> checkpoint is triggered, the performance jitter of PostgreSQL is very\n> >>> noticeable. I think incremental checkpoints should be implemented as\n> >>> soon as possible\n> >>\n> >> I think my first question is why do you think that is necessary; there\n> >> are probably other tools to achieve better performance. For example,\n> >> you may want to try making checkpoint_completion_target closer to 1, and\n> >> the checkpoint interval longer (both checkpoint_timeout and\n> >> max_wal_size). Also, changing shared_buffers may improve things. 
You\n> >> can try adding more RAM to the machine.\n> >\n> > Even with all those tuning options, a significant portion of a\n> > checkpoint's IO (up to 50%) originates from FPIs in the WAL, which (in\n> > general) will most often appear at the start of each checkpoint due to\n> > each first update to a page after a checkpoint needing an FPI.\n>\n> Yeah, FPIs are certainly expensive and can represent huge part of the\n> WAL produced. But how would incremental checkpoints make that step\n> unnecessary?\n>\n> > If instead we WAL-logged only the pages we are about to write to disk\n> > (like MySQL's double-write buffer, but in WAL instead of a separate\n> > cyclical buffer file), then a checkpoint_completion_target close to 1\n> > would probably solve the issue, but with \"WAL-logged torn page\n> > protection at first update after checkpoint\" we'll probably always\n> > have higher-than-average FPI load just after a new checkpoint.\n> >\n>\n> So essentially instead of WAL-logging the FPI on the first change, we'd\n> only do that later when actually writing-out the page (either during a\n> checkpoint or because of memory pressure)? How would you make sure\n> there's enough WAL space until the next checkpoint? I mean, FPIs are a\n> huge write amplification source ...\n>\n> Imagine the system has max_wal_size set to 1GB, and does 1M updates\n> before writing 512MB of WAL and thus triggering a checkpoint. Now it\n> needs to write FPIs for 1M updates - easily 8GB of WAL, maybe more with\n> indexes. 
What then?\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n\n\n", "msg_date": "Wed, 26 Jul 2023 21:51:00 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "On Wed, 26 Jul 2023 at 20:58, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 7/26/23 15:16, Matthias van de Meent wrote:\n> > On Wed, 26 Jul 2023 at 14:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> Hello\n> >>\n> >> On 2023-Jul-26, Thomas wen wrote:\n> >>\n> >>> Hi Hackes: I found this page :\n> >>> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n> >>> no incremental checkpoints have been implemented so far. When a\n> >>> checkpoint is triggered, the performance jitter of PostgreSQL is very\n> >>> noticeable. I think incremental checkpoints should be implemented as\n> >>> soon as possible\n> >>\n> >> I think my first question is why do you think that is necessary; there\n> >> are probably other tools to achieve better performance. For example,\n> >> you may want to try making checkpoint_completion_target closer to 1, and\n> >> the checkpoint interval longer (both checkpoint_timeout and\n> >> max_wal_size). Also, changing shared_buffers may improve things. You\n> >> can try adding more RAM to the machine.\n> >\n> > Even with all those tuning options, a significant portion of a\n> > checkpoint's IO (up to 50%) originates from FPIs in the WAL, which (in\n> > general) will most often appear at the start of each checkpoint due to\n> > each first update to a page after a checkpoint needing an FPI.\n>\n> Yeah, FPIs are certainly expensive and can represent huge part of the\n> WAL produced. 
But how would incremental checkpoints make that step\n> unnecessary?\n>\n> > If instead we WAL-logged only the pages we are about to write to disk\n> > (like MySQL's double-write buffer, but in WAL instead of a separate\n> > cyclical buffer file), then a checkpoint_completion_target close to 1\n> > would probably solve the issue, but with \"WAL-logged torn page\n> > protection at first update after checkpoint\" we'll probably always\n> > have higher-than-average FPI load just after a new checkpoint.\n> >\n>\n> So essentially instead of WAL-logging the FPI on the first change, we'd\n> only do that later when actually writing-out the page (either during a\n> checkpoint or because of memory pressure)? How would you make sure\n> there's enough WAL space until the next checkpoint? I mean, FPIs are a\n> huge write amplification source ...\n\nYou don't make sure that there's enough space for the modifications,\nbut does it matter from a durability point of view? As long as the\npage isn't written to disk before the FPI, we can replay non-FPI (but\nfsynced) WAL on top of the old version of the page that you read from\ndisk, instead of only trusting FPIs from WAL.\n\n> Imagine the system has max_wal_size set to 1GB, and does 1M updates\n> before writing 512MB of WAL and thus triggering a checkpoint. Now it\n> needs to write FPIs for 1M updates - easily 8GB of WAL, maybe more with\n> indexes. What then?\n\nThen you ignore the max_wal_size GUC as PostgreSQL so often already\ndoes. 
At least, it doesn't do what I expect it to do at face value -\nlimit the size of the WAL directory to the given size.\n\nBut more reasonably, you'd keep track of the count of modified pages\nthat are yet to be fully WAL-logged, and keep that into account as a\ndebt that you have to the current WAL insert pointer when considering\ncheckpoint distances and max_wal_size.\n\n---\n\nThe main issue that I see with \"WAL-logging the FPI only when you\nwrite the dirty page to disk\" is that dirty page flushing also happens\nwith buffer eviction in ReadBuffer(). This change in behaviour would\nadd a WAL insertion penalty to this write, and make it a very common\noccurrance that we'd have to write WAL + fsync the WAL when we have to\nwrite the dirty page. It would thus add significant latency to the\ndirty write mechanism, which is probably a unpopular change.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 26 Jul 2023 21:53:35 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "On Wed, Jul 26, 2023 at 9:54 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> Then you ignore the max_wal_size GUC as PostgreSQL so often already\n> does. 
At least, it doesn't do what I expect it to do at face value -\n> limit the size of the WAL directory to the given size.\n\nThat would require stopping any new writes at wal size == max_wal_size\nuntil the checkpoint is completed.\nI don't think anybody would want that.\n\n> But more reasonably, you'd keep track of the count of modified pages\n> that are yet to be fully WAL-logged, and keep that into account as a\n> debt that you have to the current WAL insert pointer when considering\n> checkpoint distances and max_wal_size.\n\nI think Peter Geoghegan has worked on somewhat similar approach to\naccount for \"accumulated work needed until some desired outcome\"\nthough I think it was on the VACUUM side of things.\n\n\n", "msg_date": "Wed, 26 Jul 2023 22:56:45 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" }, { "msg_contents": "On 7/26/23 21:53, Matthias van de Meent wrote:\n> On Wed, 26 Jul 2023 at 20:58, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> On 7/26/23 15:16, Matthias van de Meent wrote:\n>>> On Wed, 26 Jul 2023 at 14:41, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>\n>>>> Hello\n>>>>\n>>>> On 2023-Jul-26, Thomas wen wrote:\n>>>>\n>>>>> Hi Hackes: I found this page :\n>>>>> https://pgsql-hackers.postgresql.narkive.com/cMxBwq65/incremental-checkopints,PostgreSQL\n>>>>> no incremental checkpoints have been implemented so far. When a\n>>>>> checkpoint is triggered, the performance jitter of PostgreSQL is very\n>>>>> noticeable. I think incremental checkpoints should be implemented as\n>>>>> soon as possible\n>>>>\n>>>> I think my first question is why do you think that is necessary; there\n>>>> are probably other tools to achieve better performance. For example,\n>>>> you may want to try making checkpoint_completion_target closer to 1, and\n>>>> the checkpoint interval longer (both checkpoint_timeout and\n>>>> max_wal_size). 
Also, changing shared_buffers may improve things. You\n>>>> can try adding more RAM to the machine.\n>>>\n>>> Even with all those tuning options, a significant portion of a\n>>> checkpoint's IO (up to 50%) originates from FPIs in the WAL, which (in\n>>> general) will most often appear at the start of each checkpoint due to\n>>> each first update to a page after a checkpoint needing an FPI.\n>>\n>> Yeah, FPIs are certainly expensive and can represent huge part of the\n>> WAL produced. But how would incremental checkpoints make that step\n>> unnecessary?\n>>\n>>> If instead we WAL-logged only the pages we are about to write to disk\n>>> (like MySQL's double-write buffer, but in WAL instead of a separate\n>>> cyclical buffer file), then a checkpoint_completion_target close to 1\n>>> would probably solve the issue, but with \"WAL-logged torn page\n>>> protection at first update after checkpoint\" we'll probably always\n>>> have higher-than-average FPI load just after a new checkpoint.\n>>>\n>>\n>> So essentially instead of WAL-logging the FPI on the first change, we'd\n>> only do that later when actually writing-out the page (either during a\n>> checkpoint or because of memory pressure)? How would you make sure\n>> there's enough WAL space until the next checkpoint? I mean, FPIs are a\n>> huge write amplification source ...\n> \n> You don't make sure that there's enough space for the modifications,\n> but does it matter from a durability point of view? As long as the\n> page isn't written to disk before the FPI, we can replay non-FPI (but\n> fsynced) WAL on top of the old version of the page that you read from\n> disk, instead of only trusting FPIs from WAL.\n> \n\nIt does not matter from durability point of view, I think. 
But I was\nthinking more about how this affects scheduling of checkpoints - how\nwould you know when the next checkpoint is likely to happen, when you\ndon't know how many FPIs you're going to write?\n\n>> Imagine the system has max_wal_size set to 1GB, and does 1M updates\n>> before writing 512MB of WAL and thus triggering a checkpoint. Now it\n>> needs to write FPIs for 1M updates - easily 8GB of WAL, maybe more with\n>> indexes. What then?\n> \n> Then you ignore the max_wal_size GUC as PostgreSQL so often already\n> does. At least, it doesn't do what I expect it to do at face value -\n> limit the size of the WAL directory to the given size.\n> \n\nI agree the soft-limit nature of max_wal_size (i.e. best effort, not a\nstrict limit) is not great. But just ignoring the limit altogether seems\nlike a step in the wrong direction - we should try not to exceed it.\n\nI wonder if we'd actually need / want to write the FPIs into WAL. AFAICS\nwe only need the FPI until the page is written and flushed - since that\nmoment it shouldn't be possible to tear the page. So a small cyclic\nbuffer separate from WAL would be better ...\n\n> But more reasonably, you'd keep track of the count of modified pages\n> that are yet to be fully WAL-logged, and keep that into account as a\n> debt that you have to the current WAL insert pointer when considering\n> checkpoint distances and max_wal_size.\n> \n\nYeah, that might work. It'd likely be just estimates, but probably good\nenough for pacing the writes.\n\n> ---\n> \n> The main issue that I see with \"WAL-logging the FPI only when you\n> write the dirty page to disk\" is that dirty page flushing also happens\n> with buffer eviction in ReadBuffer(). This change in behaviour would\n> add a WAL insertion penalty to this write, and make it a very common\n> occurrance that we'd have to write WAL + fsync the WAL when we have to\n> write the dirty page. 
It would thus add significant latency to the\n> dirty write mechanism, which is probably a unpopular change.\n\nYeah, it certainly move the latencies from one place to another.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Jul 2023 12:50:00 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: incremental-checkopints" } ]
[ { "msg_contents": "Some refactoring to export json(b) conversion functions\n\nThis is to export datum_to_json(), datum_to_jsonb(), and\njsonb_from_cstring(), though the last one is exported as\njsonb_from_text().\n\nA subsequent commit to add new SQL/JSON constructor functions will\nneed them for calling from the executor.\n\nDiscussion: https://postgr.es/m/20230720160252.ldk7jy6jqclxfxkq%40alvherre.pgsql\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b22391a2ff7bdfeff4438f7a9ab26de3e33fdeff\n\nModified Files\n--------------\nsrc/backend/utils/adt/json.c | 59 +++++++++++++++++++++++--------------\nsrc/backend/utils/adt/jsonb.c | 67 +++++++++++++++++++++++++++++++------------\nsrc/include/utils/jsonfuncs.h | 5 ++++\n3 files changed, 90 insertions(+), 41 deletions(-)", "msg_date": "Wed, 26 Jul 2023 08:09:45 +0000", "msg_from": "Amit Langote <amitlan@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Some refactoring to export json(b) conversion functions" }, { "msg_contents": "On 2023-Jul-26, Amit Langote wrote:\n\n> Some refactoring to export json(b) conversion functions\n> \n> This is to export datum_to_json(), datum_to_jsonb(), and\n> jsonb_from_cstring(), though the last one is exported as\n> jsonb_from_text().\n\nAfter this commit, Coverity started complaining that\ndatum_to_jsonb_internal() leaks the JsonLexContext here\n\n 754 │ case JSONTYPE_CAST:\n 755 │ case JSONTYPE_JSON:\n 756 │ {\n 757 │ /* parse the json right into the existing result object */\n 758 │ JsonLexContext *lex;\n 759 │ JsonSemAction sem;\n 760 │ text *json = DatumGetTextPP(val);\n 761 │ \n 762 │ lex = makeJsonLexContext(json, true);\n 763 │ \n 764 │ memset(&sem, 0, sizeof(sem));\n 765 │ \n 766 │ sem.semstate = (void *) result;\n 767 │ \n 768 │ sem.object_start = jsonb_in_object_start;\n 769 │ sem.array_start = jsonb_in_array_start;\n 770 │ sem.object_end = jsonb_in_object_end;\n 771 │ sem.array_end = jsonb_in_array_end;\n 772 │ sem.scalar 
= jsonb_in_scalar;\n 773 │ sem.object_field_start = jsonb_in_object_field_start;\n 774 │ \n 775 │ pg_parse_json_or_ereport(lex, &sem);\n 776 │ }\n 777 │ break;\n\nAdmittedly, our support code for this is not great, since we have no\nclean way to free those resources. Some places like json_object_keys\nare freeing everything manually (even though in that particular case\nit's unnecessary, since that one runs in a memcxt that's going to be\ncleaned up shortly afterwards).\n\nOne idea that Tom floated was to allow the JsonLexContext to be\noptionally stack-allocated. That reduces palloc() traffic; but some\ncallers do need it to be palloc'ed. Here's a patch that does it that\nway, and adds a freeing routine that knows what to do in either case.\nIt may make sense to do some further analysis and remove useless free\ncalls.\n\n\nIt may make sense to change the structs that contain JsonLexContext *\nso that they directly embed JsonLexContext instead. That would further\nreduce palloc'ing.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)", "msg_date": "Tue, 8 Aug 2023 19:41:10 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Some refactoring to export json(b) conversion functions" }, { "msg_contents": "On 2023-Aug-08, Alvaro Herrera wrote:\n\n> One idea that Tom floated was to allow the JsonLexContext to be\n> optionally stack-allocated. That reduces palloc() traffic; but some\n> callers do need it to be palloc'ed. 
Here's a patch that does it that\n> way, and adds a freeing routine that knows what to do in either case.\n> It may make sense to do some further analysis and remove useless free\n> calls.\n\nPushed this, after some further tweaking.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n", "msg_date": "Thu, 5 Oct 2023 11:13:54 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Some refactoring to export json(b) conversion functions" } ]
[ { "msg_contents": "Hi community,\r\nI use the PostgreSQL 17devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 13.1.1 20230614 (Red Hat 13.1.1-4), 64-bit.(Fedora Linux)\r\nAnd I use the gdb to debug the postgres, just test the pg_ctl.\r\nAs you can see:\r\n\r\n\r\n\r\n-----------------------------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n[Switching to Thread 0x7ffff7dce740 (LWP 83554)]\r\nstart_postmaster () at pg_ctl.c:455\r\n455        if (pm_pid < 0)\r\n(gdb) ..\r\n462        if (pm_pid > 0)\r\n(gdb) ................\r\n476        if (setsid() < 0)\r\n(gdb) ..\r\n489        if (log_file != NULL)\r\n(gdb) \r\n490            cmd = psprintf(\"exec \\\"%s\\\" %s%s < \\\"%s\\\" >> \\\"%s\\\" 2>&1\",\r\n(gdb) ..\r\n497        (void) execl(\"/bin/sh\", \"/bin/sh\", \"-c\", cmd, (char *) NULL);\r\n(gdb) .\r\nprocess 83554 is executing new program: /usr/bin/bash\r\nError in re-setting breakpoint 1: No source file named /home/postgres/project/postgres/src/devel/src/bin/pg_ctl/pg_ctl.c.\r\n
Error in re-setting breakpoint 2: No source file named /home/postgres/project/postgres/src/devel/src/bin/pg_ctl/pg_ctl.c.\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\r\nprocess 83554 is executing new program: /home/postgres/postgres/bin/bin/postgres\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\r\nBFD: warning: /home/postgres/.cache/debuginfod_client/d25eaf3596d9455fe9725f6e9cd1aa5433f31b92/debuginfo has a section extending past end of file\r\nError while reading shared library symbols for /lib64/libstdc++.so.6:\r\n
`/home/postgres/.cache/debuginfod_client/d25eaf3596d9455fe9725f6e9cd1aa5433f31b92/debuginfo': can't read symbols: file format not recognized.\r\n.[Attaching after Thread 0x7ffff7e8d480 (LWP 83554) fork to child process 83559]\r\n[New inferior 3 (process 83559)]\r\n[Detaching after fork from parent process 83554]\r\n[Inferior 2 (process 83554) detached]\r\ndouble free or corruption (out)\r\n\r\n\r\nFatal signal: Aborted\r\n----- Backtrace -----\r\ncorrupted double-linked list\r\n\r\n\r\nFatal signal: Aborted\r\n----- Backtrace -----\r\n done\r\nserver started\r\n0x5557bf5908b0 ???\r\n0x5557bf6cb4cd ???\r\n0x7f040125fb6f ???\r\n0x7f04012b0844 ???\r\n0x7f040125fabd ???\r\n0x7f040124887e ???\r\n0x7f040124960e ???\r\n0x7f04012ba774 ???\r\n0x7f04012bc93f ???\r\n0x7f04012bf1cd ???\r\n0x5557bf98272a ???\r\n0x5557bf7eb93c ???\r\n0x5557bf643b48 ???\r\n0x5557bf643d11 ???\r\n0x7f04012b3af2 ???\r\n0x5557bfc37d48 ???\r\n0x7f04014e31f2 ???\r\n0x7f04012ae906 ???\r\n0x7f040133486f ???\r\n0xffffffffffffffff ???\r\n---------------------\r\nA fatal error internal to GDB has been detected, further\r\ndebugging is not possible.  GDB will now terminate.\r\n\r\nThis is a bug, please report it.  For instructions, see:\r\n<https://www.gnu.org/software/gdb/bugs/>.\r\n\r\nAborted (core dumped)\r\n[postgres@fedora postgres]$ \r\n\r\n\r\n--------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\nAs you can see, 
the gdb tell me I should report this, because gdb think there's a double-free.\r\nBut I check the postgres, it keep the run rightly, like this:(Before I run the psql, I print the log file)\r\n\r\n\r\n-------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\n2023-07-26 22:16:17.489 CST [83554] LOG:  database system is ready to accept connections\r\n[postgres@fedora postgres]$ psql\r\npsql (17devel)\r\nType \"help\" for help.\r\n\r\npostgres=# \\q\r\n[postgres@fedora postgres]$ \r\n\r\n\r\n\r\n\r\n\r\n--------------------------------------------------------------------------------------------------------------\r\n\r\n\r\n\r\nCan someone notice this problem?\r\nThanks in advance\r\n\r\n\r\n\r\nYours,\r\nWen Yi", "msg_date": "Wed, 26 Jul 2023 22:24:11 +0800", "msg_from": "\"=?ISO-8859-1?B?V2VuIFlp?=\" <wen-yi@qq.com>", "msg_from_op": true, "msg_subject": "[May be a bug] double free or corruption" }, { "msg_contents": "Hi Wen,\n\nOn Wed, Jul 26, 2023 at 7:55 PM Wen Yi <wen-yi@qq.com> wrote:\n... snip ...\n
> ----- Backtrace -----\n> corrupted double-linked list\n>\n>\n> Fatal signal: Aborted\n> ----- Backtrace -----\n> done\n> server started\n> 0x5557bf5908b0 ???\n... snip ...\n> ---------------------\n> A fatal error internal to GDB has been detected, further\n> debugging is not possible.  GDB will now terminate.\n>\n> This is a bug, please report it. 
For instructions, see:\n> <https://www.gnu.org/software/gdb/bugs/>.\n>\n> Aborted (core dumped)\n> [postgres@fedora postgres]$\n>\n> --------------------------------------------------------------------------------------------------------------\n>\n> As you can see, the gdb tell me I should report this, because gdb think there's a double-free.\n> But I check the postgres, it keep the run rightly, like this:(Before I run the psql, I print the log file)\n>\n\nI think you hit a gdb bug. The error message from gdb references\nhttps://www.gnu.org/software/gdb/bugs/ which has instructions on\nreporting bugs about gdb.\n\n> 2023-07-26 22:16:17.489 CST [83554] LOG: database system is ready to accept connections\n> [postgres@fedora postgres]$ psql\n> psql (17devel)\n> Type \"help\" for help.\n>\n> postgres=# \\q\n> [postgres@fedora postgres]$\n\nI guess gdb hit this bug after detaching from the backend you were\ndebugging. So it continued running. In case it crashed because\nsomething went wrong. postmaster will just restart all the backends\nand will allow you to reconnect.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 27 Jul 2023 20:10:37 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [May be a bug] double free or corruption" } ]
[ { "msg_contents": "Hi hackers,\n\nTriggered by a discussion on IRC, I noticed that there's a stray\nreference to pg_relation in a comment that was added long after it was\nrenamed to pg_class. Here's a patch to bring that up to speed.\n\n- ilmari", "msg_date": "Wed, 26 Jul 2023 18:48:51 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Obsolete reference to pg_relation in comment" }, { "msg_contents": "On Wed, Jul 26, 2023 at 06:48:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Triggered by a discussion on IRC, I noticed that there's a stray\n> reference to pg_relation in a comment that was added long after it was\n> renamed to pg_class. Here's a patch to bring that up to speed.\n\n> pg_relation was renamed to pg_class in 1991, but this comment (added\n> in 2004) missed the memo\n\nHuh, interesting! I dug around the Berkeley archives [0] and found\ncomments indicating that pg_relation was renamed to pg_class in February\n1990. However, it looks like the file was named pg_relation.h until\nPostgres95 v0.01, which has the following comment in pg_class.h:\n\n\t * ``pg_relation'' is being replaced by ``pg_class''. currently\n\t * we are only changing the name in the catalogs but someday the\n\t * code will be changed too. -cim 2/26/90\n\t * [it finally happens. 
-ay 11/5/94]\n\n[0] https://dsf.berkeley.edu/postgres.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jul 2023 11:53:06 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "On Wed, Jul 26, 2023 at 11:53:06AM -0700, Nathan Bossart wrote:\n> On Wed, Jul 26, 2023 at 06:48:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Triggered by a discussion on IRC, I noticed that there's a stray\n>> reference to pg_relation in a comment that was added long after it was\n>> renamed to pg_class. Here's a patch to bring that up to speed.\n> \n>> pg_relation was renamed to pg_class in 1991, but this comment (added\n>> in 2004) missed the memo\n> \n> Huh, interesting! I dug around the Berkeley archives [0] and found\n> comments indicating that pg_relation was renamed to pg_class in February\n> 1990. However, it looks like the file was named pg_relation.h until\n> Postgres95 v0.01, which has the following comment in pg_class.h:\n> \n> \t * ``pg_relation'' is being replaced by ``pg_class''. currently\n> \t * we are only changing the name in the catalogs but someday the\n> \t * code will be changed too. -cim 2/26/90\n> \t * [it finally happens. 
-ay 11/5/94]\n\nThis comment actually lived in Postgres until 9cf80f2 (June 2000), too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jul 2023 12:11:53 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "Okay, now looking at the patch...\n\nOn Wed, Jul 26, 2023 at 06:48:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> * All accesses to pg_largeobject and its index make use of a single Relation\n> - * reference, so that we only need to open pg_relation once per transaction.\n> + * reference, so that we only need to open pg_class once per transaction.\n> * To avoid problems when the first such reference occurs inside a\n> * subtransaction, we execute a slightly klugy maneuver to assign ownership of\n> * the Relation reference to TopTransactionResourceOwner.\n\nHm. Are you sure this is actually referring to pg_class? It seems\nunlikely given pg_relation was renamed 14 years before this comment was\nadded, and the code appears to be ensuring that pg_largeobject and its\nindex are opened at most once per transaction. 
I couldn't find the\noriginal thread for this comment, unfortunately, but ISTM we might want to\nreplace \"pg_relation\" with \"them\" instead.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jul 2023 13:50:31 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Jul 26, 2023 at 06:48:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> * All accesses to pg_largeobject and its index make use of a single Relation\n>> - * reference, so that we only need to open pg_relation once per transaction.\n>> + * reference, so that we only need to open pg_class once per transaction.\n>> * To avoid problems when the first such reference occurs inside a\n>> * subtransaction, we execute a slightly klugy maneuver to assign ownership of\n>> * the Relation reference to TopTransactionResourceOwner.\n\n> Hm. Are you sure this is actually referring to pg_class? It seems\n> unlikely given pg_relation was renamed 14 years before this comment was\n> added, and the code appears to be ensuring that pg_largeobject and its\n> index are opened at most once per transaction.\n\nI believe it is just a typo/thinko for pg_class, but there's more not\nto like about this comment. First, once we've made a relcache entry\nit would typically stay valid across uses, so it's far from clear that\nthis coding actually prevents many catalog accesses in typical cases.\nSecond, when we do have to rebuild the relcache entry, there's a lot\nmore involved than just a pg_class fetch; we at least need to read\npg_attribute, and I think there may be other catalogs that we'd read\nalong the way, even for a system catalog that lacks complicated\nfeatures. 
(pg_index would presumably get looked at, for instance.)\n\nI think we should reword this to just generically claim that holding\nthe Relation reference open for the whole transaction reduces overhead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 26 Jul 2023 17:14:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "On Wed, Jul 26, 2023 at 05:14:08PM -0400, Tom Lane wrote:\n> I think we should reword this to just generically claim that holding\n> the Relation reference open for the whole transaction reduces overhead.\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Jul 2023 14:20:36 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "On Wed, Jul 26, 2023 at 05:14:08PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > On Wed, Jul 26, 2023 at 06:48:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> >> * All accesses to pg_largeobject and its index make use of a single Relation\n> >> - * reference, so that we only need to open pg_relation once per transaction.\n> >> + * reference, so that we only need to open pg_class once per transaction.\n> >> * To avoid problems when the first such reference occurs inside a\n> >> * subtransaction, we execute a slightly klugy maneuver to assign ownership of\n> >> * the Relation reference to TopTransactionResourceOwner.\n> \n> > Hm. Are you sure this is actually referring to pg_class? It seems\n> > unlikely given pg_relation was renamed 14 years before this comment was\n> > added, and the code appears to be ensuring that pg_largeobject and its\n> > index are opened at most once per transaction.\n> \n> I believe it is just a typo/thinko for pg_class, but there's more not\n> to like about this comment. 
First, once we've made a relcache entry\n> it would typically stay valid across uses, so it's far from clear that\n> this coding actually prevents many catalog accesses in typical cases.\n> Second, when we do have to rebuild the relcache entry, there's a lot\n> more involved than just a pg_class fetch; we at least need to read\n> pg_attribute, and I think there may be other catalogs that we'd read\n> along the way, even for a system catalog that lacks complicated\n> features. (pg_index would presumably get looked at, for instance.)\n> \n> I think we should reword this to just generically claim that holding\n> the Relation reference open for the whole transaction reduces overhead.\n\nHow is this attached patch?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 6 Sep 2023 15:13:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "> On 6 Sep 2023, at 21:13, Bruce Momjian <bruce@momjian.us> wrote:\n> On Wed, Jul 26, 2023 at 05:14:08PM -0400, Tom Lane wrote:\n\n>> I think we should reword this to just generically claim that holding\n>> the Relation reference open for the whole transaction reduces overhead.\n> \n> How is this attached patch?\n\nReads good to me, +1.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 7 Sep 2023 10:44:25 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" }, { "msg_contents": "On Thu, Sep 7, 2023 at 10:44:25AM +0200, Daniel Gustafsson wrote:\n> > On 6 Sep 2023, at 21:13, Bruce Momjian <bruce@momjian.us> wrote:\n> > On Wed, Jul 26, 2023 at 05:14:08PM -0400, Tom Lane wrote:\n> \n> >> I think we should reword this to just generically claim that holding\n> >> the Relation reference open for the whole transaction reduces 
overhead.\n> > \n> > How is this attached patch?\n> \n> Reads good to me, +1.\n\nPatch applied to master. I didn't think backpatching it made much sense\nsince it is so localized.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 26 Sep 2023 17:07:45 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete reference to pg_relation in comment" } ]
[ { "msg_contents": "Hi\n\nFollowing the thread \"Inefficiency in parallel pg_restore with many tables\", I \nstarted digging into why the toc.dat files are that big and where time is spent \nwhen parsing them.\n\nI ended up writing several patches that shaved some time for pg_restore -l, \nand reduced the toc.dat size.\n\nFirst patch is \"finishing\" the job of removing has oids support. When this \nsupport was removed, instead of dropping the field from the dumps and \nincreasing the dump versions, the field was kept as is. This field stores a \nboolean as a string, \"true\" or \"false\". This is not free, and requires 10 \nbytes per toc entry.\n\nThe second patch removes calls to sscanf and replaces them with strtoul. This \nwas the biggest speedup for pg_restore -l.\n\nThe third patch changes the dump format further to remove these strtoul calls \nand store the integers as is instead.\n\nThe fourth patch is dirtier and does more changes to the dump format. Instead \nof storing the owner, tablespace, table access method and schema of each \nobject as a string, pg_dump builds an array of these, stores them at the \nbeginning of the file and replaces the strings with integer fields in the dump. \nThis reduces the file size further, and removes a lot of calls to ReadStr, thus \nsaving quite some time.\n\nToc has 453999 entries.\n\nPatch\tToc size\tDump -s duration\tpg_restore -l duration\nHEAD\t214M\t23.1s\t1.27s\n#1 (has oid)\t210M\t22.9s\t1.26s\n#2 (scanf)\t210M\t22.9s\t1.07s\n#3 (no strtoul)\t202M\t22.8s\t0.94s\n#4 (string list)\t181M\t23.1s\t0.87s\n\nPatch four is likely to require more changes. I don't know PostgreSQL code \nenough to do better than calling pgmalloc/pgrealloc and maintaining a char** \nmanually, I guess there are structs and functions that do that in a better \nway. 
And the location of string tables in the file and in the structures is \nprobably not acceptable, I suppose these should go to the toc header instead.\n\nI still submit these for comments and first review.\n\nBest regards\n\n Pierre Ducroquet", "msg_date": "Thu, 27 Jul 2023 10:51:11 +0200", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n> I ended up writing several patches that shaved some time for pg_restore -l, \n> and reduced the toc.dat size.\n\nI've only just started taking a look at these patches, and I intend to do a\nmore thorough review in the hopefully-not-too-distant future.\n\n> First patch is \"finishing\" the job of removing has oids support. When this \n> support was removed, instead of dropping the field from the dumps and \n> increasing the dump versions, the field was kept as is. This field stores a \n> boolean as a string, \"true\" or \"false\". This is not free, and requires 10 \n> bytes per toc entry.\n\nThis sounds reasonable to me. I wonder why this wasn't done when WITH OIDS\nwas removed in v12.\n\n> The second patch removes calls to sscanf and replaces them with strtoul. This \n> was the biggest speedup for pg_restore -l.\n\nNice.\n\n> The third patch changes the dump format further to remove these strtoul calls \n> and store the integers as is instead.\n\nDo we need to worry about endianness here?\n\n> The fourth patch is dirtier and does more changes to the dump format. Instead \n> of storing the owner, tablespace, table access method and schema of each \n> object as a string, pg_dump builds an array of these, stores them at the \n> beginning of the file and replaces the strings with integer fields in the dump. 
\n> This reduces the file size further, and removes a lot of calls to ReadStr, thus \n> saving quite some time.\n\nThis sounds promising.\n\n> Patch\tToc size\tDump -s duration\tpg_restore -l duration\n> HEAD\t214M\t23.1s\t1.27s\n> #1 (has oid)\t210M\t22.9s\t1.26s\n> #2 (scanf)\t210M\t22.9s\t1.07s\n> #3 (no strtoul)\t202M\t22.8s\t0.94s\n> #4 (string list)\t181M\t23.1s\t0.87s\n\nAt a glance, the size improvements in 0004 look the most interesting to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Sep 2023 14:52:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Mon, Sep 18, 2023 at 02:52:47PM -0700, Nathan Bossart wrote:\n> On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n>> I ended up writing several patches that shaved some time for pg_restore -l, \n>> and reduced the toc.dat size.\n> \n> I've only just started taking a look at these patches, and I intend to do a\n> more thorough review in the hopefully-not-too-distant future.\n\nSince cfbot is failing on some pg_upgrade and pg_dump tests, I've set this\nto waiting-on-author.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Sep 2023 14:54:42 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Monday, September 18, 2023 11:52:47 PM CEST Nathan Bossart wrote:\n> On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n> > I ended up writing several patches that shaved some time for pg_restore\n> > -l,\n> > and reduced the toc.dat size.\n> \n> I've only just started taking a look at these patches, and I intend to do a\n> more thorough review in the hopefully-not-too-distant future.\n\nThank you very 
much.\n\n> Since cfbot is failing on some pg_upgrade and pg_dump tests, I've set this\n> to waiting-on-author.\n\nI did not notice anything running meson test -v, I'll look further into it in \nthe next days.\n\n> > First patch is \"finishing\" the job of removing has oids support. When this\n> > support was removed, instead of dropping the field from the dumps and\n> > increasing the dump versions, the field was kept as is. This field stores\n> > a\n> > boolean as a string, \"true\" or \"false\". This is not free, and requires 10\n> > bytes per toc entry.\n> \n> This sounds reasonable to me. I wonder why this wasn't done when WITH OIDS\n> was removed in v12.\n\nI suppose it is an oversight, or not wanting to increase the dump version for \nno reason.\n\n> > The second patch removes calls to sscanf and replaces them with strtoul.\n> > This was the biggest speedup for pg_restore -l.\n> \n> Nice.\n> \n> > The third patch changes the dump format further to remove these strtoul\n> > calls and store the integers as is instead.\n> \n> Do we need to worry about endianness here?\n\nI used the ReadInt/WriteInt functions already defined in pg_dump that take care \nof this issue, so there should be no need to worry.\n\n> > The fourth patch is dirtier and does more changes to the dump format.\n> > Instead of storing the owner, tablespace, table access method and schema\n> > of each object as a string, pg_dump builds an array of these, stores them\n> > at the beginning of the file and replaces the strings with integer fields\n> > in the dump. 
This reduces the file size further, and removes a lot of\n> > calls to ReadStr, thus saving quite some time.\n> \n> This sounds promising.\n> \n> > Patch\tToc size\tDump -s duration\tpg_restore -l duration\n> > HEAD\t214M\t23.1s\t1.27s\n> > #1 (has oid)\t210M\t22.9s\t1.26s\n> > #2 (scanf)\t210M\t22.9s\t1.07s\n> > #3 (no strtoul)\t202M\t22.8s\t0.94s\n> > #4 (string list)\t181M\t23.1s\t0.87s\n> \n> At a glance, the size improvements in 0004 look the most interesting to me.\n\nYes it is, and the speed benefits are interesting too (at least for my usecase)\n\n\n\n\n\n\n", "msg_date": "Tue, 19 Sep 2023 00:41:24 +0200", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Monday, September 18, 2023 11:54:42 PM CEST Nathan Bossart wrote:\n> On Mon, Sep 18, 2023 at 02:52:47PM -0700, Nathan Bossart wrote:\n> > On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n> >> I ended up writing several patches that shaved some time for pg_restore\n> >> -l,\n> >> and reduced the toc.dat size.\n> > \n> > I've only just started taking a look at these patches, and I intend to do\n> > a\n> > more thorough review in the hopefully-not-too-distant future.\n> \n> Since cfbot is failing on some pg_upgrade and pg_dump tests, I've set this\n> to waiting-on-author.\n\nFYI, the failures are related to patch 0004, I said it was dirty, but it was \napparently an understatement. 
Patches 0001 to 0003 don't exhibit any \nregression.\n\n\n\n\n", "msg_date": "Tue, 19 Sep 2023 10:25:09 +0200", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Monday, September 18, 2023 11:54:42 PM CEST Nathan Bossart wrote:\n> On Mon, Sep 18, 2023 at 02:52:47PM -0700, Nathan Bossart wrote:\n> > On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n> >> I ended up writing several patches that shaved some time for pg_restore\n> >> -l,\n> >> and reduced the toc.dat size.\n> > \n> > I've only just started taking a look at these patches, and I intend to do\n> > a\n> > more thorough review in the hopefully-not-too-distant future.\n> \n> Since cfbot is failing on some pg_upgrade and pg_dump tests, I've set this\n> to waiting-on-author.\n\nAttached updated patches fix this regression, I'm sorry I missed that.", "msg_date": "Tue, 19 Sep 2023 12:15:55 +0200", "msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>", "msg_from_op": true, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Tue, Sep 19, 2023 at 12:15:55PM +0200, Pierre Ducroquet wrote:\n> Attached updated patches fix this regression, I'm sorry I missed that.\n\nThanks for the new patches. 0001 and 0002 look reasonable to me. This is\na nitpick, but we might want to use atooid() in 0002, which is just\nshorthand for the strtoul() call you are using.\n\n> -\t\tWriteStr(AH, NULL);\t\t/* Terminate List */\n> +\t\tWriteInt(AH, -1);\t\t/* Terminate List */\n\nI think we need to be cautious about using WriteInt and ReadInt here. 
OIDs\nare unsigned, so we probably want to use InvalidOid (0) instead.\n\n> +\t\t\t\tif (AH->version >= K_VERS_1_16)\n> \t\t\t\t{\n> -\t\t\t\t\tdepSize *= 2;\n> -\t\t\t\t\tdeps = (DumpId *) pg_realloc(deps, sizeof(DumpId) * depSize);\n> +\t\t\t\t\tdepId = ReadInt(AH);\n> +\t\t\t\t\tif (depId == -1)\n> +\t\t\t\t\t\tbreak;\t\t/* end of list */\n> +\t\t\t\t\tif (depIdx >= depSize)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tdepSize *= 2;\n> +\t\t\t\t\t\tdeps = (DumpId *) pg_realloc(deps, sizeof(DumpId) * depSize);\n> +\t\t\t\t\t}\n> +\t\t\t\t\tdeps[depIdx] = depId;\n> +\t\t\t\t}\n> +\t\t\t\telse\n> +\t\t\t\t{\n> +\t\t\t\t\ttmp = ReadStr(AH);\n> +\t\t\t\t\tif (!tmp)\n> +\t\t\t\t\t\tbreak;\t\t/* end of list */\n> +\t\t\t\t\tif (depIdx >= depSize)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tdepSize *= 2;\n> +\t\t\t\t\t\tdeps = (DumpId *) pg_realloc(deps, sizeof(DumpId) * depSize);\n> +\t\t\t\t\t}\n> +\t\t\t\t\tdeps[depIdx] = strtoul(tmp, NULL, 10);\n> +\t\t\t\t\tfree(tmp);\n> \t\t\t\t}\n\nI would suggest making the resizing logic common.\n\n\tif (AH->version >= K_VERS_1_16)\n\t{\n\t\tdepId = ReadInt(AH);\n\t\tif (depId == InvalidOid)\n\t\t\tbreak;\t\t/* end of list */\n\t}\n\telse\n\t{\n\t\ttmp = ReadStr(AH);\n\t\tif (!tmp)\n\t\t\tbreak;\t\t/* end of list */\n\t\tdepId = strtoul(tmp, NULL, 10);\n\t\tfree(tmp);\n\t}\n\n\tif (depIdx >= depSize)\n\t{\n\t\tdepSize *= 2;\n\t\tdeps = (DumpId *) pg_realloc(deps, sizeof(DumpId) * depSize);\n\t}\n\tdeps[depIdx] = depId;\n\nAlso, can we make depId more locally scoped?\n\nI have yet to look into 0004 still.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Sep 2023 13:29:22 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Tue, 19 Sept 2023 at 15:46, Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> On Monday, September 18, 2023 11:54:42 PM CEST Nathan Bossart 
wrote:\n> > On Mon, Sep 18, 2023 at 02:52:47PM -0700, Nathan Bossart wrote:\n> > > On Thu, Jul 27, 2023 at 10:51:11AM +0200, Pierre Ducroquet wrote:\n> > >> I ended up writing several patches that shaved some time for pg_restore\n> > >> -l,\n> > >> and reduced the toc.dat size.\n> > >\n> > > I've only just started taking a look at these patches, and I intend to do\n> > > a\n> > > more thorough review in the hopefully-not-too-distant future.\n> >\n> > Since cfbot is failing on some pg_upgrade and pg_dump tests, I've set this\n> > to waiting-on-author.\n>\n> Attached updated patches fix this regression, I'm sorry I missed that.\n\nFew comments:\n1) These printf statements are not required:\n+ /* write the list of am */\n+ printf(\"%d tableams to save\\n\", tableam_count);\n+ WriteInt(AH, tableam_count);\n+ for (i = 0 ; i < tableam_count ; i++)\n+ {\n+ printf(\"%d is %s\\n\", i, tableams[i]);\n+ WriteStr(AH, tableams[i]);\n+ }\n+\n+ /* write the list of namespaces */\n+ printf(\"%d namespaces to save\\n\", namespace_count);\n+ WriteInt(AH, namespace_count);\n+ for (i = 0 ; i < namespace_count ; i++)\n+ {\n+ printf(\"%d is %s\\n\", i, namespaces[i]);\n+ WriteStr(AH, namespaces[i]);\n+ }\n+\n+ /* write the list of owners */\n+ printf(\"%d owners to save\\n\", owner_count);\n\n2) We generally use pg_malloc in client tools, we should change palloc\nto pg_malloc:\n+ /* prepare dynamic arrays */\n+ tableams = palloc(sizeof(char*) * 1);\n+ tableams[0] = NULL;\n+ tableam_count = 0;\n+ namespaces = palloc(sizeof(char*) * 1);\n+ namespaces[0] = NULL;\n+ namespace_count = 0;\n+ owners = palloc(sizeof(char*) * 1);\n+ owners[0] = NULL;\n+ owner_count = 0;\n+ tablespaces = palloc(sizeof(char*) * 1);\n\n3) This similar code is repeated few times, will it be possible to do\nit in a common function:\n+ if (namespace_count < 128)\n+ {\n+ te->namespace_idx = AH->ReadBytePtr(AH);\n+ invalid_entry = 255;\n+ }\n+ else\n+ {\n+ te->namespace_idx = ReadInt(AH);\n+ invalid_entry = -1;\n+ }\n+ 
if (te->namespace_idx == invalid_entry)\n+ te->namespace = \"\";\n+ else\n+ te->namespace = namespaces[te->namespace_idx];\n\n4) Can the initialization of tableam_count, namespace_count,\nowner_count and tablespace_count be done at declaration so that this\ninitialization code can be removed:\n+ /* prepare dynamic arrays */\n+ tableams = palloc(sizeof(char*) * 1);\n+ tableams[0] = NULL;\n+ tableam_count = 0;\n+ namespaces = palloc(sizeof(char*) * 1);\n+ namespaces[0] = NULL;\n+ namespace_count = 0;\n+ owners = palloc(sizeof(char*) * 1);\n+ owners[0] = NULL;\n+ owner_count = 0;\n+ tablespaces = palloc(sizeof(char*) * 1);\n+ tablespaces[0] = NULL;\n+ tablespace_count = 0;\n\n5) There are some whitespace issues in the patch:\nApplying: move static strings to arrays at beginning\n.git/rebase-apply/patch:24: trailing whitespace.\n.git/rebase-apply/patch:128: trailing whitespace.\n.git/rebase-apply/patch:226: trailing whitespace.\n.git/rebase-apply/patch:232: trailing whitespace.\nwarning: 4 lines add whitespace errors.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Oct 2023 15:17:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Tue, Oct 03, 2023 at 03:17:57PM +0530, vignesh C wrote:\n> Few comments:\n\nPierre, do you plan to submit a new revision of this patch set for the\nNovember commitfest? 
If not, the commitfest entry may be marked as\nreturned-with-feedback soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 11:50:04 -0600", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" }, { "msg_contents": "On Fri, 10 Nov 2023 at 23:20, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Oct 03, 2023 at 03:17:57PM +0530, vignesh C wrote:\n> > Few comments:\n>\n> Pierre, do you plan to submit a new revision of this patch set for the\n> November commitfest? If not, the commitfest entry may be marked as\n> returned-with-feedback soon.\n\nI have changed the status of commitfest entry to \"Returned with\nFeedback\" as the comments have not yet been resolved. Please handle\nthe comments and update the commitfest entry accordingly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 14 Jan 2024 17:19:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements in pg_dump/pg_restore toc format and performances" } ]
[ { "msg_contents": "Hi All,\n\nWhen we implemented partitionwise join we disabled it by default (by\nsetting enable_partitionwise_join to false by default) since it\nconsumed a lot of memory and took a lot of time [1]. We also set\nenable_partitionwise_aggregate to false by default since partitionwise\naggregates require partitionwise join. I have come across at least a\nfew cases in the field where users didn't know about these GUCs and\nsuffered from bad performance when queries involved partitioned\ntables. Their queries improved a lot by just turning these GUCs ON.\nGiven that the partitionwise operations improve performance of queries\non partitioned tables, I think these GUCs need to be turned ON by\ndefault.\n\nIrrespective of whether the GUC is turned ON or OFF by default, we\nneed to reduce the memory consumed by the partitionwise planning and\nthe time those strategies take during planning. In this email I will\ndiscuss the planner memory consumption problem.\n\nPlease find below the memory consumption measurements on master.\n\nExperiment\n----------\nCreated two tables t1 and t1_parted respectively. t1 is an\nunpartitioned table whereas t1_parted is a partitioned table with 1000\npartitions. Both of them are empty. Being empty does not affect\nplanning time and memory consumption much, but it takes far less time\nto set up and then run EXPLAIN ANALYZE. I am using a self-join on the\npartition key to measure memory and planning time. Again it's\nsomething that makes setup easier yet good enough for measurements.\nAttached are two scripts. setup.sql should be run to create tables,\npartitions and helper functions. queries.sql executes and measures\ntime and memory of 2-way, 3-way, 4-way and 5-way self joins\nrespectively. It runs each query four times, once to warm up caches\nand then thrice for measurement. Below I report averages of three\nruns.\n\nThe first attached patch is used to measure memory consumption and\nreport it as part of EXPLAIN ANALYZE. 
It simply takes the difference\nbetween memory used in the CurrentMemoryContext before and after\ncalling pg_plan_query() in ExplainOneQuery(). It does not account for\nthe memory consumed outside the CurrentMemoryContext while planning\nthe query e.g. caches. The memory consumed in those contexts does not\nnecessarily contribute to planning of a given query. Hence they are\nignored. I think the patch by itself makes a good improvement to\nEXPLAIN ANALYZE, so we may want to consider it to be accepted in the\ncore code.\n\nMeasurements\n------------\nIn all the tables below, the first column shows the number of tables\njoined. Second column is for queries on unpartitioned table. Third\ncolumn for queries on partitioned table with partitionwise join turned\nOFF. Fourth column is for queries on partitioned table with\npartitionwise join turned ON.\n\nTable 1: Planning time in ms.\n\nNumber of | unpartitioned | partitioned with | partitioned |\ntables | | PWJ off | PWJ on |\n---------------------------------------------------------------\n 2 | 0.10 | 464.39 | 501.75 |\n 3 | 0.17 | 2,714.28 | 3,035.07 |\n 4 | 0.34 | 8,856.82 | 10,416.31 |\n 5 | 0.84 | 22,519.02 | 29,769.19 |\n\nTable 2: Memory consumption\n\nNumber of | unpartitioned | partitioned with | partitioned |\ntables | | PWJ off | PWJ on |\n---------------------------------------------------------------\n 2 | 29.056 KiB | 19.520 MiB | 40.298 MiB |\n 3 | 79.088 KiB | 45.227 MiB | 146.877 MiB |\n 4 | 208.552 KiB | 83.540 MiB | 445.453 MiB |\n 5 | 561.592 KiB | 149.283 MiB | 1563.253 MiB |\n\nThe time required to plan queries on partitioned table is very high\ncompared to the time taken to plan queries on unpartitioned tables. In\nfact it's way higher than the factor of 1000 expected when there 1000\npartitions. But the difference between planning time when\npartitionwise join is turned OFF vs ON is less compared to difference\nbetween unpartitioned vs partitioned case. 
The problem of reducing\nplanning time for queries on partitioned tables is being worked on by\nYuya and David in [2]. That work focuses on queries without\npartitionwise join. But it might fix the problem for partitionwise\njoin as well.\n\nI am focusing on reducing the memory consumption when partitionwise\njoin is enabled. As visible from table 2 when partitionwise join is\n*not* used, the memory consumed in planning is proportional to the\nnumber of partitions as expected. But when partitionwise join is\nenabled the memory consumption increases exponentially with the number\nof tables being joined. This is because the number of ways a join can\nbe computed is exponentially proportional to the number of tables\nbeing joined. Table 3 has a breakup of memory consumed.\n\nThe memory consumption is broken by the objects that consume memory\nduring planning. The second attached patch is used to measure breakup\nby functionality . Here's a brief explanation of the rows in the\ntable.\n\n1. Restrictlist translations: Like other expressions the Restrictinfo\nlists of parent are translated to obtain Restrictinfo lists to be\napplied to child partitions (base as well as join). The first row\nshows the memory consumed by the translated RestrictInfos. We can't\navoid these translations but closer examination reveals that a given\nRestrictInfo gets translated multiple times proportional to the join\norders. These repeated translations can be avoided. I will start a\nseparate thread to discuss this topic.\n\n2. Paths: this is the memory consumed when creating child join paths\nand the Append paths in parent joins. It includes memory consumed by\nthe paths as well as translated expressions. I don't think we can\navoid creating these paths. But once the best paths are chosen for the\nlower level relations, the unused paths can be freed. I will start a\nseparate thread to discuss this topic.\n\n3. 
targetlist translation: child join relations' targetlists are\ncreated by translating parent relations' targetlist. This row shows\nthe memory consumed by the translated targetlists. This translation\ncan't be avoided.\n\n4. child SpecialJoinInfo: This is memory consumed in child joins'\nSpecialJoinInfos translated from SpecialJoinInfo applicable to parent\njoins. The child SpecialJoinInfos are translated on the fly when\ncomputing child joins but are never freed. May be we can free them on\nthe fly as well or even better save them somewhere and fetch as and\nwhen required. I will start a separate thread to discuss this topic.\n\n5. Child join RelOptInfos: memory consumed by child join relations.\nThis is unavoidable as we need the RelOptInfos representing the child\njoins.\n\nTable 3: Partitionwise join planning memory breakup\nNum joins | 2 | 3 | 4 | 5 |\n------------------------------------------------------------------------\n1. translated | 1.8 MiB | 13.1 MiB | 58.0 MiB | 236.5 MiB |\nrestrictlists | | | | |\n------------------------------------------------------------------------\n2. creating child | 11.6 MiB | 59.4 MiB | 207.6 MiB | 768.2 MiB |\njoin paths | | | | |\n------------------------------------------------------------------------\n3. translated | 723.5 KiB | 3.3 MiB | 10.6 MiB | 28.5 MiB |\ntargetlists | | | | |\n------------------------------------------------------------------------\n4. child | 926.8 KiB | 9.0 MiB | 45.7 MiB | 245.5 MiB |\nSpecialJoinInfo | | | | |\n------------------------------------------------------------------------\n5. Child join rels | 1.6 MiB | 7.9 MiB | 23.8 MiB | 67.5 MiB |\n------------------------------------------------------------------------\n\nRows 1, 2 and 4 show most of the memory consumption. Those are also\nthe ones where there are opportunities to save memory. 
As said above,\nwe will discuss them in their respective email threads since each of\nthese fixes are independent and require separate discussion.\n\nHere are the memory consumption numbers after applying all the POC patches\nmentioned above.\nNumber of tables | no | all POC | % | Absolute |\njoined | patches | patches | reduction | reduction |\n----------------------------------------------------------------------\n 2 | 40.3 MiB | 36.5 MiB | 9.43% | 3.8 MiB |\n 3 | 146.9 MiB | 120.2 MiB | 18.13% | 26.6 MiB |\n 4 | 445.5 MiB | 328.8 MiB | 26.19% | 116.7 MiB |\n 5 | 1563.3 MiB | 953.2 MiB | 39.03% | 610.1 MiB |\n\nEven if we are able to recover some memory as mentioned above, a lot\nof memory is still consumed by the translated objects (expressions,\nRestrictLists etc.). In case of partitioned tables, these translated\nobject trees differ only in leaf Var nodes. Thus if we devise a way to\nuse the same Var node for parents as well as its children, we won't\nneed to translate the expressions. But that's a much larger effort and\nprobably something not even feasible.\n\nWhile subproblems and their solutions will be discussed in separate\nemail threads, this thread is to discuss\n1. generic memory consumption problems not covered above but seen by others\n2. Is the memory consumption reduction after the POC patches good\nenough for us to make partitionwise operations turned ON by default?\n3. 
Is there anything else that stops us from turning the partitionwise\noperations ON by default?\n\n[1] https://www.postgresql.org/message-id/CA+TgmoYggDp6k-HXNAgrykZh79w6nv2FevpYR_jeMbrORDaQrA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAJ2pMkZNCgoUKSE%2B_5LthD%2BKbXKvq6h2hQN8Esxpxd%2Bcxmgomg%40mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 27 Jul 2023 19:28:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Memory consumption during partitionwise join planning" }, { "msg_contents": "On Thu, Jul 27, 2023 at 7:28 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n\n> The memory consumption is broken by the objects that consume memory\n> during planning. The second attached patch is used to measure breakup\n> by functionality . Here's a brief explanation of the rows in the\n> table.\n>\n> 1. Restrictlist translations: Like other expressions the Restrictinfo\n> lists of parent are translated to obtain Restrictinfo lists to be\n> applied to child partitions (base as well as join). The first row\n> shows the memory consumed by the translated RestrictInfos. We can't\n> avoid these translations but closer examination reveals that a given\n> RestrictInfo gets translated multiple times proportional to the join\n> orders. These repeated translations can be avoided. I will start a\n> separate thread to discuss this topic.\n>\n> 2. Paths: this is the memory consumed when creating child join paths\n> and the Append paths in parent joins. It includes memory consumed by\n> the paths as well as translated expressions. I don't think we can\n> avoid creating these paths. But once the best paths are chosen for the\n> lower level relations, the unused paths can be freed. I will start a\n> separate thread to discuss this topic.\n>\n> 3. targetlist translation: child join relations' targetlists are\n> created by translating parent relations' targetlist. 
This row shows\n> the memory consumed by the translated targetlists. This translation\n> can't be avoided.\n>\n> 4. child SpecialJoinInfo: This is memory consumed in child joins'\n> SpecialJoinInfos translated from SpecialJoinInfo applicable to parent\n> joins. The child SpecialJoinInfos are translated on the fly when\n> computing child joins but are never freed. May be we can free them on\n> the fly as well or even better save them somewhere and fetch as and\n> when required. I will start a separate thread to discuss this topic.\n>\n> 5. Child join RelOptInfos: memory consumed by child join relations.\n> This is unavoidable as we need the RelOptInfos representing the child\n> joins.\n>\n> Table 3: Partitionwise join planning memory breakup\n> Num joins | 2 | 3 | 4 | 5 |\n> ------------------------------------------------------------------------\n> 1. translated | 1.8 MiB | 13.1 MiB | 58.0 MiB | 236.5 MiB |\n> restrictlists | | | | |\n> ------------------------------------------------------------------------\n> 2. creating child | 11.6 MiB | 59.4 MiB | 207.6 MiB | 768.2 MiB |\n> join paths | | | | |\n> ------------------------------------------------------------------------\n> 3. translated | 723.5 KiB | 3.3 MiB | 10.6 MiB | 28.5 MiB |\n> targetlists | | | | |\n> ------------------------------------------------------------------------\n> 4. child | 926.8 KiB | 9.0 MiB | 45.7 MiB | 245.5 MiB |\n> SpecialJoinInfo | | | | |\n> ------------------------------------------------------------------------\n> 5. Child join rels | 1.6 MiB | 7.9 MiB | 23.8 MiB | 67.5 MiB |\n> ------------------------------------------------------------------------\n\n>\n> While subproblems and their solutions will be discussed in separate\n> email threads, this thread is to discuss\n\nI posted these patches long back but forgot to mention those in this\nthread. 
Listing them here at one place.\n\n[1] Memory reduction in SpecialJoinInfo -\nhttps://www.postgresql.org/message-id/flat/CAExHW5tHqEf3ASVqvFFcghYGPfpy7o3xnvhHwBGbJFMRH8KjNw@mail.gmail.com\n[2] Memory consumption reduction in RestrictInfos -\nhttps://www.postgresql.org/message-id/flat/CAExHW5s=bCLMMq8n_bN6iU+Pjau0DS3z_6Dn6iLE69ESmsPMJQ@mail.gmail.com\n[3] Memory consumption reduction in paths -\nhttps://www.postgresql.org/message-id/flat/CAExHW5tUcVsBkq9qT%3DL5vYz4e-cwQNw%3DKAGJrtSyzOp3F%3DXacA%40mail.gmail.com\n[4] Small change to reuse child bitmapsets in try_partitionwise_join()\n- https://www.postgresql.org/message-id/CAExHW5snUW7pD2RdtaBa1T_TqJYaY6W_YPVjWDrgSf33i-0uqA%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 11 Dec 2023 18:43:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumption during partitionwise join planning" } ]
[ { "msg_contents": "Hi All,\n\nFollowing up on [1] ...\n\nA restrictlist is a list of RestrictInfo nodes, each representing one\nclause, applicable to a set of joins. When partitionwise join is used\nas a strategy, the restrictlists for child join are obtained by\ntranslating the restrictlists for the parent in\ntry_partitionwise_join(). That function is called once for every join\norder. E.g. when computing join ABC, it will be called for planning\njoins AB, BC, AC, (AB)C, A(BC), B(AC). Every time it is called it will\ntranslate the given parent restrictlist. This means that a\nRestrictInfo node applicable to given child relations will be\ntranslated as many times as the join orders in which those child\nrelations appear in different joining relations.\n\nFor example, consider a query \"select * from A, B, C where A.a = B.a\nand B.a = C.a\" where A, B and C are partitioned tables. A has\npartitions A1, A2, ... An. B has partitions B1, B2, ... Bn and C has\npartitions C1, C2, ... Cn. Partitions Ai, Bi and Ci are matching\npartitions respectively for all i's. The join ABC is computed as\nAppend of (A1B1C1, A2B2C2, ... AnBnCn). The clause A.a = B.a is\ntranslated to A1.a = B1.a thrice, when computing A1B1, A1(B1C1) and\nB1(A1C1) respectively. Similarly other clauses are translated multiple\ntimes. Some extra translations also happen in\nreparameterize_path_by_child().\n\nThese translations consume memory which remains allocated till the\nstatement finishes. A RestrictInfo should be translated only once per\nparent-child pair, thus avoiding consuming extra memory.\n\nThere are two patches attached:\n0001 - to measure memory consumption during planning. This is the same\none as attached to [1].\n0002 - WIP patch to avoid repeated translations of RestrictInfo.\nThe WIP patch avoids repeated translations by tracking the child for\nwhich a RestrictInfo is translated and reusing the same translation\nevery time it is requested. 
In order to track the translations,\nRestrictInfo gets two new members.\n1. parent_rinfo - In a child's RestrictInfo this points to the\nRestrictInfo applicable to the topmost parent in partition hierarchy.\nThis is NULL in the topmost parent's RestrictInfo\n2. child_rinfos - In a parent's RestrictInfo, this is a list that\ncontains all the translated child RestrictInfos. In child\nRestrictInfos this is NULL.\n\nEvery translated RestrictInfo is stored in the top parent's\nRestrictInfo child_rinfos. RestrictInfo::required_relids is used as a\nkey to search a given translation. I have intercepted\nadjust_appendrel_attrs_mutator() to track translations as well as\navoid multiple translations. It first looks for an existing\ntranslation when translating a RestrictInfo and creates a new one only\nwhen one doesn't exist already.\n\nUsing this patch the memory consumption for the above query reduces as follows\n\nNumber of partitions: 1000\n\nNumber of tables | without patch | with patch | % reduction |\nbeing joined | | | |\n--------------------------------------------------------------\n 2 | 40.3 MiB | 37.7 MiB | 6.43% |\n 3 | 146.8 MiB | 133.0 MiB | 9.42% |\n 4 | 445.4 MiB | 389.5 MiB | 12.57% |\n 5 | 1563.2 MiB | 1243.2 MiB | 20.47% |\n\nThe number of times a RestrictInfo requires to be translated increases\nexponentially with the number of tables joined. Thus we see more\nmemory saved as the number of tables joined increases. When two tables\nare joined there's only a single join planned so no extra translations\nhappen in try_partitionwise_join(). The memory saved in case of 2\njoining tables comes from avoiding extra translations happening during\nreparameterization of paths (in reparameterize_path_by_child()).\n\nThe attached patch is to show how much memory can be saved if we avoid\nextra translation. But I want to discuss the following things about\nthe approach.\n\n1. The patch uses RestrictInfo::required_relids as the key for\nsearching child RelOptInfos. 
I am not sure which of the two viz.\nrequired_relids and clause_relids is a better key. required_relids\nseems to be a subset of clause_relids and from the description it\nlooks like that's the set that decides the applicability of a clause\nin a join. But clause_relids is obtained from all the Vars that appear\nin the clause, so maybe that's the one that matters for the\ntranslations. Can somebody guide me?\n\n2. The patch adds two extra pointers per RestrictInfo. They will\nremain unused when partitionwise join is not used. Right now, I do not\nsee any increase in memory consumed by planner because of those\npointers even in case of unpartitioned tables; maybe they are absorbed\nin memory alignment. They may show up as extra memory in the future. I\nam wondering whether we can instead save and track translations in\nPlannerInfo as a hash table using <rinfo_serial, required_relids (or\nwhatever is the answer to the above question) of parent and child\nrespectively> as key. That will just add one extra pointer in\nPlannerInfo when partitionwise join is not used. Please let me know\nyour suggestions.\n\n3. I have changed adjust_appendrel_attrs_mutator() to return a\ntranslated RestrictInfo if it already exists. IOW, it won't always\nreturn a deep copy of given RestrictInfo as it does today. This can be\nfixed by writing wrappers around adjust_appendrel_attrs() to translate\nRestrictInfo specifically. But maybe we don't always need deep copies.\nAre there any cases when we need translated deep copies of\nRestrictInfo? Those cases will require fixing callers of\nadjust_appendrel_attrs() instead of the mutator.\n\n4. IIRC, when partitionwise join was implemented we had discussed\ncreating child RestrictInfos using a logic similar to\nbuild_joinrel_restrictlist(). That might be another way to build\nRestrictInfo only once and use it multiple times. 
But we felt that it\nwas a much harder problem to solve since it's not known which partitions\nfrom joining partitioned tables will match and will be joined till we\nenter try_partitionwise_join(), so the required RestrictInfos may not\nbe available in RelOptInfo::joininfo. Let me know your thoughts on\nthis.\n\nComments/suggestions welcome.\n\nreferences\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 27 Jul 2023 19:35:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Thu, Jul 27, 2023 at 10:06 PM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n> 0002 - WIP patch to avoid repeated translations of RestrictInfo.\n> The WIP patch avoids repeated translations by tracking the child for\n> which a RestrictInfo is translated and reusing the same translation\n> every time it is requested. In order to track the translations,\n> RestrictInfo gets two new members.\n> 1. parent_rinfo - In a child's RestrictInfo this points to the\n> RestrictInfo applicable to the topmost parent in partition hierarchy.\n> This is NULL in the topmost parent's RestrictInfo\n> 2. child_rinfos - In a parent's RestrictInfo, this is a list that\n> contains all the translated child RestrictInfos. 
In child\n> RestrictInfos this is NULL.\n\n\nI haven't looked into the details but with 0002 patch I came across a\ncrash while planning the query below.\n\nregression=# set enable_partitionwise_join to on;\nSET\nregression=# EXPLAIN (COSTS OFF)\nSELECT * FROM prt1 t1, prt2 t2\nWHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0;\nserver closed the connection unexpectedly\n\nThanks\nRichard", "msg_date": "Tue, 8 Aug 2023 18:38:30 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Tue, Aug 8, 2023 at 4:08 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> I haven't looked into the details but with 0002 patch I came across a\n> crash while planning the query below.\n>\n> regression=# set enable_partitionwise_join to on;\n> SET\n> regression=# EXPLAIN (COSTS OFF)\n> SELECT * FROM prt1 t1, prt2 t2\n> WHERE t1.a = t2.b AND t1.a < 450 AND t2.b > 250 AND t1.b = 0;\n> server closed the connection unexpectedly\n\nThanks for the report. PFA the patches with this issue fixed. There is\nanother issue seen when running partition_join.sql "ERROR: index key\ndoes not match expected index column". I will investigate that issue.\n\nRight now I am looking at the 4th point in my first email in this\nthread. [1]. I am trying to figure whether that approach would work.\nIf that approach doesn't work what's the best to track the\ntranslations and also figure out answers to 1, 2, 3 there. 
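For anyone following along, the tracking idea under discussion amounts to memoizing translations, keyed by the parent clause's identity and the child relid set. The sketch below is illustrative Python only; the real implementation is C inside the planner, and every name here is invented for the example:

```python
# Illustrative model of the "translate once, reuse thereafter" idea.
# A parent RestrictInfo translated for a given child relid set is built
# only on the first request; later requests get the cached object back.

class RestrictInfo:
    _next_serial = 0

    def __init__(self, clause, required_relids):
        self.clause = clause
        self.required_relids = frozenset(required_relids)
        # rinfo_serial identifies "the same clause" across translations
        self.rinfo_serial = RestrictInfo._next_serial
        RestrictInfo._next_serial += 1

# serial -> {child required_relids -> translated RestrictInfo}
child_rinfo_cache: dict = {}

def translate_rinfo(parent: RestrictInfo, parent_to_child: dict) -> RestrictInfo:
    """Translate parent's relids through parent_to_child, reusing any
    previously built translation for the same (serial, relids) pair."""
    child_relids = frozenset(parent_to_child.get(r, r)
                             for r in parent.required_relids)
    per_serial = child_rinfo_cache.setdefault(parent.rinfo_serial, {})
    cached = per_serial.get(child_relids)
    if cached is None:                      # first request: build and remember
        cached = RestrictInfo(parent.clause, child_relids)
        per_serial[child_relids] = cached
    return cached                           # later requests reuse the object
```

Requesting the same (parent, child) translation twice returns the identical object instead of a fresh deep copy, which is where the memory saving comes from; different join orders arriving at the same child relid set also converge on one translation.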
Please let\nme know your opinions if any.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n[1] https://www.postgresql.org/message-id/CAExHW5s=bCLMMq8n_bN6iU+Pjau0DS3z_6Dn6iLE69ESmsPMJQ@mail.gmail.com", "msg_date": "Tue, 8 Aug 2023 19:44:50 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Fri, 28 Jul 2023 at 02:06, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> 0001 - to measure memory consumption during planning. This is the same\n> one as attached to [1].\n\nI see you're recording the difference in the CurrentMemoryContext of\npalloc'd memory before and after planning. That won't really alert us\nto problems if the planner briefly allocates, say, 12GB of memory\nduring, say, the join search and then quickly pfree's it again. Unless\nit's an oversized chunk, aset.c won't free() any memory until\nMemoryContextReset(). Chunks just go onto a freelist for reuse later.\nSo at the end of planning, the context may still have that 12GB\nmalloc'd, yet your new EXPLAIN ANALYZE property might end up just\nreporting a tiny fraction of that.\n\nI wonder if it would be more useful to just have ExplainOneQuery()\ncreate a new memory context and switch to that and just report the\ncontext's mem_allocated at the end.\n\nIt's also slightly annoying that these planner-related summary outputs\nare linked to EXPLAIN ANALYZE. We could be showing them in EXPLAIN\nwithout ANALYZE. If we were to change that now, it might be a bit\nannoying for the regression tests as we'd need to go and add SUMMARY\nOFF in a load of places...\n\ndrowley@amd3990x:~/pg_src/src/test/regress/sql$ git grep -i "costs off" | wc -l\n1592\n\nhmm, that would cause a bit of churn... 
:-(\nDavid\n\n\n", "msg_date": "Wed, 9 Aug 2023 16:41:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Wed, Aug 9, 2023 at 10:12 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 28 Jul 2023 at 02:06, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > 0001 - to measure memory consumption during planning. This is the same\n> > one as attached to [1].\n>\n\nI have started a separate thread to discuss this patch. I am taking\nthis discussion to that thread.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 9 Aug 2023 19:55:31 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "I spent some time on the 4th point below but also looked at other points.\nHere's what I have found so far.\n\nOn Thu, Jul 27, 2023 at 7:35 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> 1. The patch uses RestrictInfo::required_relids as the key for\n> searching child RelOptInfos. I am not sure which of the two viz.\n> required_relids and clause_relids is a better key. required_relids\n> seems to be a subset of clause_relids and from the description it\n> looks like that's the set that decides the applicability of a clause\n> in a join. But clause_relids is obtained from all the Vars that appear\n> in the clause, so may be that's the one that matters for the\n> translations. Can somebody guide me?\n\nI was wrong that required_relids is a subset of clause_relids. The first\ncan contain OJ relids which the other can not. OJ relids do not have\nany children, so they won't be translated. So clause_relids seems to\nbe a better key. I haven't made a decision yet.\n\n>\n> 2. 
The patch adds two extra pointers per RestrictInfo. They will\n> remain unused when partitionwise join is not used. Right now, I do not\n> see any increase in memory consumed by planner because of those\n> pointers even in case of unpartitioned tables; maybe they are absorbed\n> in memory alignment. They may show up as extra memory in the future. I\n> am wondering whether we can instead save and track translations in\n> PlannerInfo as a hash table using <rinfo_serial, required_relids (or\n> whatever is the answer to above question) of parent and child\n> respectively> as key. That will just add one extra pointer in\n> PlannerInfo when partitionwise join is not used. Please let me know\n> your suggestions.\n\nI will go ahead with a pointer in PlannerInfo for now.\n\n>\n> 3. I have changed adjust_appendrel_attrs_mutator() to return a\n> translated RestrictInfo if it already exists. IOW, it won't always\n> return a deep copy of given RestrictInfo as it does today. This can be\n> fixed by writing wrappers around adjust_appendrel_attrs() to translate\n> RestrictInfo specifically. But maybe we don't always need deep copies.\n> Are there any cases when we need translated deep copies of\n> RestrictInfo? Those cases will require fixing callers of\n> adjust_appendrel_attrs() instead of the mutator.\n\nI think it's better to handle the tracking logic outside\nadjust_appendrel_attrs. That will be some code churn but it will be\ncleaner and won't affect anything other than partitionwise joins.\n\n>\n> 4. IIRC, when partitionwise join was implemented we had discussed\n> creating child RestrictInfos using logic similar to\n> build_joinrel_restrictlist(). That might be another way to build\n> RestrictInfo only once and use it multiple times. 
But we felt that it\n> was a much harder problem to solve since it's not known which partitions\n> from joining partitioned tables will match and will be joined till we\n> enter try_partitionwise_join(), so the required RestrictInfos may not\n> be available in RelOptInfo::joininfo. Let me know your thoughts on\n> this.\n\nHere's a lengthy description of why I feel translations are better\ncompared to computing restrictlist and joininfo for a child join from\nthe joining relation's joininfo.\nConsider a query "select * from p, q, r where p.c1 = q.c1 and q.c1 =\nr.c1 and p.c2 + q.c2 < r.c2 and p.c3 != q.c3 and q.c3 != r.c3". The\nquery has the following clauses:\n1. p.c1 = q.c1\n2. q.c1 = r.c1\n3. p.c2 + q.c2 < r.c2\n4. p.c3 != q.c3\n5. q.c3 != r.c3\n\nThe first two clauses are added to EC machinery and do not appear in\njoininfo. They appear in restrictlist when we construct clauses in\nrestrictlist from ECs. Let's ignore them for now.\n\nAssume that each table p, q, r has partitions (p1, p2, ...), (q1, q2,\n...) and (r1, r2, ... ) respectively. Each triplet (pi, qi, ri) forms\nthe set of matching partitions from p, q, r respectively for all "i".\nConsider the join p1q1r1. We will generate relations p1, q1, r1, p1q1,\np1r1, q1r1 and p1q1r1 while building the last join. Below is a\ndescription of how these clauses would look in each of these relations\nand the list they appear in when computing that join. 
Please notice\nthe numeric suffixes carefully.\n\np1\njoininfo: p1.c2 + q.c2 < r.c2, p1.c3 != q.c3\nrestrictlist: <>\n\nq1\njoininfo: p.c2 + q1.c2 < r.c2, p.c3 != q1.c3, q1.c3 != r.c3\nrestrictlist: <>\n\nr1\njoininfo: p.c2 + q.c2 < r1.c2, q.c3 != r1.c3\nrestrictlist: <>\n\np1q1\njoininfo: p1.c2 + q1.c2 < r.c2, q1.c3 != r.c3\nrestrictlist: p1.c3 != q1.c3\n\nq1r1\njoininfo: p.c2 + q1.c2 < r1.c2, p.c3 != q1.c3\nrestrictlist: q1.c3 != r1.c3\n\np1r1\njoininfo: p1.c2 + q.c2 < r1.c2, p1.c3 != q.c3, q.c3 != r1.c3\nrestrictlist: <>\n\np1q1r1\njoininfo: <>\nrestrictlist for (p1q1)r1: p1.c2 + q1.c2 < r1.c2, q1.c3 != r1.c3\nrestrictlist for (p1r1)q1: p1.c2 + q1.c2 < r1.c2, p1.c3 != q1.c3, q1.c3 != r1.c3\nrestrictlist for p1(q1r1): p1.c2 + q1.c2 < r1.c2, p1.c3 != q1.c3\n\nIf we translate the clauses when building a join, e.g. translating p1.c3 !=\nq1.c3 when building p1q1 or p1q1r1, it would cause repeated\ntranslations. So the translations need to be saved in lower relations\nwhen we detect matching partitions and then use these translations.\nThat is something I have done in the attached patches. But the problem is the\nsame clause reaches its final translation through different\nintermediate translations as the join search advances. E.g. the\nevolution of p.c2 + q.c2 < r.c2 to p1.c2 + q1.c2 < r1.c2 has three\ndifferent intermediate translations at the second level of join. Each of\nthese intermediate translations conflicts with the others and none of\nthem can be saved in any of the second level joins as a candidate for\nthe last stage translation. Extending the logic in the patches would\nmake those more complicated.\n\nAnother possibility is to avoid the same clause being translated\nmultiple times when building the join using\nRestrictInfo::rinfo_serial. But simply that won't help avoiding\nrepeated translations caused by different join orders. E.g. 
we won't\nbe able to detect that p.c2 + q.c2 < r.c2 has been translated to p1.c2\n+ q1.c2 < r1.c2 already when we computed (p1r1)q1 or p1(q1r1) or\n(p1q1)r1 whichever was computed earlier. For that we need some\ntracking outside the join relations themselves like I did in my first\npatch.\n\nComing back to the problem of generating child restrictlist clauses\nfrom equivalence classes, I think it's easier with some tweaks to pass\nchild relids down to the minions when dealing with child joins. It\nseems to be working as is but I haven't tested it thoroughly.\n\nObtaining child clauses from parent clauses by translation and\ntracking the translations is less complex and may be more efficient\ntoo. I will post a patch on those lines soon.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 11 Aug 2023 18:24:37 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "Hi All,\n\nOn Fri, Aug 11, 2023 at 6:24 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Obtaining child clauses from parent clauses by translation and\n> tracking the translations is less complex and may be more efficient\n> too. I will post a patch on those lines soon.\n>\n\nPFA patch set to add infrastructure to track RestrictInfo translations\nand reuse them.\n\nPlannerInfo gets a new member \"struct HTAB *child_rinfo_hash\" which is\na hash table of hash tables keyed by RestrictInfo::rinfo_serial. Each\nelement in the array is a hash table of RestrictInfos keyed by\nRestrictInfo::required_relids as explained in my previous email. When\nbuilding child clauses when a. building child join rels or b. 
reparameterizing paths, we first access the first level hash table\nusing RestrictInfo::rinfo_serial of the parent and search for the required\ntranslation by computing the child RestrictInfo::required_relids\nobtained by translating RestrictInfo::required_relids of the parent\nRestrictInfo. If the translation doesn't exist, we create one and add\nit to the hash table.\n\nRestrictInfo::required_relids is the same for a RestrictInfo and its\ncommuted RestrictInfo. The order of operands is important for\nIndexClauses. Hence we track the commuted RestrictInfo in a new field\nRestrictInfo::comm_rinfo. RestrictInfo::is_commuted differentiates\nbetween a RestrictInfo and its commuted version. This is explained as\na comment in the patch. This scheme has a minor benefit of saving\nmemory when the same RestrictInfo is commuted multiple times.\n\nA hash table of hash tables is used instead of an array of hash tables\nsince a. not every rinfo_serial has a RestrictInfo associated with it,\nb. not every RestrictInfo has translations, and c. the exact\nsize of this array is not known till the planning ends since we\ncontinue to create new clauses as we create new RelOptInfos. Of\ncourse, an array can be repalloc'ed and unused slots in the array may\nnot waste a lot of memory. I am open to changing the hash table to an array,\nwhich may be more efficient.\n\nWith this set of patches, the memory consumption stands as below\nNumber of tables | without patch | with patch | % reduction |\nbeing joined | | | |\n--------------------------------------------------------------\n 2 | 40.8 MiB | 37.4 MiB | 8.46% |\n 3 | 151.6 MiB | 135.0 MiB | 10.96% |\n 4 | 464.0 MiB | 403.6 MiB | 12.00% |\n 5 | 1663.9 MiB | 1329.1 MiB | 20.12% |\n\nThe patch set is thus\n0001 - patch used to measure memory used during planning\n\n0002 - Patch to free intermediate Relids computed by\nadjust_child_relids_multilevel(). I didn't test memory consumption for\nmulti-level partitioning. But this is a clear improvement. 
In that\nfunction we free the AppendInfos array which is as many pointers long as\nthe number of relids. So it doesn't make sense not to free the Relids\nwhich can be {largest relid}/8 bytes long at least.\n\n0003 - patch to save and reuse commuted RestrictInfo. This patch by\nitself shows a small memory saving (3%) in the query below where the\nsame clause is commuted twice. 
The query does not contain any\n> partitioned tables.\n> create table t2 (a int primary key, b int, c int);\n> create index t2_a_b on t2(a, b);\n> select * from t2 where 10 = a\n> Memory used without patch: 13696 bytes\n> Memory used with patch: 13264 bytes\n>\n> 0004 - Patch which implements the hash table of hash table described\n> above and also code to avoid repeated RestrictInfo list translations.\n>\n> I will add this patchset to next commitfest.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\nPFA rebased patches. Nothing changes in 0002, 0003 and 0004. 0001 is\nthe squashed version of the latest patch set at\nhttps://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 31 Oct 2023 10:47:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "Hi! Thank you for your work on the subject, I think it's a really useful \nfeature.\n\nI've reviewed your patch and I have a few questions.\n\nFirst of all, have you thought about creating a gun parameter to display \nmemory scheduling information? I agree that this is an important \nfeature, but I think only for debugging.\n\nSecondly, I noticed that for the child_rinfo_hash key you use a counter \n(int) and can it lead to collisions? Why didn't you generate a hash from \nchildRestrictInfo for this? For example, something like how it is formed \nhere [0].\n\n[0] \nhttps://www.postgresql.org/message-id/43ad8a48-b980-410d-a83c-5beebf82a4ed%40postgrespro.ru \n\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional\n\n\n\n\n\n\nHi! 
", "msg_date": "Fri, 24 Nov 2023 13:20:52 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On 24.11.2023 13:20, Alena Rybakina wrote:\n>\n> Hi! Thank you for your work on the subject, I think it's a really \n> useful feature.\n>\n> I've reviewed your patch and I have a few questions.\n>\n> First of all, have you thought about creating a gun parameter to \n> display memory scheduling information? I agree that this is an \n> important feature, but I think only for debugging.\n>\n> Secondly, I noticed that for the child_rinfo_hash key you use a \n> counter (int) and can it lead to collisions? Why didn't you generate a \n> hash from childRestrictInfo for this? For example, something like how \n> it is formed here [0].\n>\n> [0] \n> https://www.postgresql.org/message-id/43ad8a48-b980-410d-a83c-5beebf82a4ed%40postgrespro.ru \n>\n>\nSorry, my first question was not clear, I mean: have you thought about \ncreating a guc parameter to display memory planning information?\n\n-- \nRegards,\nAlena Rybakina", "msg_date": "Fri, 24 Nov 2023 13:26:53 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Fri, Nov 24, 2023 at 3:56 PM Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>\n> On 24.11.2023 13:20, Alena Rybakina wrote:\n>\n> Hi! Thank you for your work on the subject, I think it's a really useful feature.\n>\n> I've reviewed your patch and I have a few questions.\n>\n> First of all, have you thought about creating a gun parameter to display memory scheduling information? I agree that this is an important feature, but I think only for debugging.\n\nNot a GUC parameter but I have a proposal to use EXPLAIN for the same. [1]\n\n>\n> Secondly, I noticed that for the child_rinfo_hash key you use a counter (int) and can it lead to collisions? Why didn't you generate a hash from childRestrictInfo for this? For example, something like how it is formed here [0].\n\nNot usually. But that's the only "key" we have to access a set of\nsemantically same RestrictInfos. 
Relids is another key to access the\nexact RestrictInfo. A child RestrictInfo can not be used since there\nwill be many child RestrictInfos. Similarly, the parent RestrictInfo can not be\nused since there will be multiple forms of the same RestrictInfo.\n\n[1] https://commitfest.postgresql.org/45/4492/\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 24 Nov 2023 16:24:28 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Tue, 31 Oct 2023 at 10:48, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Sep 14, 2023 at 4:39 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > The patch set is thus\n> > 0001 - patch used to measure memory used during planning\n> >\n> > 0002 - Patch to free intermediate Relids computed by\n> > adjust_child_relids_multilevel(). I didn't test memory consumption for\n> > multi-level partitioning. But this is clear improvement. In that\n> > function we free the AppendInfos array which as many pointers long as\n> > the number of relids. So it doesn't make sense not to free the Relids\n> > which can be {largest relid}/8 bytes long at least.\n> >\n> > 0003 - patch to save and reuse commuted RestrictInfo. This patch by\n> > itself shows a small memory saving (3%) in the query below where the\n> > same clause is commuted twice. 
The query does not contain any\n> > partitioned tables.\n> > create table t2 (a int primary key, b int, c int);\n> > create index t2_a_b on t2(a, b);\n> > select * from t2 where 10 = a\n> > Memory used without patch: 13696 bytes\n> > Memory used with patch: 13264 bytes\n> >\n> > 0004 - Patch which implements the hash table of hash table described\n> > above and also code to avoid repeated RestrictInfo list translations.\n> >\n> > I will add this patchset to next commitfest.\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n>\n> PFA rebased patches. Nothing changes in 0002, 0003 and 0004. 0001 is\n> the squashed version of the latest patch set at\n> https://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n=== applying patch\n./0001-Report-memory-used-for-planning-a-query-in--20231031.patch\n...\npatching file src/test/regress/expected/explain.out\nHunk #5 FAILED at 290.\nHunk #6 succeeded at 545 (offset 4 lines).\n1 out of 6 hunks FAILED -- saving rejects to file\nsrc/test/regress/expected/explain.out.rej\npatching file src/tools/pgindent/typedefs.list\nHunk #1 succeeded at 1562 (offset 18 lines).\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4564.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 08:25:47 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Sat, Jan 27, 2024 at 8:26 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 31 Oct 2023 at 10:48, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Sep 14, 2023 at 4:39 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > The patch set 
is thus\n> > > 0001 - patch used to measure memory used during planning\n> > >\n> > > 0002 - Patch to free intermediate Relids computed by\n> > > adjust_child_relids_multilevel(). I didn't test memory consumption for\n> > > multi-level partitioning. But this is clear improvement. In that\n> > > function we free the AppendInfos array which as many pointers long as\n> > > the number of relids. So it doesn't make sense not to free the Relids\n> > > which can be {largest relid}/8 bytes long at least.\n> > >\n> > > 0003 - patch to save and reuse commuted RestrictInfo. This patch by\n> > > itself shows a small memory saving (3%) in the query below where the\n> > > same clause is commuted twice. The query does not contain any\n> > > partitioned tables.\n> > > create table t2 (a int primary key, b int, c int);\n> > > create index t2_a_b on t2(a, b);\n> > > select * from t2 where 10 = a\n> > > Memory used without patch: 13696 bytes\n> > > Memory used with patch: 13264 bytes\n> > >\n> > > 0004 - Patch which implements the hash table of hash table described\n> > > above and also code to avoid repeated RestrictInfo list translations.\n> > >\n> > > I will add this patchset to next commitfest.\n> > >\n> > > --\n> > > Best Wishes,\n> > > Ashutosh Bapat\n> >\n> > PFA rebased patches. Nothing changes in 0002, 0003 and 0004. 
0001 is\n> > the squashed version of the latest patch set at\n> > https://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n> === applying patch\n> ./0001-Report-memory-used-for-planning-a-query-in--20231031.patch\n> ...\n> patching file src/test/regress/expected/explain.out\n> Hunk #5 FAILED at 290.\n> Hunk #6 succeeded at 545 (offset 4 lines).\n> 1 out of 6 hunks FAILED -- saving rejects to file\n> src/test/regress/expected/explain.out.rej\n> patching file src/tools/pgindent/typedefs.list\n> Hunk #1 succeeded at 1562 (offset 18 lines).\n>\n> Please post an updated version for the same.\n>\n> [1] - http://cfbot.cputube.org/patch_46_4564.log\n>\n> Regards,\n> Vignesh\n\nThanks Vignesh for the notification. PFA rebased patches. 0001 in\nearlier patch-set is now removed.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 30 Jan 2024 11:32:40 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "Hi,\n\nAfter taking a look at the patch optimizing SpecialJoinInfo allocations,\nI decided to take a quick look at this one too. I don't have any\nspecific comments on the code yet, but it seems quite a bit more complex\nthan the other patch ... it's introducing a HTAB into the optimizer,\nsurely that has costs too.\n\nI started by doing the same test as with the other patch, comparing\nmaster to the two patches (applied independently and both). 
And I got\nabout this (in MB):\n\n tables master sjinfo rinfo both\n -----------------------------------------------\n 2 37 36 34 33\n 3 138 129 122 113\n 4 421 376 363 318\n 5 1495 1254 1172 931\n\nUnlike the SpecialJoinInfo patch, I haven't seen any reduction in\nplanning time for this one.\n\nThe reduction in allocated memory is nice. I wonder what's allocating\nthe remaining memory, and what we'd have to do to reduce it further.\n\nHowever, this is a somewhat extreme example - it's joining 5 tables,\neach with 1000 partitions, using a partition-wise join. It reduces the\namount of memory, but the planning time is still quite high (and\nessentially the same as without the patch). So it's not like it'd make\nthem significantly more practical ... do we have any other ideas/plans\nhow to improve that?\n\nAFAIK we don't expect this to improve "regular" cases with modest number\nof tables / partitions, etc. But could it make them slower in some case?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 19 Feb 2024 00:05:44 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On 19/2/2024 06:05, Tomas Vondra wrote:\n> However, this is a somewhat extreme example - it's joining 5 tables,\n> each with 1000 partitions, using a partition-wise join. It reduces the\n> amount of memory, but the planning time is still quite high (and\n> essentially the same as without the patch). So it's not like it'd make\n> them significantly more practical ... do we have any other ideas/plans\n> how to improve that?\nThe planner principle of cleaning up all allocated structures after the \noptimization stage simplifies development and code. 
But, if we want to\nachieve horizontal scalability on many partitions, we should introduce a\nper-partition memory context and reset it in between. GEQO already has a \nshort-lived memory context, making designing extensions a bit more \nchallenging but nothing too painful.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Mon, 19 Feb 2024 08:37:54 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Mon, Feb 19, 2024 at 4:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> After taking a look at the patch optimizing SpecialJoinInfo allocations,\n> I decided to take a quick look at this one too. I don't have any\n> specific comments on the code yet, but it seems quite a bit more complex\n> than the other patch ... it's introducing a HTAB into the optimizer,\n> surely that has costs too.\n\nThanks for looking into this too.\n\n>\n> I started by doing the same test as with the other patch, comparing\n> master to the two patches (applied independently and both). And I got\n> about this (in MB):\n>\n> tables master sjinfo rinfo both\n> -----------------------------------------------\n> 2 37 36 34 33\n> 3 138 129 122 113\n> 4 421 376 363 318\n> 5 1495 1254 1172 931\n>\n> Unlike the SpecialJoinInfo patch, I haven't seen any reduction in\n> planning time for this one.\n\nYeah. That agreed with my observation as well.\n\n>\n> The reduction in allocated memory is nice. I wonder what's allocating\n> the remaining memory, and we'd have to do to reduce it further.\n\nPlease see reply to SpecialJoinInfo thread. Other than the current\npatches, we require a memory-efficient Relids implementation. 
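To put a rough number on why Relids sets add up: a Relids value is a Bitmapset, which needs about one bit per relid up to the largest member (the "{largest relid}/8 bytes" figure mentioned upthread for patch 0002). A back-of-the-envelope sketch, where the constants are assumptions about a typical 64-bit build rather than measured values:

```python
# Back-of-the-envelope estimate of the size of one Relids value
# (a PostgreSQL Bitmapset): one bit per relid up to the largest member,
# rounded up to whole 64-bit words, plus a small fixed header.
# Figures are illustrative, not measured.

WORD_BITS = 64          # bits per bitmapword, assuming a 64-bit build
HEADER_BYTES = 8        # node tag + word count, approximate

def bitmapset_bytes(largest_relid: int) -> int:
    """Approximate bytes for a Bitmapset whose largest member is largest_relid."""
    words = largest_relid // WORD_BITS + 1
    return HEADER_BYTES + words * (WORD_BITS // 8)

# Five tables with 1000 partitions each put over 5000 entries in the
# range table, so one Relids set mentioning a high relid needs:
print(bitmapset_bytes(5005))   # 640 bytes for a single set
```

Multiply that by the thousands of RestrictInfos, RelOptInfos and SpecialJoinInfos created during partitionwise join planning and the totals in the table above stop being surprising.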
I have\nshared some ideas in the slides I shared in the other thread, but\nhaven't spent time experimenting with any ideas there.\n\n>\n> However, this is a somewhat extreme example - it's joining 5 tables,\n> each with 1000 partitions, using a partition-wise join. It reduces the\n> amount of memory, but the planning time is still quite high (and\n> essentially the same as without the patch). So it's not like it'd make\n> them significantly more practical ... do we have any other ideas/plans\n> how to improve that?\n\nYuya has been working on reducing planning time [1]. He has some\nsignificant improvements in that area, based on my experiments. But\nthose patches are complex and still WIP.\n\n>\n> AFAIK we don't expect this to improve \"regular\" cases with modest number\n> of tables / partitions, etc. But could it make them slower in some case?\n>\n\nAFAIR, my experiments did not show any degradation in regular cases\nwith modest number of tables/partitions. The variation in planning\ntime was within the usual planning time variations.\n\n[1] https://www.postgresql.org/message-id/flat/CAExHW5uVZ3E5RT9cXHaxQ_DEK7tasaMN%3DD6rPHcao5gcXanY5w%40mail.gmail.com#112b3e104e0f9e39eb007abe075aae20\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Feb 2024 18:47:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Mon, Feb 19, 2024 at 5:17 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Mon, Feb 19, 2024 at 4:35 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > After taking a look at the patch optimizing SpecialJoinInfo allocations,\n> > I decided to take a quick look at this one too. I don't have any\n> > specific comments on the code yet, but it seems quite a bit more complex\n> > than the other patch ... 
it's introducing a HTAB into the optimizer,\n> > surely that has costs too.\n>\n> Thanks for looking into this too.\n>\n> >\n> > I started by doing the same test as with the other patch, comparing\n> > master to the two patches (applied independently and both). And I got\n> > about this (in MB):\n> >\n> > tables master sjinfo rinfo both\n> > -----------------------------------------------\n> > 2 37 36 34 33\n> > 3 138 129 122 113\n> > 4 421 376 363 318\n> > 5 1495 1254 1172 931\n> >\n> > Unlike the SpecialJoinInfo patch, I haven't seen any reduction in\n> > planning time for this one.\n>\n> Yeah. That agreed with my observation as well.\n>\n> >\n> > The reduction in allocated memory is nice. I wonder what's allocating\n> > the remaining memory, and we'd have to do to reduce it further.\n>\n> Please see reply to SpecialJoinInfo thread. Other that the current\n> patches, we require memory efficient Relids implementation. I have\n> shared some ideas in the slides I shared in the other thread, but\n> haven't spent time experimenting with any ideas there.\n>\n> >\n> > However, this is a somewhat extreme example - it's joining 5 tables,\n> > each with 1000 partitions, using a partition-wise join. It reduces the\n> > amount of memory, but the planning time is still quite high (and\n> > essentially the same as without the patch). So it's not like it'd make\n> > them significantly more practical ... do we have any other ideas/plans\n> > how to improve that?\n>\n> Yuya has been working on reducing planning time [1]. Has some\n> significant improvements in that area based on my experiments. But\n> those patches are complex and still WIP.\n>\n> >\n> > AFAIK we don't expect this to improve \"regular\" cases with modest number\n> > of tables / partitions, etc. But could it make them slower in some case?\n> >\n>\n> AFAIR, my experiments did not show any degradation in regular cases\n> with modest number of tables/partitions. 
The variation in planning\n> time was with the usual planning time variations.\n>\n\nDocumenting some comments from todays' patch review session\n1. Instead of a nested hash table, it might be better to use a flat hash\ntable to save more memory.\n2. new comm_rinfo member in RestrictInfo may have problems when copying\nRestrictInfo or translating it. Instead commuted versions may be tracked\noutside RestrictInfo\n\nCombining the above two, it feels like we need a single hash table with\n(commuted, rinfo_serial, relids) as key to store all the translations of a\nRestrictInfo.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Feb 19, 2024 at 5:17 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Mon, Feb 19, 2024 at 4:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> After taking a look at the patch optimizing SpecialJoinInfo allocations,\n> I decided to take a quick look at this one too. I don't have any\n> specific comments on the code yet, but it seems quite a bit more complex\n> than the other patch ... it's introducing a HTAB into the optimizer,\n> surely that has costs too.\n\nThanks for looking into this too.\n\n>\n> I started by doing the same test as with the other patch, comparing\n> master to the two patches (applied independently and both). And I got\n> about this (in MB):\n>\n>   tables    master     sjinfo    rinfo      both\n>   -----------------------------------------------\n>        2        37         36       34        33\n>        3       138        129      122       113\n>        4       421        376      363       318\n>        5      1495       1254     1172       931\n>\n> Unlike the SpecialJoinInfo patch, I haven't seen any reduction in\n> planning time for this one.\n\nYeah. That agreed with my observation as well.\n\n>\n> The reduction in allocated memory is nice. I wonder what's allocating\n> the remaining memory, and we'd have to do to reduce it further.\n\nPlease see reply to SpecialJoinInfo thread. 
Other that the current\npatches, we require memory efficient Relids implementation. I have\nshared some ideas in the slides I shared in the other thread, but\nhaven't spent time experimenting with any ideas there.\n\n>\n> However, this is a somewhat extreme example - it's joining 5 tables,\n> each with 1000 partitions, using a partition-wise join. It reduces the\n> amount of memory, but the planning time is still quite high (and\n> essentially the same as without the patch). So it's not like it'd make\n> them significantly more practical ... do we have any other ideas/plans\n> how to improve that?\n\nYuya has been working on reducing planning time [1]. Has some\nsignificant improvements in that area based on my experiments. But\nthose patches are complex and still WIP.\n\n>\n> AFAIK we don't expect this to improve \"regular\" cases with modest number\n> of tables / partitions, etc. But could it make them slower in some case?\n>\n\nAFAIR, my experiments did not show any degradation in regular cases\nwith modest number of tables/partitions. The variation in planning\ntime was with the usual planning time variations.Documenting some comments from todays' patch review session1. Instead of a nested hash table, it might be better to use a flat hash table to save more memory.2. new comm_rinfo member in RestrictInfo may have problems when copying RestrictInfo or translating it. 
Instead commuted versions may be tracked outside RestrictInfoCombining the above two, it feels like we need a single hash table with (commuted, rinfo_serial, relids) as key to store all the translations of a RestrictInfo.-- Best Wishes,Ashutosh Bapat", "msg_date": "Tue, 28 May 2024 21:04:17 -0700", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "Sorry for the delay in my response.\n\nOn Wed, May 29, 2024 at 9:34 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Documenting some comments from todays' patch review session\n\nI forgot to mention back then that both of the suggestions below came\nfrom Tom Lane.\n\n> 1. Instead of a nested hash table, it might be better to use a flat hash table to save more memory.\n\nDone. It indeed saves memory without impacting planning time.\n\n> 2. new comm_rinfo member in RestrictInfo may have problems when copying RestrictInfo or translating it. Instead commuted versions may be tracked outside RestrictInfo\n\nAfter commit 767c598954bbf72e0535f667e2e0667765604b2a,\nrepameterization of parent paths for child relation happen at the time\nof creating the plan. This reduces the number of child paths produced\nby reparameterization and also reduces the number of RestrictInfos\nthat get translated during reparameterization. During\nreparameterization commuted parent RestrictInfos are required to be\ntranslated to child RestrictInfos. Before the commit, this led to\ntranslating the same commuted parent RestrictInfo multiple times.\nAfter the commit, only one path gets reparameterized for a given\nparent and child pair. Hence we do not produce multiple copies of the\nsame commuted child RestrictInfo. Hence we don't need to keep track of\ncommuted child RestrictInfos anymore. 
Removed that portion of code\nfrom the patches.\n\nI made detailed memory consumption measurements with this patch for\nnumber of partitions changed from 0 (unpartitioned) to 1000 and for 2\nto 5-way joins. They are available in the spreadsheet at [1]. The raw\nmeasurement data is in the first sheet named \"pwj_mem_measurements raw\nnumbers\". The averages over multiple runs are in the second sheet named\n\"avg_numbers\". The rest of the sheets present the averages in a more\nconsumable manner. Please note that the averages make sense only for\nplanning time since the memory consumption remains the same with every\nrun. Also note that EXPLAIN now reports planning memory consumption in\nkB. Any changes to memory consumption below 1kB are not reported and\nhence not noticed. Here are some observations.\n\n1. When partitionwise join is not enabled, no changes to planner's\nmemory consumption are observed. See sheet named \"pwj disabled,\nplanning memory\".\n\n2. When partitionwise join is enabled, up to 17% (206MB) memory is\nsaved by the patch in case of 5-way self-join with 1000 partitions.\nThis is the maximum memory saving observed. The amount of memory saved\nincreases with the number of joins and number of partitions. See sheet\nwith name \"pwj enabled, planning memory\".\n\n3. After commit 767c598954bbf72e0535f667e2e0667765604b2a, we do not\ntranslate a parent RestrictInfo multiple times for the same\nparent-child pair in case of a *2-way partitionwise join*. But we\nstill consume memory in saving the child RestrictInfo in the hash\ntable. Hence in case of 2-way join we see increased memory consumption\nwith the patch compared to master. The memory consumption increases by\n13kB, 23kB, 76kB and 146kB for 10, 100, 500, 1000 partitions\nrespectively. This increase is smaller compared to the overall memory\nsaving. In order to avoid this memory increase, we will need to avoid\nusing the hash table for a 2-way join. 
We will need to know whether there\nwill be more than one partitionwise join before translating the\nRestrictInfos for the first partitionwise join. This is hard to\nachieve in all the cases since the decision to use partitionwise join\nhappens at the time of creating paths for a given join relation, which\nitself is computed on the fly. We may choose some heuristics which\ntake into account the number of partitioned tables in the query, their\npartition schemes, and the quals in the query to decide whether or not\nto track the translated child RestrictInfos. That would make the\ncode more complex, and more importantly the heuristics may not be able\nto keep up if we start using partitionwise join as an optimization\nstrategy for more cases (e.g. asymmetric partitionwise join [2]). The\nattached patch looks like a good tradeoff to me. But other opinions\nmight vary. Suggestions are welcome.\n\n4. There is no noticeable change in the planning time. I ran the same\nexperiment multiple times. The planning time variations from each\nexperiment do not show any noticeable pattern suggesting increase or\ndecrease in the planning time with the patch.\n\nA note about the code: I have added all the structures and functions\ndealing with the RestrictInfo hash table at the end of restrictinfo.c.\nI have not come across a C file in the PostgreSQL code base where private\nstructures are defined in the middle of the file; usually they are\ndefined at the beginning of the file. 
I am\nopen to suggestions which make the documentation easy while placing\nthe structures at the beginning of the file.\n\n[1] https://docs.google.com/spreadsheets/d/1CEjRWZ02vuR8fSwhYaNugewtX8f0kIm5pLoRA95f3s8/edit?usp=sharing\n[2] https://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 19 Aug 2024 18:43:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" }, { "msg_contents": "On Mon, Aug 19, 2024 at 6:43 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Sorry for the delay in my response.\n>\n> On Wed, May 29, 2024 at 9:34 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Documenting some comments from todays' patch review session\n>\n> I forgot to mention back then that both of the suggestions below came\n> from Tom Lane.\n>\n> > 1. Instead of a nested hash table, it might be better to use a flat hash table to save more memory.\n>\n> Done. It indeed saves memory without impacting planning time.\n>\n> > 2. new comm_rinfo member in RestrictInfo may have problems when copying RestrictInfo or translating it. Instead commuted versions may be tracked outside RestrictInfo\n>\n> After commit 767c598954bbf72e0535f667e2e0667765604b2a,\n> repameterization of parent paths for child relation happen at the time\n> of creating the plan. This reduces the number of child paths produced\n> by reparameterization and also reduces the number of RestrictInfos\n> that get translated during reparameterization. During\n> reparameterization commuted parent RestrictInfos are required to be\n> translated to child RestrictInfos. 
Before the commit, this led to\n> translating the same commuted parent RestrictInfo multiple times.\n> After the commit, only one path gets reparameterized for a given\n> parent and child pair. Hence we do not produce multiple copies of the\n> same commuted child RestrictInfo. Hence we don't need to keep track of\n> commuted child RestrictInfos anymore. Removed that portion of code\n> from the patches.\n>\n> I made detailed memory consumption measurements with this patch for\n> number of partitions changed from 0 (unpartitioned) to 1000 and for 2\n> to 5-way joins. They are available in the spreadsheet at [1]. The raw\n> measurement data is in the first sheet named \"pwj_mem_measurements raw\n> numbers\". The averages over multiple runs are in second sheet named\n> \"avg_numbers\". Rest of the sheet represent the averages in more\n> consumable manner. Please note that the averages make sense only for\n> planning time since the memory consumption remains same with every\n> run. Also note that EXPLAIN now reports planning memory consumption in\n> kB. Any changes to memory consumption below 1kB are not reported and\n> hence not noticed. Here are some observations.\n>\n> 1. When partitionwise join is not enabled, no changes to planner's\n> memory consumption are observed. See sheet named \"pwj disabled,\n> planning memory\".\n>\n> 2. When partitionwise join is enabled, upto 17% (206MB) memory is\n> saved by the patch in case of 5-way self-join with 1000 partitions.\n> This is the maximum memory saving observed. The amount of memory saved\n> increases with the number of joins and number of partitions. See sheet\n> with name \"pwj enabled, planning memory\"\n>\n> 3. After commit 767c598954bbf72e0535f667e2e0667765604b2a, we do not\n> translate a parent RestrictInfo multiple times for the same\n> parent-child pair in case of a *2-way partitionwise join*. But we\n> still consume memory in saving the child RestrictInfo in the hash\n> table. 
Hence in case of 2-way join we see increased memory consumption\n> with the patch compared to master. The memory consumption increases by\n> 13kb, 23kB, 76kB and 146kB for 10, 100, 500, 1000 partitions\n> respectively. This increase is smaller compared to the overall memory\n> saving. In order to avoid this memory increase, we will need to avoid\n> using hash table for 2-way join. We will need to know whether there\n> will be more than one partitionwise join before translating the\n> RestrictInfos for the first partitionwise join. This is hard to\n> achieve in all the cases since the decision to use partitionwise join\n> happens at the time of creating paths for a given join relation, which\n> itself is computed on the fly. We may choose some heuristics which\n> take into account the number of partitioned tables in the query, their\n> partition schemes, and the quals in the query to decide whether or not\n> to track the translated child RestrictInfos. But that will make the\n> code more complex, but more importantly the heuristics may not be able\n> to keep up if we start using partitionwise join as an optimization\n> strategy for more cases (e.g. asymmetric partitionwise join [2]). The\n> attached patch looks like a good tradeoff to me. But other opinions\n> might vary. Suggestions are welcome.\n>\n> 4. There is no noticeable change in the planning time. I ran the same\n> experiment multiple times. The planning time variations from each\n> experiment do not show any noticeable pattern suggesting increase or\n> decrease in the planning time with the patch.\n>\n> A note about the code: I have added all the structures and functions\n> dealing with the RestrictInfo hash table at the end of restrictinfo.c.\n> I have not come across a C file in PostgreSQL code base where private\n> structures are defined in the middle the file; usually they are\n> defined at the beginning of the file. 
But I have chosen it that way\n> here since it makes it easy to document the hash table and the\n> functions at one place at the beginning of this code section. I am\n> open to suggestions which make the documentation easy while placing\n> the structures at the beginning of the file.\n>\n> [1] https://docs.google.com/spreadsheets/d/1CEjRWZ02vuR8fSwhYaNugewtX8f0kIm5pLoRA95f3s8/edit?usp=sharing\n> [2] https://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n>\n\nNoticed that Tomas's email address has changed. Adding his new email address.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Aug 2024 19:15:13 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing memory consumed by RestrictInfo list translations in\n partitionwise join planning" } ]
[ { "msg_contents": "Hi All,\n\nIf add_path() and add_partial_path() do not find a new path to be\nsuperior to any existing paths, they free the new path. They free an\nexisting path if it is found to be inferior to the new path. But not\nall paths surviving in a RelOptInfo are used to create paths for\nrelations which use it as input. Further add_path and add_partial_path\ndo not free the whole path tree but just the topmost pathnode. The\nsubpath nodes are not freed because they may be referenced by other\npaths. The subpaths continue to occupy memory even if they are not\nused anywhere. As we build the relation tree upwards (base, join,\nupper relations) more and more such paths are accumulated and continue\nto consume memory till the statement ends. With partitionwise join\ninvolving partitioned tables with thousands of partitions this memory\nconsumption increases proportional to the number of partitions.\n\nAttached WIP patches address this issue by adding infrastructure to\ntrack references to paths and free unreferenced paths once they can be\ndeemed as useless. A new member Path::ref_count in a pathnode tracks\nhow many other objects, like pathlists in RelOptInfo and other paths,\nreference that pathnode. As the path nodes are referenced they are\n\"linked\" using link_path() to referencing objects. When the path nodes\nare no longer referenced they are \"unlinked\" using unlink_path() from\nthe referencing objects. Path nodes are freed using free_path(). The\nfunction unlinks the sub path nodes so that they can be freed when\ntheir reference count drops to 0. The paths whose references reach 0\nduring unlinking are freed automatically using free_path(). Patch 0002\nadds this infrastructure. 0003 and 0004 use these functions in example\ncases.\n\nWith these patches the memory consumption numbers look like below.\nExperiment\n----------\nMemory consumption is measured using a method similar to the one\ndescribed in [1]. 
The changes to EXPLAIN ANALYZE to report planning\nmemory are in 0001. Memory consumed when planning a self-join query is\nmeasured. The queries involve partitioned and unpartitioned tables\nrespectively. The partitioned table has 1000 partitions in it. The\ntable definitions and helper function can be found in setup.sql and\nthe queries can be found in queries.sql. This is the simplest setup.\nHigher savings may be seen with more complex setups involving indexes,\nSQL operators and clauses.\n\nTable 1: Join between unpartitioned tables\nNumber of tables | without patch | with patch | % reduction |\nbeing joined | | | |\n--------------------------------------------------------------\n 2 | 29.0 KiB | 28.9 KiB | 0.66% |\n 3 | 79.1 KiB | 76.7 KiB | 3.03% |\n 4 | 208.6 KiB | 198.2 KiB | 4.97% |\n 5 | 561.6 KiB | 526.3 KiB | 6.28% |\n\nTable 2: join between partitioned tables with partitionwise join\nenabled (enable_partitionwise_join = true).\nNumber of tables | without patch | with patch | % reduction |\nbeing joined | | | |\n----------------------------------------------------------------\n 2 | 40.3 MiB | 40.0 MiB | 0.70% |\n 3 | 146.9 MiB | 143.1 MiB | 2.55% |\n 4 | 445.4 MiB | 430.4 MiB | 3.38% |\n 5 | 1563.3 MiB | 1517.6 MiB | 2.92% |\n\nThe patch is not complete because of following issues:\n\na. 0003 and 0004 patches do not cover all the path nodes. I have\ncovered only those which I encountered in the queries I ran. If we\naccept this proposal, I will work on covering all the path nodes.\n\nb. It does not cover the entire RelOptInfo tree that the planner\ncreates. The paths in a lower level RelOptInfo can be deemed as\nuseless only when path creation for all the immediate upper\nRelOptInfos is complete. Thus we need access to both upper and lower\nlevel RelOptInfos at the same time. The RelOptInfos created while\nplanning join are available in standard_join_search(). 
Thus it's\npossible to free unused paths listed in all the RelOptInfos except the\ntopmost RelOptInfo in this function as done in the patch. But the\ntopmost RelOptInfo acts as an input to upper RelOptInfo in\ngrouping_planner() where we don't have access to the RelOptInfos at\nlower levels in the join tree. Hence we can't free the unused paths\nfrom topmost RelOptInfo and cascade that effect down the RelOptInfo\ntree. This is the reason why we don't see memory reduction in case of\n2-way join. This is also the reason why the numbers above are lower\nthan the actual memory that can be saved. If we decide to move ahead\nwith this approach, I will work on this too.\n\nc. The patch does not call link_path and unlink_path in all the\nrequired places. We will need some work, and possibly new\ninfrastructure and methods, to identify such places in future.\n\nAnother approach to fixing this would be to use a separate memory\ncontext for creating path nodes and then deleting the entire memory\ncontext at the end of planning once the plan is created. We will need\nto reset path lists as well as cheapest_path members in RelOptInfos as\nwell. This will possibly free up more memory and might be faster. But\nI have not tried it. The approach taken in the patch has an advantage\nover this one i.e. the paths can be freed at any stage in the planning\nusing free_unused_paths_from_rel() implemented in the patch. Thus we\ncan monitor the memory consumption and trigger garbage collection when\nit crosses a certain threshold. Or we may implement both the\napproaches to clean every bit of paths at the end of planning while\ngarbage collecting pathnodes when memory consumption goes beyond\nthe threshold. The reference mechanism may have other usages as well.\n\nSuggestions/comments welcome.\n\nReferences\n1. 
https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 27 Jul 2023 19:36:32 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, 28 Jul 2023 at 02:06, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Table 1: Join between unpartitioned tables\n> Number of tables | without patch | with patch | % reduction |\n> being joined | | | |\n> --------------------------------------------------------------\n> 2 | 29.0 KiB | 28.9 KiB | 0.66% |\n> 3 | 79.1 KiB | 76.7 KiB | 3.03% |\n> 4 | 208.6 KiB | 198.2 KiB | 4.97% |\n> 5 | 561.6 KiB | 526.3 KiB | 6.28% |\n\nI have mostly just random thoughts and some questions...\n\nHave you done any analysis of the node types that are consuming all\nthat memory? Are Path and subclasses of Path really that dominant in\nthis? The memory savings aren't that impressive and it makes me\nwonder if this is worth the trouble.\n\nWhat's the goal of the memory reduction work? If it's for\nperformance, does this increase performance? Tracking refcounts of\nPaths isn't free.\n\nWhy do your refcounts start at 1? This seems weird:\n\n+ /* Free the path if none references it. */\n+ if (path->ref_count == 1)\n\nDoes ref_count really need to be an int? There's a 16-bit hole you\ncould use between param_info and parallel_aware. I doubt you'd need\n32 bits of space for this.\n\nI agree that it would be nice to get rid of the \"if (!IsA(old_path,\nIndexPath))\" hack in add_path() and it seems like this idea could\nallow that. 
However, I think if we were to have something like this\nthen you'd want all the logic to be neatly contained inside pathnode.c\nwithout adding any burden in other areas to track or check Path\nrefcounts.\n\nI imagine the patch would be neater if you added some kind of Path\ntraversal function akin to expression_tree_walker() that allows you to\nvisit each subpath recursively and run a callback function on each.\n\nDavid\n\n\n", "msg_date": "Wed, 29 Nov 2023 08:40:02 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Wed, Nov 29, 2023 at 1:10 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 28 Jul 2023 at 02:06, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Table 1: Join between unpartitioned tables\n> > Number of tables | without patch | with patch | % reduction |\n> > being joined | | | |\n> > --------------------------------------------------------------\n> > 2 | 29.0 KiB | 28.9 KiB | 0.66% |\n> > 3 | 79.1 KiB | 76.7 KiB | 3.03% |\n> > 4 | 208.6 KiB | 198.2 KiB | 4.97% |\n> > 5 | 561.6 KiB | 526.3 KiB | 6.28% |\n>\n> I have mostly just random thoughts and some questions...\n>\n> Have you done any analysis of the node types that are consuming all\n> that memory? Are Path and subclasses of Path really that dominant in\n> this? The memory savings aren't that impressive and it makes me\n> wonder if this is worth the trouble.\n\nThis thread and patch targets saving memory consumed by Path nodes\ni.e. Path and its subclasses. Breakdown of memory consumed by various\nnodes can be found in [1] for partitioned table queries.\n\n>\n> What's the goal of the memory reduction work? If it's for\n> performance, does this increase performance? 
Tracking refcounts of\n> Paths isn't free.\n\nThis memory reduction work is part of the work to reduce the planner's memory\nconsumption while planning queries involving partitioned tables with a\nlarge number (thousands) of partitions [1]. The second table (copied\nbelow) in my first email in this thread gives the memory saved by\nfreeing unused path nodes when planning queries involving partitioned\ntables with a thousand partitions each.\n\nTable 2: join between partitioned tables with partitionwise join\nenabled (enable_partitionwise_join = true).\nNumber of tables | without patch | with patch | % reduction |\nbeing joined | | | |\n----------------------------------------------------------------\n 2 | 40.3 MiB | 40.0 MiB | 0.70% |\n 3 | 146.9 MiB | 143.1 MiB | 2.55% |\n 4 | 445.4 MiB | 430.4 MiB | 3.38% |\n 5 | 1563.3 MiB | 1517.6 MiB | 2.92% |\n\nPercentage-wise the memory savings are not great but because of the sheer amount of\nmemory used, the savings are in MBs. Also I don't think I managed to\nfree all the unused paths for the reasons listed in my original mail.\nBut I was doubtful whether the work is worth it. So I halted my work.\nI see you think that it's not worth it. So one vote against it.\n\n\n>\n> Why do your refcounts start at 1? This seems weird:\n>\n> + /* Free the path if none references it. */\n> + if (path->ref_count == 1)\n\nRefcounts start from 0. This code is from free_unused_paths_from_rel()\nwhich frees paths in the rel->pathlist. At this stage a path is\nreferenced from rel->pathlist, hence the count will be >= 1 and we should\nnot be freeing paths with refcount > 1, i.e. those referenced elsewhere.\n\n>\n> Does ref_count really need to be an int? There's a 16-bit hole you\n> could use between param_info and parallel_aware. I doubt you'd need\n> 32 bits of space for this.\n\nA 16-bit field allows the refcount to go up to 65535 (or twice of that).\nThis is somewhere between 18C9 and 19C9. 
If a (the cheapest) path in a\ngiven relation in a 19-relation query is referenced in all the joins\nthat that relation is part of, this refcount will be exhausted. If we\naren't dealing with queries involving that many relations already, we\nwill soon. Maybe we should add code to protect from overflows.\n\n>\n> I agree that it would be nice to get rid of the \"if (!IsA(old_path,\n> IndexPath))\" hack in add_path() and it seems like this idea could\n> allow that. However, I think if we were to have something like this\n> then you'd want all the logic to be neatly contained inside pathnode.c\n> without adding any burden in other areas to track or check Path\n> refcounts.\n\nThe code is in the following files\n src/backend/optimizer/path/allpaths.c:\nThe definitions of free_unused_paths_from_rel() and\nfree_unused_paths() can be moved to pathnode.c\n\n src/backend/optimizer/util/pathnode.c\n src/include/nodes/pathnodes.h\n src/include/optimizer/pathnode.h\nThose three together I consider as pathnode.c. But let me know if you\nfeel otherwise.\n\nsrc/backend/optimizer/path/joinpath.c\nthis is the only place other than pathnode.c where we use\nlink_path(). That's required to stop add_path from freeing a\nmaterialized path that is tried multiple times. We can't add_path that\npath to rel since it will get kicked out by the path that it's\nmaterialising for. But I think it's still path creation code so should\nbe ok. Let me know if you feel otherwise.\n\nThere may be other places but I can find those out and consider them\non a case-by-case basis.\n\n>\n> I imagine the patch would be neater if you added some kind of Path\n> traversal function akin to expression_tree_walker() that allows you to\n> visit each subpath recursively and run a callback function on each.\n\nYes. Agreed.\n\nI am fine with working on this further if the community thinks it is\nacceptable. It seems from your point of view the worth of this work is\nas follows\na. memory savings - not worth it\nb. 
getting rid of !IsA(old_path, IndexPath) - worth it\nc. problem of same path being added to multiple relations - not worth\nit since you have other solution\n\nCan't judge whether that means it's acceptable or not.\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 1 Dec 2023 12:29:39 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, 1 Dec 2023 at 19:59, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> I am fine to work on this further if the community thinks it is\n> acceptable. It seems from your point of view the worth of this work is\n> as follows\n> a. memory savings - not worth it\n> b. getting rid of !IsA(old_path, IndexPath) - worth it\n> c. problem of same path being added to multiple relations - not worth\n> it since you have other solution\n>\n> Can't judge whether that means it's acceptable or not.\n\nI think it's worth trying to get this working and we can run some\nperformance tests to see if it's cheap enough to use.\n\nI've not had enough time to fully process your patches, but on a quick\nlook, I don't quite understand why link_path() does not have to\nrecursively increment ref_counts in each of its subpaths. It seems\nyou're doing the opposite in free_path(), so it seems like this would\nbreak for cases like a SortPath on say, a LimitPath over a SeqScan.\nWhen you call create_sort_path you'll only increment the refcount on\nthe LimitPath. The SeqScan path then has a refcount 1 lower than it\nshould. 
If something calls unlink_path on the SeqScan path that could\nput the refcount to 0 and break the SortPath by freeing a Path\nindirectly referenced by the sort.\n\nRecursively incrementing the refcounts would slow the patch down a\nbit, so we'd need to test the performance of this. I think at the\nrate we create and free join paths in complex join problems we could\nend up bumping refcounts quite often.\n\nAnother way to fix the pfree issues was to add infrastructure to\nshallow copy paths. We could shallow copy paths whenever we borrow a\nPath from another RelOptInfo and then just reset the parent to the new\nRelOptInfo. That unfortunately makes your memory issue a bit worse.\nWe discussed this a bit in [1]. It also seems there was some\ndiscussion about wrapping a Path up in a ProjectionPath in there. That\nunfortunately also takes us in the opposite direction in terms of your\nmemory reduction goals.\n\nMaybe we can try to move forward with your refcount idea and see how\nthe performance looks. If that's intolerable then that might help us\ndecide on the next best alternative solution.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvo8tiB5xbJa94f4Mo8Of2fPJ2zG+zQQQTGr-ThjSsymQQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 8 Dec 2023 01:49:15 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Thu, Dec 7, 2023 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 1 Dec 2023 at 19:59, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > I am fine to work on this further if the community thinks it is\n> > acceptable. It seems from your point of view the worth of this work is\n> > as follows\n> > a. memory savings - not worth it\n> > b. getting rid of !IsA(old_path, IndexPath) - worth it\n> > c. 
problem of same path being added to multiple relations - not worth\n> > it since you have other solution\n> >\n> > Can't judge whether that means it's acceptable or not.\n>\n> I think it's worth trying to get this working and we can run some\n> performance tests to see if it's cheap enough to use.\n>\n> I've not had enough time to fully process your patches, but on a quick\n> look, I don't quite understand why link_path() does not have to\n> recursively increment ref_counts in each of its subpaths. It seems\n> you're doing the opposite in free_path(), so it seems like this would\n> break for cases like a SortPath on say, a LimitPath over a SeqScan.\n> When you call create_sort_path you'll only increment the refcount on\n> the LimitPath. The SeqScan path then has a refcount 1 lower than it\n> should. If something calls unlink_path on the SeqScan path that could\n> put the refcount to 0 and break the SortPath by freeing a Path\n> indirectly referenced by the sort.\n\nLet me explain how linking and unlinking works. You may skip over it\nif you are comfortable with this logic already.\nrefcounts count how many other paths or objects are referencing a\ngiven path. E.g. we have three path chains as follows\n1. joinpath_1->joinpath_2->seqpath_1,\n2. joinpath_3->joinpath_4->seqpath_1,\n3. joinpath_5->joinpath_2->seqpath_1\n\nPlease note that this is not full path tree/forest. It is difficult to\ndescribe the whole path forest in text format. But this should be\nsufficient to get the gist of how linking and unlinking works.\n\nLet's say all these paths are listed in pathlists of respective\nRelOptInfos. Assume that joinpath_1, joinpath_3 and joinpath_5 are all\npart of the topmost RelOptInfo. 
Then the refcounts will be as follows:\njoinpath_1, joinpath_3 and joinpath_5 -> each 1 (RelOptInfo)\njoinpath_2 - 3 (joinpath_1, joinpath_5 and RelOptInfo)\njoinpath_4 - 2 (joinpath_3, RelOptInfo)\nseqpath_1 - 3 (joinpath_2, joinpath_4, RelOptInfo)\n\nIf joinpath_3 is finally chosen to be converted into a plan,\njoinpath_1 and joinpath_5 will be unlinked, their refcounts drop to 0\nand they will be freed.\nUnlinking and freeing them reduces the refcount of joinpath_2 to 1.\nWhen we cleanse the pathlist of the corresponding RelOptInfo,\njoinpath_2's refcount will reduce to 0 and it will be freed.\nFreeing joinpath_2 will reduce the refcount of seqpath_1 to 2\n(joinpath_4 and RelOptInfo).\njoinpath_3 and joinpath_4 will have their refcounts unchanged.\n\nThis will leave the paths joinpath_3, joinpath_4 and seqpath_1 to be\nconverted to plans. The rest of the paths will be freed.\n\nIf we increment the refcount all the way down the path forest, they\nwon't really be refcounts anymore since they will count indirect\nreferences as well. Then unlink_path also has to traverse the entire\ntree resetting refcounts. Both of those are unnecessary. Please note\nthat it's only free_path which calls unlink_path on the referenced\npaths, not unlink_path itself.\n\nIn your example, the path chain looks like this:\nsort_path->limit_path->seqpath with refcounts 1, 1, 2 respectively. I\nam assuming that seqpath is referenced by the pathlist of its\nRelOptInfo and by limitpath, sortpath is referenced by its RelOptInfo\nor is chosen as the final path, and limitpath is not referenced in a\nRelOptInfo but is referenced by sortpath.\n\nNow the only things that can call unlink_path on seqpath are free_path\non limitpath OR pathlist cleaning of its RelOptInfo. In both cases\nthe refcount will correctly report the number of objects referencing\nit. So I don't see a problem here. 
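To make the mechanics described above concrete, here is a minimal,
self-contained sketch of the scheme in C. The function names mirror the
ones used in this discussion, but the struct and functions are
simplified stand-ins I made up for illustration, not the actual code
from the patches (which works on PostgreSQL's Path nodes and pathlists):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for a Path node: one subpath plus a direct-reference count. */
typedef struct Path
{
    int          ref_count;     /* number of *direct* references only */
    struct Path *subpath;       /* at most one subpath, for simplicity */
} Path;

/* Record one more direct reference; subpaths are deliberately untouched. */
static void
link_path(Path *path)
{
    path->ref_count++;
}

static void unlink_path(Path *path);

/* Free a path nothing references, dropping its reference on its subpath. */
static void
free_path(Path *path)
{
    if (path->subpath)
        unlink_path(path->subpath);
    free(path);
}

/* Drop one direct reference; free the path once nothing references it. */
static void
unlink_path(Path *path)
{
    path->ref_count--;
    if (path->ref_count == 0)
        free_path(path);
}

/* Create a path over an optional subpath, taking a reference on it. */
static Path *
make_path(Path *subpath)
{
    Path *path = (Path *) calloc(1, sizeof(Path));

    path->subpath = subpath;
    if (subpath)
        link_path(subpath);
    return path;
}
```

Note how link_path() touches only the path it is given: refcounts count
direct references, so freeing cascades downward through free_path() ->
unlink_path() instead of requiring a recursive walk at link time.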
If you can provide me a query that\ndoes this, I will test it.\n\n\n> Recursively incrementing the refcounts would slow the patch down a\n> bit, so we'd need to test the performance of this. I think at the\n> rate we create and free join paths in complex join problems we could\n> end up bumping refcounts quite often.\n\nThis won't be necessary as explained above.\n\n>\n> Another way to fix the pfree issues was to add infrastructure to\n> shallow copy paths. We could shallow copy paths whenever we borrow a\n> Path from another RelOptInfo and then just reset the parent to the new\n> RelOptInfo. That unfortunately makes your memory issue a bit worse.\n> We discussed this a bit in [1]. It also seems there was some\n> discussion about wrapping a Path up in a ProjectionPath in there. That\n> unfortunately also takes us in the opposite direction in terms of your\n> memory reduction goals.\n\nYeah.\n\n>\n> Maybe we can try to move forward with your refcount idea and see how\n> the performance looks. If that's intolerable then that might help us\n> decide on the next best alternative solution.\n\nI will report performance numbers (planning time) with my current\npatchset which has limitation listed in [1]. If performance gets\nworse, we will abandon this idea. If not, I will work on completing\nthe patch. Does that work for you?\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 8 Dec 2023 10:32:08 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, 8 Dec 2023 at 18:02, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> given path. E.g. we have three path chains as follows\n> 1. joinpath_1->joinpath_2->seqpath_1,\n> 2. joinpath_3->joinpath_4->seqpath_1,\n> 3. 
joinpath_5->joinpath_2->seqpath_1\n>\n> Please note that this is not full path tree/forest. It is difficult to\n> describe the whole path forest in text format. But this should be\n> sufficient to get the gist of how linking and unlinking works.\n>\n> Let's say all these paths are listed in pathlists of respective\n> RelOptInfos. Assume that joinpath_1, joinpath_3 and joinpath_5 are all\n> part of the topmost RelOptInfo. Then the refcounts will be as follows\n> joinpath_1, joinpath_3 and joinpath_5 -> each 1 (RelOptInfo)\n> joinpath_2 - 3 (joinpath_2, joinpath_5 and RelOptInfo)\n> joinpath_4 - 2 (joinpath_3, RelOptInfo)\n> seqpath_1 - 3 (joinpath_2, joinpath_4, RelOptInfo)\n\nI think this likely works fine providing you always unlink paths from\nthe root of the path tree. When you start unlinking from non-root\npaths I think you could start to see problems unless you increment\nrefcounts recursively.\n\nThe problem I had when working on improving the union planner was when\nbuilding the final paths for a union child.\n\nLet's say I have the query: SELECT a,b FROM t1 UNION SELECT x,y FROM t2;\n\nWhen planning the first union child, I'd like to provide the union\nplanner with a sorted path and also the cheapest scan path on t1 to\nallow it to Hash Aggregate on the cheapest path.\n\nLet's say the planner produces the following paths on t1.\n\nSeqScan_t1\n\nWhen I create the final union child paths I want to try and produce a\nfew sorted paths to allow MergeAppend to work. 
I'll loop over each\npath in t1's pathlist.\n\nSeqScan_t1 isn't sorted, so I'll make Sort1->SeqScan_t1 and add_path\nthat to final_rel.\n\nNow, I also want to allow cheap Hash Aggregates to implement the\nUNION, so I'll include SeqScan_t1 without sorting it.\n\nIf I now do add_path(final_rel, SeqScan_t1), add_path() could see that\nthe SeqScan cost is fuzzily the same as the Sort->SeqScan_t1 and since\nthe sort version has better PathKeys, add_path will unlink SeqScan_t1.\n\nNow, I don't think your scheme for refcounting paths is broken by this\nbecause you'll increment the refcount of the SeqScan_t1 when the Sort\nuses it as its subpath, but if instead of a Sort->SeqScan that was\nIncrementalSort->Sort->SeqScan, then suddenly you're not incrementing\nthe refcount for the SeqScan when you create the Incremental Sort due\nto the Sort being between the two.\n\nSo, if I now do add_path(final_rel, Sort2->Incremental_Sort->SeqScan_t1)\n\nSort1 is rejected due to cost and we unlink it. Sort1 refcount gets to\n0 and we free the path and the SeqScan_t1's refcount goes down by 1\nbut does not reach 0.\n\nWe then add_path(final_rel, SeqScan_t1) but this path is fuzzily the\nsame cost as Sort2 so we reject SeqScan_t1 due to Sort2 having better\npathkeys and we unlink SeqScan_t1. Refcount on SeqScan_t1 gets to 0\nand we free the path. We need SeqScan_t1 for the incremental sort.\n\nPerhaps in reality here, since SeqScan_t1 exists in the base rel's\npathlist we'd still have a refcount from that, but I assume at some\npoint you want to start freeing up paths from RelOptInfos that we're\ndone with, in which case the SeqScan_t1's refcount would reach 0 at\nthat point and we'd crash during create plan of the\nSort2->Incremental_Sort->SeqScan_t1 plan.\n\nHave I misunderstood something here? 
As far as I see it with your\nnon-recursive refcounts, it's only safe to free from the root of a\npathtree and isn't safe when unlinking paths that are the subpath of\nanother path unless the unlinking is done from the root.\n\nDavid\n\n\n", "msg_date": "Fri, 8 Dec 2023 20:31:53 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, Dec 8, 2023 at 1:02 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 8 Dec 2023 at 18:02, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > given path. E.g. we have three path chains as follows\n> > 1. joinpath_1->joinpath_2->seqpath_1,\n> > 2. joinpath_3->joinpath_4->seqpath_1,\n> > 3. joinpath_5->joinpath_2->seqpath_1\n> >\n> > Please note that this is not full path tree/forest. It is difficult to\n> > describe the whole path forest in text format. But this should be\n> > sufficient to get the gist of how linking and unlinking works.\n> >\n> > Let's say all these paths are listed in pathlists of respective\n> > RelOptInfos. Assume that joinpath_1, joinpath_3 and joinpath_5 are all\n> > part of the topmost RelOptInfo. Then the refcounts will be as follows\n> > joinpath_1, joinpath_3 and joinpath_5 -> each 1 (RelOptInfo)\n> > joinpath_2 - 3 (joinpath_2, joinpath_5 and RelOptInfo)\n> > joinpath_4 - 2 (joinpath_3, RelOptInfo)\n> > seqpath_1 - 3 (joinpath_2, joinpath_4, RelOptInfo)\n>\n> I think this likely works fine providing you always unlink paths from\n> the root of the path tree. 
When you start unlinking from non-root\n> paths I think you could start to see problems unless you increment\n> refcounts recursively.\n>\n> The problem I had when working on improving the union planner was when\n> building the final paths for a union child.\n>\n> Let's say I have the query: SELECT a,b FROM t1 UNION SELECT x,y FROM t2;\n>\n> When planning the first union child, I'd like to provide the union\n> planner with a sorted path and also the cheapest scan path on t1 to\n> allow it to Hash Aggregate on the cheapest path.\n>\n> Let's say the planner produces the following paths on t1.\n>\n> SeqScan_t1\n>\n> When I create the final union child paths I want to try and produce a\n> few sorted paths to allow MergeAppend to work. I'll loop over each\n> path in t1's pathlist.\n>\n> SeqScan_t1 isn't sorted, so I'll make Sort1->SeqScan_t1 add add_path\n> that to final_rel.\n>\n> Now, I also want to allow cheap Hash Aggregates to implement the\n> UNION, so I'll include SeqScan_t1 without sorting it.\n>\n> If I now do add_path(final_rel, SeqScan_t1), add_path() could see that\n> the SeqScan cost is fuzzily the same as the Sort->SeqScan_t1 and since\n> the sort version has better PathKeys, add_path will unlink SeqScan_t1.\n>\n> Now, I don't think your scheme for refcounting paths is broken by this\n> because you'll increment the refcount of the SeqScan_t1 when the Sort\n> uses it as its subpath, but if instead of a Sort->SeqScan that was\n> IncrementalSort->Sort->SeqScan, then suddenly you're not incrementing\n> the refcount for the SeqScan when you create the Incremental Sort due\n> to the Sort being between the two.\n>\n> So, if I now do add_path(final_rel, Sort2->Incremental_Sort->SeqScan_t1)\n>\n> Sort1 is rejected due to cost and we unlink it. Sort1 refcount gets to\n> 0 and we free the path and the SeqScan_t1's refcount goes down by 1\n> but does not reach 0.\n\nWhat is Sort1? Sort1->Seqscan_t1 OR incremental_sort->Sort->seqscan_t1?\n\nI am getting lost here. 
But I think you have misunderstood the code so\nthis question is irrelevant. See next.\n\n>\n> We then add_path(final_rel, SeqScan_t1) but this path is fuzzily the\n> same cost as Sort2 so we reject SeqScan_t1 due to Sort2 having better\n> pathkeys and we unlink SeqScan_t1. Refcount on SeqScan_t1 gets to 0\n> and we free the path. We need SeqScan_t1 for the incremental sort.\n\nthe path being presented to add_path is never passed to unlink_path().\nIf it's suboptimal to any existing path, it is passed to free_path()\nwhich in its current form will throw an error. But that can fixed.\nfree_path can just ignore a path with refcount > 0, if we want to\npresent the same path to add_path multiple times.\n\n>\n> Perhaps in reality here, since SeqScan_t1 exists in the base rel's\n> pathlist we'd still have a refcount from that, but I assume at some\n> point you want to start freeing up paths from RelOptInfos that we're\n> done with, in which case the SeqScan_t1's refcount would reach 0 at\n> that point and we'd crash during create plan of the\n> Sort2->Incremental_Sort->SeqScan_t1 plan.\n>\n> Have I misunderstood something here? As far as I see it with your\n> non-recursive refcounts, it's only safe to free from the root of a\n> pathtree and isn't safe when unlinking paths that are the subpath of\n> another path unless the unlinking is done from the root.\n>\n\nAs long as we call link_path every time we reference a path, we could\nstart freeing paths from anywhere. The reference count takes care of\nnot freeing referenced paths. 
Of course, I will need to fix\nfree_path() as above.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 8 Dec 2023 19:36:38 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Thu, Dec 7, 2023 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Maybe we can try to move forward with your refcount idea and see how\n> the performance looks. If that's intolerable then that might help us\n> decide on the next best alternative solution.\n>\n\nHere are the performance numbers.\n\nsetup\n\ncreate table t1 (a integer primary key, b integer);\ncreate table t1_parted (a integer primary key, b integer) partition by range(a);\ncreate 1000 partitions of t1\n\nquery (a five-way self-join)\nselect * from t1 a, t1 b, t1 c, t1 d, t1 e where a.a = b.a and b.a =\nc.a and c.a = d.a and d.a = e.a -- unpartitioned table case\nselect * from t1_parted a, t1_parted b, t1_parted c, t1_parted d,\nt1_parted e where a.a = b.a and b.a = c.a and c.a = d.a and d.a =\ne.a; -- partitioned table case\n\nThe numbers with the patches attached to [1], with the limitations\nlisted in that email, are as follows.\n\nI ran each query 10 times through EXPLAIN (SUMMARY) and averaged the\nplanning time (in ms) with and without the patch.\nunpartitioned case\nwithout patch: 0.25\nwith patch: 0.19\nThis improvement is probably misleading. The planning time for this\nquery changes a lot.\n\npartitioned case (without partitionwise join)\nwithout patch: 14580.65\nwith patch: 14625.12\n% degradation: 0.3%\n\npartitioned case (with partitionwise join)\nwithout patch: 23273.69\nwith patch: 23328.52\n % degradation: 0.2%\n\nThat looks pretty small considering the benefits. 
What do you think?\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 14 Dec 2023 17:34:07 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "Forgot to mention,\n\nOn Thu, Dec 14, 2023 at 5:34 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 7, 2023 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > Maybe we can try to move forward with your refcount idea and see how\n> > the performance looks. If that's intolerable then that might help us\n> > decide on the next best alternative solution.\n> >\n>\n> Here are performance numbers\n>\n> setup\n>\n> create table t1 (a integer primary key, b integer);\n> create table t1_parted (a integer primary key, b integer) partition by range(a);\n> create 1000 partitions of t1\n>\n> query (a five way self-join)\n> select * from t1 a, t1 b, t1 c, t1 d, t1 e where a.a = b.a and b.a =\n> c.a and c.a = d.a and d.a = e.a -- unpartitioned table case\n> select * from t1_parted a, t1_parted b, t1_parted c, t1_parted d,\n> t1_parted e where a.a = b.a and b.a = c.a and c.a = d.a and d.a =\n> e.a; -- partitioned table case\n>\n> The numbers with patches attached to [1] with limitations listed in\n> the email are thus\n>\n> Ran each query 10 times through EXPLAIN (SUMMARY) and averaged\n> planning time with and without patch.\n> unpartitioned case\n> without patch: 0.25\n> with patch: 0.19\n> this improvement is probably misleading. 
The planning time for this\n> query change a lot.\n>\n> partitioned case (without partitionwise join)\n> without patch: 14580.65\n> with patch: 14625.12\n> % degradation: 0.3%\n>\n> partitioned case (with partitionwise join)\n> without patch: 23273.69\n> with patch: 23328.52\n> % degradation: 0.2%\n>\n> That looks pretty small considering the benefits. What do you think?\n>\n> [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n\nIf you want to experiment, please use attached patches. There's a fix\nfor segfault during initdb in them. The patches are still raw.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 15 Dec 2023 05:22:54 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, Dec 15, 2023 at 5:22 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n\n> >\n> > That looks pretty small considering the benefits. What do you think?\n> >\n> > [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n>\n> If you want to experiment, please use attached patches. There's a fix\n> for segfault during initdb in them. The patches are still raw.\n\nFirst patch is no longer required. Here's rebased set\n\nThe patches are raw. make check has some crashes that I need to fix. I\nam waiting to hear whether this is useful and whether the design is on\nthe right track.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 6 Feb 2024 18:21:38 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On 6/2/2024 19:51, Ashutosh Bapat wrote:\n> On Fri, Dec 15, 2023 at 5:22 AM Ashutosh Bapat\n> The patches are raw. make check has some crashes that I need to fix. 
I\n> am waiting to hear whether this is useful and whether the design is on\n> the right track.\nLet me offer a few opinions on this feature.\nI generally like the idea of a refcount field. We had problems \ngenerating alternative paths in extensions and designing the Asymmetric \nJoin feature [1] when we proposed an alternative path one level below \nand called the add_path() routine. We had lost the path in the path list \nreferenced from the upper RelOptInfo. This approach allows for more \nconvenient solutions instead of hacking with a copy/restore pathlist.\nBut I'm not sure about freeing unreferenced paths. I would have to see \nalternatives in the pathlist.\nAbout partitioning. As I discovered planning issues connected to \npartitions, the painful problem is a rule, according to which we are \ntrying to use all nomenclature of possible paths for each partition. \nWith indexes, it quickly increases optimization work. IMO, this can help \na 'symmetrical' approach, which could restrict the scope of possible \npathways for upcoming partitions if we filter some paths in a set of \npreviously planned partitions.\nAlso, I am glad to see a positive opinion about the path_walker() \nroutine. 
Somewhere else, for example, in [2], it seems we need it too.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n[2] \nhttps://www.postgresql.org/message-id/flat/CAMbWs496%2BN%3DUAjOc%3DrcD3P7B6oJe4rZw08e_TZRUsWbPxZW3Tw%40mail.gmail.com\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 15 Feb 2024 11:11:38 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Thu, Feb 15, 2024 at 9:41 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 6/2/2024 19:51, Ashutosh Bapat wrote:\n> > On Fri, Dec 15, 2023 at 5:22 AM Ashutosh Bapat\n> > The patches are raw. make check has some crashes that I need to fix. I\n> > am waiting to hear whether this is useful and whether the design is on\n> > the right track.\n> Let me write words of opinion on that feature.\n> I generally like the idea of a refcount field. We had problems\n> generating alternative paths in extensions and designing the Asymmetric\n> Join feature [1] when we proposed an alternative path one level below\n> and called the add_path() routine. We had lost the path in the path list\n> referenced from the upper RelOptInfo. This approach allows us to make\n> more handy solutions instead of hacking with a copy/restore pathlist.\n\nThanks for your comments. I am glad that you find this useful.\n\n> But I'm not sure about freeing unreferenced paths. I would have to see\n> alternatives in the pathlist.\n\nI didn't understand this. Can you please elaborate? A path in any\npathlist is referenced. An unreferenced path should not be in any\npathlist.\n\n> About partitioning. 
As I discovered planning issues connected to\n> partitions, the painful problem is a rule, according to which we are\n> trying to use all nomenclature of possible paths for each partition.\n> With indexes, it quickly increases optimization work. IMO, this can help\n> a 'symmetrical' approach, which could restrict the scope of possible\n> pathways for upcoming partitions if we filter some paths in a set of\n> previously planned partitions.\n\nfilter or free?\n\n> Also, I am glad to see a positive opinion about the path_walker()\n> routine. Somewhere else, for example, in [2], it seems we need it too.\n>\n\nI agree that we should introduce path_walker() yes.\n\nI am waiting for David's opinion about the performance numbers shared\nin [1] before working further on the patches and bring them out of WIP\nstate.\n\n[1] postgr.es/m/CAExHW5uDkMQL8SicV3_=AYcsWwMTNR8GzJo310J7sv7TMLoL6Q@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 15 Feb 2024 17:36:16 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On 15/2/2024 19:06, Ashutosh Bapat wrote:\n> On Thu, Feb 15, 2024 at 9:41 AM Andrei Lepikhov\n>> But I'm not sure about freeing unreferenced paths. I would have to see\n>> alternatives in the pathlist.\n> \n> I didn't understand this. Can you please elaborate? A path in any\n> pathlist is referenced. An unreferenced path should not be in any\n> pathlist.\nI mean that at some point, an extension can reconsider the path tree \nafter building the top node of this path. I vaguely recall that we \nalready have (or had) kind of such optimization in the core where part \nof the plan changes after it has been built.\nLive example: right now, I am working on the code like MSSQL has - a \ncombination of NestLoop and HashJoin paths and switching between them in \nreal-time. 
It requires both paths in the path list at the moment when \nextensions are coming. Even if one of them isn't referenced from the \nupper pathlist, it may still be helpful for the extension.\n\n>> About partitioning. As I discovered planning issues connected to\n>> partitions, the painful problem is a rule, according to which we are\n>> trying to use all nomenclature of possible paths for each partition.\n>> With indexes, it quickly increases optimization work. IMO, this can help\n>> a 'symmetrical' approach, which could restrict the scope of possible\n>> pathways for upcoming partitions if we filter some paths in a set of\n>> previously planned partitions.\n> \n> filter or free?\nFilter.\nI meant that Postgres tries to apply IndexScan, BitmapScan, \nIndexOnlyScan, and other strategies, passing through the partition \nindexes. The optimizer spends a lot of time doing that. So, why not \nintroduce a symmetrical strategy and give away from the search some \nindexes of types of scan based on the pathifying experience of previous \npartitions of the same table: if you have dozens of partitions, is it \nbeneficial for the system to find a bit more optimal IndexScan on one \npartition having SeqScans on 999 others?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 16 Feb 2024 10:12:47 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Fri, Feb 16, 2024 at 8:42 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Live example: right now, I am working on the code like MSSQL has - a\n> combination of NestLoop and HashJoin paths and switching between them in\n> real-time. 
Even if one of them isn't referenced from the\n> upper pathlist, it may still be helpful for the extension.\n\nThere is no guarantee that every path presented to add_path will be\npreserved. Suboptimal paths are freed as and when add_path discovers\nthat they are suboptimal. So I don't think an extension can rely on\nexistence of a path. But having a refcount makes it easy to preserve\nthe required paths by referencing them.\n\n>\n> >> About partitioning. As I discovered planning issues connected to\n> >> partitions, the painful problem is a rule, according to which we are\n> >> trying to use all nomenclature of possible paths for each partition.\n> >> With indexes, it quickly increases optimization work. IMO, this can help\n> >> a 'symmetrical' approach, which could restrict the scope of possible\n> >> pathways for upcoming partitions if we filter some paths in a set of\n> >> previously planned partitions.\n> >\n> > filter or free?\n> Filter.\n> I meant that Postres tries to apply IndexScan, BitmapScan,\n> IndexOnlyScan, and other strategies, passing throughout the partition\n> indexes. The optimizer spends a lot of time doing that. So, why not\n> introduce a symmetrical strategy and give away from the search some\n> indexes of types of scan based on the pathifying experience of previous\n> partitions of the same table: if you have dozens of partitions, Is it\n> beneficial for the system to find a bit more optimal IndexScan on one\n> partition having SeqScans on 999 other?\n>\nIIUC, you are suggesting that instead of planning each\npartition/partitionwise join, we only create paths with the strategies\nwhich were found to be optimal with previous partitions. That's a good\nheuristic but it won't work if partition properties - statistics,\nindexes etc. 
differ between groups of partitions.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Feb 2024 17:55:28 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On 19/2/2024 19:25, Ashutosh Bapat wrote:\n> On Fri, Feb 16, 2024 at 8:42 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> Live example: right now, I am working on the code like MSSQL has - a\n>> combination of NestLoop and HashJoin paths and switching between them in\n>> real-time. It requires both paths in the path list at the moment when\n>> extensions are coming. Even if one of them isn't referenced from the\n>> upper pathlist, it may still be helpful for the extension.\n> \n> There is no guarantee that every path presented to add_path will be\n> preserved. Suboptimal paths are freed as and when add_path discovers\n> that they are suboptimal. So I don't think an extension can rely on\n> existence of a path. But having a refcount makes it easy to preserve\n> the required paths by referencing them.\nI don't insist, just provide my use case. It would be ideal if you would \nprovide some external routines for extensions that allow for sticking \nthe path in pathlist even when it has terrible cost estimation.\n> \n>>\n>>>> About partitioning. As I discovered planning issues connected to\n>>>> partitions, the painful problem is a rule, according to which we are\n>>>> trying to use all nomenclature of possible paths for each partition.\n>>>> With indexes, it quickly increases optimization work. 
IMO, this can help\n>>>> a 'symmetrical' approach, which could restrict the scope of possible\n>>>> pathways for upcoming partitions if we filter some paths in a set of\n>>>> previously planned partitions.\n>>>\n>>> filter or free?\n>> Filter.\n>> I meant that Postgres tries to apply IndexScan, BitmapScan,\n>> IndexOnlyScan, and other strategies, passing throughout the partition\n>> indexes. The optimizer spends a lot of time doing that. So, why not\n>> introduce a symmetrical strategy and give away from the search some\n>> indexes of types of scan based on the pathifying experience of previous\n>> partitions of the same table: if you have dozens of partitions, is it\n>> beneficial for the system to find a bit more optimal IndexScan on one\n>> partition having SeqScans on 999 others?\n>>\n> IIUC, you are suggesting that instead of planning each\n> partition/partitionwise join, we only create paths with the strategies\n> which were found to be optimal with previous partitions. That's a good\n> heuristic but it won't work if partition properties - statistics,\n> indexes etc. differ between groups of partitions.\nSure, but the \"Symmetry\" strategy assumes that on the scope of a \nthousand partitions, especially with parallel append involved, it \ndoesn't cause sensible performance degradation if we find a bit \nsuboptimal path in a small subset of partitions. 
Does it make sense?\nAs I see, when people use 10-100 partitions for the table, they usually \nstrive to keep indexes symmetrical for all partitions.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Tue, 20 Feb 2024 09:49:38 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Tue, Feb 20, 2024 at 8:19 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 19/2/2024 19:25, Ashutosh Bapat wrote:\n> > On Fri, Feb 16, 2024 at 8:42 AM Andrei Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >> Live example: right now, I am working on the code like MSSQL has - a\n> >> combination of NestLoop and HashJoin paths and switching between them in\n> >> real-time. It requires both paths in the path list at the moment when\n> >> extensions are coming. Even if one of them isn't referenced from the\n> >> upper pathlist, it may still be helpful for the extension.\n> >\n> > There is no guarantee that every path presented to add_path will be\n> > preserved. Suboptimal paths are freed as and when add_path discovers\n> > that they are suboptimal. So I don't think an extension can rely on\n> > existence of a path. But having a refcount makes it easy to preserve\n> > the required paths by referencing them.\n> I don't insist, just provide my use case. It would be ideal if you would\n> provide some external routines for extensions that allow for sticking\n> the path in pathlist even when it has terrible cost estimation.\n\nWith refcounts you can reference it and store it somewhere other than\npathlist. The path won't be lost until it is dereferenced.\nRelOptInfo::Pathlist is for optimal paths.\n\n> >\n> >>\n> >>\n> > IIUC, you are suggesting that instead of planning each\n> > partition/partitionwise join, we only create paths with the strategies\n> > which were found to be optimal with previous partitions. 
That's a good\n> > heuristic but it won't work if partition properties - statistics,\n> > indexes etc. differ between groups of partitions.\n> Sure, but the \"Symmetry\" strategy assumes that on the scope of a\n> thousand partitions, especially with parallel append involved, it\n> doesn't cause sensible performance degradation if we find a bit\n> suboptimal path in a small subset of partitions. Does it make sense?\n> As I see, when people use 10-100 partitions for the table, they usually\n> strive to keep indexes symmetrical for all partitions.\n>\n\nI agree that we need something like that. In order to do that, we need\nmachinery to prove that all partitions have similar properties. Once\nthat is proved, we can skip creating paths for similar partitions. But\nthat's out of scope of this work and complements it.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 20 Feb 2024 10:38:31 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On 6/2/2024 13:51, Ashutosh Bapat wrote:\n> On Fri, Dec 15, 2023 at 5:22 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> \n>>>\n>>> That looks pretty small considering the benefits. What do you think?\n>>>\n>>> [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n>>\n>> If you want to experiment, please use attached patches. There's a fix\n>> for segfault during initdb in them. The patches are still raw.\n> \n> First patch is no longer required. Here's rebased set\n> \n> The patches are raw. make check has some crashes that I need to fix. I\n> am waiting to hear whether this is useful and whether the design is on\n> the right track.\nI think this work is promising, especially in the scope of partitioning.\nI've analysed how it works by basing these patches on the current \nmaster. 
You propose freeing unused paths after the end of the standard \npath search.\nIn my view, upper paths generally consume less memory than join and scan \npaths. This is primarily due to the limited options provided by Postgres \nso far.\nAt the same time, this technique (while highly useful in general) adds \nfragility and increases complexity: a developer needs to remember to \nlink the path using the pointer in different places of the code.\nSo, maybe go the alternative way? Invent a subquery memory context and \nstore all the path allocations there. It can be freed after setrefs \nfinishes this subquery planning without pulling up this subquery.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:48:40 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On Thu, Sep 19, 2024 at 4:18 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n>\n> On 6/2/2024 13:51, Ashutosh Bapat wrote:\n> > On Fri, Dec 15, 2023 at 5:22 AM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> >>>\n> >>> That looks pretty small considering the benefits. What do you think?\n> >>>\n> >>> [1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n> >>\n> >> If you want to experiment, please use attached patches. There's a fix\n> >> for segfault during initdb in them. The patches are still raw.\n> >\n> > First patch is no longer required. Here's rebased set\n> >\n> > The patches are raw. make check has some crashes that I need to fix. I\n> > am waiting to hear whether this is useful and whether the design is on\n> > the right track.\n> I think this work is promising, especially in the scope of partitioning.\n> I've analysed how it works by basing these patches on the current\n> master. 
You propose freeing unused paths after the end of the standard\n> path search.\n> In my view, upper paths generally consume less memory than join and scan\n> paths. This is primarily due to the limited options provided by Postgres\n> so far.\n> At the same time, this technique (while highly useful in general) adds\n> fragility and increases complexity: a developer needs to remember to\n> link the path using the pointer in different places of the code.\n> So, maybe go the alternative way? Invent a subquery memory context and\n> store all the path allocations there. It can be freed after setrefs\n> finishes this subquery planning without pulling up this subquery.\n>\n\nUsing memory context for subquery won't help with partitioning right?\nIf the outermost query has a partitioned table with thousands of\npartitions, it will still accumulate those paths till the very end of\nplanning. We could instead use memory context/s to store all the paths\ncreated, then copy the optimal paths into a new memory context at\nstrategic points and blow up the old memory context. And repeat this\ntill we choose the final path and create a plan out of it; at that\npoint we could blow up the memory context containing remaining paths\nas well. That will free the paths as soon as they are rendered\nuseless. I discussed this idea with Alvaro offline. We thought that\nthis approach needs some code to copy paths and then copying paths\nrecursively has some overhead of itself. The current work of adding a\nreference count, OTOH has potential to bring discipline into the way\nwe handle paths. We need to avoid risks posed by dangling pointers.\nFor which Alvaro suggested looking at the way we manage snapshots. 
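For illustration, the context-swap idea described above (plan inside a throwaway region, copy the survivors out at strategic points, then delete the region wholesale) can be sketched with a toy bump allocator. Everything here is invented for the sketch: DemoContext is not palloc/MemoryContext, only an analogue of the pattern being discussed.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Toy bump-allocator "context": allocations are O(1) pointer bumps, and
 * freeing the whole region is one operation, analogous to deleting a
 * planner MemoryContext once the surviving paths have been copied out.
 */
typedef struct DemoContext
{
    char   *base;
    size_t  used;
    size_t  size;
} DemoContext;

DemoContext *
demo_context_create(size_t size)
{
    DemoContext *cxt = malloc(sizeof(DemoContext));

    cxt->base = malloc(size);
    cxt->used = 0;
    cxt->size = size;
    return cxt;
}

void *
demo_alloc(DemoContext *cxt, size_t nbytes)
{
    void   *p;

    if (cxt->used + nbytes > cxt->size)
        return NULL;            /* a real allocator would grow the block */
    p = cxt->base + cxt->used;
    cxt->used += nbytes;
    return p;
}

/* "Blow up" the context: every object allocated in it goes away at once. */
void
demo_context_delete(DemoContext *cxt)
{
    free(cxt->base);
    free(cxt);
}

/* Copy a surviving object into another, longer-lived context. */
void *
demo_copy(DemoContext *dst, const void *src, size_t nbytes)
{
    void   *p = demo_alloc(dst, nbytes);

    if (p)
        memcpy(p, src, nbytes);
    return p;
}
```

The catch mentioned above applies here too: anything that must survive (the chosen path and everything it references) has to be copied recursively into the surviving context before the old region is destroyed.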
But\nI didn't get time to explore that idea yet.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 19 Sep 2024 16:42:14 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" }, { "msg_contents": "On 19/9/2024 13:12, Ashutosh Bapat wrote:\n> On Thu, Sep 19, 2024 at 4:18 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n>> At the same time, this technique (while highly useful in general) adds\n>> fragility and increases complexity: a developer needs to remember to\n>> link the path using the pointer in different places of the code.\n>> So, maybe go the alternative way? Invent a subquery memory context and\n>> store all the path allocations there. It can be freed after setrefs\n>> finishes this subquery planning without pulling up this subquery.\n>>\n> \n> Using memory context for subquery won't help with partitioning right?\n> If the outermost query has a partitioned table with thousands of\n> partitions, it will still accumulate those paths till the very end of\n> planning.\nI got it. Just haven't had huge tables in the outer before.\n\n> We could instead use memory context/s to store all the paths\n> created, then copy the optimal paths into a new memory context at\n> strategic points and blow up the old memory context. And repeat this\n> till we choose the final path and create a plan out of it; at that\n> point we could blow up the memory context containing remaining paths\n> as well. That will free the paths as soon as they are rendered\n> useless.\nI think any scalable solution should be based on a per-partition \ncleanup. For starters, why not adopt Tom's patch [1] for selectivity \nestimations? We will see the profit in the case of long lists of clauses.\n\n> I discussed this idea with Alvaro offline. 
We thought that\n> this approach needs some code to copy paths and then copying paths\n> recursively has some overhead of itself.\nIt needs path_tree_walker at first. We discussed it before but failed. \nMaybe design it beforehand and use it in re-parameterising code?\n\n> The current work of adding a\n> reference count, OTOH has potential to bring discipline into the way\n> we handle paths. We need to avoid risks posed by dangling pointers.\nBoth hands up for having pointer counters: It is painful all the time in \nextensions to invent an approach to safely removing a path you want to \nreplace with a custom one. I just want to say it looks too dangerous \ncompared to the value of a possible positive outcome.\n\n> For which Alvaro suggested looking at the way we manage snapshots. But\n> I didn't get time to explore that idea yet.\nUnfortunately, I can't understand this idea without an explanation.\n\n[1] Optimize planner memory consumption for huge arrays\nhttps://www.postgresql.org/message-id/1367418.1708816059@sss.pgh.pa.us\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Thu, 19 Sep 2024 14:16:44 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by paths during partitionwise join planning" } ]
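As a footnote to this thread, the reference-counting discipline being proposed can be sketched in miniature. DemoPath and its helpers are invented stand-ins, not the patch's actual API; the point is only the lifetime rule: a path is freed exactly when its last holder (pathlist, upper path, or extension) lets go of it.

```c
#include <stdlib.h>

/* Invented stand-in for a Path node carrying a reference count. */
typedef struct DemoPath
{
    int     refcount;           /* number of holders referencing this path */
    double  cost;
} DemoPath;

DemoPath *
demo_path_create(double cost)
{
    DemoPath *p = malloc(sizeof(DemoPath));

    p->refcount = 0;            /* the first holder must take a reference */
    p->cost = cost;
    return p;
}

/* A holder (a pathlist, an upper path, or an extension) takes a reference. */
void
demo_path_ref(DemoPath *p)
{
    p->refcount++;
}

/*
 * Dropping the last reference frees the node, mirroring how add_path could
 * free a suboptimal path only once nothing points at it any more.
 * Returns 1 if the path was freed, 0 if it is still referenced elsewhere.
 */
int
demo_path_unref(DemoPath *p)
{
    if (--p->refcount == 0)
    {
        free(p);
        return 1;
    }
    return 0;
}
```

An extension that wants to keep a path alive past add_path's pruning would simply take its own reference and store the pointer outside the pathlist, releasing it when done.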
[ { "msg_contents": "Hi All,\n\nIn try_partitionwise_join() we create SpecialJoinInfo structures for\nchild joins by translating SpecialJoinInfo structures applicable to\nthe parent join. These SpecialJoinInfos are not used outside\ntry_partitionwise_join() but we do not free memory allocated to those.\nIn try_partitionwise_join() we create as many SpecialJoinInfos as the\nnumber of partitions. But try_partitionwise_join() itself is called as\nmany times as the number of join orders for a given join. Thus the\nnumber of child SpecialJoinInfos that are created increases\nexponentially with the number of partitioned tables being joined.\n\nThe attached patch (0002) fixes this issue by\n1. declaring child SpecialJoinInfo as a local variable, thus\nallocating memory on the stack instead of CurrentMemoryContext. The\nmemory on the stack is freed automatically.\n2. Freeing the Relids in SpecialJoinInfo explicitly after\nSpecialJoinInfo has been used.\n\nWe can not free the object trees in SpecialJoinInfo since those may be\nreferenced in the paths.\n\nSimilar to my previous emails [1], the memory consumption for given\nqueries is measured using attached patch (0001). The table definitions\nand helper functions can be found in setup.sql and queries.sql.\nFollowing table shows the reduction in the memory using the attached\npatch (0002).\n\nNumber of | without | | | Absolute |\njoining tables | patch | with patch | % diff | diff |\n--------------------------------------------------------------------\n 2 | 40.9 MiB | 39.9 MiB | 2.27% | 925.7 KiB |\n 3 | 151.7 MiB | 142.6 MiB | 5.97% | 9.0 MiB |\n 4 | 464.0 MiB | 418.4 MiB | 9.83% | 45.6 MiB |\n 5 | 1663.9 MiB | 1419.4 MiB | 14.69% | 244.5 MiB |\n\nSince the memory consumed by SpecialJoinInfos is exponentially\nproportional to the number of tables being joined, we see that the\npatch is able to save more memory at higher number of tables joined.\n\nThe attached patch (0002) frees members of individual\nSpecialJoinInfos. 
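The build/free pairing described above can be sketched roughly as follows. The field names are invented stand-ins (the real SpecialJoinInfo and Relids machinery are more involved); the point is that only the deep-copied, translated members are freed, while members shared with the parent are left alone.

```c
#include <stdlib.h>

/*
 * Illustrative stand-in for SpecialJoinInfo: one member is deep-copied
 * (translated) per child and must be freed afterwards; the other is
 * shared with the parent and must NOT be freed.
 */
typedef struct DemoSJInfo
{
    int        *min_lefthand;   /* translated per child: owned */
    const int  *semi_operators; /* shared with parent: do not free */
    int         nleft;
} DemoSJInfo;

/* Translate the parent's members into a caller-provided (stack) child. */
void
build_child_sjinfo(DemoSJInfo *child, const DemoSJInfo *parent, int offset)
{
    child->nleft = parent->nleft;
    child->min_lefthand = malloc(parent->nleft * sizeof(int));
    for (int i = 0; i < parent->nleft; i++)
        child->min_lefthand[i] = parent->min_lefthand[i] + offset;
    child->semi_operators = parent->semi_operators;     /* shared pointer */
}

/* Keep in sync with build_child_sjinfo: free only what it allocated. */
void
free_child_sjinfo_members(DemoSJInfo *child)
{
    free(child->min_lefthand);
    child->min_lefthand = NULL;
}
```

The child struct itself lives on the caller's stack, so no explicit free is needed for it; only the translated members require cleanup, and freeing a shared member would leave a dangling pointer in the parent.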
It might have some impact on the planning time. We\nmay improve that by allocating the members in a separate memory\ncontext. The memory context will be allocated and deleted in each\ninvocation of try_partitionwise_join(). I am assuming that deleting a\nmemory context is more efficient than freeing bits of memory multiple\ntimes. But I have not tried that approach.\n\nAnother approach would be to track the SpecialJoinInfo translations\n(similar to RestrictListInfo translations proposed in other email\nthread [2]) and avoid repeatedly translating the same SpecialJoinInfo.\nThe approach will have similar disadvantages that it may increase size\nof SpecialJoinInfo structure even when planning the queries which do\nnot need partitionwise join. So I haven't tried that approach yet.\n\nAt this stage, I am looking for opinions on\n1. whether it's worth reducing this memory\n2. any suggestions/comments on which approach from above is more\nsuitable or if there are other approaches\n\nReferences\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAExHW5s=bCLMMq8n_bN6iU+Pjau0DS3z_6Dn6iLE69ESmsPMJQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 27 Jul 2023 19:39:45 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Thu, Jul 27, 2023 at 10:10 PM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n> The attached patch (0002) fixes this issue by\n> 1. declaring child SpecialJoinInfo as a local variable, thus\n> allocating memory on the stack instead of CurrentMemoryContext. The\n> memory on the stack is freed automatically.\n> 2. 
Freeing the Relids in SpecialJoinInfo explicitly after\n> SpecialJoinInfo has been used.\n\n\nIt doesn't seem too expensive to translate SpecialJoinInfos, so I think\nit's OK to construct and free the SpecialJoinInfo for a child join on\nthe fly. So the approach in 0002 looks reasonable to me. But there is\nsomething that needs to be fixed in 0002.\n\n+ bms_free(child_sjinfo->commute_above_l);\n+ bms_free(child_sjinfo->commute_above_r);\n+ bms_free(child_sjinfo->commute_below_l);\n+ bms_free(child_sjinfo->commute_below_r);\n\nThese four members in SpecialJoinInfo only contain outer join relids.\nThey do not need to be translated. So they would reference the same\nmemory areas in child_sjinfo as in parent_sjinfo. We should not free\nthem, otherwise they would become dangling pointers in parent sjinfo.\n\nThanks\nRichard\n\n\n", "msg_date": "Fri, 28 Jul 2023 16:33:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Richard,\n\nOn Fri, Jul 28, 2023 at 2:03 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> It doesn't seem too expensive to translate SpecialJoinInfos, so I think\n> it's OK to construct and free the SpecialJoinInfo for a child join on\n> the fly. So the approach in 0002 looks reasonable to me. But there is\n> something that needs to be fixed in 0002.\n\nThanks for the review. I am fine if going ahead with 0002 is enough.\n\n
>\n> + bms_free(child_sjinfo->commute_above_l);\n> + bms_free(child_sjinfo->commute_above_r);\n> + bms_free(child_sjinfo->commute_below_l);\n> + bms_free(child_sjinfo->commute_below_r);\n>\n> These four members in SpecialJoinInfo only contain outer join relids.\n> They do not need to be translated. So they would reference the same\n> memory areas in child_sjinfo as in parent_sjinfo. We should not free\n> them, otherwise they would become dangling pointers in parent sjinfo.\n>\n\nGood catch. Saved my time. I would have caught this rather hard way\nwhen running regression. Thanks a lot.\n\nI think we should 1. add an assert to make sure that commute_above_*\ndo not require any translations i.e. they do not contain any parent\nrelids of the child rels. 2. Do not free those. 3. Add a comment about\nkeeping the build and free functions in sync. 
I will work on those\nnext.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 28 Jul 2023 19:28:21 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Jul 28, 2023 at 7:28 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> >\n> > + bms_free(child_sjinfo->commute_above_l);\n> > + bms_free(child_sjinfo->commute_above_r);\n> > + bms_free(child_sjinfo->commute_below_l);\n> > + bms_free(child_sjinfo->commute_below_r);\n> >\n> > These four members in SpecialJoinInfo only contain outer join relids.\n> > They do not need to be translated. So they would reference the same\n> > memory areas in child_sjinfo as in parent_sjinfo. We should not free\n> > them, otherwise they would become dangling pointers in parent sjinfo.\n> >\n>\n>\nAttached patchset fixing those.\n0001 - patch to report planning memory, with to explain.out regression\noutput fix. We may consider committing this as well.\n0002 - with your comment addressed above.\n\n\n>\n> I think we should 1. add an assert to make sure that commute_above_*\n> do not require any transations i.e. they do not contain any parent\n> relids of the child rels.\n\n\nLooking at the definitions of commute_above and commute_below relids and OJ\nrelids, I don't think these assertions are necessary. So didn't add those.\n\n\n> 2. Do not free those.\n\n\nDone\n\n\n> 3. 
Add a comment about\n> keeping the build and free functions in sync.\n>\n\nDone.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 4 Aug 2023 14:11:56 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Aug 4, 2023 at 2:11 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Attached patchset fixing those.\n> 0001 - patch to report planning memory, with to explain.out regression output fix. We may consider committing this as well.\n> 0002 - with your comment addressed above.\n\n0003 - Added this patch for handling SpecialJoinInfos for inner joins.\nThese SpecialJoinInfos are created on the fly for parent joins. They\ncan be created on the fly for child joins as well without requiring\nany translations. Thus they do not require any new memory. This patch\nis intended to be merged into 0002 finally.\n\nAdded to the upcoming commitfest https://commitfest.postgresql.org/44/4500/\n\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 16 Aug 2023 10:58:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Ashutosh,\n\nOn Wed, Aug 16, 2023 at 2:28 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Fri, Aug 4, 2023 at 2:11 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Attached patchset fixing those.\n> > 0001 - patch to report planning memory, with to explain.out regression output fix. We may consider committing this as well.\n> > 0002 - with your comment addressed above.\n>\n> 0003 - Added this patch for handling SpecialJoinInfos for inner joins.\n> These SpecialJoinInfos are created on the fly for parent joins. 
They\n> can be created on the fly for child joins as well without requiring\n> any translations. Thus they do not require any new memory. This patch\n> is intended to be merged into 0002 finally.\n\nI read this thread and have been reading the latest patch. At first\nglance, it seems quite straightforward to me. I agree with Richard\nthat pfree()'ing 4 bitmapsets may not be a lot of added overhead. I\nwill study the patch a bit more.\n\nJust one comment on 0003:\n\n+ /*\n+ * Dummy SpecialJoinInfos do not have any translated fields and hence have\n+ * nothing to free.\n+ */\n+ if (child_sjinfo->jointype == JOIN_INNER)\n+ return;\n\nShould this instead be Assert(child_sjinfo->jointype != JOIN_INNER)?\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Sep 2023 20:54:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Sep 20, 2023 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> I read this thread and have been reading the latest patch. At first\n> glance, it seems quite straightforward to me. I agree with Richard\n> that pfree()'ing 4 bitmapsets may not be a lot of added overhead. I\n> will study the patch a bit more.\n\nThanks.\n\n>\n> Just one comment on 0003:\n>\n> + /*\n> + * Dummy SpecialJoinInfos do not have any translated fields and hence have\n> + * nothing to free.\n> + */\n> + if (child_sjinfo->jointype == JOIN_INNER)\n> + return;\n>\n> Should this instead be Assert(child_sjinfo->jointype != JOIN_INNER)?\n\ntry_partitionwise_join() calls free_child_sjinfo_members()\nunconditionally i.e. it will be called even SpecialJoinInfos of INNER\njoin. Above condition filters those out early. 
In fact if we convert\ninto an Assert, in a production build the rest of the code will free\nRelids which are referenced somewhere else causing dangling pointers.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 20 Sep 2023 18:54:37 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Sep 20, 2023 at 10:24 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, Sep 20, 2023 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Just one comment on 0003:\n> >\n> > + /*\n> > + * Dummy SpecialJoinInfos do not have any translated fields and hence have\n> > + * nothing to free.\n> > + */\n> > + if (child_sjinfo->jointype == JOIN_INNER)\n> > + return;\n> >\n> > Should this instead be Assert(child_sjinfo->jointype != JOIN_INNER)?\n>\n> try_partitionwise_join() calls free_child_sjinfo_members()\n> unconditionally i.e. it will be called even SpecialJoinInfos of INNER\n> join. Above condition filters those out early. In fact if we convert\n> into an Assert, in a production build the rest of the code will free\n> Relids which are referenced somewhere else causing dangling pointers.\n\nYou're right. I hadn't realized that the parent SpecialJoinInfo\npassed to try_partitionwise_join() might itself be a dummy. 
I guess I\nshould've read the whole thing before commenting.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Sep 2023 10:06:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Thu, Sep 21, 2023 at 6:37 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Sep 20, 2023 at 10:24 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > On Wed, Sep 20, 2023 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Just one comment on 0003:\n> > >\n> > > + /*\n> > > + * Dummy SpecialJoinInfos do not have any translated fields and hence have\n> > > + * nothing to free.\n> > > + */\n> > > + if (child_sjinfo->jointype == JOIN_INNER)\n> > > + return;\n> > >\n> > > Should this instead be Assert(child_sjinfo->jointype != JOIN_INNER)?\n> >\n> > try_partitionwise_join() calls free_child_sjinfo_members()\n> > unconditionally i.e. it will be called even SpecialJoinInfos of INNER\n> > join. Above condition filters those out early. In fact if we convert\n> > into an Assert, in a production build the rest of the code will free\n> > Relids which are referenced somewhere else causing dangling pointers.\n>\n> You're right. I hadn't realized that the parent SpecialJoinInfo\n> passed to try_partitionwise_join() might itself be a dummy. 
I guess I\n> should've read the whole thing before commenting.\n\nMaybe there's something to improve in terms of readability, I don't know.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 21 Sep 2023 09:50:19 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Ashutosh,\n\nOn Thu, Sep 21, 2023 at 1:20 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Thu, Sep 21, 2023 at 6:37 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Sep 20, 2023 at 10:24 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > On Wed, Sep 20, 2023 at 5:24 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > Just one comment on 0003:\n> > > >\n> > > > + /*\n> > > > + * Dummy SpecialJoinInfos do not have any translated fields and hence have\n> > > > + * nothing to free.\n> > > > + */\n> > > > + if (child_sjinfo->jointype == JOIN_INNER)\n> > > > + return;\n> > > >\n> > > > Should this instead be Assert(child_sjinfo->jointype != JOIN_INNER)?\n> > >\n> > > try_partitionwise_join() calls free_child_sjinfo_members()\n> > > unconditionally i.e. it will be called even SpecialJoinInfos of INNER\n> > > join. Above condition filters those out early. In fact if we convert\n> > > into an Assert, in a production build the rest of the code will free\n> > > Relids which are referenced somewhere else causing dangling pointers.\n> >\n> > You're right. I hadn't realized that the parent SpecialJoinInfo\n> > passed to try_partitionwise_join() might itself be a dummy. I guess I\n> > should've read the whole thing before commenting.\n>\n> Maybe there's something to improve in terms of readability, I don't know.\n\nHere are some comments.\n\nPlease merge 0003 into 0002.\n\n+ /*\n+ * But the list of operator OIDs and the list of expressions may be\n+ * referenced somewhere else. 
Do not free those.\n+ */\n\nI don't see build_child_join_sjinfo() copying any list of OIDs, so I'm\nnot sure what the comment is referring to. Also, instead of \"the list\nof expressions\", it might be more useful to mention semi_rhs_expr,\nbecause that's the only one being copied.\n\nThe comment for SpecialJoinInfo mentions that they are stored in\nPlannerInfo.join_info_list, but the child SpecialJoinInfos used by\npartitionwise joins are not, which I noticed has not been mentioned\nanywhere. Should we make a note of that in the SpecialJoinInfo's\ncomment?\n\nJust out of curiosity, is their not being present in join_info_list\nproblematic in some manner, such as missed optimization opportunities\nfor child joins? I noticed there is a loop over join_info_list in\nadd_paths_to_join_rel(), which does get called for child joinrels. I\nknow this a bit off-topic for the thread, but thought to ask in case\nyou've already thought about it.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:00:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Sep 27, 2023 at 2:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Here are some comments.\n\nThanks for your review.\n\n>\n> Please merge 0003 into 0002.\n\nDone.\n\n>\n> + /*\n> + * But the list of operator OIDs and the list of expressions may be\n> + * referenced somewhere else. Do not free those.\n> + */\n>\n> I don't see build_child_join_sjinfo() copying any list of OIDs, so I'm\n> not sure what the comment is referring to. Also, instead of \"the list\n> of expressions\", it might be more useful to mention semi_rhs_expr,\n> because that's the only one being copied.\n\nlist of OID is semi_operators. It's copied as is from parent\nSpecialJoinInfo. But the way it's mentioned in the comment is\nconfusing. 
I have modified the prologue of function to provide a\ngeneral guidance on what can be freed and what should not be. and then\nspecific examples. Let me know if this is more clear.\n\n>\n> The comment for SpecialJoinInfo mentions that they are stored in\n> PlannerInfo.join_info_list, but the child SpecialJoinInfos used by\n> partitionwise joins are not, which I noticed has not been mentioned\n> anywhere. Should we make a note of that in the SpecialJoinInfo's\n> comment?\n\nGood point. Since SpecialJoinInfos for JOIN_INNER are mentioned there,\nit makes sense to mention transient child SpecialJoinInfos as well.\nDone.\n\n>\n> Just out of curiosity, is their not being present in join_info_list\n> problematic in some manner, such as missed optimization opportunities\n> for child joins? I noticed there is a loop over join_info_list in\n> add_paths_to_join_rel(), which does get called for child joinrels. I\n> know this a bit off-topic for the thread, but thought to ask in case\n> you've already thought about it.\n\nThe function has a comment and code to take care of this at the very beginning\n/*\n* PlannerInfo doesn't contain the SpecialJoinInfos created for joins\n* between child relations, even if there is a SpecialJoinInfo node for\n* the join between the topmost parents. 
So, while calculating Relids set\n* representing the restriction, consider relids of topmost parent of\n* partitions.\n*/\nif (joinrel->reloptkind == RELOPT_OTHER_JOINREL)\njoinrelids = joinrel->top_parent_relids;\nelse\njoinrelids = joinrel->relids;\n\nPFA latest patch set\n0001 - same as previous patch set\n0002 - 0002 and 0003 from previous patch set squashed together\n0003 - changes addressing above comments.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 27 Sep 2023 16:37:38 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Sep 27, 2023 at 8:07 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, Sep 27, 2023 at 2:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Just out of curiosity, is their not being present in join_info_list\n> > problematic in some manner, such as missed optimization opportunities\n> > for child joins? I noticed there is a loop over join_info_list in\n> > add_paths_to_join_rel(), which does get called for child joinrels. I\n> > know this a bit off-topic for the thread, but thought to ask in case\n> > you've already thought about it.\n>\n> The function has a comment and code to take care of this at the very beginning\n> /*\n> * PlannerInfo doesn't contain the SpecialJoinInfos created for joins\n> * between child relations, even if there is a SpecialJoinInfo node for\n> * the join between the topmost parents. So, while calculating Relids set\n> * representing the restriction, consider relids of topmost parent of\n> * partitions.\n> */\n> if (joinrel->reloptkind == RELOPT_OTHER_JOINREL)\n> joinrelids = joinrel->top_parent_relids;\n> else\n> joinrelids = joinrel->relids;\n\nAh, that's accounted for. 
Thanks.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 20:52:29 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Sep 27, 2023 at 8:07 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, Sep 27, 2023 at 2:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > + /*\n> > + * But the list of operator OIDs and the list of expressions may be\n> > + * referenced somewhere else. Do not free those.\n> > + */\n> >\n> > I don't see build_child_join_sjinfo() copying any list of OIDs, so I'm\n> > not sure what the comment is referring to. Also, instead of \"the list\n> > of expressions\", it might be more useful to mention semi_rhs_expr,\n> > because that's the only one being copied.\n>\n> list of OID is semi_operators. It's copied as is from parent\n> SpecialJoinInfo. But the way it's mentioned in the comment is\n> confusing. I have modified the prologue of function to provide a\n> general guidance on what can be freed and what should not be. and then\n> specific examples. Let me know if this is more clear.\n\nThanks. I would've kept the notes about specific members inside the\nfunction. Also, no need to have a note for pointers that are not\ndeep-copied to begin with, such as semi_operators. IOW, something\nlike the following would have sufficed:\n\n@@ -1735,6 +1735,10 @@ build_child_join_sjinfo(PlannerInfo *root,\nSpecialJoinInfo *parent_sjinfo,\n /*\n * free_child_sjinfo_members\n * Free memory consumed by members of a child SpecialJoinInfo.\n+ *\n+ * Only members that are translated copies of their counterpart in the parent\n+ * SpecialJoinInfo are freed here. 
However, members that could be referenced\n+ * elsewhere are not freed.\n */\n static void\n free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n@@ -1755,10 +1759,7 @@ free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n bms_free(child_sjinfo->syn_lefthand);\n bms_free(child_sjinfo->syn_righthand);\n\n- /*\n- * But the list of operator OIDs and the list of expressions may be\n- * referenced somewhere else. Do not free those.\n- */\n+ /* semi_rhs_exprs may be referenced, so don't free. */\n }\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Sep 2023 12:05:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Sep 29, 2023 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> IOW, something\n> like the following would have sufficed:\n>\n> @@ -1735,6 +1735,10 @@ build_child_join_sjinfo(PlannerInfo *root,\n> SpecialJoinInfo *parent_sjinfo,\n> /*\n> * free_child_sjinfo_members\n> * Free memory consumed by members of a child SpecialJoinInfo.\n> + *\n> + * Only members that are translated copies of their counterpart in the parent\n> + * SpecialJoinInfo are freed here. However, members that could be referenced\n> + * elsewhere are not freed.\n> */\n> static void\n> free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> @@ -1755,10 +1759,7 @@ free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> bms_free(child_sjinfo->syn_lefthand);\n> bms_free(child_sjinfo->syn_righthand);\n>\n> - /*\n> - * But the list of operator OIDs and the list of expressions may be\n> - * referenced somewhere else. Do not free those.\n> - */\n> + /* semi_rhs_exprs may be referenced, so don't free. */\n> }\n\nWorks for me. PFA patchset with these changes. 
I have still left the\nchanges addressing your comments as a separate patch for easier\nreview.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 3 Oct 2023 18:42:28 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Rebased patchset. No changes in actual patches. 0001 is squashed\nversion of patches submitted at\nhttps://www.postgresql.org/message-id/CAExHW5sCJX7696sF-OnugAiaXS=Ag95=-m1cSrjcmyYj8Pduuw@mail.gmail.com.\n\nOn Tue, Oct 3, 2023 at 6:42 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Sep 29, 2023 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > IOW, something\n> > like the following would have sufficed:\n> >\n> > @@ -1735,6 +1735,10 @@ build_child_join_sjinfo(PlannerInfo *root,\n> > SpecialJoinInfo *parent_sjinfo,\n> > /*\n> > * free_child_sjinfo_members\n> > * Free memory consumed by members of a child SpecialJoinInfo.\n> > + *\n> > + * Only members that are translated copies of their counterpart in the parent\n> > + * SpecialJoinInfo are freed here. However, members that could be referenced\n> > + * elsewhere are not freed.\n> > */\n> > static void\n> > free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> > @@ -1755,10 +1759,7 @@ free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> > bms_free(child_sjinfo->syn_lefthand);\n> > bms_free(child_sjinfo->syn_righthand);\n> >\n> > - /*\n> > - * But the list of operator OIDs and the list of expressions may be\n> > - * referenced somewhere else. Do not free those.\n> > - */\n> > + /* semi_rhs_exprs may be referenced, so don't free. */\n> > }\n>\n> Works for me. PFA patchset with these changes. 
I have still left the\n> changes addressing your comments as a separate patch for easier\n> review.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 30 Oct 2023 20:50:06 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, 4 Oct 2023 at 04:02, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Sep 29, 2023 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > IOW, something\n> > like the following would have sufficed:\n> >\n> > @@ -1735,6 +1735,10 @@ build_child_join_sjinfo(PlannerInfo *root,\n> > SpecialJoinInfo *parent_sjinfo,\n> > /*\n> > * free_child_sjinfo_members\n> > * Free memory consumed by members of a child SpecialJoinInfo.\n> > + *\n> > + * Only members that are translated copies of their counterpart in the parent\n> > + * SpecialJoinInfo are freed here. However, members that could be referenced\n> > + * elsewhere are not freed.\n> > */\n> > static void\n> > free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> > @@ -1755,10 +1759,7 @@ free_child_sjinfo_members(SpecialJoinInfo *child_sjinfo)\n> > bms_free(child_sjinfo->syn_lefthand);\n> > bms_free(child_sjinfo->syn_righthand);\n> >\n> > - /*\n> > - * But the list of operator OIDs and the list of expressions may be\n> > - * referenced somewhere else. Do not free those.\n> > - */\n> > + /* semi_rhs_exprs may be referenced, so don't free. */\n> > }\n>\n> Works for me. PFA patchset with these changes. 
I have still left the\n> changes addressing your comments as a separate patch for easier\n> review.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n\n=== Applying patches on top of PostgreSQL commit ID\n55627ba2d334ce98e1f5916354c46472d414bda6 ===\n=== applying patch\n./0001-Report-memory-used-for-planning-a-query-in--20231003.patch\n...\nHunk #1 succeeded at 89 (offset -3 lines).\npatching file src/test/regress/expected/explain.out\nHunk #5 FAILED at 285.\nHunk #6 succeeded at 540 (offset 4 lines).\n1 out of 6 hunks FAILED -- saving rejects to file\nsrc/test/regress/expected/explain.out.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4500.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 20:42:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Jan 26, 2024 at 8:42 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 4 Oct 2023 at 04:02, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Fri, Sep 29, 2023 at 8:36 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > IOW, something\n> > > like the following would have sufficed:\n.. snip...\n> >\n> > Works for me. PFA patchset with these changes. 
I have still left the\n> > changes addressing your comments as a separate patch for easier\n> > review.\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n>\n> === Applying patches on top of PostgreSQL commit ID\n> 55627ba2d334ce98e1f5916354c46472d414bda6 ===\n> === applying patch\n> ./0001-Report-memory-used-for-planning-a-query-in--20231003.patch\n> ...\n> Hunk #1 succeeded at 89 (offset -3 lines).\n> patching file src/test/regress/expected/explain.out\n> Hunk #5 FAILED at 285.\n> Hunk #6 succeeded at 540 (offset 4 lines).\n> 1 out of 6 hunks FAILED -- saving rejects to file\n> src/test/regress/expected/explain.out.rej\n>\n> Please post an updated version for the same.\n>\n> [1] - http://cfbot.cputube.org/patch_46_4500.log\n\nThanks Vignesh. PFA patches rebased on the latest HEAD. The patch\naddressing Amit's comments is still a separate patch for him to\nreview.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 30 Jan 2024 11:14:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On 30/1/2024 12:44, Ashutosh Bapat wrote:\n> Thanks Vignesh. PFA patches rebased on the latest HEAD. The patch\n> addressing Amit's comments is still a separate patch for him to\n> review.\nThanks for this improvement. Working with partitions, I frequently see \npeaks of memory consumption during planning. So, maybe one more case can \nbe resolved here.\nPatch 0001 looks good. I'm not sure about free_child_sjinfo_members. Do \nwe really need it as a separate routine? It might be better to inline \nthis code.\nPatch 0002 adds valuable comments, and I'm OK with that.\n\nAlso, as I remember, some extensions, such as pg_hint_plan, call \nbuild_child_join_sjinfo. It is OK to break the interface with a major \nversion. But what if they need child_sjinfo a bit longer and collect \nlinks to this structure? 
I don't think it is a real stopper, but it is \nworth additional analysis.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 14 Feb 2024 11:19:59 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Wed, Feb 14, 2024 at 9:50 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 30/1/2024 12:44, Ashutosh Bapat wrote:\n> > Thanks Vignesh. PFA patches rebased on the latest HEAD. The patch\n> > addressing Amit's comments is still a separate patch for him to\n> > review.\n> Thanks for this improvement. Working with partitions, I frequently see\n> peaks of memory consumption during planning. So, maybe one more case can\n> be resolved here.\n> Patch 0001 looks good. I'm not sure about free_child_sjinfo_members. Do\n> we really need it as a separate routine? It might be better to inline\n> this code.\n\ntry_partitionwise_join() is already 200 lines long. A separate\nfunction is better than adding more lines to try_partitionwise_join().\nAlso, someone who wants to call build_child_join_sjinfo() outside\ntry_partitionwise_join() may find free_child_sjinfo_members() handy.\n\n> Patch 0002 adds valuable comments, and I'm OK with that.\n>\n> Also, as I remember, some extensions, such as pg_hint_plan, call\n> build_child_join_sjinfo. It is OK to break the interface with a major\n> version. But what if they need child_sjinfo a bit longer and collect\n> links to this structure? I don't think it is a real stopper, but it is\n> worth additional analysis.\n\nIf these extensions call build_child_join_sjinfo() and do not call\nfree_child_sjinfo_members, they can keep their child sjinfo as long as\nthey want. 
I didn't understand your concern.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 14 Feb 2024 12:02:34 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On 14/2/2024 13:32, Ashutosh Bapat wrote:\n> On Wed, Feb 14, 2024 at 9:50 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>>\n>> On 30/1/2024 12:44, Ashutosh Bapat wrote:\n>>> Thanks Vignesh. PFA patches rebased on the latest HEAD. The patch\n>>> addressing Amit's comments is still a separate patch for him to\n>>> review.\n>> Thanks for this improvement. Working with partitions, I frequently see\n>> peaks of memory consumption during planning. So, maybe one more case can\n>> be resolved here.\n>> Patch 0001 looks good. I'm not sure about free_child_sjinfo_members. Do\n>> we really need it as a separate routine? It might be better to inline\n>> this code.\n> \n> try_partitionwise_join() is already 200 lines long. A separate\n> function is better than adding more lines to try_partitionwise_join().\n> Also if someone wants to call build_child_join_sjinfo() outside\n> try_partitionwise_join() may find free_child_sjinfo_members() handy.\nMakes sense, thanks.\n>> Patch 0002 adds valuable comments, and I'm OK with that.\n>>\n>> Also, as I remember, some extensions, such as pg_hint_plan, call\n>> build_child_join_sjinfo. It is OK to break the interface with a major\n>> version. But what if they need child_sjinfo a bit longer and collect\n>> links to this structure? I don't think it is a real stopper, but it is\n>> worth additional analysis.\n> \n> If these extensions call build_child_join_sjinfo() and do not call\n> free_child_sjinfo_members, they can keep their child sjinfo as long as\n> they want. 
This patch makes external code responsible for \nallocation of the structure and it adds more paths to do something \nwrong. Current code looks good.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 14 Feb 2024 14:33:39 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi,\n\nI took a quick look at this patch today. I certainly agree with the\nintent to reduce the amount of memory during planning, assuming it's not\noverly disruptive. And I think this patch is fairly localized and looks\nsensible.\n\nThat being said, I'm not a big fan of using a local variable on stack and\nfilling it. I'd probably go with the usual palloc/pfree, because that\nmakes it much easier to use - the callers would not be responsible for\nallocating the SpecialJoinInfo struct. Sure, it's a little bit of\noverhead, but with the AllocSet caching I doubt it's measurable.\n\nI did put this through check-world on amd64/arm64, with valgrind,\nwithout any issue. I also tried the scripts shared by Ashutosh in his\ninitial message (with some minor fixes, adding MEMORY to explain etc).\n\nThe results with the 20240130 patches are like this:\n\n tables master patched\n -----------------------------\n 2 40.8 39.9\n 3 151.7 142.6\n 4 464.0 418.5\n 5 1663.9 1419.5\n\nThat's certainly a nice improvement, and it even reduces the amount of\ntime for planning (the 5-table join goes from 18s to 17s on my laptop).\nThat's nice, although 17 seconds for planning is not ... great.\n\nThat being said, the amount of remaining memory needed by planning is\nstill pretty high - we save ~240MB for a join of 5 tables, but we still\nneed ~1.4GB. 
Yes, this is a bit extreme example, and it probably is not\nvery common to join 5 tables with 1000 partitions each ...\n\nDo we know what are the other places consuming the 1.4GB of memory?\nConsidering my recent thread about scalability, where malloc() turned\nout to be one of the main culprits, I wonder if maybe there's a lot to\ngain by reducing the memory usage ... Our attitude to memory usage is\nthat it doesn't really matter if we keep it allocated for a bit, because\nwe'll free it shortly. And that may be true for \"modest\" memory usage,\nbut with 1.4GB that doesn't seem great, and the malloc overhead can be\npretty bad.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 18 Feb 2024 18:25:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Sun, Feb 18, 2024 at 10:55 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I took a quick look at this patch today. I certainly agree with the\n> intent to reduce the amount of memory during planning, assuming it's not\n> overly disruptive. And I think this patch is fairly localized and looks\n> sensible.\n\nThanks for looking at the patch-set.\n\n>\n> That being said I'm a big fan of using a local variable on stack and\n> filling it. I'd probably go with the usual palloc/pfree, because that\n> makes it much easier to use - the callers would not be responsible for\n> allocating the SpecialJoinInfo struct. Sure, it's a little bit of\n> overhead, but with the AllocSet caching I doubt it's measurable.\n\nYou are suggesting that instead of declaring a local variable of type\nSpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\nSpecialJoinInfo which will be freed by free_child_sjinfo()\n(free_child_sjinfo_members in the patch). 
I am fine with that.\n\n>\n> I did put this through check-world on amd64/arm64, with valgrind,\n> without any issue. I also tried the scripts shared by Ashutosh in his\n> initial message (with some minor fixes, adding MEMORY to explain etc).\n>\n> The results with the 20240130 patches are like this:\n>\n> tables master patched\n> -----------------------------\n> 2 40.8 39.9\n> 3 151.7 142.6\n> 4 464.0 418.5\n> 5 1663.9 1419.5\n>\n> That's certainly a nice improvement, and it even reduces the amount of\n> time for planning (the 5-table join goes from 18s to 17s on my laptop).\n> That's nice, although 17 seconds for planning is not ... great.\n>\n> That being said, the amount of remaining memory needed by planning is\n> still pretty high - we save ~240MB for a join of 5 tables, but we still\n> need ~1.4GB. Yes, this is a bit extreme example, and it probably is not\n> very common to join 5 tables with 1000 partitions each ...\n>\n\nYes.\n\n> Do we know what are the other places consuming the 1.4GB of memory?\n\nPlease find the breakdown at [1]. Investigating further, I found that\nmemory consumed by Bitmapsets is not explicitly visible through that\nbreakdown. But Bitmapsets consume a lot of memory in this case. I\nshared google slides with raw numbers in [2]. The Gslides can be found\nat https://docs.google.com/presentation/d/1S9BiAADhX-Fv9tDbx5R5Izq4blAofhZMhHcO1c-wzfI/edit#slide=id.p.\nAccording to that measurement, Bitmapset objects created while\nplanning a 5-way join between partitioned tables with 1000 partitions\neach consumed 769.795 MiB as compared to 19.728 KiB consumed by\nBitmapsets when planning a 5-way non-partitioned join. See slide 3 for\ndetailed numbers.\n\n> Considering my recent thread about scalability, where malloc() turned\n> out to be one of the main culprits, I wonder if maybe there's a lot to\n> gain by reducing the memory usage ... 
Our attitude to memory usage is\n> that it doesn't really matter if we keep it allocated for a bit, because\n> we'll free it shortly. And that may be true for \"modest\" memory usage,\n> but with 1.4GB that doesn't seem great, and the malloc overhead can be\n> pretty bad.\n>\n\nThat probably explains why we see some modest speedups because of my\nmemory saving patches. Thanks for sharing it.\n\n[1] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph+Pvo5dNpdrVCsBgXEzDQ@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAExHW5t0cPNjJRPRtt%3D%2Bc5SiTeBPHvx%3DSd2n8EO%2B7XdVuE8_YQ%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Feb 2024 18:31:07 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Ashutosh,\n\nOn Mon, Feb 19, 2024 at 10:01 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Sun, Feb 18, 2024 at 10:55 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > I took a quick look at this patch today. I certainly agree with the\n> > intent to reduce the amount of memory during planning, assuming it's not\n> > overly disruptive. And I think this patch is fairly localized and looks\n> > sensible.\n>\n> Thanks for looking at the patch-set.\n> >\n> > That being said I'm a big fan of using a local variable on stack and\n> > filling it. I'd probably go with the usual palloc/pfree, because that\n> > makes it much easier to use - the callers would not be responsible for\n> > allocating the SpecialJoinInfo struct. 
Sure, it's a little bit of\n> > overhead, but with the AllocSet caching I doubt it's measurable.\n>\n> You are suggesting that instead of declaring a local variable of type\n> SpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\n> SpecialJoinInfo which will be freed by free_child_sjinfo()\n> (free_child_sjinfo_members in the patch). I am fine with that.\n\nAgree with Tomas about using makeNode() / pfree(). Having the pfree()\nkind of makes it extra-evident that those SpecialJoinInfos are\ntransitory.\n\n> > I did put this through check-world on amd64/arm64, with valgrind,\n> > without any issue. I also tried the scripts shared by Ashutosh in his\n> > initial message (with some minor fixes, adding MEMORY to explain etc).\n> >\n> > The results with the 20240130 patches are like this:\n> >\n> > tables master patched\n> > -----------------------------\n> > 2 40.8 39.9\n> > 3 151.7 142.6\n> > 4 464.0 418.5\n> > 5 1663.9 1419.5\n\nCould you please post the numbers with the palloc() / pfree() version?\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 15 Mar 2024 15:15:07 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Amit,\n\n\nOn Fri, Mar 15, 2024 at 11:45 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> > >\n> > > That being said I'm a big fan of using a local variable on stack and\n> > > filling it. I'd probably go with the usual palloc/pfree, because that\n> > > makes it much easier to use - the callers would not be responsible for\n> > > allocating the SpecialJoinInfo struct. 
Sure, it's a little bit of\n> > > overhead, but with the AllocSet caching I doubt it's measurable.\n> >\n> > You are suggesting that instead of declaring a local variable of type\n> > SpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\n> > SpecialJoinInfo which will be freed by free_child_sjinfo()\n> > (free_child_sjinfo_members in the patch). I am fine with that.\n>\n> Agree with Tomas about using makeNode() / pfree(). Having the pfree()\n> kind of makes it extra-evident that those SpecialJoinInfos are\n> transitory.\n>\n\nAttached patch-set\n\n0001 - original patch as is\n0002 - addresses your first set of comments\n0003 - uses palloc and pfree to allocate and deallocate SpecialJoinInfo\nstructure.\n\nI will squash both 0002 and 0003 into 0001 once you review those changes\nand are fine with those.\n\n\n>\n> > > I did put this through check-world on amd64/arm64, with valgrind,\n> > > without any issue. I also tried the scripts shared by Ashutosh in his\n> > > initial message (with some minor fixes, adding MEMORY to explain etc).\n> > >\n> > > The results with the 20240130 patches are like this:\n> > >\n> > > tables master patched\n> > > -----------------------------\n> > > 2 40.8 39.9\n> > > 3 151.7 142.6\n> > > 4 464.0 418.5\n> > > 5 1663.9 1419.5\n>\n> Could you please post the numbers with the palloc() / pfree() version?\n>\n>\nHere are they\n tables | master | patched\n--------+---------+---------\n 2 | 29 MB | 28 MB\n 3 | 102 MB | 93 MB\n 4 | 307 MB | 263 MB\n 5 | 1076 MB | 843 MB\n\nThe numbers look slightly different from my earlier numbers. But they were\nquite old. The patch used to measure memory that time is different from the\none that we committed. 
So there's a slight difference in the way we measure\nmemory as well.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 18 Mar 2024 16:41:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 18, 2024 at 20:11 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi Amit,\n>\n>\n> On Fri, Mar 15, 2024 at 11:45 AM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n>\n>> > >\n>> > > That being said I'm a big fan of using a local variable on stack and\n>> > > filling it. I'd probably go with the usual palloc/pfree, because that\n>> > > makes it much easier to use - the callers would not be responsible for\n>> > > allocating the SpecialJoinInfo struct. Sure, it's a little bit of\n>> > > overhead, but with the AllocSet caching I doubt it's measurable.\n>> >\n>> > You are suggesting that instead of declaring a local variable of type\n>> > SpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\n>> > SpecialJoinInfo which will be freed by free_child_sjinfo()\n>> > (free_child_sjinfo_members in the patch). I am fine with that.\n>>\n>> Agree with Tomas about using makeNode() / pfree(). Having the pfree()\n>> kind of makes it extra-evident that those SpecialJoinInfos are\n>> transitory.\n>>\n>\n> Attached patch-set\n>\n> 0001 - original patch as is\n> 0002 - addresses your first set of comments\n> 0003 - uses palloc and pfree to allocate and deallocate SpecialJoinInfo\n> structure.\n>\n> I will squash both 0002 and 0003 into 0001 once you review those changes\n> and are fine with those.\n>\n\nThanks for the new patches.\n\n> > I did put this through check-world on amd64/arm64, with valgrind,\n>> > > without any issue. 
I also tried the scripts shared by Ashutosh in his\n>> > > initial message (with some minor fixes, adding MEMORY to explain etc).\n>> > >\n>> > > The results with the 20240130 patches are like this:\n>> > >\n>> > > tables master patched\n>> > > -----------------------------\n>> > > 2 40.8 39.9\n>> > > 3 151.7 142.6\n>> > > 4 464.0 418.5\n>> > > 5 1663.9 1419.5\n>>\n>> Could you please post the numbers with the palloc() / pfree() version?\n>>\n>>\n> Here are they\n> tables | master | patched\n> --------+---------+---------\n> 2 | 29 MB | 28 MB\n> 3 | 102 MB | 93 MB\n> 4 | 307 MB | 263 MB\n> 5 | 1076 MB | 843 MB\n>\n> The numbers look slightly different from my earlier numbers. But they were\n> quite old. The patch used to measure memory that time is different from the\n> one that we committed. So there's a slight difference in the way we measure\n> memory as well.\n>\n\nSorry, I should’ve mentioned that I was interested in seeing cpu times to\ncompare the two approaches. Specifically, to see if the palloc / frees add\nnoticeable overhead.\n", "msg_date": "Mon, 18 Mar 2024 20:35:18 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 18, 2024 at 5:05 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Mon, Mar 18, 2024 at 20:11 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> wrote:\n>\n>> Hi Amit,\n>>\n>>\n>> On Fri, Mar 15, 2024 at 11:45 AM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>>\n>>> > >\n>>> > > That being said I'm a big fan of using a local variable on stack and\n>>> > > filling it. I'd probably go with the usual palloc/pfree, because that\n>>> > > makes it much easier to use - the callers would not be responsible\n>>> for\n>>> > > allocating the SpecialJoinInfo struct. Sure, it's a little bit of\n>>> > > overhead, but with the AllocSet caching I doubt it's measurable.\n>>> >\n>>> > You are suggesting that instead of declaring a local variable of type\n>>> > SpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\n>>> > SpecialJoinInfo which will be freed by free_child_sjinfo()\n>>> > (free_child_sjinfo_members in the patch). I am fine with that.\n>>>\n>>> Agree with Tomas about using makeNode() / pfree(). Having the pfree()\n>>> kind of makes it extra-evident that those SpecialJoinInfos are\n>>> transitory.\n>>>\n>>\n>> Attached patch-set\n>>\n>> 0001 - original patch as is\n>> 0002 - addresses your first set of comments\n>> 0003 - uses palloc and pfree to allocate and deallocate SpecialJoinInfo\n>> structure.\n>>\n>> I will squash both 0002 and 0003 into 0001 once you review those changes\n>> and are fine with those.\n>>\n>\n> Thanks for the new patches.\n>\n> > > I did put this through check-world on amd64/arm64, with valgrind,\n>> > > without any issue. 
I also tried the scripts shared by Ashutosh in his\n>>> > > initial message (with some minor fixes, adding MEMORY to explain\n>>> etc).\n>>> > >\n>>> > > The results with the 20240130 patches are like this:\n>>> > >\n>>> > > tables master patched\n>>> > > -----------------------------\n>>> > > 2 40.8 39.9\n>>> > > 3 151.7 142.6\n>>> > > 4 464.0 418.5\n>>> > > 5 1663.9 1419.5\n>>>\n>>> Could you please post the numbers with the palloc() / pfree() version?\n>>>\n>>>\n>> Here are they\n>> tables | master | patched\n>> --------+---------+---------\n>> 2 | 29 MB | 28 MB\n>> 3 | 102 MB | 93 MB\n>> 4 | 307 MB | 263 MB\n>> 5 | 1076 MB | 843 MB\n>>\n>> The numbers look slightly different from my earlier numbers. But they\n>> were quite old. The patch used to measure memory that time is different\n>> from the one that we committed. So there's a slight difference in the way\n>> we measure memory as well.\n>>\n>\n> Sorry, I should’ve mentioned that I was interested in seeing cpu times to\n> compare the two approaches. Specifically, to see if the palloc / frees add\n> noticeable overhead.\n>\n\nNo problem. Here you go\n\n tables | master | patched | perc_change\n--------+----------+----------+-------------\n 2 | 477.87 | 492.32 | -3.02\n 3 | 1970.83 | 1989.52 | -0.95\n 4 | 6305.01 | 6268.81 | 0.57\n 5 | 19261.56 | 18684.86 | 2.99\n\n+ve change indicates reduced planning time. It seems that the planning time\nimproves as the number of tables increases. But all the numbers are well\nwithin noise levels and thus may not show any improvement or deterioration.\nIf you want, I can repeat the experiment.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Mar 18, 2024 at 5:05 PM Amit Langote <amitlangote09@gmail.com> wrote:On Mon, Mar 18, 2024 at 20:11 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:Hi Amit,On Fri, Mar 15, 2024 at 11:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > That being said I'm a big fan of using a local variable on stack and\n> > filling it. 
I'd probably go with the usual palloc/pfree, because that\n> > makes it much easier to use - the callers would not be responsible for\n> > allocating the SpecialJoinInfo struct. Sure, it's a little bit of\n> > overhead, but with the AllocSet caching I doubt it's measurable.\n>\n> You are suggesting that instead of declaring a local variable of type\n> SpecialJoinInfo, build_child_join_sjinfo() should palloc() and return\n> SpecialJoinInfo which will be freed by free_child_sjinfo()\n> (free_child_sjinfo_members in the patch). I am fine with that.\n\nAgree with Tomas about using makeNode() / pfree().  Having the pfree()\nkind of makes it extra-evident that those SpecialJoinInfos are\ntransitory.Attached patch-set0001 - original patch as is0002 - addresses your first set of comments0003 - uses palloc and pfree to allocate and deallocate SpecialJoinInfo structure.I will squash both 0002 and 0003 into 0001 once you review those changes and are fine with those.Thanks for the new patches.\n> > I did put this through check-world on amd64/arm64, with valgrind,\n> > without any issue. I also tried the scripts shared by Ashutosh in his\n> > initial message (with some minor fixes, adding MEMORY to explain etc).\n> >\n> > The results with the 20240130 patches are like this:\n> >\n> >    tables    master    patched\n> >   -----------------------------\n> >         2      40.8       39.9\n> >         3     151.7      142.6\n> >         4     464.0      418.5\n> >         5    1663.9     1419.5\n\nCould you please post the numbers with the palloc() / pfree() version?\nHere are they tables | master  | patched --------+---------+---------      2 | 29 MB   | 28 MB      3 | 102 MB  | 93 MB      4 | 307 MB  | 263 MB      5 | 1076 MB | 843 MBThe numbers look slightly different from my earlier numbers. But they were quite old. The patch used to measure memory that time is different from the one that we committed. 
So there's a slight difference in the way we measure memory as well.Sorry, I should’ve mentioned that I was interested in seeing cpu times to compare the two approaches. Specifically, to see if the palloc / frees add noticeable overhead.\nNo problem. Here you go tables |  master  | patched  | perc_change --------+----------+----------+-------------      2 |   477.87 |   492.32 |       -3.02      3 |  1970.83 |  1989.52 |       -0.95      4 |  6305.01 |  6268.81 |        0.57      5 | 19261.56 | 18684.86 |        2.99+ve change indicates reduced planning time. It seems that the planning time improves as the number of tables increases. But all the numbers are well within noise levels and thus may not show any improvement or deterioration. If you want, I can repeat the experiment.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 18 Mar 2024 17:26:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 18, 2024 at 8:57 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Mon, Mar 18, 2024 at 5:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Mon, Mar 18, 2024 at 20:11 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>> On Fri, Mar 15, 2024 at 11:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>>> Could you please post the numbers with the palloc() / pfree() version?\n>>>\n>>> Here are they\n>>> tables | master | patched\n>>> --------+---------+---------\n>>> 2 | 29 MB | 28 MB\n>>> 3 | 102 MB | 93 MB\n>>> 4 | 307 MB | 263 MB\n>>> 5 | 1076 MB | 843 MB\n>>>\n>>> The numbers look slightly different from my earlier numbers. But they were quite old. The patch used to measure memory that time is different from the one that we committed. 
So there's a slight difference in the way we measure memory as well.\n>>\n>>\n>> Sorry, I should’ve mentioned that I was interested in seeing cpu times to compare the two approaches. Specifically, to see if the palloc / frees add noticeable overhead.\n>\n> No problem. Here you go\n>\n> tables | master | patched | perc_change\n> --------+----------+----------+-------------\n> 2 | 477.87 | 492.32 | -3.02\n> 3 | 1970.83 | 1989.52 | -0.95\n> 4 | 6305.01 | 6268.81 | 0.57\n> 5 | 19261.56 | 18684.86 | 2.99\n>\n> +ve change indicates reduced planning time. It seems that the planning time improves as the number of tables increases. But all the numbers are well within noise levels and thus may not show any improvement or deterioration. If you want, I can repeat the experiment.\n\nThanks. I also wanted to see cpu time difference between the two\napproaches of SpecialJoinInfo allocation -- on stack and on heap. So\ncomparing times with the previous version of the patch using stack\nallocation and the new one with palloc. I'm hoping that the new\napproach is only worse in the noise range.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 18 Mar 2024 21:09:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 18, 2024 at 5:40 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n>\n> >>\n> >> Sorry, I should’ve mentioned that I was interested in seeing cpu times\n> to compare the two approaches. Specifically, to see if the palloc / frees\n> add noticeable overhead.\n> >\n> > No problem. Here you go\n> >\n> > tables | master | patched | perc_change\n> > --------+----------+----------+-------------\n> > 2 | 477.87 | 492.32 | -3.02\n> > 3 | 1970.83 | 1989.52 | -0.95\n> > 4 | 6305.01 | 6268.81 | 0.57\n> > 5 | 19261.56 | 18684.86 | 2.99\n> >\n> > +ve change indicates reduced planning time. 
It seems that the planning\n> time improves as the number of tables increases. But all the numbers are\n> well within noise levels and thus may not show any improvement or\n> deterioration. If you want, I can repeat the experiment.\n>\n> Thanks. I also wanted to see cpu time difference between the two\n> approaches of SpecialJoinInfo allocation -- on stack and on heap. So\n> comparing times with the previous version of the patch using stack\n> allocation and the new one with palloc. I'm hoping that the new\n> approach is only worse in the noise range.\n>\n\nAh, sorry, I didn't read it carefully. Alvaro made me realise that I have\nbeen gathering numbers using assert enabled builds, so they are not that\nreliable. Here they are with non-assert enabled builds.\n\nplanning time (in ms) reported by EXPLAIN.\n tables | master | stack_alloc | perc_change_1 | palloc | perc_change_2\n| total_perc_change\n--------+----------+-------------+---------------+----------+---------------+-------------------\n 2 | 338.1 | 333.92 | 1.24 | 332.16 | 0.53\n| 1.76\n 3 | 1507.93 | 1475.76 | 2.13 | 1472.79 | 0.20\n| 2.33\n 4 | 5099.45 | 4980.35 | 2.34 | 4947.3 | 0.66\n| 2.98\n 5 | 15442.64 | 15531.94 | -0.58 | 15393.41 | 0.89\n| 0.32\n\nstack_alloc = timings with SpecialJoinInfo on stack\nperc_change_1 = percentage change of above wrt master\npalloc = timings with palloc'ed SpecialJoinInfo\nperc_change_2 = percentage change of above wrt timings with stack_alloc\ntotal_perc_change = percentage change between master and all patches\n\ntotal change is within noise. Timing with palloc is better than that with\nSpecialJoinInfo on stack but only marginally. I don't think that means\npalloc based allocation is better or worse than stack based.\n\nYou requested CPU time difference between stack based SpecialJoinInfo and\npalloc'ed SpecialJoinInfo. 
Here it is\ntables | stack_alloc | palloc | perc_change\n--------+-------------+----------+-------------\n 2 | 0.438204 | 0.438986 | -0.18\n 3 | 1.224672 | 1.238781 | -1.15\n 4 | 3.511317 | 3.663334 | -4.33\n 5 | 9.283856 | 9.340516 | -0.61\n\nYes, there's a consistent degradation albeit within noise levels.\n\nThe memory measurements\n tables | master | with_patch\n--------+--------+------------\n 2 | 26 MB | 26 MB\n 3 | 93 MB | 84 MB\n 4 | 277 MB | 235 MB\n 5 | 954 MB | 724 MB\n\nThe first row shows the same value because of rounding. The actual values\nthere are 27740432 and 26854704 resp.\n\nPlease let me know if you need anything.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Mar 18, 2024 at 5:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> Sorry, I should’ve mentioned that I was interested in seeing cpu times to compare the two approaches. Specifically, to see if the palloc / frees add noticeable overhead.\n>\n> No problem. Here you go\n>\n>  tables |  master  | patched  | perc_change\n> --------+----------+----------+-------------\n>       2 |   477.87 |   492.32 |       -3.02\n>       3 |  1970.83 |  1989.52 |       -0.95\n>       4 |  6305.01 |  6268.81 |        0.57\n>       5 | 19261.56 | 18684.86 |        2.99\n>\n> +ve change indicates reduced planning time. It seems that the planning time improves as the number of tables increases. But all the numbers are well within noise levels and thus may not show any improvement or deterioration. If you want, I can repeat the experiment.\n\nThanks.  I also wanted to see cpu time difference between the two\napproaches of SpecialJoinInfo allocation -- on stack and on heap.  So\ncomparing times with the previous version of the patch using stack\nallocation and the new one with palloc.  I'm hoping that the new\napproach is only worse in the noise range.Ah, sorry, I didn't read it carefully. 
Alvaro made me realise that I have been gathering numbers using assert enabled builds, so they are not that reliable. Here they are with non-assert enabled builds.planning time (in ms) reported by EXPLAIN. tables |  master  | stack_alloc | perc_change_1 |  palloc  | perc_change_2 | total_perc_change --------+----------+-------------+---------------+----------+---------------+-------------------      2 |    338.1 |      333.92 |          1.24 |   332.16 |          0.53 |              1.76      3 |  1507.93 |     1475.76 |          2.13 |  1472.79 |          0.20 |              2.33      4 |  5099.45 |     4980.35 |          2.34 |   4947.3 |          0.66 |              2.98      5 | 15442.64 |    15531.94 |         -0.58 | 15393.41 |          0.89 |              0.32stack_alloc = timings with SpecialJoinInfo on stackperc_change_1 = percentage change of above wrt masterpalloc = timings with palloc'ed SpecialJoinInfoperc_change_2 = percentage change of above wrt timings with stack_alloctotal_perc_change = percentage change between master and all patchestotal change is within noise. Timing with palloc is better than that with SpecialJoinInfo on stack but only marginally. I don't think that means palloc based allocation is better or worse than stack based.You requested CPU time difference between stack based SpecialJoinInfo and palloc'ed SpecialJoinInfo. Here it istables | stack_alloc |  palloc  | perc_change --------+-------------+----------+-------------      2 |    0.438204 | 0.438986 |       -0.18      3 |    1.224672 | 1.238781 |       -1.15      4 |    3.511317 | 3.663334 |       -4.33      5 |    9.283856 | 9.340516 |       -0.61Yes, there's a consistent degradation albeit within noise levels.The memory measurements tables | master | with_patch --------+--------+------------      2 | 26 MB  | 26 MB      3 | 93 MB  | 84 MB      4 | 277 MB | 235 MB      5 | 954 MB | 724 MBThe first row shows the same value because of rounding. 
The actual values there are 27740432 and 26854704 resp.Please let me know if you need anything.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 18 Mar 2024 21:17:31 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "Hi Ashutosh,\n\nOn Tue, Mar 19, 2024 at 12:47 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Mon, Mar 18, 2024 at 5:40 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> >>\n>> >> Sorry, I should’ve mentioned that I was interested in seeing cpu times to compare the two approaches. Specifically, to see if the palloc / frees add noticeable overhead.\n>> >\n>> > No problem. Here you go\n>> >\n>> > tables | master | patched | perc_change\n>> > --------+----------+----------+-------------\n>> > 2 | 477.87 | 492.32 | -3.02\n>> > 3 | 1970.83 | 1989.52 | -0.95\n>> > 4 | 6305.01 | 6268.81 | 0.57\n>> > 5 | 19261.56 | 18684.86 | 2.99\n>> >\n>> > +ve change indicates reduced planning time. It seems that the planning time improves as the number of tables increases. But all the numbers are well within noise levels and thus may not show any improvement or deterioration. If you want, I can repeat the experiment.\n>>\n>> Thanks. I also wanted to see cpu time difference between the two\n>> approaches of SpecialJoinInfo allocation -- on stack and on heap. So\n>> comparing times with the previous version of the patch using stack\n>> allocation and the new one with palloc. I'm hoping that the new\n>> approach is only worse in the noise range.\n>\n>\n> Ah, sorry, I didn't read it carefully. Alvaro made me realise that I have been gathering numbers using assert enabled builds, so they are not that reliable. 
Here they are with non-assert enabled builds.\n>\n> planning time (in ms) reported by EXPLAIN.\n> tables | master | stack_alloc | perc_change_1 | palloc | perc_change_2 | total_perc_change\n> --------+----------+-------------+---------------+----------+---------------+-------------------\n> 2 | 338.1 | 333.92 | 1.24 | 332.16 | 0.53 | 1.76\n> 3 | 1507.93 | 1475.76 | 2.13 | 1472.79 | 0.20 | 2.33\n> 4 | 5099.45 | 4980.35 | 2.34 | 4947.3 | 0.66 | 2.98\n> 5 | 15442.64 | 15531.94 | -0.58 | 15393.41 | 0.89 | 0.32\n>\n> stack_alloc = timings with SpecialJoinInfo on stack\n> perc_change_1 = percentage change of above wrt master\n> palloc = timings with palloc'ed SpecialJoinInfo\n> perc_change_2 = percentage change of above wrt timings with stack_alloc\n> total_perc_change = percentage change between master and all patches\n>\n> total change is within noise. Timing with palloc is better than that with SpecialJoinInfo on stack but only marginally. I don't think that means palloc based allocation is better or worse than stack based.\n>\n> You requested CPU time difference between stack based SpecialJoinInfo and palloc'ed SpecialJoinInfo. Here it is\n> tables | stack_alloc | palloc | perc_change\n> --------+-------------+----------+-------------\n> 2 | 0.438204 | 0.438986 | -0.18\n> 3 | 1.224672 | 1.238781 | -1.15\n> 4 | 3.511317 | 3.663334 | -4.33\n> 5 | 9.283856 | 9.340516 | -0.61\n>\n> Yes, there's a consistent degradation albeit within noise levels.\n>\n> The memory measurements\n> tables | master | with_patch\n> --------+--------+------------\n> 2 | 26 MB | 26 MB\n> 3 | 93 MB | 84 MB\n> 4 | 277 MB | 235 MB\n> 5 | 954 MB | 724 MB\n>\n> The first row shows the same value because of rounding. The actual values there are 27740432 and 26854704 resp.\n>\n> Please let me know if you need anything.\n\nThanks for repeating the benchmark. So I don't see a problem with\nkeeping the existing palloc() way of allocating the\nSpecialJoinInfos(). 
We're adding a few cycles by adding\nfree_child_join_sjinfo() (see my delta patch about the rename), but\nthe reduction in memory consumption due to it, which is our goal here,\nfar outweighs what looks like a minor CPU slowdown.\n\nI have squashed 0002, 0003 into 0001, done some changes of my own, and\nattached the delta patch as 0002. I've listed the changes in the\ncommit message. Let me know if anything seems amiss.\n\nI'd like to commit 0001 + 0002 + any other changes based on your\ncomments (if any) sometime early next week.\n\n\n--\nThanks, Amit Langote", "msg_date": "Fri, 22 Mar 2024 18:23:42 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Mar 22, 2024 at 2:54 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> >\n> > Please let me know if you need anything.\n>\n> Thanks for repeating the benchmark. So I don't see a problem with\n> keeping the existing palloc() way of allocating the\n> SpecialJoinInfos(). We're adding a few cycles by adding\n> free_child_join_sjinfo() (see my delta patch about the rename), but\n> the reduction in memory consumption due to it, which is our goal here,\n> far outweighs what looks like a minor CPU slowdown.\n>\n> I have squashed 0002, 0003 into 0001, done some changes of my own, and\n> attached the delta patch as 0002. I've listed the changes in the\n> commit message. Let me know if anything seems amiss.\n>\n> Thanks Amit for your changes.\n\n> * Revert (IMO) unnecessary modifications of build_child_join_sjinfo().\n> For example, hunks to rename sjinfo to child_sjinfo seem unnecessary,\n> because it's clear from the context that they are \"child\" sjinfos.\n>\n\nOk.\n\n> * Rename free_child_sjinfo_members() to free_child_join_sjinfo() to\n> be symmetric with build_child_join_sjinfo(). Note that the function\n> is charged with freeing the sjinfo itself too. 
Like\n> build_child_join_sjinfo(), name the argument sjinfo, not\n> child_sjinfo.\n\nYes. It should have been done in my patch.\n\n>\n> * Don't set the child sjinfo pointer to NULL after freeing it. While\n> a good convention in general, it's not really common in the planner\n> code, so doing it here seems a bit overkill\n\nThat function is the last line in the block where pointer variable is\ndeclared.\nAs of now there is no danger of the dangling pointer being used.\n\n>\n> * Rename make_dummy_sjinfo() to init_dummy_sjinfo() because the\n> function is not really making it per se, only initializing fields\n> in the SpecialJoinInfo struct made available by the caller.\n\nRight.\n\n>\n> * Make init_dummy_sjinfo() take Relids instead of RelOptInfos because\n> that's what's needed. Also allows to reduce changes to\n> build_child_join_sjinfo()'s signature.\n\nI remember, in one of the versions it was Relids and then I changed to\nRelOptInfos for some reason. The latest version seems to use just Relids. So\nthis looks better.\n\n>\n> * Update the reason mentioned in a comment in free_child_join_sjinfo()\n> about why semi_rhs_expr is not freed -- it may be freed, but there's\n> no way to \"deep\" free it, so we leave it alone.\n>\n> * Update a comment somewhere about why SpecialJoinInfo can be\n> \"transient\" sometimes.\n\nWith the later code, semi_rhs_expr is indeed free'able. It wasn't so when I\nwrote the patches. Thanks for updating the comment. I wish we would have\nfreeObject(). Nothing that can be done for this patch.\n\nAttached patches\nSquashed your 0001 and 0002 into 0001. I have also reworded the commit\nmessage to reflect the latest code changes.\n0002 has some minor edits. 
Please feel free to include or reject when\ncommitting the patch.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 22 Mar 2024 16:18:38 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Fri, Mar 22, 2024 at 7:48 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> With the later code, semi_rhs_expr is indeed free'able. It wasn't so when I\n> wrote the patches. Thanks for updating the comment. I wish we would have\n> freeObject(). Nothing that can be done for this patch.\n\nYes.\n\n> Attached patches\n> Squashed your 0001 and 0002 into 0001. I have also reworded the commit message to reflect the latest code changes.\n> 0002 has some minor edits. Please feel free to include or reject when committing the patch.\n\nI've pushed this now.\n\nWhen updating the commit message, I realized that you had made the\nright call to divide the the changes around not translating the dummy\nSpecialJoinInfos into a separate patch. So, I pushed it like that.\n\nThanks for working on this.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 25 Mar 2024 18:16:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:47 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> > Attached patches\n> > Squashed your 0001 and 0002 into 0001. I have also reworded the commit\n> message to reflect the latest code changes.\n> > 0002 has some minor edits. 
Please feel free to include or reject when\n> committing the patch.\n>\n> I've pushed this now.\n>\n\nThanks a lot.\n\n\n>\n> When updating the commit message, I realized that you had made the\n> right call to divide the the changes around not translating the dummy\n> SpecialJoinInfos into a separate patch. So, I pushed it like that.\n>\n>\nOk. Thanks for separating the changes.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Mar 25, 2024 at 2:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached patches\n> Squashed your 0001 and 0002 into 0001. I have also reworded the commit message to reflect the latest code changes.\n> 0002 has some minor edits. Please feel free to include or reject when committing the patch.\n\nI've pushed this now.Thanks a lot. \n\nWhen updating the commit message, I realized that you had made the\nright call to divide the the changes around not translating the dummy\nSpecialJoinInfos into a separate patch.  So, I pushed it like that.Ok. Thanks for separating the changes.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 25 Mar 2024 15:40:00 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 25, 2024 at 5:17 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> I've pushed this now.\n>\n> When updating the commit message, I realized that you had made the\n> right call to divide the the changes around not translating the dummy\n> SpecialJoinInfos into a separate patch. So, I pushed it like that.\n\n\nSorry I'm late for the party. I noticed that in commit 6190d828cd the\ncomment of init_dummy_sjinfo() mentions the non-existing 'rel1' and\n'rel2', which should be addressed. 
Also, I think we can make function\nconsider_new_or_clause() call init_dummy_sjinfo() to simplify the code a\nbit.\n\nAttached is a trivial patch for the fixes.\n\nThanks\nRichard", "msg_date": "Mon, 25 Mar 2024 18:25:19 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 25, 2024 at 7:25 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Mon, Mar 25, 2024 at 5:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I've pushed this now.\n>>\n>> When updating the commit message, I realized that you had made the\n>> right call to divide the the changes around not translating the dummy\n>> SpecialJoinInfos into a separate patch. So, I pushed it like that.\n>\n>\n> Sorry I'm late for the party. I noticed that in commit 6190d828cd the\n> comment of init_dummy_sjinfo() mentions the non-existing 'rel1' and\n> 'rel2', which should be addressed.\n\nThanks for catching that. 
Oops.\n\n> Also, I think we can make function\n> consider_new_or_clause() call init_dummy_sjinfo() to simplify the code a\n> bit.\n\nIndeed.\n\n> Attached is a trivial patch for the fixes.\n\nWill apply shortly.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Mon, 25 Mar 2024 19:31:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" }, { "msg_contents": "On Mon, Mar 25, 2024 at 4:01 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Mon, Mar 25, 2024 at 7:25 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> > On Mon, Mar 25, 2024 at 5:17 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> I've pushed this now.\n> >>\n> >> When updating the commit message, I realized that you had made the\n> >> right call to divide the the changes around not translating the dummy\n> >> SpecialJoinInfos into a separate patch. So, I pushed it like that.\n> >\n> >\n> > Sorry I'm late for the party. I noticed that in commit 6190d828cd the\n> > comment of init_dummy_sjinfo() mentions the non-existing 'rel1' and\n> > 'rel2', which should be addressed.\n>\n> Thanks for catching that. Oops.\n>\n\nGood catch.\n\n\n>\n> > Also, I think we can make function\n> > consider_new_or_clause() call init_dummy_sjinfo() to simplify the code a\n> > bit.\n>\n> Indeed.\n>\n\nThanks for fixing it.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Mon, Mar 25, 2024 at 4:01 PM Amit Langote <amitlangote09@gmail.com> wrote:On Mon, Mar 25, 2024 at 7:25 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Mon, Mar 25, 2024 at 5:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I've pushed this now.\n>>\n>> When updating the commit message, I realized that you had made the\n>> right call to divide the the changes around not translating the dummy\n>> SpecialJoinInfos into a separate patch.  So, I pushed it like that.\n>\n>\n> Sorry I'm late for the party.  
I noticed that in commit 6190d828cd the\n> comment of init_dummy_sjinfo() mentions the non-existing 'rel1' and\n> 'rel2', which should be addressed.\n\nThanks for catching that.  Oops.Good catch. \n\n>  Also, I think we can make function\n> consider_new_or_clause() call init_dummy_sjinfo() to simplify the code a\n> bit.\n\nIndeed.Thanks for fixing it.-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 25 Mar 2024 16:26:34 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Memory consumed by child SpecialJoinInfo in partitionwise join\n planning" } ]
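The allocation pattern this thread converged on, where build_child_join_sjinfo() returns a palloc'd SpecialJoinInfo that the caller releases with free_child_join_sjinfo() once the child join has been planned, can be sketched outside the server roughly as follows. This is an illustrative stand-in only: a toy struct takes the place of the real SpecialJoinInfo node (src/include/nodes/pathnodes.h), and plain malloc/free stand in for makeNode()/palloc/pfree, which require the server environment.

```c
#include <stdlib.h>
#include <assert.h>

/* Toy stand-in for SpecialJoinInfo; the real node lives in
 * src/include/nodes/pathnodes.h and is allocated with makeNode(). */
typedef struct ChildSJInfo
{
	int			parent_relid;
	int			child_relid;
	int		   *translated_vars;	/* per-child translated state */
} ChildSJInfo;

/* The builder allocates the transient struct and everything hanging off
 * it, mirroring build_child_join_sjinfo(); malloc/calloc stand in for
 * palloc here. */
static ChildSJInfo *
build_child_sjinfo(int parent, int child, int nvars)
{
	ChildSJInfo *sjinfo = malloc(sizeof(ChildSJInfo));

	sjinfo->parent_relid = parent;
	sjinfo->child_relid = child;
	sjinfo->translated_vars = calloc(nvars, sizeof(int));
	return sjinfo;
}

/* The caller frees the members first, then the struct itself, mirroring
 * free_child_join_sjinfo(); pairing build with free keeps the transient
 * copy from accumulating across the many child joins of a partitionwise
 * plan. */
static void
free_child_sjinfo(ChildSJInfo *sjinfo)
{
	free(sjinfo->translated_vars);
	free(sjinfo);
}
```

Per the benchmarks above, the extra free costs a few cycles (within noise) but bounds peak planner memory, e.g. 954 MB down to 724 MB at five partitioned tables.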
[ { "msg_contents": "I've been looking into some options for reducing the amount of downtime\nrequired for pg_upgrade, and $SUBJECT seemed like something that would be\nworthwhile independent of that effort. The attached work-in-progress patch\nadds the elapsed time spent in each step, which looks like this:\n\n Performing Consistency Checks\n -----------------------------\n Checking cluster versions ok (took 0 ms)\n Checking database user is the install user ok (took 3 ms)\n Checking database connection settings ok (took 4 ms)\n Checking for prepared transactions ok (took 2 ms)\n Checking for system-defined composite types in user tables ok (took 82 ms)\n Checking for reg* data types in user tables ok (took 55 ms)\n ...\n\nThis information can be used to better understand where the time is going\nand to validate future improvements. I'm open to suggestions on formatting\nthe timing information, assuming folks are interested in this idea.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 27 Jul 2023 16:51:34 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "add timing information to pg_upgrade" }, { "msg_contents": "On Fri, Jul 28, 2023 at 5:21 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> This information can be used to better understand where the time is going\n> and to validate future improvements. 
I'm open to suggestions on formatting\n> the timing information, assuming folks are interested in this idea.\n>\n> Thoughts?\n\n+1 for adding time taken.\n\nSome comments on the patch:\n1.\n+ gettimeofday(&step_start, NULL);\n+ gettimeofday(&step_end, NULL);\n+ start_ms = (step_start.tv_sec * 1000L) + (step_start.tv_usec / 1000L);\n+ end_ms = (step_end.tv_sec * 1000L) + (step_end.tv_usec / 1000L);\n+ elapsed_ms = end_ms - start_ms;\n\nHow about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT and\nINSTR_TIME_GET_MILLISEC macros instead of gettimeofday and explicit\ncalculations?\n\n2.\n> Checking database user is the install user ok (took 3 ms)\n> Checking database connection settings ok (took 4 ms)\n\n+ report_status(PG_REPORT, \"ok (took %ld ms)\", elapsed_ms);\n\nI think it's okay to just leave it with \"ok \\t %ld ms\", elapsed_ms);\nwithout \"took\". FWIW, pg_regress does that way, see below:\n\nok 2 + boolean 50 ms\nok 3 + char 32 ms\nok 4 + name 33 ms\n\n\n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok (took 0 ms)\n> Checking database user is the install user ok (took 3 ms)\n> Checking database connection settings ok (took 4 ms)\n> Checking for prepared transactions ok (took 2 ms)\n> Checking for system-defined composite types in user tables ok (took 82 ms)\n> Checking for reg* data types in user tables ok (took 55 ms)\n\nJust curious, out of all the reported pg_upgrade prep_status()-es,\nwhich ones are taking more time?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 13:10:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Fri, Jul 28, 2023 at 01:10:14PM +0530, Bharath Rupireddy wrote:\n> How about using INSTR_TIME_SET_CURRENT, INSTR_TIME_SUBTRACT and\n> 
INSTR_TIME_GET_MILLISEC macros instead of gettimeofday and explicit\n> calculations?\n\nThat seems like a good idea.\n\n> + report_status(PG_REPORT, \"ok (took %ld ms)\", elapsed_ms);\n> \n> I think it's okay to just leave it with \"ok \\t %ld ms\", elapsed_ms);\n> without \"took\". FWIW, pg_regress does that way, see below:\n\nI'm okay with simply adding the time. However, I wonder if we want to\nswitch to seconds, minutes, hours, etc. if the step takes longer. This\nwould add a bit of complexity, but it would improve human readability.\nAlternatively, we could keep the code simple and machine readable by always\nusing milliseconds.\n\n> Just curious, out of all the reported pg_upgrade prep_status()-es,\n> which ones are taking more time?\n\nI haven't done any testing on meaningful workloads yet, but the following\nshow up on an empty cluster: \"creating dump of database schemas\",\n\"analyzing all rows in the new cluster\", \"setting next transaction ID and\nepoch for new cluster\", \"restoring database schemas in the new cluster\", and\n\"sync data directory to disk\". I imagine the dump, restore, and\nfile-copying steps will stand out once you start adding data.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 10:38:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Fri, Jul 28, 2023 at 10:38:14AM -0700, Nathan Bossart wrote:\n> I'm okay with simply adding the time. However, I wonder if we want to\n> switch to seconds, minutes, hours, etc. if the step takes longer. This\n> would add a bit of complexity, but it would improve human readability.\n> Alternatively, we could keep the code simple and machine readable by always\n> using milliseconds.\n\n... or maybe we show both like psql does. 
Attached is a new version of the\npatch in which I've made use of the INSTR_TIME_* macros and enhanced the\noutput formatting (largely borrowed from PrintTiming() in\nsrc/bin/psql/common.c).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 28 Jul 2023 11:47:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Sat, Jul 29, 2023 at 12:17 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Fri, Jul 28, 2023 at 10:38:14AM -0700, Nathan Bossart wrote:\n> > I'm okay with simply adding the time. However, I wonder if we want to\n> > switch to seconds, minutes, hours, etc. if the step takes longer. This\n> > would add a bit of complexity, but it would improve human readability.\n> > Alternatively, we could keep the code simple and machine readable by always\n> > using milliseconds.\n>\n> ... or maybe we show both like psql does. Attached is a new version of the\n> patch in which I've made use of the INSTR_TIME_* macros and enhanced the\n> output formatting (largely borrowed from PrintTiming() in\n> src/bin/psql/common.c).\n\nThe v2 patch LGTM. I tested it with an upgrade of the 22GB database,\nthe output is here [1].\n\nWhile on this, I noticed a thing unrelated to your patch that there's\nno space between the longest status message of size 60 bytes and ok -\n'Checking for incompatible \"aclitem\" data type in user tablesok\n23.932 ms'. 
I think MESSAGE_WIDTH needs to be bumped up - 64 or more.\n\n[1]\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok 0.000 ms\nChecking database user is the install user ok 1.767 ms\nChecking database connection settings ok 2.210 ms\nChecking for prepared transactions ok 1.411 ms\nChecking for system-defined composite types in user tables ok 29.471 ms\nChecking for reg* data types in user tables ok 26.251 ms\nChecking for contrib/isn with bigint-passing mismatch ok 0.000 ms\nChecking for incompatible \"aclitem\" data type in user tablesok 23.932 ms\nChecking for user-defined encoding conversions ok 8.350 ms\nChecking for user-defined postfix operators ok 8.229 ms\nChecking for incompatible polymorphic functions ok 15.271 ms\nChecking for tables WITH OIDS ok 6.120 ms\nChecking for invalid \"sql_identifier\" user columns ok 24.509 ms\nCreating dump of global objects ok 14.007 ms\nCreating dump of database schemas\n ok 176.690 ms\nChecking for presence of required libraries ok 0.011 ms\nChecking database user is the install user ok 2.566 ms\nChecking for prepared transactions ok 2.065 ms\nChecking for new cluster tablespace directories ok 0.000 ms\n\nIf pg_upgrade fails after this point, you must re-initdb the\nnew cluster before continuing.\n\nPerforming Upgrade\n------------------\nSetting locale and encoding for new cluster ok 3.014 ms\nAnalyzing all rows in the new cluster ok 373.270 ms\nFreezing all rows in the new cluster ok 81.064 ms\nDeleting files from new pg_xact ok 0.089 ms\nCopying old pg_xact to new server ok 2.204 ms\nSetting oldest XID for new cluster ok 38.314 ms\nSetting next transaction ID and epoch for new cluster ok 111.503 ms\nDeleting files from new pg_multixact/offsets ok 0.078 ms\nCopying old pg_multixact/offsets to new server ok 1.790 ms\nDeleting files from new pg_multixact/members ok 0.050 ms\nCopying old pg_multixact/members to new server ok 1.532 ms\nSetting next multixact ID and offset for new 
cluster ok 36.770 ms\nResetting WAL archives ok 37.182 ms\nSetting frozenxid and minmxid counters in new cluster ok 47.879 ms\nRestoring global objects in the new cluster ok 11.615 ms\nRestoring database schemas in the new cluster\n ok 141.839 ms\nCopying user relation files\n ok\n151308.601 ms (02:31.309)\nSetting next OID for new cluster ok 44.800 ms\nSync data directory to disk ok\n4461.213 ms (00:04.461)\nCreating script to delete old cluster ok 0.059 ms\nChecking for extension updates ok 66.899 ms\n\nUpgrade Complete\n----------------\nOptimizer statistics are not transferred by pg_upgrade.\nOnce you start the new server, consider running:\n /home/ubuntu/postgres/HEAD/bin/vacuumdb --all --analyze-in-stages\nRunning this script will delete the old cluster's data files:\n ./delete_old_cluster.sh\n\nreal 2m38.133s\nuser 0m0.151s\nsys 0m21.556s\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 29 Jul 2023 12:17:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Sat, Jul 29, 2023 at 12:17:40PM +0530, Bharath Rupireddy wrote:\n> While on this, I noticed a thing unrelated to your patch that there's\n> no space between the longest status message of size 60 bytes and ok -\n> 'Checking for incompatible \"aclitem\" data type in user tablesok\n> 23.932 ms'. I think MESSAGE_WIDTH needs to be bumped up - 64 or more.\n\nGood catch. 
I think I'd actually propose just removing \"in user tables\" or\nthe word \"incompatible\" from these messages to keep them succinct.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 29 Jul 2023 14:14:18 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Sun, Jul 30, 2023 at 2:44 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sat, Jul 29, 2023 at 12:17:40PM +0530, Bharath Rupireddy wrote:\n> > While on this, I noticed a thing unrelated to your patch that there's\n> > no space between the longest status message of size 60 bytes and ok -\n> > 'Checking for incompatible \"aclitem\" data type in user tablesok\n> > 23.932 ms'. I think MESSAGE_WIDTH needs to be bumped up - 64 or more.\n>\n> Good catch. I think I'd actually propose just removing \"in user tables\" or\n> the word \"incompatible\" from these messages to keep them succinct.\n\nEither of \"Checking for \\\"aclitem\\\" data type usage\" or \"Checking for\n\\\"aclitem\\\" data type in user tables\" seems okay to me, however, I\nprefer the second wording.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 31 Jul 2023 11:34:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Mon, Jul 31, 2023 at 11:34:57AM +0530, Bharath Rupireddy wrote:\n> Either of \"Checking for \\\"aclitem\\\" data type usage\" or \"Checking for\n> \\\"aclitem\\\" data type in user tables\" seems okay to me, however, I\n> prefer the second wording.\n\nOkay. I used the second wording for all the data type checks in v3. I\nalso marked the timing strings for translation. 
I considered trying to\nextract psql's PrintTiming() so that it could be reused in other utilities,\nbut the small differences would likely make translation difficult, and the\nlogic isn't terribly long or sophisticated.\n\nMy only remaining concern is that this timing information could cause\npg_upgrade's output to exceed 80 characters per line. I don't know if this\nis something that folks are very worried about in 2023, but it still seemed\nworth bringing up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 31 Jul 2023 11:37:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 28.07.23 01:51, Nathan Bossart wrote:\n> I've been looking into some options for reducing the amount of downtime\n> required for pg_upgrade, and $SUBJECT seemed like something that would be\n> worthwhile independent of that effort. The attached work-in-progress patch\n> adds the elapsed time spent in each step, which looks like this:\n> \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok (took 0 ms)\n> Checking database user is the install user ok (took 3 ms)\n> Checking database connection settings ok (took 4 ms)\n> Checking for prepared transactions ok (took 2 ms)\n> Checking for system-defined composite types in user tables ok (took 82 ms)\n> Checking for reg* data types in user tables ok (took 55 ms)\n> ...\n> \n> This information can be used to better understand where the time is going\n> and to validate future improvements.\n\nBut who would use that, other than, you know, you, right now?\n\nI think the pg_upgrade output is already too full with \nnot-really-actionable information (like most of the above \"Checking ...\" \nare not really interesting for a regular user).\n\n\n", "msg_date": "Tue, 1 Aug 2023 09:45:09 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", 
"msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 31.07.23 20:37, Nathan Bossart wrote:\n> -\tprep_status(\"Checking for incompatible \\\"aclitem\\\" data type in user tables\");\n> +\tprep_status(\"Checking for \\\"aclitem\\\" data type in user tables\");\n\nWhy these changes? I think this is losing precision about what it's doing.\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 09:46:02 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "> On 1 Aug 2023, at 09:45, Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 28.07.23 01:51, Nathan Bossart wrote:\n\n>> This information can be used to better understand where the time is going\n>> and to validate future improvements.\n> \n> But who would use that, other than, you know, you, right now?\n> \n> I think the pg_upgrade output is already too full with not-really-actionable information (like most of the above \"Checking ...\" are not really interesting for a regular user).\n\nMaybe if made opt-in with a --debug option, or even a compiler option for\nenabling only in specialized debugging builds?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 09:58:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Tue, Aug 01, 2023 at 09:46:02AM +0200, Peter Eisentraut wrote:\n> On 31.07.23 20:37, Nathan Bossart wrote:\n>> -\tprep_status(\"Checking for incompatible \\\"aclitem\\\" data type in user tables\");\n>> +\tprep_status(\"Checking for \\\"aclitem\\\" data type in user tables\");\n> \n> Why these changes? 
I think this is losing precision about what it's doing.\n\nThe message is too long, so there's no space between it and the \"ok\"\nmessage:\n\n\tChecking for incompatible \"aclitem\" data type in user tablesok\n\nInstead of altering the messages, we could bump MESSAGE_WIDTH from 60 to\n62 or 64. Do you prefer that approach? (BTW this probably needs to be\nback-patched to v16.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 08:45:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Tue, Aug 01, 2023 at 09:58:24AM +0200, Daniel Gustafsson wrote:\n>> On 1 Aug 2023, at 09:45, Peter Eisentraut <peter@eisentraut.org> wrote:\n>> On 28.07.23 01:51, Nathan Bossart wrote:\n> \n>>> This information can be used to better understand where the time is going\n>>> and to validate future improvements.\n>> \n>> But who would use that, other than, you know, you, right now?\n\nOne of the main purposes of this thread is to gauge interest. I'm hoping\nthere are other developers who are interested in reducing\npg_upgrade-related downtime, and it seemed like it'd be nice to have\nbuilt-in functionality for measuring the step times instead of requiring\nfolks to apply this patch every time. And I think users might also be\ninterested in this information, if for no other reason than to help us\npinpoint which steps are taking longer for various workloads.\n\n>> I think the pg_upgrade output is already too full with not-really-actionable information (like most of the above \"Checking ...\" are not really interesting for a regular user).\n\nPerhaps. But IMO it's nice to know that it's doing things and making\nprogress, even if you don't understand exactly what it's doing all the\ntime. 
That being said, I wouldn't be opposed to hiding some of this output\nbehind a --verbose or --debug option or consolidating some of the steps\ninto fewer status messages.\n\n> Maybe if made opt-in with a --debug option, or even a compiler option for\n> enabling only in specialized debugging builds?\n\nI'm totally okay with making the timing information an opt-in feature.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 09:00:06 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Tue, Aug 1, 2023 at 9:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> On 1 Aug 2023, at 09:45, Peter Eisentraut <peter@eisentraut.org> wrote:\n> >> But who would use that, other than, you know, you, right now?\n\n/me raises hand\n\nOr at least, me back when I was hacking on pg_upgrade performance.\nThis, or something like it, would have been fantastic.\n\n> >> I think the pg_upgrade output is already too full with not-really-actionable information (like most of the above \"Checking ...\" are not really interesting for a regular user).\n>\n> Perhaps. But IMO it's nice to know that it's doing things and making\n> progress, even if you don't understand exactly what it's doing all the\n> time.\n\n+1. One of our findings at $prevjob was that some users *really* want\nsome indication, anything at all, that things are progressing and\naren't stuck. There was a lot of anxiety around upgrades.\n\n(There are probably _better_ ways to indicate progress than the\ncurrent step divisions... 
But even poor progress indicators seemed to\nlower blood pressures, IIRC.)\n\n> That being said, I wouldn't be opposed to hiding some of this output\n> behind a --verbose or --debug option or consolidating some of the steps\n> into fewer status messages.\n\nI agree that millisecond-level timing should probably be opt-in.\n\n--Jacob\n\n\n", "msg_date": "Tue, 1 Aug 2023 11:28:27 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 01.08.23 18:00, Nathan Bossart wrote:\n> One of the main purposes of this thread is to gauge interest. I'm hoping\n> there are other developers who are interested in reducing\n> pg_upgrade-related downtime, and it seemed like it'd be nice to have\n> built-in functionality for measuring the step times instead of requiring\n> folks to apply this patch every time. And I think users might also be\n> interested in this information, if for no other reason than to help us\n> pinpoint which steps are taking longer for various workloads.\n\nIf it's just for developers and expert users, perhaps existing \nshell-level functionality like \"pg_upgrade | ts\" would suffice?\n\n\n", "msg_date": "Wed, 2 Aug 2023 09:14:06 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 01.08.23 17:45, Nathan Bossart wrote:\n> The message is too long, so there's no space between it and the \"ok\"\n> message:\n> \n> \tChecking for incompatible \"aclitem\" data type in user tablesok\n> \n> Instead of altering the messages, we could bump MESSAGE_WIDTH from 60 to\n> 62 or 64. Do you prefer that approach?\n\nI think we should change the output format to be more like initdb, like\n\n Doing something ... 
ok\n\nwithout horizontally aligning all the \"ok\"s.\n\n\n\n", "msg_date": "Wed, 2 Aug 2023 09:15:27 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Wed, Aug 2, 2023 at 12:45 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 01.08.23 17:45, Nathan Bossart wrote:\n> > The message is too long, so there's no space between it and the \"ok\"\n> > message:\n> >\n> > Checking for incompatible \"aclitem\" data type in user tablesok\n> >\n> > Instead of altering the messages, we could bump MESSAGE_WIDTH from 60 to\n> > 62 or 64. Do you prefer that approach?\n>\n> I think we should change the output format to be more like initdb, like\n>\n> Doing something ... ok\n>\n> without horizontally aligning all the \"ok\"s.\n\nWhile this looks simple, we might end up with a lot of diff and\nchanges after removing MESSAGE_WIDTH. There's a significant part of\npg_upgrade code that deals with MESSAGE_WIDTH. I don't think it's\nworth the effort. Therefore, I'd prefer the simplest possible fix -\nchange the message to '\"Checking for \\\"aclitem\\\" data type in user\ntables\". It may be an overkill, but we can consider adding\nAssert(sizeof(message) < MESSAGE_WIDTH) in progress report functions\nto not encourage new messages to end up in the same formatting issue.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 13:02:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Wed, Aug 2, 2023 at 12:44 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 01.08.23 18:00, Nathan Bossart wrote:\n> > One of the main purposes of this thread is to gauge interest. 
I'm hoping\n> > there are other developers who are interested in reducing\n> > pg_upgrade-related downtime, and it seemed like it'd be nice to have\n> > built-in functionality for measuring the step times instead of requiring\n> > folks to apply this patch every time. And I think users might also be\n> > interested in this information, if for no other reason than to help us\n> > pinpoint which steps are taking longer for various workloads.\n>\n> If it's just for developers and expert users, perhaps existing\n> shell-level functionality like \"pg_upgrade | ts\" would suffice?\n\nInteresting. I had to install moreutils package to get ts. And,\nsomething like ts command may or may not be available on all\nplatforms. Moreover, the ts command gives me the timestamps for each\nof the messages printed, so an extra step is required to calculate the\ntime taken for an operation. I think it'd be better if pg_upgrade can\ncalculate the time taken for each operation, and I'm okay if it is an\nopt-in feature with --verbose option.\n\n[1]\nAug 02 07:44:17 Sync data directory to disk ok\nAug 02 07:44:17 Creating script to delete old cluster ok\nAug 02 07:44:17 Checking for extension updates ok\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:00:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 02.08.23 10:30, Bharath Rupireddy wrote:\n> Moreover, the ts command gives me the timestamps for each\n> of the messages printed, so an extra step is required to calculate the\n> time taken for an operation.\n\nThere is \"ts -i\" for that.\n\n\n\n", "msg_date": "Wed, 2 Aug 2023 11:22:26 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { 
"msg_contents": "On Wed, Aug 02, 2023 at 09:14:06AM +0200, Peter Eisentraut wrote:\n> On 01.08.23 18:00, Nathan Bossart wrote:\n>> One of the main purposes of this thread is to gauge interest. I'm hoping\n>> there are other developers who are interested in reducing\n>> pg_upgrade-related downtime, and it seemed like it'd be nice to have\n>> built-in functionality for measuring the step times instead of requiring\n>> folks to apply this patch every time. And I think users might also be\n>> interested in this information, if for no other reason than to help us\n>> pinpoint which steps are taking longer for various workloads.\n> \n> If it's just for developers and expert users, perhaps existing shell-level\n> functionality like \"pg_upgrade | ts\" would suffice?\n\nSure, we could just leave it at that unless anyone sees a reason to bake it\nin.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 08:59:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Wed, Aug 02, 2023 at 01:02:53PM +0530, Bharath Rupireddy wrote:\n> On Wed, Aug 2, 2023 at 12:45 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> I think we should change the output format to be more like initdb, like\n>>\n>> Doing something ... ok\n>>\n>> without horizontally aligning all the \"ok\"s.\n> \n> While this looks simple, we might end up with a lot of diff and\n> changes after removing MESSAGE_WIDTH. There's a significant part of\n> pg_upgrade code that deals with MESSAGE_WIDTH. I don't think it's\n> worth the effort. Therefore, I'd prefer the simplest possible fix -\n> change the message to '\"Checking for \\\"aclitem\\\" data type in user\n> tables\". 
It may be an overkill, but we can consider adding\n> Assert(sizeof(message) < MESSAGE_WIDTH) in progress report functions\n> to not encourage new messages to end up in the same formatting issue.\n\nI don't think it's that difficult. IMO the bigger question is whether we\nwant to back-patch such a change to v16 at this point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 09:09:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Wed, Aug 02, 2023 at 09:09:14AM -0700, Nathan Bossart wrote:\n> On Wed, Aug 02, 2023 at 01:02:53PM +0530, Bharath Rupireddy wrote:\n>> On Wed, Aug 2, 2023 at 12:45 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>> I think we should change the output format to be more like initdb, like\n>>>\n>>> Doing something ... ok\n>>>\n>>> without horizontally aligning all the \"ok\"s.\n>> \n>> While this looks simple, we might end up with a lot of diff and\n>> changes after removing MESSAGE_WIDTH. There's a significant part of\n>> pg_upgrade code that deals with MESSAGE_WIDTH. I don't think it's\n>> worth the effort. Therefore, I'd prefer the simplest possible fix -\n>> change the message to '\"Checking for \\\"aclitem\\\" data type in user\n>> tables\". It may be an overkill, but we can consider adding\n>> Assert(sizeof(message) < MESSAGE_WIDTH) in progress report functions\n>> to not encourage new messages to end up in the same formatting issue.\n> \n> I don't think it's that difficult. 
IMO the bigger question is whether we\n> want to back-patch such a change to v16 at this point.\n\nHere is a work-in-progress patch that seems to get things pretty close to\nwhat Peter is suggesting.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 2 Aug 2023 10:39:39 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On 01.08.23 17:45, Nathan Bossart wrote:\n> On Tue, Aug 01, 2023 at 09:46:02AM +0200, Peter Eisentraut wrote:\n>> On 31.07.23 20:37, Nathan Bossart wrote:\n>>> -\tprep_status(\"Checking for incompatible \\\"aclitem\\\" data type in user tables\");\n>>> +\tprep_status(\"Checking for \\\"aclitem\\\" data type in user tables\");\n>>\n>> Why these changes? I think this is losing precision about what it's doing.\n> \n> The message is too long, so there's no space between it and the \"ok\"\n> message:\n> \n> \tChecking for incompatible \"aclitem\" data type in user tablesok\n> \n> Instead of altering the messages, we could bump MESSAGE_WIDTH from 60 to\n> 62 or 64. Do you prefer that approach? (BTW this probably needs to be\n> back-patched to v16.)\n\nLet's change MESSAGE_WIDTH to 62 in v16, and then pursue the larger \nrestructuring leisurely.\n\n\n\n", "msg_date": "Tue, 22 Aug 2023 11:49:33 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Tue, Aug 22, 2023 at 11:49:33AM +0200, Peter Eisentraut wrote:\n> Let's change MESSAGE_WIDTH to 62 in v16, and then pursue the larger\n> restructuring leisurely.\n\nSounds good. 
I plan to commit this within the next couple of days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 22 Aug 2023 07:06:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Tue, Aug 22, 2023 at 07:06:23AM -0700, Nathan Bossart wrote:\n> On Tue, Aug 22, 2023 at 11:49:33AM +0200, Peter Eisentraut wrote:\n>> Let's change MESSAGE_WIDTH to 62 in v16, and then pursue the larger\n>> restructuring leisurely.\n> \n> Sounds good. I plan to commit this within the next couple of days.\n\nHere's the patch. I'm going to run a couple of tests before committing,\nbut I don't think anything else is required.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 23 Aug 2023 09:35:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" }, { "msg_contents": "On Wed, Aug 23, 2023 at 09:35:20AM -0700, Nathan Bossart wrote:\n> On Tue, Aug 22, 2023 at 07:06:23AM -0700, Nathan Bossart wrote:\n>> On Tue, Aug 22, 2023 at 11:49:33AM +0200, Peter Eisentraut wrote:\n>>> Let's change MESSAGE_WIDTH to 62 in v16, and then pursue the larger\n>>> restructuring leisurely.\n>> \n>> Sounds good. I plan to commit this within the next couple of days.\n> \n> Here's the patch. I'm going to run a couple of tests before committing,\n> but I don't think anything else is required.\n\ncommitted\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 24 Aug 2023 10:23:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add timing information to pg_upgrade" } ]
[ { "msg_contents": "Hi all,\n\nWhile digging into the LWLock code, I have noticed that\nGetNamedLWLockTranche() assumes that its caller should hold the LWLock\nAddinShmemInitLock to prevent any kind of race conditions when\ninitializing shmem areas, but we don't make sure that's the case.\n\nThe sole caller of GetNamedLWLockTranche() in core respects that, but\nout-of-core code may not be that careful. How about adding an\nassertion based on LWLockHeldByMeInMode() to make sure that the\nShmemInit lock is taken when this routine is called, like in the\nattached?\n\nThanks,\n--\nMichael", "msg_date": "Fri, 28 Jul 2023 12:24:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Add assertion on held AddinShmemInitLock in GetNamedLWLockTranche()" }, { "msg_contents": "On Fri, Jul 28, 2023 at 8:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> While digging into the LWLock code, I have noticed that\n> GetNamedLWLockTranche() assumes that its caller should hold the LWLock\n> AddinShmemInitLock to prevent any kind of race conditions when\n> initializing shmem areas, but we don't make sure that's the case.\n>\n> The sole caller of GetNamedLWLockTranche() in core respects that, but\n> out-of-core code may not be that careful. How about adding an\n> assertion based on LWLockHeldByMeInMode() to make sure that the\n> ShmemInit lock is taken when this routine is called, like in the\n> attached?\n\n+1 for asserting that the caller holds AddinShmemInitLock to prevent\nreads while someone else is adding their LWLocks.\n\n+ Assert(LWLockHeldByMeInMode(AddinShmemInitLock, LW_EXCLUSIVE));\n\nWhy to block multiple readers (if at all there exists any), with\nLWLockHeldByMeInMode(..., LW_EXCLUSIVE)? 
I think\nAssert(LWLockHeldByMe(AddinShmemInitLock)); suffices in\nGetNamedLWLockTranche.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 11:07:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add assertion on held AddinShmemInitLock in\n GetNamedLWLockTranche()" }, { "msg_contents": "On Fri, Jul 28, 2023 at 11:07:49AM +0530, Bharath Rupireddy wrote:\n> Why to block multiple readers (if at all there exists any), with\n> LWLockHeldByMeInMode(..., LW_EXCLUSIVE)? I think\n> Assert(LWLockHeldByMe(AddinShmemInitLock)); suffices in\n> GetNamedLWLockTranche.\n\nI am not sure to follow this argument. Why would it make sense for\nanybody to use that in shared mode? We document that the exclusive\nmode is required because we retrieve the lock position in shmem that\nan extension needs to know about when it initializes its own shmem\nstate, and that cannot be done safely without LW_EXCLUSIVE. Any\nextension I can find in https://codesearch.debian.net/ does that as\nwell.\n--\nMichael", "msg_date": "Thu, 10 Aug 2023 19:52:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Add assertion on held AddinShmemInitLock in\n GetNamedLWLockTranche()" } ]
[ { "msg_contents": "Hi,\n\nWhile hacking on the pruning and page-level freezing code, I am a bit\nconfused by the test guarding eager freezing [1]:\n\n /*\n * Freeze the page when heap_prepare_freeze_tuple indicates that at least\n * one XID/MXID from before FreezeLimit/MultiXactCutoff is present. Also\n * freeze when pruning generated an FPI, if doing so means that we set the\n * page all-frozen afterwards (might not happen until final heap pass).\n */\n if (pagefrz.freeze_required || tuples_frozen == 0 ||\n (prunestate->all_visible && prunestate->all_frozen &&\n fpi_before != pgWalUsage.wal_fpi))\n {\n\nI'm trying to understand the condition fpi_before !=\npgWalUsage.wal_fpi -- don't eager freeze if pruning emitted an FPI.\n\nIs this test meant to guard against unnecessary freezing or to avoid\nfreezing when the cost is too high? That is, are we trying to\ndetermine how likely it is that the page has been recently modified\nand avoid eager freezing when it would be pointless (because the page\nwill soon be modified again)? Or are we trying to determine how likely\nthe freeze record is to emit an FPI and avoid eager freezing when it\nisn't worth the cost? Or something else?\n\nThe commit message says:\n\n> Also teach VACUUM to trigger page-level freezing whenever it detects\n> that heap pruning generated an FPI. We'll have already written a large\n> amount of WAL just to do that much, so it's very likely a good idea to\n> get freezing out of the way for the page early.\n\nAnd I found the thread where it was discussed [2]. Several possible\nexplanations are mentioned in the thread.\n\nBut, the final rationale is still not clear to me.
Could we add a\ncomment above the if condition specifying both:\na) what the test is a proxy for\nb) the intended outcome (when do we expect to eager freeze)\nAnd perhaps we could even describe a scenario where this heuristic is effective?\n\n- Melanie\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/access/heap/vacuumlazy.c#L1802\n[2] https://www.postgresql.org/message-id/flat/CAH2-Wzm_%3DmrWO%2BkUAJbR_gM_6RzpwVA8n8e4nh3dJGHdw_urew%40mail.gmail.com#c5aae6ea65e07de92a24a35445473448\n\n\n", "msg_date": "Fri, 28 Jul 2023 11:12:48 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 11:13 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> if (pagefrz.freeze_required || tuples_frozen == 0 ||\n> (prunestate->all_visible && prunestate->all_frozen &&\n> fpi_before != pgWalUsage.wal_fpi))\n> {\n>\n> I'm trying to understand the condition fpi_before !=\n> pgWalUsage.wal_fpi -- don't eager freeze if pruning emitted an FPI.\n\nYou mean \"prunestate->all_visible && prunestate->all_frozen\", which is\na condition of applying FPI-based eager freezing, but not traditional\nlazy freezing?\n\nObviously, the immediate impact of that is that the FPI trigger\ncondition is not met unless we know for sure that the page will be\nmarked all-visible and all-frozen in the visibility map afterwards. A\npage that's eligible to become all-visible will also be seen as\neligible to become all-frozen in the vast majority of cases, but there\nare some rare and obscure cases involving MultiXacts that must be\nconsidered here. 
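For readers following the thread, the v16 trigger condition quoted at the top can be restated as an annotated boolean sketch. The names below mirror the C fields, but this is an illustration only; the comments paraphrase the explanations given in this thread, not the source's own comment.

```python
# Python restatement of the C freezing trigger quoted at the start of
# the thread.  Illustrative only; comments paraphrase the discussion.

def trigger_freeze(freeze_required, tuples_frozen,
                   all_visible, all_frozen, prune_emitted_fpi):
    return (
        # Traditional lazy freezing: at least one XID/MXID is older
        # than FreezeLimit/MultiXactCutoff, so freezing is mandatory.
        freeze_required
        # Nothing on the page needs freezing, so "freezing" is free.
        or tuples_frozen == 0
        # Eager, FPI-triggered freezing: only taken when the page will
        # be marked all-visible and all-frozen in the VM afterwards,
        # and pruning already paid for a full-page image
        # (fpi_before != pgWalUsage.wal_fpi in the original).
        or (all_visible and all_frozen and prune_emitted_fpi)
    )

assert trigger_freeze(True, 5, False, False, False) is True
assert trigger_freeze(False, 3, True, True, True) is True
assert trigger_freeze(False, 3, True, True, False) is False
```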
There is little point in freezing early unless we\nhave at least some chance of not having to freeze the same page again\nin the future (ideally forever).\n\nThere is generally no point in freezing most of the tuples on a page\nwhen there'll still be one or more not-yet-eligible unfrozen tuples\nthat get left behind -- we might as well not bother with freezing at\nall when we see a page where that'll be the outcome from freezing.\nHowever, there is (once again) a need to account for rare and extreme\ncases -- it still needs to be possible to do that. Specifically, we\nmay be forced to freeze a page that's left with some remaining\nunfrozen tuples when VACUUM is fundamentally incapable of freezing\nthem due to its OldestXmin/removable cutoff being too old. That can\nhappen when VACUUM needs to freeze according to the traditional\nage-based settings, and yet the OldestXmin/removable cutoff gets held\nback (by a leaked replication slot or whatever).\n\n(Actually, VACUUM FREEZE will freeze only a subset of the\ntuples from some heap pages far more often. VACUUM FREEZE seems like a\nbad design to me, though -- it uses the most aggressive possible XID\ncutoff for freezing when it should probably hold off on freezing those\nindividual pages where we determine that it makes little sense. We\nneed to focus more on physical pages and their costs, and less on XID\ncutoffs.)\n\n> Is this test meant to guard against unnecessary freezing or to avoid\n> freezing when the cost is too high? That is, are we trying to\n> determine how likely it is that the page has been recently modified\n> and avoid eager freezing when it would be pointless (because the page\n> will soon be modified again)?\n\nSort of.
This cost of freezing over time is weirdly nonlinear, so it's\nhard to give a simple answer.\n\nThe justification for the FPI trigger optimization is that FPIs are\noverwhelmingly the cost that really matters when it comes to freezing\n(and vacuuming in general) -- so we might as well make the best out of\na bad situation when pruning happens to get an FPI. There can easily\nbe a 10x or more cost difference (measured in total WAL volume)\nbetween freezing without an FPI and freezing with an FPI.\n\n> Or are we trying to determine how likely\n> the freeze record is to emit an FPI and avoid eager freezing when it\n> isn't worth the cost?\n\nNo, that's not something that we're doing right now (we just talked\nabout doing something like that). In 16 VACUUM just \"makes the best\nout of a bad situation\" when an FPI was already required during\npruning. We have already \"paid for the privilege\" of writing some WAL\nfor the page at that point, so it's reasonable to not squander a\nwindow of opportunity to avoid future FPIs in future VACUUM\noperations, by freezing early.\n\nWe're \"taking a chance\" on being able to get freezing out of the way\nearly when an FPI triggers freezing. It's not guaranteed to work out\nin each individual case, of course, but even if we assume it's fairly\nunlikely to work out (which is very pessimistic) it's still very\nlikely a good deal.\n\nThis strategy (the 16 strategy of freezing eagerly because we already\ngot an FPI) seems safer than a strategy involving freezing eagerly\nbecause we won't get an FPI as a result. If for no other reason than\nthis: with the approach in 16 we already know for sure that we'll have\nwritten an FPI anyway. It's hard to imagine somebody being okay with\nthe FPIs, but not being okay with the other extra WAL.\n\n> But, the final rationale is still not clear to me. 
Could we add a\n> comment above the if condition specifying both:\n> a) what the test is a proxy for\n> b) the intended outcome (when do we expect to eager freeze)\n> And perhaps we could even describe a scenario where this heuristic is effective?\n\nThere are lots of scenarios where it'll be effective. I agree that\nthere is a need to document this stuff a lot better. I have a pending\ndoc patch that overhauls the user-facing docs in this area.\n\nMy latest draft is here:\n\nhttps://postgr.es/m/CAH2-Wz=UUJz+MMb1AxFzz-HDA=1t1Fx_KmrdOVoPZXkpA-TFwg@mail.gmail.com\nhttps://www.postgresql.org/message-id/attachment/146830/routine-vacuuming.html\n\nI've been meaning to get back to that, but other commitments have kept\nme from it. I'd welcome your involvement with that effort.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jul 2023 14:59:53 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jul 28, 2023 at 11:13 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > if (pagefrz.freeze_required || tuples_frozen == 0 ||\n> > (prunestate->all_visible && prunestate->all_frozen &&\n> > fpi_before != pgWalUsage.wal_fpi))\n> > {\n> >\n> > I'm trying to understand the condition fpi_before !=\n> > pgWalUsage.wal_fpi -- don't eager freeze if pruning emitted an FPI.\n>\n> You mean \"prunestate->all_visible && prunestate->all_frozen\", which is\n> a condition of applying FPI-based eager freezing, but not traditional\n> lazy freezing?\n\nAh no, a very key word in my sentence was off:\nI meant \"don't eager freeze *unless* pruning emitted an FPI\"\nI was asking about the FPI-trigger optimization specifically.\nI'm clear on the all_frozen/all_visible criteria.\n\n> > Is this test meant to guard against unnecessary freezing or to avoid\n> > freezing when the cost is too high? 
That is, are we trying to\n> > determine how likely it is that the page has been recently modified\n> > and avoid eager freezing when it would be pointless (because the page\n> > will soon be modified again)?\n>\n> Sort of. This cost of freezing over time is weirdly nonlinear, so it's\n> hard to give a simple answer.\n>\n> The justification for the FPI trigger optimization is that FPIs are\n> overwhelmingly the cost that really matters when it comes to freezing\n> (and vacuuming in general) -- so we might as well make the best out of\n> a bad situation when pruning happens to get an FPI. There can easily\n> be a 10x or more cost difference (measured in total WAL volume)\n> between freezing without an FPI and freezing with an FPI.\n...\n> In 16 VACUUM just \"makes the best\n> out of a bad situation\" when an FPI was already required during\n> pruning. We have already \"paid for the privilege\" of writing some WAL\n> for the page at that point, so it's reasonable to not squander a\n> window of opportunity to avoid future FPIs in future VACUUM\n> operations, by freezing early.\n>\n> We're \"taking a chance\" on being able to get freezing out of the way\n> early when an FPI triggers freezing. It's not guaranteed to work out\n> in each individual case, of course, but even if we assume it's fairly\n> unlikely to work out (which is very pessimistic) it's still very\n> likely a good deal.\n>\n> This strategy (the 16 strategy of freezing eagerly because we already\n> got an FPI) seems safer than a strategy involving freezing eagerly\n> because we won't get an FPI as a result. If for no other reason than\n> this: with the approach in 16 we already know for sure that we'll have\n> written an FPI anyway. It's hard to imagine somebody being okay with\n> the FPIs, but not being okay with the other extra WAL.\n\nI see. I don't have an opinion on the \"best of a bad situation\"\nargument. 
Though, I think it is worth amending the comment in the code\nto include this explanation.\n\nBut, ISTM that there should also be some independent heuristic to\ndetermine whether or not it makes sense to freeze the page. That could\nbe related to whether or not it will be cheap to do so (in which case\nwe can check if we will have to emit an FPI as part of the freeze\nrecord) or it could be related to whether or not the freezing is\nlikely to be pointless (we are likely to update the page again soon).\n\nIt sounds like it was discussed before, but I'd be interested in\nrevisiting it and happy to test out various ideas.\n\n- Melanie\n\n\n", "msg_date": "Fri, 28 Jul 2023 15:27:11 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 3:27 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I see. I don't have an opinion on the \"best of a bad situation\"\n> argument. Though, I think it is worth amending the comment in the code\n> to include this explanation.\n\nI think that the user-facing docs should be completely overhauled to\nmake that clear, and reasonably accessible. It's hard to talk about in\ncode comments because it's something that can only be effective over\ntime, across multiple VACUUM operations.\n\n> But, ISTM that there should also be some independent heuristic to\n> determine whether or not it makes sense to freeze the page. That could\n> be related to whether or not it will be cheap to do so (in which case\n> we can check if we will have to emit an FPI as part of the freeze\n> record) or it could be related to whether or not the freezing is\n> likely to be pointless (we are likely to update the page again soon).\n\nIt sounds like you're interested in adding additional criteria to\ntrigger page-level freezing. 
That's something that the current\nstructure anticipates.\n\nTo give a slightly contrived example: it would be very easy to add\nanother condition, where (say) we call random() to determine whether\nor not to freeze the page. You'd very likely want to gate this new\ntrigger criteria in the same way as the FPI criteria (i.e. only do it\nwhen \"prunestate->all_visible && prunestate->all_frozen\" hold). We've\ndecoupled the decision to freeze from what it means to execute\nfreezing itself.\n\nHaving additional trigger criteria makes a lot of sense. I'm sure that\nit makes sense to add at least one more, and it seems possible that\nadding several more makes sense. Obviously, that will have to be new\nwork targeting 17, though. I made a decision to stop working on\nVACUUM, though, so I'm afraid I won't be able to offer much help with\nany of this. (Happy to give more background information, though.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jul 2023 15:45:01 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Or are we trying to determine how likely\n> > the freeze record is to emit an FPI and avoid eager freezing when it\n> > isn't worth the cost?\n>\n> No, that's not something that we're doing right now (we just talked\n> about doing something like that). In 16 VACUUM just \"makes the best\n> out of a bad situation\" when an FPI was already required during\n> pruning. We have already \"paid for the privilege\" of writing some WAL\n> for the page at that point, so it's reasonable to not squander a\n> window of opportunity to avoid future FPIs in future VACUUM\n> operations, by freezing early.\n\nThis doesn't make sense to me. 
It's true that if the pruning record\nemitted an FPI, then the freezing record probably won't need to do so\neither, unless by considerable ill-fortune the redo pointer was moved\nin the tiny window between pruning and freezing. But isn't that also\ntrue if the pruning record *didn't* emit an FPI? In that case,\nthe pruning record wasn't the first WAL-logged modification to the\npage during the then-current checkpoint cycle, and some earlier\nmodification already did it. But in that case also the freezing won't\nneed to emit a new FPI, except for the same identical case where the\nredo pointer moves at just the wrong time.\n\nPut differently, I can't see any reason to care whether pruning\nemitted an FPI or not. Either way, it's very unlikely that freezing\nneeds to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 16:19:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-07-28 16:19:47 -0400, Robert Haas wrote:\n> On Fri, Jul 28, 2023 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Or are we trying to determine how likely\n> > > the freeze record is to emit an FPI and avoid eager freezing when it\n> > > isn't worth the cost?\n> >\n> > No, that's not something that we're doing right now (we just talked\n> > about doing something like that). In 16 VACUUM just \"makes the best\n> > out of a bad situation\" when an FPI was already required during\n> > pruning. We have already \"paid for the privilege\" of writing some WAL\n> > for the page at that point, so it's reasonable to not squander a\n> > window of opportunity to avoid future FPIs in future VACUUM\n> > operations, by freezing early.\n> \n> This doesn't make sense to me.
It's true that if the pruning record\n> emitted an FPI, then the freezing record probably won't need to do so\n> either, unless by considerable ill-fortune the redo pointer was moved\n> in the tiny window between pruning and freezing. But isn't that also\n> true that if the pruning record *didn't* emit an FPI? In that case,\n> the pruning record wasn't the first WAL-logged modification to the\n> page during the then-current checkpoint cycle, and some earlier\n> modification already did it. But in that case also the freezing won't\n> need to emit a new FPI, except for the same identical case where the\n> redo pointer moves at just the wrong time.\n> \n> Put differently, I can't see any reason to care whether pruning\n> emitted an FPI or not. Either way, it's very unlikely that freezing\n> needs to do so.\n\n+many\n\nUsing FPIs having been generated during pruning as part of the logic to decide\nwhether to freeze makes no sense to me. We likely need some heuristic here,\nbut I suspect it has to look quite different than the FPI condition.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Fri, 28 Jul 2023 13:30:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > Put differently, I can't see any reason to care whether pruning\n> > emitted an FPI or not. Either way, it's very unlikely that freezing\n> > needs to do so.\n>\n> +many\n>\n> Using FPIs having been generated during pruning as part of the logic to decide\n> whether to freeze makes no sense to me. We likely need some heuristic here,\n> but I suspect it has to look quite different than the FPI condition.\n\nI think that it depends on how you frame it. It makes zero sense that\nthat's the only thing that can do something like this at all. 
It makes\nsome sense as an incremental improvement, since there is no way that\nwe can lose by too much, even if you make arbitrarily pessimistic\nassumptions. In other words, you won't be able to come up with an\nadversarial benchmark where this loses by too much.\n\nAs I said, I'm no longer interested in working on VACUUM (though I do\nhope that Melanie or somebody else picks up where I left off). I have\nnothing to say about any new work in this area. If you want me to do\nsomething in the scope of the work on 16, as a release management\ntask, please be clear about what that is.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jul 2023 16:49:04 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> Using FPIs having been generated during pruning as part of the logic to decide\n> whether to freeze makes no sense to me. We likely need some heuristic here,\n> but I suspect it has to look quite different than the FPI condition.\n\nWhy do we need a heuristic at all?\n\nWhat we're talking about here, AIUI, is under what circumstance we\nshould eagerly freeze a page that can be fully frozen, leaving no\ntuples that vacuum ever needs to worry about again except if the page\nis further modified. Freezing in such circumstances seems highly\nlikely to work out in our favor. The only argument against it is that\nthe page might soon be modified again, in which case the effort of\nfreezing would be mostly wasted. But I'm not sure whether that's\nreally a valid rationale, because the win from not having to vacuum\nthe page again if it isn't modified seems pretty big, and emitting a\nfreeze record for a page we've just pruned doesn't seem that\nexpensive. If it is a valid rationale, my first thought would be to\nmake the heuristic based on vacuum_freeze_min_age. 
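Robert's checkpoint-cycle argument earlier in the thread can be checked with a toy model. This is illustrative Python only, and it ignores the rare redo-pointer race he mentions: a page's first WAL-logged modification in a given checkpoint cycle carries the full-page image, and every later record in the same cycle does not, no matter which record happened to pay for the FPI.

```python
# Toy model of FPI emission per checkpoint cycle: the first WAL record
# touching a page after a checkpoint carries the FPI, later ones do not.
# Illustrative only; ignores redo-pointer movement mid-sequence.

class Page:
    def __init__(self):
        self.last_fpi_cycle = None  # checkpoint cycle of the last FPI

    def log_modification(self, cycle):
        """Log one WAL record; return True iff it must carry an FPI."""
        if self.last_fpi_cycle != cycle:
            self.last_fpi_cycle = cycle
            return True
        return False

# Case 1: pruning is the first touch this cycle.
p = Page()
assert p.log_modification(cycle=7) is True    # prune record: FPI
assert p.log_modification(cycle=7) is False   # freeze record: no FPI

# Case 2: an earlier update already paid for the FPI, so pruning emits
# none -- and the freeze record that follows still needs none either.
q = Page()
q.log_modification(cycle=7)                   # earlier DML: FPI
assert q.log_modification(cycle=7) is False   # prune record: no FPI
assert q.log_modification(cycle=7) is False   # freeze record: no FPI
```

In both cases the freeze record avoids an FPI, which is the crux of the objection: pruning's FPI tells us little about what freezing will cost.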
XID age isn't a\nparticularly great proxy for whether the page is still actively being\nmodified, but at least it has tradition going for it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 16:49:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 4:49 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Jul 28, 2023 at 4:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Put differently, I can't see any reason to care whether pruning\n> > > emitted an FPI or not. Either way, it's very unlikely that freezing\n> > > needs to do so.\n> >\n> > +many\n> >\n> > Using FPIs having been generated during pruning as part of the logic to decide\n> > whether to freeze makes no sense to me. We likely need some heuristic here,\n> > but I suspect it has to look quite different than the FPI condition.\n>\n> I think that it depends on how you frame it. It makes zero sense that\n> that's the only thing that can do something like this at all. It makes\n> some sense as an incremental improvement, since there is no way that\n> we can lose by too much, even if you make arbitrarily pessimistic\n> assumptions. In other words, you won't be able to come up with an\n> adversarial benchmark where this loses by too much.\n\nWhen you were working on this, what were the downsides of only having the\ncriteria that 1) page would be all_visible/all_frozen and 2) we did prune\nsomething (i.e. no consideration of FPIs)?\n\n> As I said, I'm no longer interested in working on VACUUM (though I do\n> hope that Melanie or somebody else picks up where I left off). I have\n> nothing to say about any new work in this area. 
If you want me to do\n> something in the scope of the work on 16, as a release management\n> task, please be clear about what that is.\n\nI'm only talking about this in the context of future work for 17.\n\n- Melanie\n\n\n", "msg_date": "Fri, 28 Jul 2023 17:03:02 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 5:03 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> When you were working on this, what were the downsides of only having the\n> criteria that 1) page would be all_visible/all_frozen and 2) we did prune\n> something (i.e. no consideration of FPIs)?\n\nConditioning freezing on \"all_visible && all_frozen\" seems likely to\nalways be a good idea when it comes to any sort of speculative trigger\ncriteria. Most of the wins in this area will come from avoiding future\nFPIs, and you can't hope to do that unless you freeze everything on\nthe page. The cost of freezing when \"all_visible && all_frozen\" does\nnot hold may be quite low, but the benefits will also be very low.\nAlso, the way that recovery conflicts are generated for freeze records\nsomewhat depends on this right now -- though you could probably fix\nthat if you had to.\n\nNote that we *don't* really limit ourselves to cases where an FPI was\ngenerated from pruning, exactly -- even on 16. With page-level\nchecksums we can also generate an FPI just to set a hint bit. That\ntriggers freezing too, even on insert-only tables (assuming checksums\nare enabled). I expect that that will be an important condition for\ntriggering freezing in practice.\n\nYour question about 2 seems equivalent to \"why not just always\nfreeze?\". I don't think that that's a bad question -- quite the\nopposite. Even trying to give an answer to this question would amount\nto getting involved in new work on VACUUM, though. 
So I don't think I\ncan help you here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jul 2023 17:30:58 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Jul 28, 2023 at 3:27 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Fri, Jul 28, 2023 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Is this test meant to guard against unnecessary freezing or to avoid\n> > > freezing when the cost is too high? That is, are we trying to\n> > > determine how likely it is that the page has been recently modified\n> > > and avoid eager freezing when it would be pointless (because the page\n> > > will soon be modified again)?\n> >\n> > Sort of. This cost of freezing over time is weirdly nonlinear, so it's\n> > hard to give a simple answer.\n> >\n> > The justification for the FPI trigger optimization is that FPIs are\n> > overwhelmingly the cost that really matters when it comes to freezing\n> > (and vacuuming in general) -- so we might as well make the best out of\n> > a bad situation when pruning happens to get an FPI. There can easily\n> > be a 10x or more cost difference (measured in total WAL volume)\n> > between freezing without an FPI and freezing with an FPI.\n> ...\n> > In 16 VACUUM just \"makes the best\n> > out of a bad situation\" when an FPI was already required during\n> > pruning. We have already \"paid for the privilege\" of writing some WAL\n> > for the page at that point, so it's reasonable to not squander a\n> > window of opportunity to avoid future FPIs in future VACUUM\n> > operations, by freezing early.\n> >\n> > We're \"taking a chance\" on being able to get freezing out of the way\n> > early when an FPI triggers freezing. 
It's not guaranteed to work out\n> > in each individual case, of course, but even if we assume it's fairly\n> > unlikely to work out (which is very pessimistic) it's still very\n> > likely a good deal.\n> >\n> > This strategy (the 16 strategy of freezing eagerly because we already\n> > got an FPI) seems safer than a strategy involving freezing eagerly\n> > because we won't get an FPI as a result. If for no other reason than\n> > this: with the approach in 16 we already know for sure that we'll have\n> > written an FPI anyway. It's hard to imagine somebody being okay with\n> > the FPIs, but not being okay with the other extra WAL.\n>\n> I see. I don't have an opinion on the \"best of a bad situation\"\n> argument. Though, I think it is worth amending the comment in the code\n> to include this explanation.\n>\n> But, ISTM that there should also be some independent heuristic to\n> determine whether or not it makes sense to freeze the page. That could\n> be related to whether or not it will be cheap to do so (in which case\n> we can check if we will have to emit an FPI as part of the freeze\n> record) or it could be related to whether or not the freezing is\n> likely to be pointless (we are likely to update the page again soon).\n>\n> It sounds like it was discussed before, but I'd be interested in\n> revisiting it and happy to test out various ideas.\n\nHi, in service of \"testing various ideas\", I've benchmarked the\nheuristics proposed in this thread, as well a few others that Andres and\nI considered, for determining whether or not to opportunistically freeze\na page during vacuum. Note that this heuristic would be in addition to\nthe existing criterion that we only opportunistically freeze pages that\ncan be subsequently set all frozen in the visibility map.\n\nI believe that there are two goals that should dictate whether or not we\nshould perform opportunistic freezing:\n\n 1. Is it cheap? 
For example, if the buffer is already dirty, then no\n write amplification occurs, since it must be written out anyway.\n Freezing is also less expensive if we can do it without emitting an\n FPI.\n\n 2. Will it be effective; that is, will it stay frozen?\n Opportunistically freezing a page that will immediately be modified is\n a waste.\n\nThe current heuristic on master meets neither of these goals: it freezes\na page if pruning emitted an FPI for it. This doesn't evaluate whether\nor not freezing itself would be cheap, but rather attempts to hide\nfreezing behind an expensive operation. Furthermore, it often fails to\nfreeze cold data and may indiscriminately freeze hot data.\n\nFor the second goal, I've relied on past data to predict future\nbehavior, so I tried several criteria to estimate the likelihood that a\npage will not be imminently modified. What was most effective was\nAndres' suggestion of comparing the page LSN to the insert LSN at the\nend of the last vacuum of that table; this approximates whether the page\nhas been recently modified, which is a decent proxy for whether it'll be\nmodified in the future. To do this, we need to save that insert LSN\nsomewhere. In the attached WIP patch, I saved it in the table stats, for\nnow -- knowing that those are not crash-safe.\n\nOther discarded heuristic ideas included comparing the next transaction\nID at the end of the vacuum of a relation to the visibility cutoff xid\nin the page -- but that wasn't effective for freezing data from bulk\nloads.\n\nThe algorithms I evaluated all attempt to satisfy goal (1) by freezing\nonly if the buffer is already dirty and also by considering whether or\nnot an FPI would be emitted. Those that attempt to satisfy goal (2) do\nso using the LSN comparison with varying thresholds. I ended up testing\nmaster and the following five alternatives:\n\n 1. Dirty buffer, no FPI required\n\n 2. 
Dirty buffer, no FPI required OR page LSN is older than 10% of the\n LSNs since the last vacuum of the table.\n\n 3. Dirty buffer, no FPI required AND page LSN is older than 10% of the\n LSNs since the last vacuum of the table.\n\n 4. Dirty buffer, no FPI required OR page LSN is older than 33% of the\n LSNs since the last vacuum of the table.\n\n 5. Dirty buffer, no FPI required AND page LSN is older than 33% of the\n LSNs since the last vacuum of the table.\n\nI ran several benchmarks and compared these based on two metrics:\n\n 1. Percentage of pages frozen at the end of the benchmark. For\n workloads with a working set much smaller than their data set, this\n metric should be high. Conversely, for workloads with a working set\n that is more-or-less their entire data set, this metric should be low.\n\n 2. Page freezes per page frozen at the end of the benchmark. This\n should be as low as possible. Since each benchmark starts with zero\n frozen pages, a metric of 1 indicates that each frozen page was frozen\n only once.\n\nSome of the benchmarks were run for a fixed number of transactions.\nThose that do not specify a number of transactions were run for 45\nminutes. I collected metrics from OS utilities and Postgres statistics\nto examine throughput, FPIs emitted, and many other performance metrics\nover the course of the benchmark. Below, I've summarized the results and\npointed out any notable negative performance impacts.\n\nOverall, the two algorithms that seem to strike the best balance are (4)\nand (5).\n\nThe OR condition in algorithm (4), as you might expect, results in\nfreezing much more of the cold data in workloads with a smaller working\nset than data set. It tends to cause more FPIs to be emitted, since the\nage criteria alone can trigger freezing -- even when the freeze record\nwould contain an FPI. Though, these FPIs may happen when the cold data\nis eventually frozen in a wraparound vacuum. 
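To make the candidate criteria concrete, here is an illustrative sketch of variants (4) and (5) above, with plain integers standing in for LSNs. "Older than 33% of the LSNs since the last vacuum" is read here as: the page LSN falls within the oldest third of the WAL span between the end of the last vacuum and the current insert LSN. This models the decision rule only; it is not the WIP patch.

```python
# Sketch of candidate criteria (4) and (5): cheap (dirty buffer, no
# FPI needed) OR/AND cold (page LSN in the oldest third of WAL written
# since the last vacuum).  Integer stand-ins for LSNs; illustrative.

def eager_freeze(page_dirty, freeze_needs_fpi,
                 page_lsn, last_vacuum_lsn, insert_lsn,
                 cutoff=1 / 3, require_both=False):
    cheap = page_dirty and not freeze_needs_fpi        # no extra write or FPI
    span = insert_lsn - last_vacuum_lsn
    cold = page_lsn <= last_vacuum_lsn + cutoff * span  # likely not hot
    if require_both:
        return cheap and cold   # variant (5): conservative AND
    return cheap or cold        # variant (4): freezes cold data at FPI cost

# Cold page, clean buffer, freeze record would need an FPI:
assert eager_freeze(False, True, page_lsn=50,
                    last_vacuum_lsn=100, insert_lsn=400)       # (4) freezes
assert not eager_freeze(False, True, page_lsn=50,
                        last_vacuum_lsn=100, insert_lsn=400,
                        require_both=True)                     # (5) declines
```

The two assertions show exactly where the variants diverge, matching the benchmark observation that the OR form freezes far more cold data while emitting more FPIs.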
That is, the absence of\nFPIs tracks the absence of frozen data quite closely.\n\nFor a workload in which only 10% of the data is being updated, master\noften freezes the wrong data and still emits FPIs. For this kind of\nworkload, algorithms 1 and 2 also did not perform well and emitted more\nFPIs than the other algorithms. In my examples, I found that the 10%\ncutoff froze data too aggressively -- freezing data that was modified\nsoon after.\n\nThe workloads I benchmarked were as follows:\n\nA. gaussian tpcb-like + select-only:\n pgbench scale 600 (DB < SB) with indexes on updated columns\n WL 1: 2 clients tpcb-like pgbench with gaussian access distribution\n WL 2: 16 clients select-only pgbench\n freezing more is better\n\nB. tpcb-like\n pgbench scale 600, 16 clients\n freezing less is better\n\nC. shifting hot set, autovacuum off, vacuum at end\n 1 client inserting a single row and updating an indexed column of that\n row. 2 million transactions.\n freezing more is better\n\nD. shifting hot set, delete old data\n 10 MB table with index on updated column\n WL 1: 1 client inserting one row, updating that row\n WL 2: 1 client, rate limited to 0.02 TPS, delete old data keeping\n table at 5000 rows\n freezing less is better\n\nE. shifting hot set, delete new data, access new data\n WL 1: 1 client cycling through 2 inserts of a single row each,\n updating an indexed column of the most recently inserted row, and then\n deleting that row\n WL 2: rate-limited to 0.2 TPS, selecting data from the last 300\n seconds\n freezing more is better\n\nF. many COPYs, autovacuum on\n 1 client, copying a total of 50 GB of data, autovacuum will run ~2x,\n ~2 checkpoints\n freezing more is better\n\nG. several COPYs, autovacuum off, vacuum at end\n 1 client, copying a total of 10 GB of data, no checkpoints\n freezing more is better\n\nH. 
append only table, autovacuum off, vacuum at end\n 1 client, inserting a single row at a time for 3 million transactions\n freezing more is better\n\nI. work queue\n 1 client, inserting a row, sleep for half a second, delete that row\n for 5000 transactions.\n freezing less is better\n\nNote that the page freezes/page frozen metric can be misleading when the\noverall number of page freezes is low. This is the case for master. It\ndid few page freezes but those tended to be pages that were modified\nagain soon after.\n\n\n Page Freezes/Page Frozen (less is better)\n\n| | Master | (1) | (2) | (3) | (4) | (5) |\n|---+--------+---------+---------+---------+---------+---------|\n| A | 28.50 | 3.89 | 1.08 | 1.15 | 1.10 | 1.10 |\n| B | 1.00 | 1.06 | 1.65 | 1.03 | 1.59 | 1.00 |\n| C | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n| D | 2.00 | 5199.15 | 5276.85 | 4830.45 | 5234.55 | 2193.55 |\n| E | 7.90 | 3.21 | 2.73 | 2.70 | 2.69 | 2.43 |\n| F | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n| G | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n| H | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n| I | N/A | 42.00 | 42.00 | N/A | 41.00 | N/A |\n\n\n % Frozen at end of run\n\n| | Master | (1) | (2) | (3) | (4) | (5) |\n|---+--------+-----+-----+-----+------+-----|\n| A | 0 | 1 | 99 | 0 | 81 | 0 |\n| B | 71 | 96 | 99 | 3 | 98 | 2 |\n| C | 0 | 9 | 100 | 6 | 92 | 5 |\n| D | 0 | 1 | 1 | 1 | 1 | 1 |\n| E | 0 | 63 | 100 | 68 | 100 | 67 |\n| F | 0 | 5 | 14 | 6 | 14 | 5 |\n| G | 0 | 100 | 100 | 92 | 100 | 67 |\n| H | 0 | 11 | 100 | 9 | 86 | 5 |\n| I | 0 | 100 | 100 | 0 | 100 | 0 |\n\n\nI can provide exact pgbench commands, configurations, or detailed\nresults upon request.\n\n- Melanie\n\n\n", "msg_date": "Mon, 28 Aug 2023 10:00:14 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 10:00 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> For the 
second goal, I've relied on past data to predict future\n> behavior, so I tried several criteria to estimate the likelihood that a\n> page will not be imminently modified. What was most effective was\n> Andres' suggestion of comparing the page LSN to the insert LSN at the\n> end of the last vacuum of that table; this approximates whether the page\n> has been recently modified, which is a decent proxy for whether it'll be\n> modified in the future. To do this, we need to save that insert LSN\n> somewhere. In the attached WIP patch, I saved it in the table stats, for\n> now -- knowing that those are not crash-safe.\n\nI wonder what the real plan here is for where to store this. It's not\nobvious that we need this to be crash-safe; it's after all only for\nuse by a heuristic, and there's no actual breakage if the heuristic\ngoes wrong. At the same time, it doesn't exactly feel like a\nstatistic.\n\nThen there's the question of whether it's the right metric. My first\nreaction is to think that it sounds pretty good. One thing I really\nlike about it is that if the table is being vacuumed frequently, then\nwe freeze less aggressively, and if the table is being vacuumed\ninfrequently, then we freeze more aggressively. That seems like a very\ndesirable property. It also seems broadly good that this metric\ndoesn't really care about reads. If there are a lot of reads on the\nsystem, or no reads at all, it doesn't really change the chances that\na certain page is going to be written again soon, and since reads\ndon't change the insert LSN, here again it seems to do the right\nthing. I'm a little less clear about whether it's good that it doesn't\nreally depend on wall-clock time. Certainly, that's desirable from the\npoint of view of not wanting to have to measure wall-clock time in\nplaces where we otherwise wouldn't have to, which tends to end up\nbeing expensive. 
However, if I were making all of my freezing\ndecisions manually, I might be more freeze-positive on a low-velocity\nsystem where writes are more stretched out across time than on a\nhigh-velocity system where we're blasting through the LSN space at a\nhigher rate. But maybe that's not a very important consideration, and\nI don't know what we'd do about it anyway.\n\n> Page Freezes/Page Frozen (less is better)\n>\n> | | Master | (1) | (2) | (3) | (4) | (5) |\n> |---+--------+---------+---------+---------+---------+---------|\n> | A | 28.50 | 3.89 | 1.08 | 1.15 | 1.10 | 1.10 |\n> | B | 1.00 | 1.06 | 1.65 | 1.03 | 1.59 | 1.00 |\n> | C | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> | D | 2.00 | 5199.15 | 5276.85 | 4830.45 | 5234.55 | 2193.55 |\n> | E | 7.90 | 3.21 | 2.73 | 2.70 | 2.69 | 2.43 |\n> | F | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> | G | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> | H | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> | I | N/A | 42.00 | 42.00 | N/A | 41.00 | N/A |\n\nHmm. I would say that the interesting rows here are A, D, and I, with\nrows C and E deserving honorable mention. In row A, master is bad. In\nrow D, your algorithms are all bad, really bad. I don't quite\nunderstand how it can be that bad, actually. 
Row I looks bad for\nalgorithms 1, 2, and 4: they freeze pages because it looks cheap, but\nthe work doesn't really pay off.\n\n> % Frozen at end of run\n>\n> | | Master | (1) | (2) | (3) | (4) | (5) |\n> |---+--------+-----+-----+-----+------+-----+\n> | A | 0 | 1 | 99 | 0 | 81 | 0 |\n> | B | 71 | 96 | 99 | 3 | 98 | 2 |\n> | C | 0 | 9 | 100 | 6 | 92 | 5 |\n> | D | 0 | 1 | 1 | 1 | 1 | 1 |\n> | E | 0 | 63 | 100 | 68 | 100 | 67 |\n> | F | 0 | 5 | 14 | 6 | 14 | 5 |\n> | G | 0 | 100 | 100 | 92 | 100 | 67 |\n> | H | 0 | 11 | 100 | 9 | 86 | 5 |\n> | I | 0 | 100 | 100 | 0 | 100 | 0 |\n\nSo all of the algorithms here, but especially 1, 2, and 4, freeze a\nlot more often than master.\n\nIf I understand correctly, we'd like to see small numbers for B, D,\nand I, and large numbers for the other workloads. None of the\nalgorithms seem to achieve that. (3) and (5) seem like they always\nbehave as well or better than master, but they produce small numbers\nfor A, C, F, and H. (1), (2), and (4) regress B and I relative to\nmaster but do better than (3) and (5) on A, C, and the latter two also\non E.\n\nB is such an important benchmarking workload that I'd be loathe to\nregress it, so if I had to pick on the basis of this data, my vote\nwould be (3) or (5), provided whatever is happening with (D) in the\nprevious metric is not as bad as it looks. What's your reason for\npreferring (4) and (5) over (2) and (3)? I'm not clear that these\nnumbers give us much of an idea whether 10% or 33% or something else\nis better in general.\n\nTo be honest, having now spent more time looking at the benchmark\nresults, I feel slightly less good about using the LSN as a metric\nhere. These results, to me, clearly suggest that some recency metric\nis needed. But they don't seem to make a compelling case for this\nparticular one. Neither do they make a case that this is the wrong\none. 
They just don't seem that revealing either way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Aug 2023 12:26:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 7:00 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I believe that there are two goals that should dictate whether or not we\n> should perform opportunistic freezing:\n>\n> 1. Is it cheap? For example, if the buffer is already dirty, then no\n> write amplification occurs, since it must be written out anyway.\n> Freezing is also less expensive if we can do it without emitting an\n> FPI.\n>\n> 2. Will it be effective; that is, will it stay frozen?\n> Opportunistically freezing a page that will immediately be modified is\n> a waste.\n>\n> The current heuristic on master meets neither of these goals: it freezes\n> a page if pruning emitted an FPI for it. This doesn't evaluate whether\n> or not freezing itself would be cheap, but rather attempts to hide\n> freezing behind an expensive operation.\n\nThe goal is to minimize the number of FPIs over time, in general.\nThat's hardly the same thing as hiding the cost.\n\n> Furthermore, it often fails to\n> freeze cold data and may indiscriminately freeze hot data.\n\nYou need to account for the cost of not freezing, too. Controlling the\noverall freeze debt at the level of whole tables (and the level of the\nwhole system) is very important. In fact it's probably the single most\nimportant thing. A good model might end up increasing the cost of\nfreezing on a simple quantitative basis, while making life much better\noverall. Huge balloon payments for freezing are currently a huge\nproblem.\n\nPerformance stability might come with a cost in some cases. 
There\nisn't necessarily anything wrong with that at all.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Aug 2023 09:48:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 7:00 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> To do this, we need to save that insert LSN\n> somewhere. In the attached WIP patch, I saved it in the table stats, for\n> now -- knowing that those are not crash-safe.\n\nWhat patch? You didn't actually attach one.\n\n> Other discarded heuristic ideas included comparing the next transaction\n> ID at the end of the vacuum of a relation to the visibility cutoff xid\n> in the page -- but that wasn't effective for freezing data from bulk\n> loads.\n\nI've long emphasized the importance of designs that just try to avoid\ndisaster. With that in mind, I wonder: have you thought about\nconditioning page freezing on whether or not there are already some\nfrozen tuples on the page? You could perhaps give some weight to\nwhether or not the page already has at least one or two preexisting\nfrozen tuples when deciding on whether to freeze it once again now.\nYou'd be more eager about freezing pages that have no frozen tuples\nwhatsoever, compared to what you'd do with an otherwise equivalent\npage that has no unfrozen tuples.\n\nSmall mistakes are inevitable here. They have to be okay. What's not\nokay is a small mistake that effectively becomes a big mistake because\nit gets repeated across each VACUUM operation, again and again,\nceaselessly. 
You can probably be quite aggressive about freezing, to\ngood effect -- provided you can be sure that the downside when it\nturns out to have been a bad idea is self-limiting, in whatever way.\nMaking more small mistakes might actually be a good thing --\nespecially if it can dramatically lower the chances of ever making any\nreally big mistakes.\n\nVACUUM is not a passive observer of the system. It's just another\ncomponent in the system. So what VACUUM sees in the table depends in\nno small part on what previous VACUUMs actually did. It follows that\nVACUUM should be concerned about how what it does might either help or\nhinder future VACUUMs. My preexisting frozen tuples suggestion is\nreally just an example of how that principle might be applied. Many\nvariations on the same general idea are possible.\n\nThere are already various extremely weird feedback loops where what\nVACUUM did last time affects what it'll do this time. These are\nvicious circles. So ISTM that there is a lot to be said for disrupting\nthose vicious circles, and even deliberately engineering heuristics\nthat have the potential to create virtuous circles for the right\nworkload.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Aug 2023 12:09:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 3:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I've long emphasized the importance of designs that just try to avoid\n> disaster. With that in mind, I wonder: have you thought about\n> conditioning page freezing on whether or not there are already some\n> frozen tuples on the page? 
You could perhaps give some weight to\n> whether or not the page already has at least one or two preexisting\n> frozen tuples when deciding on whether to freeze it once again now.\n> You'd be more eager about freezing pages that have no frozen tuples\n> whatsoever, compared to what you'd do with an otherwise equivalent\n> page that has no unfrozen tuples.\n\nI'm sure this could be implemented, but it's unclear to me why you\nwould expect it to perform well. Freezing a page that has no frozen\ntuples yet isn't cheaper than freezing one that does, so for this idea\nto be a win, the presence of frozen tuples on the page would have to\nbe a signal that the page is likely to be modified again in the near\nfuture. In general, I don't see any reason why we should expect that\nto be the case. One could easily construct a workload where it is the\ncase -- for instance, set up one table T1 where 90% of the tuples are\nrepeatedly updated and the other 10% are never touched, and another\ntable T2 that is insert-only. Once frozen, the never-updated tuples in\nT1 become sentinels that we can use to know that the table isn't\ninsert-only. But I don't think that's very interesting: you can\nconstruct a test case like this for any proposed criterion, just by\nstructuring the test workload so that whatever criterion is being\ntested is a perfect predictor of whether the page will be modified\nsoon.\n\nWhat really matters here is finding a criterion that is likely to\nperform well in general, on a test case not known to us beforehand.\nThis isn't an entirely feasible goal, because just as you can\nconstruct a test case where any given criterion performs well, so you\ncan also construct one where any given criterion performs poorly. But\nI think a rule that has a clear theory of operation must be preferable\nto one that doesn't. 
The theory that Melanie and Andres are advancing\nis that a page that has been modified recently (in insert-LSN-time) is\nmore likely to be modified again soon than one that has not i.e. the\nnear future will be like the recent past. I'm not sure what the theory\nbehind the rule you propose here might be; if you articulated it\nsomewhere in your email, I seem to have missed it.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 28 Aug 2023 16:17:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 12:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 28, 2023 at 10:00 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> Then there's the question of whether it's the right metric. My first\n> reaction is to think that it sounds pretty good. One thing I really\n> like about it is that if the table is being vacuumed frequently, then\n> we freeze less aggressively, and if the table is being vacuumed\n> infrequently, then we freeze more aggressively. That seems like a very\n> desirable property. It also seems broadly good that this metric\n> doesn't really care about reads. If there are a lot of reads on the\n> system, or no reads at all, it doesn't really change the chances that\n> a certain page is going to be written again soon, and since reads\n> don't change the insert LSN, here again it seems to do the right\n> thing. I'm a little less clear about whether it's good that it doesn't\n> really depend on wall-clock time. Certainly, that's desirable from the\n> point of view of not wanting to have to measure wall-clock time in\n> places where we otherwise wouldn't have to, which tends to end up\n> being expensive. 
However, if I were making all of my freezing\n> decisions manually, I might be more freeze-positive on a low-velocity\n> system where writes are more stretched out across time than on a\n> high-velocity system where we're blasting through the LSN space at a\n> higher rate. But maybe that's not a very important consideration, and\n> I don't know what we'd do about it anyway.\n\nBy low-velocity, do you mean lower overall TPS? In that case, wouldn't you be\nless likely to run into xid wraparound and thus need less aggressive\nopportunistic freezing?\n\n> > Page Freezes/Page Frozen (less is better)\n> >\n> > | | Master | (1) | (2) | (3) | (4) | (5) |\n> > |---+--------+---------+---------+---------+---------+---------|\n> > | A | 28.50 | 3.89 | 1.08 | 1.15 | 1.10 | 1.10 |\n> > | B | 1.00 | 1.06 | 1.65 | 1.03 | 1.59 | 1.00 |\n> > | C | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> > | D | 2.00 | 5199.15 | 5276.85 | 4830.45 | 5234.55 | 2193.55 |\n> > | E | 7.90 | 3.21 | 2.73 | 2.70 | 2.69 | 2.43 |\n> > | F | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> > | G | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> > | H | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |\n> > | I | N/A | 42.00 | 42.00 | N/A | 41.00 | N/A |\n>\n> Hmm. I would say that the interesting rows here are A, D, and I, with\n> rows C and E deserving honorable mention. In row A, master is bad.\n\nSo, this is where the caveat about absolute number of page freezes\nmatters. In workload A, master only did 57 page freezes (spread across\nthe various pgbench tables). At the end of the run, 2 pages were still\nfrozen.\n\n> In row D, your algorithms are all bad, really bad. I don't quite\n> understand how it can be that bad, actually.\n\nSo, I realize now that this test was poorly designed. I meant it to be a\nworst case scenario, but I think one critical part was wrong. In this\nexample, one client is going at full speed inserting a row and then\nupdating it. 
Then another rate-limited client is deleting old data\nperiodically to keep the table at a constant size. I meant to bulk load\nthe table with enough data that the delete job would have data to delete\nfrom the start. With the default autovacuum settings, over the course of\n45 minutes, I usually saw around 40 autovacuums of the table. Due to the\nrate limiting, the first autovacuum of the table ends up freezing many\npages that are deleted soon after. Thus the total number of page freezes\nis very high.\n\nI will redo benchmarking of workload D and start the table with the\nnumber of rows which the DELETE job seeks to maintain. My back of the\nenvelope math says that this will mean ratios closer to a dozen (and not\n5000).\n\nAlso, I had doubled checkpoint timeout, which likely led master to\nfreeze so few pages (2 total freezes, neither of which were still frozen\nat the end of the run). This is an example where master's overall low\nnumber of page freezes makes it difficult to compare to the alternatives\nusing a ratio.\n\nI didn't initially question the numbers because it seems like freezing\ndata and then deleting it right after would naturally be one of the\nworst cases for opportunistic freezing, but certainly not this bad.\n\n> Row I looks bad for algorithms 1, 2, and 4: they freeze pages because\n> it looks cheap, but the work doesn't really pay off.\n\nYes, the work queue example looks like it is hard to handle.\n\n> > % Frozen at end of run\n> >\n> > | | Master | (1) | (2) | (3) | (4) | (5) |\n> > |---+--------+-----+-----+-----+------+-----+\n> > | A | 0 | 1 | 99 | 0 | 81 | 0 |\n> > | B | 71 | 96 | 99 | 3 | 98 | 2 |\n> > | C | 0 | 9 | 100 | 6 | 92 | 5 |\n> > | D | 0 | 1 | 1 | 1 | 1 | 1 |\n> > | E | 0 | 63 | 100 | 68 | 100 | 67 |\n> > | F | 0 | 5 | 14 | 6 | 14 | 5 |\n> > | G | 0 | 100 | 100 | 92 | 100 | 67 |\n> > | H | 0 | 11 | 100 | 9 | 86 | 5 |\n> > | I | 0 | 100 | 100 | 0 | 100 | 0 |\n>\n> So all of the algorithms here, but especially 1, 2, and 4, 
freeze a\n> lot more often than master.\n>\n> If I understand correctly, we'd like to see small numbers for B, D,\n> and I, and large numbers for the other workloads. None of the\n> algorithms seem to achieve that. (3) and (5) seem like they always\n> behave as well or better than master, but they produce small numbers\n> for A, C, F, and H. (1), (2), and (4) regress B and I relative to\n> master but do better than (3) and (5) on A, C, and the latter two also\n> on E.\n>\n> B is such an important benchmarking workload that I'd be loathe to\n> regress it, so if I had to pick on the basis of this data, my vote\n> would be (3) or (5), provided whatever is happening with (D) in the\n> previous metric is not as bad as it looks. What's your reason for\n> preferring (4) and (5) over (2) and (3)? I'm not clear that these\n> numbers give us much of an idea whether 10% or 33% or something else\n> is better in general.\n\n(1) seems bad to me because it doesn't consider whether or not freezing\nwill be useful -- only if it will be cheap. It froze very little of the\ncold data in a workload where a small percentage of it was being\nmodified (especially workloads A, C, H). And it froze a lot of data in\nworkloads where it was being uniformly modified (workload B).\n\nI suggested (4) and (5) because I think the \"older than 33%\" threshold\nis better than the \"older than 10%\" threshold. I chose both because I am\nstill unclear on our values. Are we willing to freeze more aggressively\nat the expense of emitting more FPIs? As long as it doesn't affect\nthroughput? 
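To make the two cutoffs concrete, here is a rough sketch of how I think of
the criteria -- hypothetical names, plain C, not the actual WIP patch code.
The age test is my reading of "older than 33% of the LSNs since the last
vacuum": the page's LSN must predate the newest third of the WAL written
between the end of the last vacuum of the table and now.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t SketchLsn;     /* stand-in for XLogRecPtr */

/*
 * Age test shared by the criteria: is the page's LSN older than the
 * newest `ratio` fraction (0.10 or 0.33) of the WAL written between
 * the end of the last vacuum of the table and now?
 */
static bool
page_is_cold(SketchLsn page_lsn,
             SketchLsn last_vacuum_end_lsn,  /* saved insert LSN */
             SketchLsn current_insert_lsn,
             double ratio)
{
    SketchLsn span = current_insert_lsn - last_vacuum_end_lsn;
    SketchLsn cutoff = current_insert_lsn - (SketchLsn) (span * ratio);

    return page_lsn < cutoff;
}

/* (4): freeze when the freeze is cheap OR the page looks cold. */
static bool
should_freeze_alg4(bool no_fpi_required, bool cold)
{
    return no_fpi_required || cold;
}

/* (5): freeze only when it is cheap AND the page looks cold. */
static bool
should_freeze_alg5(bool no_fpi_required, bool cold)
{
    return no_fpi_required && cold;
}
```

The OR combination in (4) means the age test alone can trigger freezing
even when the freeze record would contain an FPI; (5) requires both tests
to pass.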
For pretty much all of these workloads, the algorithms which\nfroze based on page modification recency OR FPI required emitted many\nmore FPIs than those which froze based only on page modification\nrecency.\n\nI've attached the WIP patch that I forgot in my previous email.\n\nI'll rerun workload D in a more reasonable way and be back with results.\n\n- Melanie", "msg_date": "Mon, 28 Aug 2023 16:30:15 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 1:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm sure this could be implemented, but it's unclear to me why you\n> would expect it to perform well. Freezing a page that has no frozen\n> tuples yet isn't cheaper than freezing one that does, so for this idea\n> to be a win, the presence of frozen tuples on the page would have to\n> be a signal that the page is likely to be modified again in the near\n> future. In general, I don't see any reason why we should expect that\n> to be the case.\n\nWhat I've described is a scheme for deciding to not freeze where that\nwould usually happen -- a scheme for *vetoing* page freezing -- rather\nthan a scheme for deciding to freeze. On its own, what I suggested\ncannot help at all. It assumes a world in which we're already deciding\nto freeze much more frequently, based on whatever other criteria. It's\nintended to complement something like the LSN scheme.\n\n> What really matters here is finding a criterion that is likely to\n> perform well in general, on a test case not known to us beforehand.\n> This isn't an entirely feasible goal, because just as you can\n> construct a test case where any given criterion performs well, so you\n> can also construct one where any given criterion performs poorly. But\n> I think a rule that has a clear theory of operation must be preferable\n> to one that doesn't. 
The theory that Melanie and Andres are advancing\n> is that a page that has been modified recently (in insert-LSN-time) is\n> more likely to be modified again soon than one that has not i.e. the\n> near future will be like the recent past.\n\nI don't think that it's all that useful on its own. You just cannot\nignore the fact that the choice to not freeze now doesn't necessarily\nmean that you get to rereview that choice in the near future.\nParticularly with large tables, the opportunities to freeze at all are\nfew and far between -- if for no other reason than the general design\nof autovacuum scheduling. Worse still, any unfrozen all-visible pages\ncan just accumulate as all-visible pages, until the next aggressive\nVACUUM happens whenever. How can that not be extremely important?\n\nThat isn't an argument against a scheme that uses LSNs (many kinds of\ninformation might be weighed) -- it's an argument in favor of paying\nattention to the high level cadence of VACUUM. That much seems\nessential. I think that there might well be room for having several\ncomplementary schemes like the LSN scheme. Or one big scheme that\nweighs multiple factors together, if you prefer. That all seems\nbasically reasonable to me.\n\nAdaptive behavior is important with something as complicated as this.\nAdaptive schemes all seem to involve trial and error. The cost of\nfreezing too much is relatively well understood, and can be managed\nsensibly. So we should err in that direction -- a direction that is\nrelatively easy to understand, to notice, and to pull back from having\ngone too far. Putting off freezing for a very long time is a source of\nmuch of the seemingly intractable complexity in this area.\n\nAnother way of addressing that is getting rid of aggressive VACUUM as\na concept. 
But I'm not going to revisit that topic now, or likely\never.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Aug 2023 14:05:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 5:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 28, 2023 at 1:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I'm sure this could be implemented, but it's unclear to me why you\n> > would expect it to perform well. Freezing a page that has no frozen\n> > tuples yet isn't cheaper than freezing one that does, so for this idea\n> > to be a win, the presence of frozen tuples on the page would have to\n> > be a signal that the page is likely to be modified again in the near\n> > future. In general, I don't see any reason why we should expect that\n> > to be the case.\n>\n> What I've described in a scheme for deciding to not freeze where that\n> would usually happen -- a scheme for *vetoing* page freezing -- rather\n> than a scheme for deciding to freeze. On its own, what I suggested\n> cannot help at all. It assumes a world in which we're already deciding\n> to freeze much more frequently, based on whatever other criteria. It's\n> intended to complement something like the LSN scheme.\n\nI like the direction of this idea. It could avoid repeatedly freezing\na page that is being modified over and over. I tried it out with a\nshort-running version of workload I (approximating a work queue) --\nand it prevents unnecessary freezing.\n\nThe problem with this criterion is that if you freeze a page and then\never modify it again -- even once, vacuum will not be allowed to\nfreeze it again until it is forced to. 
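In sketch form, the strict version of the veto amounts to this
(hypothetical struct and names, not the real vacuum data structures):

```c
#include <stdbool.h>

/* Hypothetical per-page summary; not the real vacuum data structures. */
typedef struct PageVacSummary
{
    int nfrozen;    /* preexisting frozen tuples on the page */
    int nunfrozen;  /* tuples that would need freezing now */
} PageVacSummary;

/*
 * Strict form of the veto: freeze eagerly only on pages with no frozen
 * tuples yet.  A single preexisting frozen tuple blocks eager freezing
 * until an aggressive vacuum forces the issue.
 */
static bool
veto_allows_eager_freeze(const PageVacSummary *page)
{
    if (page->nunfrozen == 0)
        return false;           /* fully frozen already; nothing to do */

    return page->nfrozen == 0;
}
```

Once a page gains a single frozen tuple, this returns false for good,
which is exactly the failure mode described above.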
Perhaps we could use a very\nstrict LSN cutoff for pages which contain any frozen tuples -- for\nexample: only freeze a page containing a mix of frozen and unfrozen\ntuples if it has not been modified since before the last vacuum of the\nrelation ended.\n\n- Melanie\n\n\n", "msg_date": "Mon, 28 Aug 2023 19:15:39 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 4:15 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Mon, Aug 28, 2023 at 5:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > What I've described in a scheme for deciding to not freeze where that\n> > would usually happen -- a scheme for *vetoing* page freezing -- rather\n> > than a scheme for deciding to freeze. On its own, what I suggested\n> > cannot help at all. It assumes a world in which we're already deciding\n> > to freeze much more frequently, based on whatever other criteria. It's\n> > intended to complement something like the LSN scheme.\n>\n> I like the direction of this idea. It could avoid repeatedly freezing\n> a page that is being modified over and over. I tried it out with a\n> short-running version of workload I (approximating a work queue) --\n> and it prevents unnecessary freezing.\n\nI see a lot of value in it as an enabler of freezing at the earliest\nopportunity, which is usually the way to go.\n\n> The problem with this criterion is that if you freeze a page and then\n> ever modify it again -- even once, vacuum will not be allowed to\n> freeze it again until it is forced to.\n\nRight. Like you, I was thinking of something that dampened VACUUM's\nnewfound enthusiasm for freezing, in whatever way -- not a discrete\nrule. A mechanism with a sense of proportion about what it meant for\nthe page to look a certain way. The strength of the signal would\nperhaps be highest (i.e. 
most discouraging of further freezing) on a\npage that has only a few relatively recent unfrozen tuples. XID age\ncould actually be useful here!\n\n> Perhaps we could use a very\n> strict LSN cutoff for pages which contain any frozen tuples -- for\n> example: only freeze a page containing a mix of frozen and unfrozen\n> tuples if it has not been modified since before the last vacuum of the\n> relation ended.\n\nYou might also need to account for things like interactions with the\nFSM. If the new unfrozen tuple packs the page to the brim, and the new\nrow is from an insert, then maybe the mechanism shouldn't dampen our\nenthusiasm for freezing the page at all. Similarly, on a page that has\nno frozen tuples at all just yet, it would make sense for the\nalgorithm to be most enthusiastic about freezing those pages that can\nfit no more tuples. Perhaps we'd be willing to accept the cost of an\nextra FPI with such a page, for example.\n\nIt will take time to validate these sorts of ideas thoroughly, of\ncourse. I agree with what you said upthread about it also being a\nquestion of values and priorities.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 28 Aug 2023 18:43:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 4:30 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> By low-velocity, do you mean lower overall TPS? In that case, wouldn't you be\n> less likely to run into xid wraparound and thus need less aggressive\n> opportunistic freezing?\n\nYes. But it also means that we've got slack capacity to do extra work\nnow without really hurting anything. If we can leverage that capacity\nto reduce the pain of future bulk operations, that seems good. When\nresources are tight, doing speculative work immediately becomes less\nappealing. 
It's pretty hard to take such things into account, though.\nI was just mentioning it.\n\n> So, this is where the caveat about absolute number of page freezes\n> matters. In algorithm A, master only did 57 page freezes (spread across\n> the various pgbench tables). At the end of the run, 2 pages were still\n> frozen.\n\nI'm increasingly feeling that it's hard to make sense of the ratio.\nMaybe show the number of pages frozen, the number that are frozen at\nthe end, and the number of pages in the database at the end of the run\nas three separate values.\n\n> (1) seems bad to me because it doesn't consider whether or not freezing\n> will be useful -- only if it will be cheap. It froze very little of the\n> cold data in a workload where a small percentage of it was being\n> modified (especially workloads A, C, H). And it froze a lot of data in\n> workloads where it was being uniformly modified (workload B).\n\nSure, but if the cost of being wrong is low, you can be wrong a lot\nand still be pretty happy. It's unclear to me to what extent we should\ngamble on making only inexpensive mistakes and to what extent we\nshould gamble on making only infrequent mistakes, but they're both\nvalid strategies.\n\n> I suggested (4) and (5) because I think the \"older than 33%\" threshold\n> is better than the \"older than 10%\" threshold. I chose both because I am\n> still unclear on our values. Are we willing to freeze more aggressively\n> at the expense of emitting more FPIs? As long as it doesn't affect\n> throughput? For pretty much all of these workloads, the algorithms which\n> froze based on page modification recency OR FPI required emitted many\n> more FPIs than those which froze based only on page modification\n> recency.\n\nLet's assume for a moment that the rate at which the insert LSN is\nadvancing is roughly constant over time, so that it serves as a good\nproxy for wall-clock time. 
Consider four tables A, B, C, and D that\nare, respectively, vacuumed once per minute, once per hour, once per\nday, and once per week. With a 33% threshold, pages in table A will be\nfrozen if they haven't been modified in 20 seconds, pages in table B\nwill be frozen if they haven't been modified in 20 minutes, pages in\ntable C will be frozen if they haven't been modified in 8 hours, and\npages in table D will be frozen if they haven't been modified in 2\ndays, 8 hours. My intuition is that this feels awfully aggressive for\nA and awfully passive for D.\n\nTo expand on that: apparently D doesn't actually get much write\nactivity, else there would be more vacuuming happening. So it's very\nlikely that pages in table D are going to get checkpointed and evicted\nbefore they get modified again. Freezing them therefore seems like a\ngood bet: it's a lot cheaper to freeze those pages when they're\nalready in shared_buffers and dirty than it is if we have to read and\nwrite them specifically for freezing. It's less obvious that what\nwe're doing in table A is wrong, and it could be exactly right, but\nworkloads where a row is modified, a human thinks about something\n(e.g. whether to complete the proposed purchase), and then the same\nrow is modified again are not uncommon, and human thinking times can\neasily exceed 20 seconds. On the other hand, workloads where a row is\nmodified, a computer thinks about something, and then the row is\nmodified again are also quite common, and computer thinking times can\neasily be less than 20 seconds. It feels like a toss-up whether we get\nit right. For this kind of table, I suspect we'd be happier freezing\npages that are about to be evicted or about to be written as part of a\ncheckpoint rather than freezing pages opportunistically in vacuum.\n\nMaybe that's something we need to think harder about. 
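Spelling out that A/B/C/D arithmetic as a quick sketch (illustrative Python only, applying the 1/3 rule to the four hypothetical vacuum intervals above; nothing here is actual server code):

```python
# Hypothetical sketch: "freeze a page only if it went unmodified for at
# least 1/3 of the table's vacuum interval", assuming the insert LSN
# advances at a constant rate so LSN age maps directly onto wall-clock age.
VACUUM_INTERVAL_S = {"A": 60, "B": 3600, "C": 86400, "D": 604800}

def freeze_age_threshold_s(vacuum_interval_s):
    """Minimum idle age, in seconds, before a page gets frozen."""
    return vacuum_interval_s / 3

for table, interval_s in VACUUM_INTERVAL_S.items():
    print(f"table {table}: freeze pages idle for "
          f"{freeze_age_threshold_s(interval_s):.0f}+ seconds")
```

That reproduces the 20-second, 20-minute, 8-hour, and 56-hour (2 days, 8 hours) figures.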
If we froze\ndirty pages that wouldn't need a new FPI just before evicting them,\nand just before they would be written out for a checkpoint, under what\ncircumstances would we still want vacuum to opportunistically freeze?\nI think the answer might be \"only when it's very cheap.\" If it's very\ncheap to freeze now, it's appealing to gamble on doing it before a\ncheckpoint comes along and makes the same operation require an extra\nFPI. But if it isn't too cheap, then why not just wait and see what\nhappens? If the buffer gets modified again before it gets written out,\nthen freezing immediately is a waste and freeze-on-evict is better. If\nit doesn't, freezing immediately and freeze-on-evict are the same\nprice.\n\nI'm hand-waving a bit here because maybe freeze-on-evict is subject to\nsome conditions and freezing-immediately is more unconditional or\nsomething like that. But I think the basic point is valid, namely,\nsometimes it might be better to defer the decision to the last point\nin time at which we can reasonably make it, and by focusing on what\nvacuum (or even HOT pruning) does, we're pulling that decision forward\nin time, which is good if it makes it cheaper by piggybacking on a\nprevious FPI, but not good if it means that we're guessing whether the\npage will be accessed again soon when we could just as well wait and\nsee whether that happens or not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Aug 2023 10:22:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Tue, Aug 29, 2023 at 10:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Let's assume for a moment that the rate at which the insert LSN is\n> advancing is roughly constant over time, so that it serves as a good\n> proxy for wall-clock time. 
Consider four tables A, B, C, and D that\n> are, respectively, vacuumed once per minute, once per hour, once per\n> day, and once per week. With a 33% threshold, pages in table A will be\n> frozen if they haven't been modified in 20 seconds, page in table B\n> will be frozen if they haven't been modified in 20 minutes, pages in\n> table C will be frozen if they haven't been modified in 8 hours, and\n> pages in table D will be frozen if they haven't been modified in 2\n> days, 8 hours. My intuition is that this feels awfully aggressive for\n> A and awfully passive for D.\n>\n> [ discussion of freeze-on-evict ]\n\nAnother way of thinking about this is: instead of replacing this\nheuristic with a complicated freeze-on-evict system, maybe we just\nneed to adjust the heuristic. Maybe using an LSN is a good idea, but\nmaybe the LSN of the last table vacuum isn't the right one to be\nusing, or maybe it shouldn't be the only one that we use. For\ninstance, what about using the redo LSN of the last checkpoint, or the\ncheckpoint before the last checkpoint, or something like that?\nSomebody might find that unprincipled, but it seems to me that the\ncheckpoint cycle has a lot to do with whether or not opportunistic\nfreezing makes sense. If a page is likely to be modified again before\na checkpoint forces it to be written, then freezing it is likely a\nwaste. If it's likely to be written out of shared_buffers before it's\nmodified again, then freezing it now is a pretty good bet. A given\npage could be evicted from shared_buffers, and thus written, sooner\nthan the next checkpoint, but if it's dirty now, it definitely won't\nbe written any later than the next checkpoint. By looking at the LSN\nof a page that I'm about to modify just before I modify it, I can make\nsome kind of a guess as to whether this is a page that is being\nmodified more or less than once per checkpoint cycle and adjust\nfreezing behavior accordingly.\n\nOne could also use a hybrid of the two values e.g. 
normally use the\ninsert LSN of the last VACUUM, but if that's newer than the redo LSN\nof the last checkpoint, then use the latter instead, to avoid doing\ntoo much freezing of pages that may have been quiescent for only a few\ntens of seconds. I don't know if that's better or worse. As I think\nabout it, I realize that I don't really know why Andres suggested a\nlast-vacuum-LSN-based heuristic in the first place. Before, I wrote of\nthis that \"One thing I really like about it is that if the table is\nbeing vacuumed frequently, then we freeze less aggressively, and if\nthe table is being vacuumed infrequently, then we freeze more\naggressively.\" But actually, I don't think it does that. If the table\nis being vacuumed frequently, then the last-vacuum-LSN will be newer,\nwhich means we'll freeze *more* aggressively. And I'm not sure why we\nwant to do that. If the table is being vacuumed a lot, it's probably\nalso being modified a lot, which suggests that we ought to be more\ncautious about freezing, rather than the reverse.\n\nJust spitballing here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Sep 2023 15:34:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 1, 2023 at 12:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> If the table\n> is being vacuumed frequently, then the last-vacuum-LSN will be newer,\n> which means we'll freeze *more* aggressively.\n\nSort of like how if you set autovacuum_vacuum_scale_factor low enough,\nwith standard pgbench, with heap fill factor tuned, autovacuum will\nnever think that the table doesn't need to be vacuumed. It will\ncontinually see enough dead heap-only tuples to get another autovacuum\neach time. 
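For reference, the trigger here is just a linear threshold -- sketched below in Python with an invented steady-state dead-tuple count, not the actual autovacuum.c logic -- which is why a small change in the scale factor can flip the behavior completely:

```python
# Sketch of the autovacuum trigger arithmetic from the docs: vacuum a
# table once its dead-tuple count exceeds
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
# (defaults 50 and 0.2). The 30k steady-state figure below is made up.
def needs_autovacuum(n_dead_tup, reltuples,
                     scale_factor=0.2, base_threshold=50):
    return n_dead_tup > base_threshold + scale_factor * reltuples

# A 1M-row table holding ~30k dead heap-only tuples at steady state:
print(needs_autovacuum(30_000, 1_000_000, scale_factor=0.02))  # True
print(needs_autovacuum(30_000, 1_000_000, scale_factor=0.04))  # False
```

If HOT pruning keeps the dead-tuple count pinned between the two thresholds, the lower setting fires every time and the higher one never does.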
Though there won't be any LP_DEAD items at any point --\nregardless of when VACUUM actually runs.\n\nWhen I reported this a couple of years ago, I noticed that autovacuum\nwould spin whenever I set autovacuum_vacuum_scale_factor to 0.02. But\nautovacuum would *never* run (outside of antiwraparound autovacuums)\nwhen it was set just a little higher (perhaps 0.03 or 0.04). So there\nwas some inflection point at which its behavior totally changed.\n\n> And I'm not sure why we\n> want to do that. If the table is being vacuumed a lot, it's probably\n> also being modified a lot, which suggests that we ought to be more\n> cautious about freezing, rather than the reverse.\n\nWhy wouldn't it be both things at the same time, for the same table?\n\nWhy not also avoid setting pages all-visible? The WAL records aren't\ntoo much smaller than most freeze records these days -- 64 bytes on\nmost systems. I realize that the rules for FPIs are a bit different\nwhen page-level checksums aren't enabled, but fundamentally it's the\nsame situation. No?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Sep 2023 18:07:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-08-28 12:26:01 -0400, Robert Haas wrote:\n> On Mon, Aug 28, 2023 at 10:00 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > For the second goal, I've relied on past data to predict future\n> > behavior, so I tried several criteria to estimate the likelihood that a\n> > page will not be imminently modified. What was most effective was\n> > Andres' suggestion of comparing the page LSN to the insert LSN at the\n> > end of the last vacuum of that table; this approximates whether the page\n> > has been recently modified, which is a decent proxy for whether it'll be\n> > modified in the future. To do this, we need to save that insert LSN\n> > somewhere. 
In the attached WIP patch, I saved it in the table stats, for\n> > now -- knowing that those are not crash-safe.\n>\n> I wonder what the real plan here is for where to store this. It's not\n> obvious that we need this to be crash-safe; it's after all only for\n> use by a heuristic, and there's no actual breakage if the heuristic\n> goes wrong. At the same time, it doesn't exactly feel like a\n> statistic.\n\nI'm not certain either. This is generally something that's not satisfying\nright now - although IMO not necessarily for the reason you mention. Given\nthat we already store, e.g., the time of the last autovacuum in the stats, I\ndon't see a problem also storing a corresponding LSN. My issue is more that\nthis kind of information not being crashsafe is really problematic - it's a\nwell known issue that autovacuum just doesn't do anything for a while after a\ncrash-restart (or pitr restore or ...), for example.\n\nGiven that all the other datapoints are stored in the stats, I think just\nstoring the LSNs alongside is reasonable.\n\n\n> Then there's the question of whether it's the right metric. My first\n> reaction is to think that it sounds pretty good. One thing I really\n> like about it is that if the table is being vacuumed frequently, then\n> we freeze less aggressively, and if the table is being vacuumed\n> infrequently, then we freeze more aggressively. That seems like a very\n> desirable property. It also seems broadly good that this metric\n> doesn't really care about reads. If there are a lot of reads on the\n> system, or no reads at all, it doesn't really change the chances that\n> a certain page is going to be written again soon, and since reads\n> don't change the insert LSN, here again it seems to do the right\n> thing. I'm a little less clear about whether it's good that it doesn't\n> really depend on wall-clock time.\n\nYea, it'd be useful to have a reasonably approximate wall clock time for the\nlast modification of a page. 
We just don't have infrastructure for determining\nthat. We'd need an LSN->time mapping (xid->time wouldn't be particularly\nuseful, I think).\n\nA very rough approximate modification time can be computed by assuming an even\nrate of WAL generation, and using the LSN at the time of the last vacuum and\nthe time of the last vacuum, to compute the approximate age.\n\nFor a while I thought that'd not give us anything that just using LSNs gives\nus, but I think it might allow coming up with a better cutoff logic: Instead\nof using a cutoff like \"page LSN is older than 10% of the LSNs since the last\nvacuum of the table\", it would allow us to approximate \"page has not been\nmodified in the last 15 seconds\" or such. I think that might help avoid\nunnecessary freezing on tables with very frequent vacuuming.\n\n\n> Certainly, that's desirable from the point of view of not wanting to have to\n> measure wall-clock time in places where we otherwise wouldn't have to, which\n> tends to end up being expensive.\n\nIMO the bigger issue is that we don't want to store a timestamp on each page.\n\n\n> > Page Freezes/Page Frozen (less is better)\n\nAs, I think, Robert mentioned downthread, I'm not sure this is a useful thing\nto judge the different heuristics by. 
If the number of pages frozen is small,\nthe ratio quickly can be very large, without the freezing having a negative\neffect.\n\nI suspect interesting top-level figures to compare would be:\n\n1) WAL volume (to judge the amount of unnecessary FPIs)\n\n2) data reads + writes (to see the effect of repeated vacuuming of the same\n blocks)\n\n3) number of vacuums and/or time spent vacuuming (freezing less aggressively\n might increase the number of vacuums due to anti-wrap vacuums, at the same\n time, freezing too aggressively could lead to vacuums taking too long)\n\n4) throughput of the workload (to see potential regressions due to vacuuming\n overhead)\n\n5) for transactional workloads: p99 latency (to see if vacuuming increases\n commit latency and such, just using average tends to hide too much)\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 5 Sep 2023 22:09:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 6, 2023 at 1:09 AM Andres Freund <andres@anarazel.de> wrote:\n> Yea, it'd be useful to have a reasonably approximate wall clock time for the\n> last modification of a page. We just don't have infrastructure for determining\n> that. We'd need an LSN->time mapping (xid->time wouldn't be particularly\n> useful, I think).\n>\n> A very rough approximate modification time can be computed by assuming an even\n> rate of WAL generation, and using the LSN at the time of the last vacuum and\n> the time of the last vacuum, to compute the approximate age.\n>\n> For a while I thought that'd not give us anything that just using LSNs gives\n> us, but I think it might allow coming up with a better cutoff logic: Instead\n> of using a cutoff like \"page LSN is older than 10% of the LSNs since the last\n> vacuum of the table\", it would allow us to approximate \"page has not been\n> modified in the last 15 seconds\" or such. 
I think that might help avoid\n> unnecessary freezing on tables with very frequent vacuuming.\n\nYes. I'm uncomfortable with the last-vacuum-LSN approach mostly\nbecause of the impact on very frequently vacuumed tables, and\nsecondarily because of the impact on very infrequently vacuumed\ntables.\n\nDownthread, I proposed using the RedoRecPtr of the latest checkpoint\nrather than the LSN of the previous vacuum. I still like that idea.\nIt's a value that we already have, with no additional bookkeeping. It\nvaries over a much narrower range than the interval between vacuums on\na table. The vacuum interval could be as short as tens of seconds or as\nlong as years, while the checkpoint interval is almost always going to\nbe between a few minutes at the low end and some tens of minutes at\nthe high end, hours at the very most. That's quite appealing. Also, I\nthink the time between checkpoints actually matters here, because in\nsome sense we're looking to get dirty, already-FPI'd pages frozen\nbefore they get written out, or before a new FPI becomes necessary,\nand checkpoints are one way for the first of those things to happen\nand the only way for the second one to happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Sep 2023 10:35:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 1, 2023 at 9:07 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> When I reported this a couple of years ago, I noticed that autovacuum\n> would spin whenever I set autovacuum_vacuum_scale_factor to 0.02. But\n> autovacuum would *never* run (outside of antiwraparound autovacuums)\n> when it was set just a little higher (perhaps 0.03 or 0.04). So there\n> was some inflection point at which its behavior totally changed.\n\nI think that tables have a \"natural\" amount of bloat that is a\nfunction of the workload. 
If you remove all the bloat by, say, running\nVACUUM FULL, they quickly bloat again to that point, and then\nstabilize (hopefully). It seems like if you set the scale factor to a\nvalue below that \"natural\" amount of bloat, you might well spin,\nespecially on a very write-heavy workload like pgbench, because you're\nbasically trying to squeeze water from a stone on that point.\n\nIt's interesting that when you raised the threshold the rate of\nvacuuming dropped to zero. I haven't seen that behavior. pgbench does\nrely very, very heavily on HOT-pruning, so VACUUM only cleans up a\nsmall percentage of the dead tuples that get created. However, it's\nstill needed for index vacuuming, and I'm not sure what would cause it\nnot to trigger for that purpose.\n\n> > And I'm not sure why we\n> > want to do that. If the table is being vacuumed a lot, it's probably\n> > also being modified a lot, which suggests that we ought to be more\n> > cautious about freezing, rather than the reverse.\n>\n> Why wouldn't it be both things at the same time, for the same table?\n\nBoth of what things?\n\n> Why not also avoid setting pages all-visible? The WAL records aren't\n> too much smaller than most freeze records these days -- 64 bytes on\n> most systems. I realize that the rules for FPIs are a bit different\n> when page-level checksums aren't enabled, but fundamentally it's the\n> same situation. No?\n\nIt's an interesting point. AFAIK, whether or not page-level checksums\nare enabled doesn't really matter here. But it seems fair to ask - if\nfreezing is too aggressive, why is setting all-visible not also too\naggressive? 
I don't have a good answer to that question right now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Sep 2023 10:46:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 6, 2023 at 7:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It's interesting that when you raised the threshold the rate of\n> vacuuming dropped to zero. I haven't seen that behavior. pgbench does\n> rely very, very heavily on HOT-pruning, so VACUUM only cleans up a\n> small percentage of the dead tuples that get created. However, it's\n> still needed for index vacuuming, and I'm not sure what would cause it\n> not to trigger for that purpose.\n\nThis was a case where index vacuuming was never required. It's just a\nsimple and easy to recreate example of what I think of as a more\ngeneral problem.\n\n> > > And I'm not sure why we\n> > > want to do that. If the table is being vacuumed a lot, it's probably\n> > > also being modified a lot, which suggests that we ought to be more\n> > > cautious about freezing, rather than the reverse.\n> >\n> > Why wouldn't it be both things at the same time, for the same table?\n>\n> Both of what things?\n\nWhy wouldn't we expect a table to have some pages that ought to be\nfrozen right away, and others where freezing should in theory be put\noff indefinitely? I think that that's very common.\n\n> > Why not also avoid setting pages all-visible? The WAL records aren't\n> > too much smaller than most freeze records these days -- 64 bytes on\n> > most systems. I realize that the rules for FPIs are a bit different\n> > when page-level checksums aren't enabled, but fundamentally it's the\n> > same situation. No?\n>\n> It's an interesting point. AFAIK, whether or not page-level checksums\n> are enabled doesn't really matter here. 
But it seems fair to ask - if\n> freezing is too aggressive, why is setting all-visible not also too\n> aggressive? I don't have a good answer to that question right now.\n\nAs you know, I am particularly concerned about the tendency of\nunfrozen all-visible pages to accumulate without bound (at least\nwithout bound expressed in physical units such as pages). The very\nfact that pages are being set all-visible by VACUUM can be seen as a\npart of a high-level systemic problem -- a problem that plays out over\ntime, across multiple VACUUM operations. So even if the cost of\nsetting pages all-visible happened to be much lower than the cost of\nfreezing (which it isn't), setting pages all-visible without freezing\nhas unique downsides.\n\nIf VACUUM freezes too aggressively, then (pretty much by definition)\nwe can be sure that the next VACUUM will scan the same pages -- there\nmay be some scope for VACUUM to \"learn from its mistake\" when we err\nin the direction of over-freezing. But when VACUUM makes the opposite\nmistake (doesn't freeze when it should have), it won't scan those same\npages again for a long time, by design. It therefore has no plausible\nway of \"learning from its mistakes\" before it becomes an extremely\nexpensive and painful lesson (which happens whenever the next\naggressive VACUUM takes place). This is in large part a consequence of\nthe way that VACUUM dutifully sets pages all-visible whenever\npossible. That behavior interacts badly with many workloads, over\ntime.\n\nVACUUM simply ignores such second-order effects. Perhaps it would be\npractical to address some of the issues in this area by avoiding\nsetting pages all-visible without freezing them, in some general\nsense. That at least creates a kind of symmetry between mistakes in\nthe direction of under-freezing and mistakes in the direction of\nover-freezing. 
That might enable VACUUM to course-correct in either\ndirection.\n\nMelanie is already planning on combining the WAL records (PRUNE,\nFREEZE_PAGE, and VISIBLE). Perhaps that'll weaken the argument for\nsetting unfrozen pages all-visible even further.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 6 Sep 2023 09:20:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 6, 2023 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This was a case where index vacuuming was never required. It's just a\n> simple and easy to recreate example of what I think of as a more\n> general problem.\n\nOK.\n\n> Why wouldn't we expect a table to have some pages that ought to be\n> frozen right away, and others where freezing should in theory be put\n> off indefinitely? I think that that's very common.\n\nOh, I see. I agree that that's pretty common. I think what that means\nin practice is that we need to avoid relying too much on\nrelation-level statistics to guide behavior with respect to individual\nrelation pages. On the other hand, I don't think it means that we\ncan't look at relation-wide or even system-wide statistics at all.\nSure, those statistics may not be perfect, but some things are not\npractical to track at page granularity, and having some\ncoarse-grained information can, I think, be better than having nothing\nat all, if you're careful about how much and in what way you rely on\nit.\n\n> As you know, I am particularly concerned about the tendency of\n> unfrozen all-visible pages to accumulate without bound (at least\n> without bound expressed in physical units such as pages). The very\n> fact that pages are being set all-visible by VACUUM can be seen as a\n> part of a high-level systemic problem -- a problem that plays out over\n> time, across multiple VACUUM operations. 
So even if the cost of\n> setting pages all-visible happened to be much lower than the cost of\n> freezing (which it isn't), setting pages all-visible without freezing\n> has unique downsides.\n\nI generally agree with all of that.\n\n> If VACUUM freezes too aggressively, then (pretty much by definition)\n> we can be sure that the next VACUUM will scan the same pages -- there\n> may be some scope for VACUUM to \"learn from its mistake\" when we err\n> in the direction of over-freezing. But when VACUUM makes the opposite\n> mistake (doesn't freeze when it should have), it won't scan those same\n> pages again for a long time, by design. It therefore has no plausible\n> way of \"learning from its mistakes\" before it becomes an extremely\n> expensive and painful lesson (which happens whenever the next\n> aggressive VACUUM takes place). This is in large part a consequence of\n> the way that VACUUM dutifully sets pages all-visible whenever\n> possible. That behavior interacts badly with many workloads, over\n> time.\n\nI think this is an insightful commentary with which I partially agree.\nAs I see it, the difference is that when you make the mistake of\nmarking something all-visible or freezing it too aggressively, you\nincur a price that you pay almost immediately. When you make the\nmistake of not marking something all-visible when it would have been\nbest to do so, you incur a price that you pay later, when the next\nVACUUM happens. When you make the mistake of not marking something\nall-frozen when it would have been best to do so, you incur a price\nthat you pay even later, not at the next VACUUM but at some VACUUM\nfurther off. So there are different trade-offs. When you pay the price\nfor a mistake immediately or nearly immediately, it can potentially\nharm the performance of the foreground workload, if you're making a\nlot of mistakes. That sucks. 
On the other hand, when you defer paying\nthe price until some later bulk operation, the costs of all of your\nmistakes get added up and then you pay the whole price all at once,\nwhich means you can be suddenly slapped with an enormous bill that you\nweren't expecting. That sucks, too, just in a different way.\n\n> VACUUM simply ignores such second-order effects. Perhaps it would be\n> practical to address some of the issues in this area by avoiding\n> setting pages all-visible without freezing them, in some general\n> sense. That at least creates a kind of symmetry between mistakes in\n> the direction of under-freezing and mistakes in the direction of\n> over-freezing. That might enable VACUUM to course-correct in either\n> direction.\n>\n> Melanie is already planning on combining the WAL records (PRUNE,\n> FREEZE_PAGE, and VISIBLE). Perhaps that'll weaken the argument for\n> setting unfrozen pages all-visible even further.\n\nYeah, so I think the question here is whether it's ever a good idea to\nmark a page all-visible without also freezing it. If it's not, then we\nshould either mark fewer pages all-visible, or freeze more of them.\nMaybe I'm all wet here, but I think it depends on the situation. If a\npage is already dirty and has had an FPI since the last checkpoint,\nthen it's pretty appealing to freeze whenever we mark all-visible. We\nstill have to consider whether the incremental CPU cost and WAL volume\nare worth it, but assuming those costs are small enough not to be a\nbig problem, it seems like a pretty good bet. Making a page\nun-all-visible has some cost, but making a page un-all-frozen really\ndoesn't, so cool. 
On the other hand, if we have a page that isn't\ndirty, hasn't had a recent FPI, and doesn't need pruning, but which\ncan be marked all-visible, freezing it is a potentially more\nsignificant cost, because marking the buffer all-visible doesn't force\na new FPI, and freezing does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 6 Sep 2023 16:21:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-06 10:35:17 -0400, Robert Haas wrote:\n> On Wed, Sep 6, 2023 at 1:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, it'd be useful to have a reasonably approximate wall clock time for the\n> > last modification of a page. We just don't have infrastructure for determining\n> > that. We'd need an LSN->time mapping (xid->time wouldn't be particularly\n> > useful, I think).\n> >\n> > A very rough approximate modification time can be computed by assuming an even\n> > rate of WAL generation, and using the LSN at the time of the last vacuum and\n> > the time of the last vacuum, to compute the approximate age.\n> >\n> > For a while I thought that'd not give us anything that just using LSNs gives\n> > us, but I think it might allow coming up with a better cutoff logic: Instead\n> > of using a cutoff like \"page LSN is older than 10% of the LSNs since the last\n> > vacuum of the table\", it would allow us to approximate \"page has not been\n> > modified in the last 15 seconds\" or such. I think that might help avoid\n> > unnecessary freezing on tables with very frequent vacuuming.\n> \n> Yes. I'm uncomfortable with the last-vacuum-LSN approach mostly\n> because of the impact on very frequently vacuumed tables, and\n> secondarily because of the impact on very infrequently vacuumed\n> tables.\n> \n> Downthread, I proposed using the RedoRecPtr of the latest checkpoint\n> rather than the LSN of the previou vacuum. 
I still like that idea.\n\nAssuming that \"downthread\" references\nhttps://postgr.es/m/CA%2BTgmoYb670VcDFbekjn2YQOKF9a7e-kBFoj2WJF1HtH7YPaWQ%40mail.gmail.com\ncould you sketch out the logic you're imagining a bit more?\n\n\n> It's a value that we already have, with no additional bookkeeping. It\n> varies over a much narrower range than the interval between vacuums on\n> a table. The vacuum interval could be as short as tens of seconds as\n> long as years, while the checkpoint interval is almost always going to\n> be between a few minutes at the low end and some tens of minutes at\n> the high end, hours at the very most. That's quite appealing.\n\nThe reason I was thinking of using the \"lsn at the end of the last vacuum\" is\nthat it seems to be more adaptive to the frequency of vacuuming.\n\nOn the one hand, if a table is rarely autovacuumed because it is huge,\n(InsertLSN-RedoRecPtr) might or might not be representative of the workload\nover a longer time. On the other hand, if a page in a frequently vacuumed\ntable has an LSN from around the last vacuum (or even before), it should be\nfrozen, but will appear to be recent in RedoRecPtr-based heuristics?\n\n\nPerhaps we can mix both approaches. We can use the LSN and time of the last\nvacuum to establish an LSN->time mapping that's reasonably accurate for a\nrelation. For infrequently vacuumed tables we can use the time between\ncheckpoints to establish a *more aggressive* cutoff for freezing than what a\npercent-of-time-since-last-vacuum approach would provide. If e.g. a table gets\nvacuumed every 100 hours and checkpoint timeout is 1 hour, no realistic\npercent-of-time-since-last-vacuum setting will allow freezing, as all dirty\npages will be too new. To allow freezing a decent proportion of those, we\ncould allow freezing pages that lived longer than ~20% of the\ntime-between-recent-checkpoints.\n\n\nHm, possibly stupid idea: What about using shared_buffers residency as a\nfactor? 
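To make the mixed approach above concrete, a rough sketch (every name and fraction here is invented, and min() is just one way to combine the two cutoffs -- this is not server code):

```python
# Hypothetical hybrid: build a per-relation LSN->time mapping from the
# last vacuum, then take the more aggressive (smaller) of the
# percent-of-vacuum-interval cutoff and a ~20%-of-checkpoint-interval
# cutoff, so rarely vacuumed tables still get pages frozen.
def approx_page_age_s(page_lsn, now_lsn, last_vacuum_lsn,
                      last_vacuum_ts, now_ts):
    """Assumes an even rate of WAL generation since the last vacuum."""
    wal_bytes = now_lsn - last_vacuum_lsn
    if wal_bytes <= 0:
        return 0.0
    secs_per_wal_byte = (now_ts - last_vacuum_ts) / wal_bytes
    return (now_lsn - page_lsn) * secs_per_wal_byte

def freeze_cutoff_s(vacuum_interval_s, checkpoint_interval_s,
                    vacuum_frac=0.2, ckpt_frac=0.2):
    return min(vacuum_interval_s * vacuum_frac,
               checkpoint_interval_s * ckpt_frac)

# Table vacuumed every 100 hours with checkpoint_timeout = 1 hour: the
# checkpoint-based rule wins and pages ~12 minutes old become freezable.
print(freeze_cutoff_s(100 * 3600, 3600))
```

With a one-minute vacuum interval the vacuum-based rule wins instead (12 seconds under these made-up fractions).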
If vacuum had to read in a page to vacuum it, a) we would need read IO\nto freeze it later, as we'll soon evict the page via the ringbuffer, and b)\nnon-residency indicates the page isn't constantly being modified?\n\n\n> Also, I think the time between checkpoints actually matters here, because in\n> some sense we're looking to get dirty, already-FPI'd pages frozen before\n> they get written out, or before a new FPI becomes necessary, and checkpoints\n> are one way for the first of those things to happen and the only way for the\n> second one to happen.\n\nIntuitively it seems easier to take care of that by checking if an FPI is\nneeded or not?\n\nI guess we, eventually, might want to also freeze if an FPI would be\ngenerated, if we are reasonably certain that the page isn't going to be\nmodified again soon (e.g. when table stats indicate there effectively aren't any\nupdates / deletes). Although perhaps it's better to take care of that via\nfreeze-on-writeout style logic.\n\nI suspect that even for freeze-on-writeout we might end up needing some\nheuristics.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Sep 2023 21:07:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-06 10:46:03 -0400, Robert Haas wrote:\n> On Fri, Sep 1, 2023 at 9:07 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Why not also avoid setting pages all-visible? The WAL records aren't\n> > too much smaller than most freeze records these days -- 64 bytes on\n> > most systems. I realize that the rules for FPIs are a bit different\n> > when page-level checksums aren't enabled, but fundamentally it's the\n> > same situation. No?\n>\n> It's an interesting point. 
AFAIK, whether or not page-level checksums\n> are enabled doesn't really matter here.\n\nI think it does matter:\n\nvoid\nvisibilitymap_set(Relation rel, BlockNumber heapBlk, Buffer heapBuf,\n XLogRecPtr recptr, Buffer vmBuf, TransactionId cutoff_xid,\n uint8 flags)\n...\n recptr = log_heap_visible(rel, heapBuf, vmBuf, cutoff_xid, flags);\n\n /*\n * If data checksums are enabled (or wal_log_hints=on), we\n * need to protect the heap page from being torn.\n *\n * If not, then we must *not* update the heap page's LSN. In\n * this case, the FPI for the heap page was omitted from the\n * WAL record inserted above, so it would be incorrect to\n * update the heap page's LSN.\n */\n if (XLogHintBitIsNeeded())\n {\n Page heapPage = BufferGetPage(heapBuf);\n\n PageSetLSN(heapPage, recptr);\n }\n\nand\n\n/*\n * Perform XLogInsert for a heap-visible operation. 'block' is the block\n * being marked all-visible, and vm_buffer is the buffer containing the\n * corresponding visibility map block. Both should have already been modified\n * and dirtied.\n *\n * snapshotConflictHorizon comes from the largest xmin on the page being\n * marked all-visible. REDO routine uses it to generate recovery conflicts.\n *\n * If checksums or wal_log_hints are enabled, we may also generate a full-page\n * image of heap_buffer. Otherwise, we optimize away the FPI (by specifying\n * REGBUF_NO_IMAGE for the heap buffer), in which case the caller should *not*\n * update the heap page's LSN.\n */\nXLogRecPtr\nlog_heap_visible(Relation rel, Buffer heap_buffer, Buffer vm_buffer,\n\t\t\t\t TransactionId snapshotConflictHorizon, uint8 vmflags)\n{\n\nI.e. setting an, otherwise unmodified, page all-visible won't trigger an FPI\nif checksums are disabled, but will FPI with checksums enabled. 
I think that's\na substantial difference in WAL volume for insert-only workloads...\n\nIn contrast to that, freezing will almost always trigger an FPI (except for\nempty pages, but we imo ought to stop setting empty pages all frozen [1]).\n\n\nYep, a quick experiment confirms that:\n\nDROP TABLE IF EXISTS foo;\nCREATE TABLE foo AS SELECT generate_series(1, 10000000);\nCHECKPOINT;\nVACUUM (VERBOSE) foo;\n\nchecksums off: WAL usage: 44249 records, 3 full page images, 2632091 bytes\nchecksums on: WAL usage: 132748 records, 44253 full page images, 388758161 bytes\n\n\nI initially was confused by the 3x wal records - I was expecting 2x. The\nreason is that with checksums on, we emit an FPI during the visibility check,\nwhich then triggers the current heuristic for opportunistic freezing. The\nsaving grace is that WAL volume is completely dominated by the FPIs:\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nXLOG/FPI_FOR_HINT 44253 ( 33.34) 2168397 ( 7.84) 361094232 (100.00) 363262629 ( 93.44)\nTransaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\nStandby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\nHeap2/FREEZE_PAGE 44248 ( 33.33) 22876120 ( 82.72) 0 ( 0.00) 22876120 ( 5.88)\nHeap2/VISIBLE 44248 ( 33.33) 2610642 ( 9.44) 16384 ( 0.00) 2627026 ( 0.68)\nHeap/INPLACE 1 ( 0.00) 188 ( 0.00) 0 ( 0.00) 188 ( 0.00)\n -------- -------- -------- --------\nTotal 132752 27655515 [7.11%] 361110616 [92.89%] 388766131 [100%]\n\nIn realistic tables, where rows are wider than a single int, FPI_FOR_HINT\ndominates even further, as the FREEZE_PAGE would be smaller if there weren't\n226 tuples on each page...\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20230328014806.3vjochayt2bu3hr3%40awork3.anarazel.de\n\n\n", "msg_date": "Thu, 7 Sep 2023 21:45:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria 
clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-06 16:21:31 -0400, Robert Haas wrote:\n> On Wed, Sep 6, 2023 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If VACUUM freezes too aggressively, then (pretty much by definition)\n> > we can be sure that the next VACUUM will scan the same pages -- there\n> > may be some scope for VACUUM to \"learn from its mistake\" when we err\n> > in the direction of over-freezing. But when VACUUM makes the opposite\n> > mistake (doesn't freeze when it should have), it won't scan those same\n> > pages again for a long time, by design. It therefore has no plausible\n> > way of \"learning from its mistakes\" before it becomes an extremely\n> > expensive and painful lesson (which happens whenever the next\n> > aggressive VACUUM takes place). This is in large part a consequence of\n> > the way that VACUUM dutifully sets pages all-visible whenever\n> > possible. That behavior interacts badly with many workloads, over\n> > time.\n>\n> I think this is an insightful commentary with which I partially agree.\n> As I see it, the difference is that when you make the mistake of\n> marking something all-visible or freezing it too aggressively, you\n> incur a price that you pay almost immediately. When you make the\n> mistake of not marking something all-visible when it would have been\n> best to do so, you incur a price that you pay later, when the next\n> VACUUM happens. When you make the mistake of not marking something\n> all-frozen when it would have been best to do so, you incur a price\n> that you pay even later, not at the next VACUUM but at some VACUUM\n> further off. So there are different trade-offs. When you pay the price\n> for a mistake immediately or nearly immediately, it can potentially\n> harm the performance of the foreground workload, if you're making a\n> lot of mistakes.\n\nWe have to make a *lot* of mistakes to badly harm the foreground workload. 
As\nlong as we don't constantly trigger FPIs, the worst case effects of freezing\nunnecessarily aren't that large, particularly if we keep the number of\nXLogInsert()s constant (via the combining of records that Melanie is working\non). Once/if we want to opportunistically freeze when it *does* trigger an\nFPI, we *do* need to be more certain that it's not pointless work.\n\nWe could further bound the worst case overhead by having a range\nrepresentation for freeze plans. That should be quite doable: we can add\nanother flag for xl_heap_freeze_plan.frzflags, which indicates the tuples in\nthe offset array are start/end tids, rather than individual tids.\n\n\n> That sucks. On the other hand, when you defer paying\n> the price until some later bulk operation, the costs of all of your\n> mistakes get added up and then you pay the whole price all at once,\n> which means you can be suddenly slapped with an enormous bill that you\n> weren't expecting. That sucks, too, just in a different way.\n\nIt's particularly bad in the case of freezing because there's practically no\nbackpressure against deferring more work than the system can handle. If we had\nmade foreground processes freeze one page for every unfrozen page they create,\nonce the table reaches a certain percentage of old unfrozen pages, it'd be a\ndifferent story...\n\n\nI think it's important that we prevent exploding the WAL volume due to\nopportunistically freezing the same page over and over, but as long as we take\ncare to not do that, I think the impact on the foreground is going to be\nsmall.\n\nI'm sure that we can come up with cases where it's noticeable, e.g. because\nthe system is already completely bottlenecked by WAL IO throughput and a small\nincrease in WAL volume is going to push things over the edge. 
But such systems\nare going to be in substantial trouble at the next anti-wraparound vacuum and\nwill be out of commission for days once they hit the anti-wraparound shutdown\nlimits.\n\n\nSome backpressure in the form of a small performance decrease for foreground\nwork might even be good there.\n\n\n\n> > VACUUM simply ignores such second-order effects. Perhaps it would be\n> > practical to address some of the issues in this area by avoiding\n> > setting pages all-visible without freezing them, in some general\n> > sense. That at least creates a kind of symmetry between mistakes in\n> > the direction of under-freezing and mistakes in the direction of\n> > over-freezing. That might enable VACUUM to course-correct in either\n> > direction.\n> >\n> > Melanie is already planning on combining the WAL records (PRUNE,\n> > FREEZE_PAGE, and VISIBLE). Perhaps that'll weaken the argument for\n> > setting unfrozen pages all-visible even further.\n>\n> Yeah, so I think the question here is whether it's ever a good idea to\n> mark a page all-visible without also freezing it. If it's not, then we\n> should either mark fewer pages all-visible, or freeze more of them.\n> Maybe I'm all wet here, but I think it depends on the situation. If a\n> page is already dirty and has had an FPI since the last checkpoint,\n> then it's pretty appealing to freeze whenever we mark all-visible. We\n> still have to consider whether the incremental CPU cost and WAL volume\n> are worth it, but assuming those costs are small enough not to be a\n> big problem, it seems like a pretty good bet. Making a page\n> un-all-visible has some cost, but making a page un-all-frozen really\n> doesn't, so cool. 
On the other hand, if we have a page that isn't\n> dirty, hasn't had a recent FPI, and doesn't need pruning, but which\n> can be marked all-visible, freezing it is a potentially more\n> significant cost, because marking the buffer all-visible doesn't force\n> a new FPI, and freezing does.\n\n+1\n\nOne thing that bothers me in this area when using will-trigger-FPI style logic\nis that it makes checksums=on/off have behavioural impact. If we e.g. make\nsetting all-visible conditional on not triggering an FPI, there will be plenty\nof workloads where a checksums=off system will get an index only scan, but a\nchecksums=on system won't.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Sep 2023 22:26:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Sep 7, 2023 at 9:45 PM Andres Freund <andres@anarazel.de> wrote:\n> I.e. setting an, otherwise unmodified, page all-visible won't trigger an FPI\n> if checksums are disabled, but will FPI with checksums enabled. I think that's\n> a substantial difference in WAL volume for insert-only workloads...\n\nNote that all RDS Postgres users get page-level checksums. Overall,\nthe FPI trigger mechanism is going to noticeably improve performance\ncharacteristics for many users. 
Simple and crude though it is.\n\n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> XLOG/FPI_FOR_HINT 44253 ( 33.34) 2168397 ( 7.84) 361094232 (100.00) 363262629 ( 93.44)\n> Transaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\n> Standby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\n> Heap2/FREEZE_PAGE 44248 ( 33.33) 22876120 ( 82.72) 0 ( 0.00) 22876120 ( 5.88)\n> Heap2/VISIBLE 44248 ( 33.33) 2610642 ( 9.44) 16384 ( 0.00) 2627026 ( 0.68)\n> Heap/INPLACE 1 ( 0.00) 188 ( 0.00) 0 ( 0.00) 188 ( 0.00)\n> -------- -------- -------- --------\n> Total 132752 27655515 [7.11%] 361110616 [92.89%] 388766131 [100%]\n>\n> In realistic tables, where rows are wider than a single int, FPI_FOR_HINT\n> dominates even further, as the FREEZE_PAGE would be smaller if there weren't\n> 226 tuples on each page...\n\nIf FREEZE_PAGE WAL volume is really what holds back further high level\nimprovements in this area, then it can be worked on directly -- it's\nnot a fixed cost. It wouldn't be particularly difficult, in fact.\nThese are records that still mostly consist of long runs of contiguous\npage offset numbers. They're ideally suited for compression using some\nkind of simple variant of run length encoding. 
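As a toy illustration of that kind of encoding (purely a sketch: the helper names below are made up, and this is not the actual xl_heap_freeze WAL layout), contiguous page offset numbers collapse into a handful of (start, length) runs:\n\n```python\n# Toy sketch of run-length-encoding a freeze plan's sorted page offsets.\n# Names and layout are illustrative only, not PostgreSQL's real WAL format.\n\ndef encode_offset_runs(offsets):\n    """Collapse a sorted list of distinct offsets into (start, length) runs."""\n    runs = []\n    for off in offsets:\n        if runs and off == runs[-1][0] + runs[-1][1]:\n            start, length = runs[-1]\n            runs[-1] = (start, length + 1)  # extend the current run\n        else:\n            runs.append((off, 1))           # begin a new run\n    return runs\n\ndef decode_offset_runs(runs):\n    """Inverse of encode_offset_runs: expand runs back into offsets."""\n    return [start + i for start, length in runs for i in range(length)]\n\n# A page of narrow tuples freezes offsets 1..226 contiguously, so 226\n# two-byte offset entries shrink to a single (1, 226) run.\nruns = encode_offset_runs(list(range(1, 227)))\n```\n\nThe fully contiguous case collapsing to one run is exactly the common case the quoted stats point at; sparse offsets degrade gracefully to one run per offset.\n\n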
The freeze plan\ndeduplication stuff in 16 made a big difference, but it's still not\nvery hard to improve matters here.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Sep 2023 22:29:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-07 21:45:22 -0700, Andres Freund wrote:\n> In contrast to that, freezing will almost always trigger an FPI (except for\n> empty pages, but we imo ought to stop setting empty pages all frozen [1]).\n> \n> \n> Yep, a quick experiment confirms that:\n> \n> DROP TABLE IF EXISTS foo;\n> CREATE TABLE foo AS SELECT generate_series(1, 10000000);\n> CHECKPOINT;\n> VACUUM (VERBOSE) foo;\n> \n> checksums off: WAL usage: 44249 records, 3 full page images, 2632091 bytes\n> checksums on: WAL usage: 132748 records, 44253 full page images, 388758161 bytes\n> \n> \n> I initially was confused by the 3x wal records - I was expecting 2x. The\n> reason is that with checksums on, we emit an FPI during the visibility check,\n> which then triggers the current heuristic for opportunistic freezing. 
The\n> saving grace is that WAL volume is completely dominated by the FPIs:\n> \n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> XLOG/FPI_FOR_HINT 44253 ( 33.34) 2168397 ( 7.84) 361094232 (100.00) 363262629 ( 93.44)\n> Transaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\n> Standby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\n> Heap2/FREEZE_PAGE 44248 ( 33.33) 22876120 ( 82.72) 0 ( 0.00) 22876120 ( 5.88)\n> Heap2/VISIBLE 44248 ( 33.33) 2610642 ( 9.44) 16384 ( 0.00) 2627026 ( 0.68)\n> Heap/INPLACE 1 ( 0.00) 188 ( 0.00) 0 ( 0.00) 188 ( 0.00)\n> -------- -------- -------- --------\n> Total 132752 27655515 [7.11%] 361110616 [92.89%] 388766131 [100%]\n> \n> In realistic tables, where rows are wider than a single int, FPI_FOR_HINT\n> dominates even further, as the FREEZE_PAGE would be smaller if there weren't\n> 226 tuples on each page...\n\nThe above is not a great demonstration of the overhead of setting all-visible,\nas the FPIs are triggered via FPI_FOR_HINTs, independent of setting\nall-visible. 
Adding \"SELECT count(*) FROM foo\" before the checkpoint sets them\nearlier and results in:\n\nchecksum off:\n\nWAL usage: 44249 records, 3 full page images, 2627915 bytes\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nTransaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\nStandby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\nHeap2/VISIBLE 44248 ( 99.99) 2610642 ( 99.99) 16384 ( 95.15) 2627026 ( 99.96)\nHeap/INPLACE 1 ( 0.00) 53 ( 0.00) 836 ( 4.85) 889 ( 0.03)\n -------- -------- -------- --------\nTotal 44251 2610863 [99.34%] 17220 [0.66%] 2628083 [100%]\n\nchecksums on:\n\nWAL usage: 44252 records, 44254 full page images, 363935830 bytes\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nXLOG/FPI_FOR_HINT 3 ( 0.01) 147 ( 0.01) 24576 ( 0.01) 24723 ( 0.01)\nTransaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\nStandby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\nHeap2/VISIBLE 44248 ( 99.99) 2831882 ( 99.99) 361078336 ( 99.99) 363910218 ( 99.99)\nHeap/INPLACE 1 ( 0.00) 53 ( 0.00) 836 ( 0.00) 889 ( 0.00)\n -------- -------- -------- --------\nTotal 44254 2832250 [0.78%] 361103748 [99.22%] 363935998 [100%]\n\n\nMoving the hint bit setting to before the checkpoint also \"avoids\" the\nfreezing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Sep 2023 22:36:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-07 22:29:04 -0700, Peter Geoghegan wrote:\n> On Thu, Sep 7, 2023 at 9:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > I.e. setting an, otherwise unmodified, page all-visible won't trigger an FPI\n> > if checksums are disabled, but will FPI with checksums enabled. 
I think that's\n> > a substantial difference in WAL volume for insert-only workloads...\n> \n> Note that all RDS Postgres users get page-level checksums. Overall,\n> the FPI trigger mechanism is going to noticeably improve performance\n> characteristics for many users. Simple and crude though it is.\n\nYou mean the current heuristic or some new heuristic we're coming up with in\nthis thread? If the former, unfortunately I think the current heuristic often\nwon't trigger in cases where freezing would be fine, e.g. on an insert-mostly\n(or hot pruned) workload with some read accesses. If the tuples are already\nhint-bitted, there's no FPI during heap_page_prune(), and thus we don't freeze\n- even though we *do* subsequently trigger an FPI, while setting all-visible.\n\nSee e.g. the stats for the modified version of the scenario, where there the\ntable is hint-bitted that I have since posted:\nhttps://postgr.es/m/20230908053634.hyn46pugqp4lsiw5%40awork3.anarazel.de\n\nThere we freeze neither with nor without checksums, despite incurring FPIs\nwhen checksums are enabled.\n\n\n> > Type N (%) Record size (%) FPI size (%) Combined size (%)\n> > ---- - --- ----------- --- -------- --- ------------- ---\n> > XLOG/FPI_FOR_HINT 44253 ( 33.34) 2168397 ( 7.84) 361094232 (100.00) 363262629 ( 93.44)\n> > Transaction/INVALIDATION 1 ( 0.00) 78 ( 0.00) 0 ( 0.00) 78 ( 0.00)\n> > Standby/INVALIDATIONS 1 ( 0.00) 90 ( 0.00) 0 ( 0.00) 90 ( 0.00)\n> > Heap2/FREEZE_PAGE 44248 ( 33.33) 22876120 ( 82.72) 0 ( 0.00) 22876120 ( 5.88)\n> > Heap2/VISIBLE 44248 ( 33.33) 2610642 ( 9.44) 16384 ( 0.00) 2627026 ( 0.68)\n> > Heap/INPLACE 1 ( 0.00) 188 ( 0.00) 0 ( 0.00) 188 ( 0.00)\n> > -------- -------- -------- --------\n> > Total 132752 27655515 [7.11%] 361110616 [92.89%] 388766131 [100%]\n> >\n> > In realistic tables, where rows are wider than a single int, FPI_FOR_HINT\n> > dominates even further, as the FREEZE_PAGE would be smaller if there weren't\n> > 226 tuples on each page...\n> \n> If 
FREEZE_PAGE WAL volume is really what holds back further high level\n> improvements in this area, then it can be worked on directly -- it's\n> not a fixed cost. It wouldn't be particularly difficult, in fact.\n\nAgreed!\n\n\n> These are records that still mostly consist of long runs of contiguous\n> page offset numbers. They're ideally suited for compression using some\n> kind of simple variant of run length encoding. The freeze plan\n> deduplication stuff in 16 made a big difference, but it's still not\n> very hard to improve matters here.\n\nYea, even just using ranges of offsets should help in a lot of cases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 7 Sep 2023 22:45:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Sep 7, 2023 at 10:46 PM Andres Freund <andres@anarazel.de> wrote:\n> You mean the current heuristic or some new heuristic we're coming up with in\n> this thread? If the former, unfortunately I think the current heuristic often\n> won't trigger in cases where freezing would be fine, e.g. on an insert-mostly\n> (or hot pruned) workload with some read accesses. If the tuples are already\n> hint-bitted, there's no FPI during heap_page_prune(), and thus we don't freeze\n> - even though we *do* subsequently trigger an FPI, while setting all-visible.\n\nIt's obviously not adequate, or anything like that. I didn't say that it was.\n\nI've heard plenty of complaints about freezing and antiwraparound\nautovacuuming, and basically no complaints about the cost of FPIs due\nto checksums (apparently Robert wasn't aware of the problem, at least\nnot as it relates to VACUUM setting the VM). 
This is true even though\none might expect it to be the other way around, based on simple\nanalysis using pg_waldump.\n\nThere are probably lots of reasons for why that is, but the biggest\nreason is likely that users just really hate huge balloon payments for\nthings like freezing. It's just about the worst thing about the\nsystem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Sep 2023 23:18:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Aug 28, 2023 at 4:30 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Mon, Aug 28, 2023 at 12:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > In row D, your algorithms are all bad, really bad. I don't quite\n> > understand how it can be that bad, actually.\n>\n> So, I realize now that this test was poorly designed. I meant it to be a\n> worst case scenario, but I think one critical part was wrong. In this\n> example one client is going at full speed inserting a row and then\n> updating it. Then another rate-limited client is deleting old data\n> periodically to keep the table at a constant size. I meant to bulk load\n> the table with enough data that the delete job would have data to delete\n> from the start. With the default autovacuum settings, over the course of\n> 45 minutes, I usually saw around 40 autovacuums of the table. Due to the\n> rate limiting, the first autovacuum of the table ends up freezing many\n> pages that are deleted soon after. Thus the total number of page freezes\n> is very high.\n>\n> I will redo benchmarking of workload D and start the table with the\n> number of rows which the DELETE job seeks to maintain. 
My back of the\n> envelope math says that this will mean ratios closer to a dozen (and not\n> 5000).\n...\n> I'll rerun workload D in a more reasonable way and be back with results.\n\nHi,\n\nI have edited and rerun all of the benchmarks for a subset of the\nalgorithms. I ran workloads A-I, except for G (G is not an interesting\nworkload), on Postgres master as well as Postgres with freeze heuristic\nalgorithms 4 and 5.\n\nAs a refresher, algorithms 4 and 5 are as follows:\n\nFreeze tuples on a page opportunistically if the page would be totally\nfrozen and:\n\n   4. Buffer is already dirty and no FPI is required OR page LSN is older\n   than 33% of the LSNs since the last vacuum of the table.\n\n   5. Buffer is already dirty and no FPI is required AND page LSN is older\n   than 33% of the LSNs since the last vacuum of the table.\n\nOn master, the heuristic is to freeze a page opportunistically if it\nwould be totally frozen and if pruning emitted an FPI.\n\nI made a few configuration changes for all benchmarks -- most notably\nautovacuum is on for all workloads and autovacuum_naptime is decreased\nto 10 seconds. The runtime of all time-based workloads was changed to 40\nminutes. All workloads were run for a fixed duration except for the COPY\nworkload (workload F).\n\nIt is also worth noting that data loaded into the standard pgbench\ntables (with pgbench -i) was loaded with COPY and not COPY FREEZE. Also,\npgbench -i and the pgbench runs were passed the --no-vacuum option.\n\nThe updated workloads are as follows:\n\nA. Gaussian TPC-B like + select-only:\n   pgbench scale 450 (DB < SB) with indexes on updated columns\n   WL 1: 16 clients doing TPC-B like pgbench but with Gaussian access\n   distribution (with parameter 6) for updated tables\n   WL 2: 2 clients, rate-limited to 1000 TPS, doing select-only pgbench\n\nB. TPC-B like\n   pgbench scale 450\n   16 clients doing built-in pgbench TPC-B like script\n\nC. 
Shifting hot set:\n 16 clients inserting a single row then updating an indexed column in\n that row\n\nD. Shifting hot set, delete old data\n WL 1: 16 clients inserting one row then updating an indexed column in\n that row\n WL 2: 1 client, rate limited to 10 TPS, deleting data more than 1\n minute old\n\nE. Shifting hot set, delete new data, access new data\n WL 1: 16 clients cycling through 2 single row inserts, an update of\n an indexed column in a single recently inserted row, and a delete of\n a single recently inserted row\n WL 2: 1 client, rate-limited to 0.2 TPS, selecting data from the last\n 5 minutes\n\nF. Many COPYs\n 1 client, copying a total of 90 GB of data in 70 individual COPYs\n\nG. N/A\n\nH. Append only table\n 16 clients, inserting a single row at a time\n\nI. Work queue\n WL 1: 16 clients inserting a single row, updating a non-indexed\n column in that row twice, then deleting that row\n WL 2: 1 client, rate-limited to 0.02 TPS, COPYing data into another\n table\n\nBelow are some of the data I collected, including how much WAL was\ngenerated, how much IO various backends did, TPS, and P99 latency.\nThese were collected at the end of the pgbench run.\n\nMost numbers were rounded to the nearest whole number to make the chart\neasier to see. So, for example, a 0 in WAL GB does not mean that no WAL\nwas emitted. P99 latency below 0 was rounded to a single decimal place.\n\nAll IO time is in milliseconds.\n\n\"P99 lat\" was calculated using the \"time\" field from pgbench's\n\"per-transaction logging\" option [1]. It is the \"transaction's elapsed time\"\nor duration. I converted it to milliseconds.\n\n\"pages frozen\" here refers to the number of pages marked frozen in the\nvisibility map at the end of the run. \"page freezes\" refers to the\nnumber of times vacuum froze tuples in lazy_scan_prune(). 
This does not\ninclude cases in which the page was found to be all frozen but no tuples\nwere explicitly frozen.\n\n\"AVs\" is the number of autovacuums for the table or tables specified.\n\"M\" (under algo) indicates unpatched Postgres master.\n\npgbench_accounts is abbreviated as \"Acct\", pgbench_history as \"Hist\",\nand pgbench_branches + pgbench_tellers as B+T.\n\nWorkload A:\n\n+------+--------+---------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes | other reads/writes | IO time AV worker|\n+------+--------+---------------------+---------------------+------------------+\n| M | 37 | 3,064,470 | 119,021 | 199 |\n| 4 | 41 | 3,071,229 | 119,010 | 199 |\n| 5 | 37 | 3,067,462 | 119,043 | 198 |\n+------+--------+---------------------+---------------------+------------------+\n\n+------+---------+-----------+----------------+----------------------+\n| algo | TPS read | TPS write | P99 latency read | P99 latency write |\n+------+---------+-----------+----------------+----------------------+\n| M | 1000 | 5,391 | 0.1 | 4.0 |\n| 4 | 1000 | 5,391 | 0.1 | 3.9 |\n| 5 | 1000 | 5,430 | 0.1 | 3.9 |\n+------+---------+-----------+----------------+----------------------+\n\n+------+---------+---------+---------+\n| algo | Acct AVs | B+T AVs| Hist AVs|\n+------+---------+---------+---------+\n| M | 2 | 459 | 21 |\n| 4 | 2 | 467 | 20 |\n| 5 | 2 | 469 | 21 |\n+------+---------+---------+---------+\n\n+------+------------------+------------------+---------------+\n| algo | Acct Page Freezes| Acct Pages Frozen| Acct % Frozen |\n+------+------------------+--------------------+-------------+\n| M | 2,555 | 646 | 0% |\n| 4 | 866,141 | 385,039 | 52% |\n| 5 | 4 | 0 | 0% |\n+------+------------------+--------------------+-------------+\n\n+------+-------------------+-------------------+---------------+\n| algo | Hist Page Freezes | Hist Pages Frozen | Hist % Frozen |\n+------+-------------------+-------------------+---------------+\n| M | 
0 | 0 | 0% |\n| 4 | 70,227 | 69,835 | 85% |\n| 5 | 19,204 | 19,024 | 23% |\n+------+-------------------+-------------------+---------------+\n\n+------+-------------------+-------------------+---------------+\n| algo | B+T Page Freezes | B+T Pages Frozen | B+T % Frozen |\n+------+-------------------+-------------------+---------------+\n| M | 152 | 297 | 33% |\n| 4 | 74,323 | 809 | 100% |\n| 5 | 6,215 | 384 | 48% |\n+------+-------------------+-------------------+---------------+\n\nAlgorithm 4 leads to a substantial increase in the amount of WAL (due to\nadditional FPIs). However, the impact on P99 latency and TPS is minimal.\nThe relatively low number of page freezes on pgbench_accounts is likely\ndue to the fact that freezing only happens in lazy_scan_prune(), before\nindex vacuuming. pgbench_accounts' updated column was indexed, so many\ntuples could not be removed before freezing, making pages ineligible for\nopportunistic freezing.\n\nThis does not explain why algorithm 5 froze fewer pages of\npgbench_accounts than even master. I plan to investigate this further. I\nalso noticed that the overall write TPS seems low for my system. It is\nlikely due to row-level contention because of the Gaussian access\ndistribution, but it merits further analysis.\n\nAdditionally, I plan to redesign this benchmark. I will run a single\npgbench instance and use pgbench's script weight feature to issue more\nwrite script executions than select-only script executions. 
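For readers comparing the remaining result tables, algorithms 4 and 5 differ only in how their two conditions combine. A simplified sketch (hypothetical names; plain integers stand in for the XLogRecPtr arithmetic the real patches use, and the 33% cutoff is the one described above):\n\n```python\n# Simplified sketch of freeze heuristics 4 and 5 (not the patch itself).\n# Both apply only to pages that would become fully frozen; they differ in\n# whether the two conditions are OR'd (algorithm 4) or AND'd (algorithm 5).\n\nALGO_4 = "or"   # dirty-and-no-new-FPI  OR   page LSN older than cutoff\nALGO_5 = "and"  # dirty-and-no-new-FPI  AND  page LSN older than cutoff\n\ndef lsn_is_old(page_lsn, insert_lsn, last_vacuum_lsn, frac=0.33):\n    """Is the page's LSN older than `frac` of the LSN distance written\n    since the last vacuum of the table?"""\n    return (insert_lsn - page_lsn) > frac * (insert_lsn - last_vacuum_lsn)\n\ndef should_freeze(algo, dirty_no_new_fpi, page_lsn, insert_lsn, last_vacuum_lsn):\n    old = lsn_is_old(page_lsn, insert_lsn, last_vacuum_lsn)\n    if algo == ALGO_4:\n        return dirty_no_new_fpi or old\n    if algo == ALGO_5:\n        return dirty_no_new_fpi and old\n    raise ValueError(f"unknown algorithm: {algo}")\n```\n\nWith the OR connective, algorithm 4 freezes any sufficiently old page regardless of dirty/FPI state, which is consistent with it freezing far more pages in the tables above.\n\n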
Using two\npgbench instances and rate-limiting the select-only script makes it\ndifficult to tell if there is a potential impact to concurrent read\nthroughput.\n\n\nWorkload B:\n\n+------+--------+---------------------+-------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes| other reads/writes| IO time AV worker|\n+------+--------+---------------------+---------------------+----------------+\n| M | 73 | 5,972,992 | 647,342 | 75 |\n| 4 | 74 | 5,976,167 | 640,682 | 66 |\n| 5 | 73 | 5,962,465 | 644,060 | 70 |\n+------+--------+---------------------+---------------------+----------------+\n\n+------+--------+-------------+\n| algo | TPS | P99 latency |\n+------+--------+-------------+\n| M | 21,336 | 1.5 |\n| 4 | 21,458 | 1.5 |\n| 5 | 21,384 | 1.5 |\n+------+--------+-------------+\n\n+------+---------+---------+---------+\n| algo | Acct AVs| B+T AVs | Hist AVs|\n+------+---------+---------+---------+\n| M | 1 | 472 | 22 |\n| 4 | 1 | 470 | 21 |\n| 5 | 1 | 469 | 19 |\n+------+---------+---------+---------+\n\n+------+------------------+--------------------+---------------+\n| algo | Acct Page Freezes| Acct Pages Frozen | Acct % Frozen |\n+------+------------------+--------------------+---------------+\n| M | 0 | 0 | 0% |\n| 4 | 269,850 | 0 | 0% |\n| 5 | 194 | 0 | 0% |\n+------+------------------+-------------------+----------------+\n\n+------+-------------------+---------------------+---------------+\n| algo | B+T Page Freezes | B+T Pages Frozen | B+T % Frozen |\n+------+-------------------+---------------------+---------------+\n| M | 0 | 123 | 34% |\n| 4 | 34,268 | 148 | 44% |\n| 5 | 25 | 114 | 33% |\n+------+-------------------+---------------------+---------------+\n\n+------+-------------------+---------------------+---------------+\n| algo | Hist Page Freezes | Hist Pages Frozen | Hist % Frozen |\n+------+-------------------+---------------------+---------------+\n| M | 0 | 0 | 0% |\n| 4 | 297,083 | 296,688 | 90% |\n| 5 | 85,573 | 85,353 
| 26% |\n+------+-------------------+---------------------+---------------+\n\nAlgorithm 4 is more aggressive and does much more freezing. Since there\nare no indexes, more page freezing can happen for updated tables. These\npages, however, are uniformly updated and do not stay frozen, so the\nfreezing is pointless. This freezing doesn't seem to substantially\nimpact P99 latency, TPS, or volume of WAL emitted.\n\nWorkload C:\n\n+------+--------+---------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes| other reads/writes | IO time AV worker|\n+------+--------+---------------------+---------------------+------------------+\n| M | 16 | 910,583 | 61,124 | 75 |\n| 4 | 17 | 896,213 | 61,124 | 75 |\n| 5 | 15 | 900,068 | 61,124 | 76 |\n+------+--------+---------------------+---------------------+------------------+\n\n+------+-------+--------------+\n| algo | TPS | P99 latency |\n+------+-------+--------------+\n| M | 10,653| 2 |\n| 4 | 10,620| 2 |\n| 5 | 10,591| 2 |\n+------+-------+--------------+\n\n+------+-----+--------------+-------------+----------+\n| algo | AVs | Page Freezes | Pages Frozen | % Frozen|\n+------+-----+--------------+-------------+----------+\n| M | 4 | 130,600 | 0 | 0% |\n| 4 | 4 | 364,781 | 197,135 | 60% |\n| 5 | 4 | 0 | 0 | 0% |\n+------+-----+--------------+-------------+----------+\n\nIt is notable that algorithm 5 again did not freeze any pages.\n\n\nWorkload D:\n\n+------+--------+---------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes| other reads/writes | IO time AV worker|\n+------+--------+---------------------+---------------------+------------------+\n| M | 12 | 50,745 | 481 | 1.02 |\n| 4 | 12 | 49,590 | 481 | 0.99 |\n| 5 | 12 | 51,044 | 481 | 0.98 |\n+------+--------+---------------------+---------------------+------------------+\n\n+------+----------+------------+-------------------+--------------------+\n| algo | TPS write | TPS delete| P99 
latency write | P99 latency delete |\n+------+----------+------------+--------------------+--------------------+\n| M | 11,034 | 10 | 1.7 | 107 |\n| 4 | 10,992 | 10 | 1.7 | 93 |\n| 5 | 11,014 | 10 | 1.7 | 95 |\n+------+----------+------------+--------------------+--------------------+\n\n+------+-----+--------------+--------------+----------+\n| algo | AVs | Page Freezes | Pages Frozen | % Frozen |\n+------+-----+--------------+--------------+----------+\n| M | 230 | 2,066 | 332 | 6% |\n| 4 | 233 | 631,220 | 1,404 | 25% |\n| 5 | 231 | 456,066 | 2,352 | 44% |\n+------+-----+--------------+--------------+----------+\n\nThe P99 delete latency is higher for master. However, master's write TPS\nis also higher, so the lower P99 delete latency with algorithms 4 and 5\nmay be unrelated to freezing. Notably, algorithm 4 froze least\neffectively -- it did the most page freezes per page still frozen at the\nend of the run.\n\n\nWorkload E:\n\n+------+--------+---------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes | other reads/writes | IO time AV worker|\n+------+--------+---------------------+---------------------+------------------+\n| M | 15 | 738,854 | 482 | 1.0 |\n| 4 | 15 | 745,182 | 482 | 0.8 |\n| 5 | 15 | 733,916 | 482 | 0.9 |\n+------+--------+---------------------+---------------------+------------------+\n\n+------+---------+-----------+--------------------+------------------+\n| algo | TPS read | TPS write| P99 latency read ms| P99 latency write|\n+------+---------+-----------+--------------------+------------------+\n| M | 0.2 | 20,171 | 1,655 | 2 |\n| 4 | 0.2 | 20,230 | 1,611 | 2 |\n| 5 | 0.2 | 20,165 | 1,579 | 2 |\n+------+---------+-----------+--------------------+------------------+\n\n+------+-----+--------------+--------------+----------+\n| algo | AVs | Page Freezes | Pages Frozen | % Frozen |\n+------+-----+--------------+--------------+----------+\n| M | 23 | 2,396 | 0 | 0% |\n| 4 | 23 | 231,816 | 121,373 | 71% |\n| 5 | 23 | 68,302 | 36,294 | 21% |\n+------+-----+--------------+--------------+----------+\n\nThe additional page freezes done by algorithm 4 seem somewhat effective\nand do not appear to have an impact on throughput or latency, or a large\nimpact on WAL volume.\n\n\nWorkload F:\n\n+------+--------+---------------------+--------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes| other reads/writes | IO time AV worker|\n+------+--------+---------------------+--------------------+------------------+\n| M | 173 | 1,202,231 | 53,957,448 | 12,389 |\n| 4 | 189 | 1,212,521 | 55,589,140 | 13,084 |\n| 5 | 173 | 1,194,242 | 54,260,118 | 13,407 |\n+------+--------+---------------------+--------------------+------------------+\n\n+------+--------------+\n| algo | P99 latency |\n+------+--------------+\n| M | 19875 |\n| 4 | 19314 |\n| 5 | 19701 |\n+------+--------------+\n\n+------+-----+--------------+--------------+----------+\n| algo | AVs | Page Freezes | Pages Frozen | % Frozen |\n+------+-----+--------------+--------------+----------+\n| M | 3 | 0 | 0 | 0% |\n| 4 | 3 | 2,598,727 | 2,598,725 | 24% |\n| 5 | 3 | 475,063 | 475,064 | 4% |\n+------+-----+--------------+--------------+----------+\n\nAlgorithm 4 produced a notably higher volume of WAL but also managed to\nfreeze a decent portion of the table. 
P99 latency is actually slightly lower with algorithm 4 than on master\n(19,314 vs 19,875 ms), so the extra freezing does not appear to have\nmade individual COPYs take longer.\n\n\nWorkload H:\n\n+------+--------+----------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes | other reads/writes | IO time AV worker|\n+------+--------+----------------------+---------------------+------------------+\n| M | 12 | 922,121 | 318 | 1.03 |\n| 4 | 16 | 921,543 | 318 | 0.90 |\n| 5 | 12 | 921,309 | 318 | 0.74 |\n+------+--------+----------------------+---------------------+------------------+\n\n+------+--------+--------------+\n| algo | TPS | P99 latency |\n+------+--------+--------------+\n| M | 21,926 | 0.98 |\n| 4 | 22,028 | 0.97 |\n| 5 | 21,919 | 0.99 |\n+------+--------+--------------+\n\n+------+-----+--------------+--------------+----------+\n| algo | AVs | Page Freezes | Pages Frozen | % Frozen |\n+------+-----+--------------+--------------+----------+\n| M | 6 | 0 | 0 | 0% |\n| 4 | 6 | 560,317 | 560,198 | 94% |\n| 5 | 6 | 11,921 | 11,921 | 2% |\n+------+-----+--------------+--------------+----------+\n\nAlgorithm 4 produced a much higher volume of WAL and froze much more of\nthe table. 
This did not negatively impact throughput or latency.\nAlgorithm 5 froze dramatically fewer pages than algorithm 4.\n\n\nWorkload I:\n\n+------+--------+----------------------+---------------------+------------------+\n| algo | WAL GB | cptr bgwriter writes | other reads/writes | IO time AV worker|\n+------+--------+----------------------+---------------------+------------------+\n| M | 32 | 123,947 | 32,733,204 | 16,803 |\n| 4 | 59 | 124,233 | 32,734,188 | 17,151 |\n| 5 | 32 | 124,324 | 32,733,166 | 17,122 |\n+------+--------+----------------------+---------------------+------------------+\n\n+------+-----------+---------+------------------+------------------+\n| algo | TPS queue | TPS copy| P99 latency queue| P99 latency copy |\n+------+-----------+---------+------------------+------------------+\n| M | 5,407 | 0.02 | 4 | 8,386 |\n| 4 | 5,277 | 0.02 | 5 | 8,520 |\n| 5 | 5,401 | 0.02 | 4 | 8,399 |\n+------+-----------+---------+------------------+------------------+\n\n+------+-----------+----------+\n| algo | Queue AVs | Copy AVs |\n+------+-----------+----------+\n| M | 236 | 5 |\n| 4 | 236 | 5 |\n| 5 | 235 | 5 |\n+------+-----------+----------+\n\n+------+-------------------+--------------------+---------------+\n| algo | Queue Page Freezes| Queue Pages Frozen | Queue % Frozen|\n+------+-------------------+--------------------+---------------+\n| M | 1 | 1 | 100% |\n| 4 | 1 | 1 | 100% |\n| 5 | 1 | 1 | 100% |\n+------+-------------------+--------------------+---------------+\n\n+------+-------------------+-------------------+---------------+\n| algo | Copy Page Freezes | Copy Pages Frozen | Copy % Frozen |\n+------+-------------------+-------------------+---------------+\n| M | 0 | 0 | 0% |\n| 4 | 3,821,658 | 3,821,655 | 64% |\n| 5 | 431,044 | 431,043 | 7% |\n+------+-------------------+-------------------+---------------+\n\nThis is a case in which algorithm 4 seemed to have a measurable negative\nperformance impact. 
It emitted twice the WAL and had substantially worse\nP99 latency and throughput.\n\n-------\n\nMy takeaways from all of the workload results are as follows:\n\nAlgorithm 4 is too aggressive and regresses performance compared to\nmaster.\n\nAlgorithm 5 freezes surprisingly few pages, especially in workloads\nwhere only the most recent data is being accessed or modified\n(append-only, update recent data).\n\nA work queue-like workload with other concurrent workloads is a\nparticular challenge for the freeze heuristic, and we should think more\nabout how to handle this.\n\nWe should consider freezing again after index vacuuming and unused tuple\nreaping.\n\nIn many cases a moderate amount of additional freezing (like algorithm\n5) does not impact P99 latency and throughput in a meaningful way.\n\nNext steps:\nI plan to make some changes to the benchmarks and rerun them with the\nnew algorithms suggested on the thread.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/current/pgbench.html\n\n\n", "msg_date": "Sat, 23 Sep 2023 15:53:33 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Sat, Sep 23, 2023 at 3:53 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Workload F:\n>\n> +------+--------+---------------------+--------------------+------------------+\n> | algo | WAL GB | cptr bgwriter writes| other reads/writes | IO time AV worker|\n> +------+--------+---------------------+---------------------+-----------------+\n> | M | 173 | 1,202,231 | 53,957,448 | 12,389 |\n> | 4 | 189 | 1,212,521 | 55,589,140 | 13,084 |\n> | 5 | 173 | 1,194,242 | 54,260,118 | 13,407 |\n> +------+--------+---------------------+--------------------+------------------+\n>\n> +------+--------------+\n> | algo | P99 latency |\n> +------+--------------+\n> | M | 19875 |\n> | 4 | 19314 |\n> | 5 | 19701 |\n> +------+--------------+\n\nAndres mentioned that the 
P99 latency for the COPY workload (workload F)\nmight not be meaningful, so I have calculated the duration total, mean,\nmedian, min, max and standard deviation in milliseconds.\n\nWorkload F:\n+------+------------+-------+--------+--------+--------+---------+\n| algo | Total | Mean| Median | Min | Max | Stddev |\n+------+------------+-------+--------+--------+--------+---------+\n| M | 1,270,903 | 18,155| 17,755 | 17,090 | 19,994 | 869 |\n| 4 | 1,167,135 | 16,673| 16,421 | 15,585 | 19,485 | 811 |\n| 5 | 1,250,145 | 17,859| 17,704 | 15,763 | 19,871 | 1,009 |\n+------+------------+-------+--------+--------+--------+---------+\n\nInterestingly, algorithm 4 had the lowest total duration for all COPYs.\nSome investigation of other data collected during the runs led us to\nbelieve this may be due to autovacuum workers doing more IO with\nalgorithm 4 and thus generating more WAL and ending up initializing more\nWAL files themselves. Whereas on master and with algorithm 5, client\nbackends had to initialize WAL files themselves, leading COPYs to take\nlonger. This was supported by the presence of more WALInit wait events\nfor client backends on master and with algorithm 5.\n\nCalculating these made me realize that my conclusions about the work\nqueue workload (workload I) didn't make much sense. Because this\nworkload updated a non-indexed column, most pruning was HOT pruning done\non access and basically no page freezing was done by vacuum. This means\nwe weren't seeing negative performance effects of freezing related to\nthe work queue table.\n\nThe difference in this benchmark came from the relatively poor\nperformance of the concurrent COPYs when that table was frozen more\naggressively. 
I plan to run a new version of this workload which updates\nan indexed column for comparison and does not use a concurrent COPY.\n\nThis is the duration total, mean, median, min, max, and standard\ndeviation in milliseconds of the COPYs which ran concurrently with the\nwork queue pgbench.\n\nWorkload I COPYs:\n+------+--------+-------+--------+--------+--------+---------+\n| algo | Total | Mean | Median | Min | Max | Stddev |\n+------+--------+-------+--------+--------+--------+---------+\n| M | 191,032| 4,898| 4,726 | 4,486 | 9,353 | 800 |\n| 4 | 193,534| 4,962| 4,793 | 4,533 | 9,381 | 812 |\n| 5 | 194,351| 4,983| 4,771 | 4,617 | 9,159 | 783 |\n+------+--------+-------+--------+--------+--------+---------+\n\nI think this shows that algorithm 4 COPYs performed the worst. This is\nin contrast to the COPY-only workload (F) which did not show worse\nperformance for algorithm 4. I think this means I should modify the work\nqueue example and use something other than concurrent COPYs to avoid\nobscuring characteristics of the work queue example.\n\n- Melanie\n\n\n", "msg_date": "Sun, 24 Sep 2023 12:45:21 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 8, 2023 at 12:07 AM Andres Freund <andres@anarazel.de> wrote:\n> > Downthread, I proposed using the RedoRecPtr of the latest checkpoint\n> > rather than the LSN of the previou vacuum. I still like that idea.\n>\n> Assuming that \"downthread\" references\n> https://postgr.es/m/CA%2BTgmoYb670VcDFbekjn2YQOKF9a7e-kBFoj2WJF1HtH7YPaWQ%40mail.gmail.com\n> could you sketch out the logic you're imagining a bit more?\n\nI'm not exactly sure what the question is here. 
I mean, it doesn't\nquite make sense to just ask whether the page LSN is newer than the\nlast checkpoint's REDO record, because I think that's basically just\nasking whether or not we would need an FPI, and the freeze criteria\nthat Melanie has been considering incorporate that check in some other\nway already. But maybe some variant on that idea is useful - the\ndistance to the second-most-recent checkpoint, or a multiple or\npercentage of the distance to the most-recent checkpoint, or whatever.\n\n> The reason I was thinking of using the \"lsn at the end of the last vacuum\", is\n> that it seems to be more adaptive to the frequency of vacuuming.\n\nYes, but I think it's *too* adaptive. The frequency of vacuuming can\nplausibly be multiple times per minute or not even annually. That's\ntoo big a range of variation. The threshold for freezing can vary by\nhow actively the table or the page is updated, but I don't think it\nshould vary by six orders of magnitude. Under what theory does it make\nsense to say \"this row in table A hasn't been modified in 20\nseconds, so let's freeze it, but this row in table B hasn't been\nmodified in 8 months, so let's not freeze it because it might be\nmodified again soon\"? If you use the LSN at the end of the last\nvacuum, you're going to end up making decisions exactly like that,\nwhich seems wrong to me.\n\n> Perhaps we can mix both approaches. We can use the LSN and time of the last\n> vacuum to establish an LSN->time mapping that's reasonably accurate for a\n> relation. For infrequently vacuumed tables we can use the time between\n> checkpoints to establish a *more aggressive* cutoff for freezing than what a\n> percent-of-time-since-last-vacuum approach would provide. If e.g. a table gets\n> vacuumed every 100 hours and checkpoint timeout is 1 hour, no realistic\n> percent-of-time-since-last-vacuum setting will allow freezing, as all dirty\n> pages will be too new. 
To allow freezing a decent proportion of those, we\n> could allow freezing pages that lived longer than ~20%\n> time-between-recent-checkpoints.\n\nYeah, I don't know if that's exactly the right idea, but I think it's\nin the direction that I was thinking about. I'd even be happy with\n100% of the time-between-recent checkpoints, maybe even 200% of\ntime-between-recent checkpoints. But I think there probably should be\nsome threshold beyond which we say \"look, this doesn't look like it\ngets touched that much, let's just freeze it so we don't have to come\nback to it again later.\"\n\nI think part of the calculus here should probably be that when the\nfreeze threshold is long, the potential gains from making it even\nlonger are not that much. If I change the freeze threshold on a table\nfrom 1 minute to 1 hour, I can potentially save uselessly freezing\nthat page 59 times per hour, every hour, forever, if the page always\ngets modified right after I touch it. If I change the freeze threshold\non a table from 1 hour to 1 day, I can only save 23 unnecessary\nfreezes per day. Percentage-wise, the overhead of being wrong is the\nsame in both cases: I can have as many extra freeze operations as I\nhave page modifications, if I pick the worst possible times to freeze\nin every case. But in absolute terms, the savings in the second\nscenario are a lot less. I think if a user is accessing a table\nfrequently, the overhead of jamming a useless freeze in between every\ntable access is going to be a lot more noticeable than when the table\nis only accessed every once in a while. And I also think it's a lot\nless likely that we'll reliably get it wrong. Workloads that touch a\npage and then touch it again ~N seconds later can exist for all values\nof N, but I bet they're way more common for small values of N than\nlarge ones.\n\nIs there also a need for a similar guard in the other direction? 
Let's\nsay that autovacuum_naptime=15s and on some particular table it\ntriggers every time. I've actually seen this on small queue tables. Do\nyou think that, in such tables, we should freeze pages that haven't\nbeen modified in 15s?\n\n> Hm, possibly stupid idea: What about using shared_buffers residency as a\n> factor? If vacuum had to read in a page to vacuum it, a) we would need read IO\n> to freeze it later, as we'll soon evict the page via the ringbuffer b)\n> non-residency indicates the page isn't constantly being modified?\n\nThis doesn't seem completely stupid, but I fear it would behave\ndramatically differently on a workload a little smaller than s_b vs.\none a little larger than s_b, and that doesn't seem good.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Sep 2023 14:45:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Sat, Sep 23, 2023 at 3:53 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Freeze tuples on a page opportunistically if the page would be totally\n> frozen and:\n>\n> 4. Buffer is already dirty and no FPI is required OR page LSN is older\n> than 33% of the LSNs since the last vacuum of the table.\n>\n> 5. 
Buffer is already dirty and no FPI is required AND page LSN is older\n> than 33% of the LSNs since the last vacuum of the table.\n>\n> On master, the heuristic is to freeze a page opportunistically if it\n> would be totally frozen and if pruning emitted an FPI.\n> -------\n>\n> My takeaways from all of the workload results are as follows:\n>\n> Algorithm 4 is too aggressive and regresses performance compared to\n> master.\n>\n> Algorithm 5 freezes surprisingly few pages, especially in workloads\n> where only the most recent data is being accessed or modified\n> (append-only, update recent data).\n>\n> A work queue-like workload with other concurrent workloads is a\n> particular challenge for the freeze heuristic, and we should think more\n> about how to handle this.\n\nI feel like we have a sort of Goldilocks problem here: the porridge is\neither too hot or too cold, never just right. Say we just look at\nworkload B, looking at the difference between pgbench_accounts (which\nis randomly and frequently updated and thus shouldn't be\nopportunistically frozen) and pgbench_history (which is append-only\nand should thus be frozen aggressively). Algorithm 4 gets\npgbench_history right and pgbench_accounts wrong, and master does the\nopposite. In a perfect world, we'd have an algorithm which could\ndistinguish sharply between those cases, ramping up to maximum\naggressiveness on pgbench_history while doing nothing at all to\npgbench_accounts.\n\nAlgorithm 5 partially accomplishes this, but the results aren't\nsuper-inspiring either. It doesn't add many page freezes in the case\nwhere freezing is bad, but it also only manages to freeze a quarter of\npgbench_history, where algorithm 4 manages to freeze basically all of\nit. That's a pretty stark difference. 
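For reference, the difference between criteria 4 and 5 quoted above is just an OR versus an AND over the same two conditions. A standalone sketch of the two predicates (the names, the stand-in types, and the reading of the 33% cutoff here are illustrative assumptions, not code from the actual patches):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for PostgreSQL's WAL position type; purely illustrative. */
typedef uint64_t XLogRecPtr;

/*
 * One plausible reading of the page LSN being older than 33% of the LSNs
 * since the last vacuum: the page was last modified before the most recent
 * third of the WAL generated since the table's previous vacuum.
 */
static bool
lsn_is_old_enough(XLogRecPtr page_lsn, XLogRecPtr current_lsn,
                  XLogRecPtr last_vacuum_lsn)
{
    XLogRecPtr cutoff = current_lsn - (current_lsn - last_vacuum_lsn) / 3;

    return page_lsn < cutoff;
}

/* Algorithm 4: freeze if freezing is cheap OR the page looks cold. */
static bool
algo4_should_freeze(bool buffer_dirty, bool fpi_required,
                    XLogRecPtr page_lsn, XLogRecPtr current_lsn,
                    XLogRecPtr last_vacuum_lsn)
{
    return (buffer_dirty && !fpi_required) ||
        lsn_is_old_enough(page_lsn, current_lsn, last_vacuum_lsn);
}

/* Algorithm 5: freeze only if freezing is cheap AND the page looks cold. */
static bool
algo5_should_freeze(bool buffer_dirty, bool fpi_required,
                    XLogRecPtr page_lsn, XLogRecPtr current_lsn,
                    XLogRecPtr last_vacuum_lsn)
{
    return (buffer_dirty && !fpi_required) &&
        lsn_is_old_enough(page_lsn, current_lsn, last_vacuum_lsn);
}
```

Spelled out this way, it is perhaps less surprising that algorithm 5 sometimes freezes nothing at all: it only fires when a page is simultaneously dirty, FPI-free, and old, and in some workloads those conditions rarely coincide.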
Given that algorithm 5 seems to\nmake some mistakes on some of the other workloads, I don't think it's\nentirely clear that it's an improvement over master, at least in\npractical terms.\n\nIt might be worth thinking harder about what it takes specifically to\nget the pgbench_history case, aka the append-only table case, correct.\nOne thing that probably doesn't work very well is to freeze pages that\nare more than X minutes old. Algorithm 5 uses an LSN threshold instead\nof a wall-clock based threshold, but the effect is the same. I think\nthe problem here is that the vacuum operation essentially happens in\nan instant. At the instant that it happens, some fraction of the data\nadded since the last vacuum is older than whatever threshold you pick,\nand the rest is newer. If data is added at a constant rate and you\nwant to freeze at least 90% of the data, your recency threshold has to\nbe no more than 10% of the time since the last vacuum. But with\nautovacuum_naptime=60s, that's like 6 seconds, and that's way too\naggressive for a table like pgbench_accounts. It seems to me that it's\nnot possible to get both cases right by twiddling the threshold,\nbecause pgbench_history wants the threshold to be 0, and\npgbench_accounts wants it to be ... perhaps not infinity, because\nmaybe the distribution is Gaussian or Zipfian or something rather than\nuniform, but probably a couple of minutes.\n\nSo I feel like if we want to get both pgbench_history and\npgbench_accounts right, we need to consider some additional piece of\ninformation that makes those cases distinguishable. Either of those\ntables can contain a page that hasn't been accessed in 20 seconds, but\nthe correct behavior for such a page differs between one case and the\nother. One random idea that I had was to refuse to opportunistically\nfreeze a page more than once while it remains resident in\nshared_buffers. 
The first time we do it, we set a bit in the buffer\nheader or something that suppresses further opportunistic freezing.\nWhen the buffer is evicted the bit is cleared. So we can still be\nwrong on a heavily updated table like pgbench_accounts, but if the\ntable fits in shared_buffers, we'll soon realize that we're getting it\nwrong a lot and will stop making the same mistake over and over. But\nthis kind of idea only works if the working set is small enough to fit\nin shared_buffers, so I don't think it's actually a great plan, unless\nwe only care about suppressing excess freezing on workloads that fit\nin shared_buffers.\n\nA variant on the same theme could be to keep some table-level counters\nand use them to assess how often we're getting it wrong. If we're\noften thawing recently-frozen pages, don't freeze so aggressively. But\nthis will not work if different parts of the same table behave\ndifferently.\n\nIf we don't want to do something like this that somehow responds to\nthe characteristics of a particular page or table, then it seems we\neither have to freeze quite aggressively to pick up insert-only cases\nand accept that this will lead to some wasted effort in heavy-update\ncases, or else freeze less aggressively and accept that we're not\ngoing to freeze insert-only pages consistently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 25 Sep 2023 16:19:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Sep 25, 2023 at 11:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The reason I was thinking of using the \"lsn at the end of the last vacuum\", is\n> > that it seems to be more adaptive to the frequency of vacuuming.\n>\n> Yes, but I think it's *too* adaptive. The frequency of vacuuming can\n> plausibly be multiple times per minute or not even annually. 
That's\n> too big a range of variation.\n\n+1. The risk of VACUUM chasing its own tail seems very real. We want\nVACUUM to be adaptive to the workload, not adaptive to itself.\n\n> Yeah, I don't know if that's exactly the right idea, but I think it's\n> in the direction that I was thinking about. I'd even be happy with\n> 100% of the time-between-recent checkpoints, maybe even 200% of\n> time-between-recent checkpoints. But I think there probably should be\n> some threshold beyond which we say \"look, this doesn't look like it\n> gets touched that much, let's just freeze it so we don't have to come\n> back to it again later.\"\n\nThe sole justification for any strategy that freezes lazily is that it\ncan avoid useless freezing when freezing turns out to be unnecessary\n-- that's it. So I find it more natural to think of freezing as the\ndefault action, and *not freezing* as the thing that requires\njustification. Thinking about it \"backwards\" like that just seems\nsimpler to me. There is only one possible reason to not freeze, but\nseveral reasons to freeze.\n\n> I think part of the calculus here should probably be that when the\n> freeze threshold is long, the potential gains from making it even\n> longer are not that much. If I change the freeze threshold on a table\n> from 1 minute to 1 hour, I can potentially save uselessly freezing\n> that page 59 times per hour, every hour, forever, if the page always\n> gets modified right after I touch it. If I change the freeze threshold\n> on a table from 1 hour to 1 day, I can only save 23 unnecessary\n> freezes per day.\n\nI totally agree with you on this point. 
It seems related to my point\nabout \"freezing being the conceptual default action\" in VACUUM.\n\nGenerally speaking, over-freezing is a problem when we reach the same\nwrong conclusion (freeze the page) about the same relatively few pages\nover and over -- senselessly repeating those mistakes really adds up\nwhen you're vacuuming the same table very frequently. On the other\nhand, under-freezing is typically a problem when we reach the same\nwrong conclusion (don't freeze the page) about lots of pages only once\nin a very long while. I strongly suspect that there is very little\ngray area between the two, across the full spectrum of application\ncharacteristics.\n\nMost individual pages have very little chance of being modified in the\nshort to medium term. In a perfect world, with a perfect algorithm,\nwe'd almost certainly be freezing most pages at the earliest\nopportunity. It is nevertheless also true that a freezing policy that\nis only somewhat more aggressive than this ideal oracle algorithm will\nfreeze way too aggressively (by at least some measures). There isn't\nmuch of a paradox to resolve here: it's all down to the cadence of\nvacuuming, and of rows subject to constant churn.\n\nAs you point out, the \"same policy\" can produce dramatically different\noutcomes when you actually consider what the consequences of the\npolicy are over time, when applied by VACUUM under a variety of\ndifferent workload conditions. So any freezing policy must be designed\nwith due consideration for those sorts of things. If VACUUM doesn't\nfreeze the page now, then when will it freeze it? For most individual\npages, that time will come (again, pages that benefit from lazy\nvacuuming are the exception rather than the rule). Right now, VACUUM\nalmost behaves as if it thought \"that's not my problem, it's a problem\nfor future me!\".\n\nTrying to differentiate between pages that we must not over freeze and\npages that we must not under freeze seems important. 
Generational\ngarbage collection (as used by managed VM runtimes) does something\nthat seems a little like this. It's based on the empirical observation\nthat \"most objects die young\". What the precise definition of \"young\"\nreally is varies significantly, but that turns out to be less of a\nproblem than you might think -- it can be derived through feedback\ncycles. If you look at memory lifetimes on a logarithmic scale, very\ndifferent sorts of applications tend to look like they have remarkably\nsimilar memory allocation characteristics.\n\n> Percentage-wise, the overhead of being wrong is the\n> same in both cases: I can have as many extra freeze operations as I\n> have page modifications, if I pick the worst possible times to freeze\n> in every case. But in absolute terms, the savings in the second\n> scenario are a lot less.\n\nVery true.\n\nI'm surprised that there hasn't been any discussion of the absolute\namount of system-wide freeze debt on this thread. If 90% of the pages\nin the entire database are frozen, it'll generally be okay if we make\nthe wrong call by freezing lazily when we shouldn't. This is doubly\ntrue within small to medium sized tables, where the cost of catching\nup on freezing cannot ever be too bad (concentrations of unfrozen\npages in one big table are what really hurt users).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 25 Sep 2023 14:16:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-25 14:45:07 -0400, Robert Haas wrote:\n> On Fri, Sep 8, 2023 at 12:07 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Downthread, I proposed using the RedoRecPtr of the latest checkpoint\n> > > rather than the LSN of the previou vacuum. 
I still like that idea.\n> >\n> > Assuming that \"downthread\" references\n> > https://postgr.es/m/CA%2BTgmoYb670VcDFbekjn2YQOKF9a7e-kBFoj2WJF1HtH7YPaWQ%40mail.gmail.com\n> > could you sketch out the logic you're imagining a bit more?\n> \n> I'm not exactly sure what the question is here.\n\nThat I'd like you to expand on \"using the RedoRecPtr of the latest checkpoint\nrather than the LSN of the previous vacuum.\" - I can think of ways of doing so\nthat could end up with quite different behaviour...\n\n\n> > Perhaps we can mix both approaches. We can use the LSN and time of the last\n> > vacuum to establish an LSN->time mapping that's reasonably accurate for a\n> > relation. For infrequently vacuumed tables we can use the time between\n> > checkpoints to establish a *more aggressive* cutoff for freezing than what a\n> > percent-of-time-since-last-vacuum approach would provide. If e.g. a table gets\n> > vacuumed every 100 hours and checkpoint timeout is 1 hour, no realistic\n> > percent-of-time-since-last-vacuum setting will allow freezing, as all dirty\n> > pages will be too new. 
But I think there probably should be\n> some threshold beyond which we say \"look, this doesn't look like it\n> gets touched that much, let's just freeze it so we don't have to come\n> back to it again later.\"\n\nAs long as the most extreme cases are prevented, unnecessarily freezing is imo\nfar less harmful than freezing too little.\n\nI'm worried that using something as long as 100-200%\ntime-between-recent-checkpoints won't handle insert-mostly workload well,\nwhich IME are also workloads suffering quite badly under our current scheme -\nand which are quite common.\n\n\n> I think part of the calculus here should probably be that when the\n> freeze threshold is long, the potential gains from making it even\n> longer are not that much. If I change the freeze threshold on a table\n> from 1 minute to 1 hour, I can potentially save uselessly freezing\n> that page 59 times per hour, every hour, forever, if the page always\n> gets modified right after I touch it. If I change the freeze threshold\n> on a table from 1 hour to 1 day, I can only save 23 unnecessary\n> freezes per day. Percentage-wise, the overhead of being wrong is the\n> same in both cases: I can have as many extra freeze operations as I\n> have page modifications, if I pick the worst possible times to freeze\n> in every case. But in absolute terms, the savings in the second\n> scenario are a lot less. I think if a user is accessing a table\n> frequently, the overhead of jamming a useless freeze in between every\n> table access is going to be a lot more noticeable then when the table\n> is only accessed every once in a while. And I also think it's a lot\n> less likely that we'll reliably get it wrong. Workloads that touch a\n> page and then touch it again ~N seconds later can exist for all values\n> of N, but I bet they're way more common for small values of N than\n> large ones.\n\nTrue. And with larger Ns it also becomes more likely that we'd need to freeze\nthe rows anyway. 
I've seen tables being hit with several anti-wraparound vacuums\na day, but not several anti-wraparound vacuums a minute...\n\n\n> Is there also a need for a similar guard in the other direction? Let's\n> say that autovacuum_naptime=15s and on some particular table it\n> triggers every time. I've actually seen this on small queue tables. Do\n> you think that, in such tables, we should freeze pages that haven't\n> been modified in 15s?\n\nI don't think it matters much: proportionally to the workload of rewriting\nnearly all of the table every few seconds, the overhead of freezing a bunch of\nalready-dirty pages is negligible.\n\n\n> > Hm, possibly stupid idea: What about using shared_buffers residency as a\n> > factor? If vacuum had to read in a page to vacuum it, a) we would need read IO\n> > to freeze it later, as we'll soon evict the page via the ringbuffer b)\n> > non-residency indicates the page isn't constantly being modified?\n> \n> This doesn't seem completely stupid, but I fear it would behave\n> dramatically differently on a workload a little smaller than s_b vs.\n> one a little larger than s_b, and that doesn't seem good.\n\nHm. I'm not sure that that's a real problem. In the case of a workload bigger\nthan s_b, having to actually read the page again increases the cost of\nfreezing later, even if the workload is just a bit bigger than s_b.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Sep 2023 08:11:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-25 14:16:46 -0700, Peter Geoghegan wrote:\n> On Mon, Sep 25, 2023 at 11:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm surprised that there hasn't been any discussion of the absolute\n> amount of system-wide freeze debt on this thread. 
If 90% of the pages\n> in the entire database are frozen, it'll generally be okay if we make\n> the wrong call by freezing lazily when we shouldn't. This is doubly\n> true within small to medium sized tables, where the cost of catching\n> up on freezing cannot ever be too bad (concentrations of unfrozen\n> pages in one big table are what really hurt users).\n\nI generally agree with the idea of using \"freeze debt\" as an input - IIRC I\nproposed using the number of frozen pages vs number of pages as the input to\nthe heuristic in one of the other threads about the topic a while back. I also\nthink we should read a portion of all-visible-but-not-frozen pages during\nnon-aggressive vacuums to manage the freeze debt.\n\nHowever, I'm not at all convinced doing this on a system wide level is a good\nidea. Databases do often contain multiple types of workloads at the same\ntime. E.g., we want to freeze aggressively in a database that has the bulk of\nits size in archival partitions but has lots of unfrozen data in an active\npartition. And databases often have loads of data that's going to change\nfrequently / isn't long lived, and we don't want to super aggressively freeze\nthat, just because it's a large portion of the data.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Sep 2023 08:19:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Tue, Sep 26, 2023 at 8:19 AM Andres Freund <andres@anarazel.de> wrote:\n> However, I'm not at all convinced doing this on a system wide level is a good\n> idea. Databases do often contain multiple types of workloads at the same\n> time. E.g., we want to freeze aggressively in a database that has the bulk of\n> its size in archival partitions but has lots of unfrozen data in an active\n> partition. 
And databases often have loads of data that's going to change\n> frequently / isn't long lived, and we don't want to super aggressively freeze\n> that, just because it's a large portion of the data.\n\nI didn't say that we should always have most of the data in the\ndatabase frozen, though. Just that we can reasonably be more lazy\nabout freezing the remainder of pages if we observe that most pages\nare already frozen. How they got that way is another discussion.\n\nI also think that the absolute amount of debt (measured in physical\nunits such as unfrozen pages) should be kept under control. But that\nisn't something that can ever be expected to work on the basis of a\nsimple threshold -- if only because autovacuum scheduling just doesn't\nwork that way, and can't really be adapted to work that way.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 26 Sep 2023 09:07:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Tue, Sep 26, 2023 at 11:11 AM Andres Freund <andres@anarazel.de> wrote:\n> That I'd like you to expand on \"using the RedoRecPtr of the latest checkpoint\n> rather than the LSN of the previous vacuum.\" - I can think of ways of doing so\n> that could end up with quite different behaviour...\n\nYeah, me too. I'm not sure what is best.\n\n> As long as the most extreme cases are prevented, unnecessary freezing is imo\n> far less harmful than freezing too little.\n>\n> I'm worried that using something as long as 100-200%\n> time-between-recent-checkpoints won't handle insert-mostly workloads well,\n> which IME are also workloads suffering quite badly under our current scheme -\n> and which are quite common.\n\nI wrote about this problem in my other reply and I'm curious as to\nyour thoughts about it. 
Basically, suppose we forget about all of\nMelanie's tests except for three cases: (1) an insert-only table, (2)\nan update-heavy workload with uniform distribution, and (3) an\nupdate-heavy workload with skew. In case (1), freezing is good. In\ncase (2), freezing is bad. In case (3), freezing is good for cooler\npages and bad for hotter ones. I postulate that any\nrecency-of-modification threshold that handles (1) well will handle\n(2) poorly, and that the only way to get both right is to take some\nother factor into account. You seem to be arguing that we can just\nfreeze aggressively in case (2) and it won't cost much, but it doesn't\nsound to me like Melanie believes that and I don't think I do either.\n\n> > This doesn't seem completely stupid, but I fear it would behave\n> > dramatically differently on a workload a little smaller than s_b vs.\n> > one a little larger than s_b, and that doesn't seem good.\n>\n> Hm. I'm not sure that that's a real problem. In the case of a workload bigger\n> than s_b, having to actually read the page again increases the cost of\n> freezing later, even if the workload is just a bit bigger than s_b.\n\nThat is true, but I don't think it means that there is no problem. It\ncould lead to a situation where, for a while, a table never needs any\nsignificant freezing, because we always freeze aggressively. When it\ngrows large enough, we suddenly stop freezing it aggressively, and now\nit starts experiencing vacuums that do a whole bunch of work all at\nonce. 
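(To put toy numbers on that lumpiness - an idealized sketch with invented figures, not a benchmark: compare steadily freezing with some wasted work against deferring everything to one eventual vacuum.)

```python
# 1,000 pages become freezable per vacuum cycle, over 100 cycles.
CYCLES, PAGES_PER_CYCLE = 100, 1_000
WASTE = PAGES_PER_CYCLE // 10   # assume eager freezing wastes ~10% per cycle

# Eager: steady work every cycle, some of it wasted.
eager = [PAGES_PER_CYCLE + WASTE] * CYCLES
# Lazy: no freezing until one eventual vacuum does all of it at once.
lazy = [0] * (CYCLES - 1) + [PAGES_PER_CYCLE * CYCLES]

print(sum(eager), max(eager))   # total work vs. worst single cycle
print(sum(lazy), max(lazy))
```

The lazy schedule does slightly less total work here, but concentrates all of it in a single huge spike instead of a steady trickle.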
A user who notices that is likely to be pretty confused about\nwhat happened, and maybe not too happy when they find out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 26 Sep 2023 13:49:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\n(I wrote the first part of the email before Robert and I chatted on a call, I\nleft it in the email for posterity)\n\nOn 2023-09-26 13:49:32 -0400, Robert Haas wrote:\n> On Tue, Sep 26, 2023 at 11:11 AM Andres Freund <andres@anarazel.de> wrote:\n> > As long as the most extreme cases are prevented, unnecessary freezing is imo\n> > far less harmful than freezing too little.\n> >\n> > I'm worried that using something as long as 100-200%\n> > time-between-recent-checkpoints won't handle insert-mostly workloads well,\n> > which IME are also workloads suffering quite badly under our current scheme -\n> > and which are quite common.\n>\n> I wrote about this problem in my other reply and I'm curious as to\n> your thoughts about it. Basically, suppose we forget about all of\n> Melanie's tests except for three cases: (1) an insert-only table, (2)\n> an update-heavy workload with uniform distribution, and (3) an\n> update-heavy workload with skew. In case (1), freezing is good. In\n> case (2), freezing is bad. In case (3), freezing is good for cooler\n> pages and bad for hotter ones. I postulate that any\n> recency-of-modification threshold that handles (1) well will handle\n> (2) poorly, and that the only way to get both right is to take some\n> other factor into account. You seem to be arguing that we can just\n> freeze aggressively in case (2) and it won't cost much, but it doesn't\n> sound to me like Melanie believes that and I don't think I do either.\n\nI don't believe we can freeze aggressively in all cases of 2) without causing\nproblems. 
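(A toy way to see the tension between cases (1) and (2), with invented numbers - present the same distribution of page ages to an age-based cutoff for an insert-only table, where every page should be frozen, and for a uniform-update table, where none should be:)

```python
import random

random.seed(42)
# Ages (seconds since last modification) of 10,000 pages at vacuum time.
ages = [random.uniform(0, 60) for _ in range(10_000)]

for cutoff in (1, 15, 60):
    frozen = sum(1 for age in ages if age >= cutoff)
    missed = len(ages) - frozen   # insert-only table: should have frozen these
    wasted = frozen               # uniform-update table: shouldn't have frozen these
    print(f"cutoff={cutoff:>2}s  missed={missed:>5}  wasted={wasted:>5}")
```

missed + wasted always sums to the page count, so any cutoff that improves one workload necessarily hurts the other - page age alone can't tell the two apart.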
A small-ish table that's vacuumed constantly, where all rows are\nconstantly frozen and then updated, will suffer a lot from the WAL\noverhead. Whereas superfluously freezing a row in a table with many millions\nof rows, where each row is only occasionally updated (the update rate\nbeing much smaller than the number of rows), will have acceptable overhead.\n\nWhat I *do* believe is that for all but the most extreme cases, it's safer to\nfreeze too much than to freeze too little. There definitely are negative\nconsequences, but they're more bounded and less surprising than not freezing\nfor ages and then suddenly freezing everything at once.\n\nWhether 2) really exists in the real world for huge tables is of course\nsomewhat debatable...\n\n\n\n> > > This doesn't seem completely stupid, but I fear it would behave\n> > > dramatically differently on a workload a little smaller than s_b vs.\n> > > one a little larger than s_b, and that doesn't seem good.\n> >\n> > Hm. I'm not sure that that's a real problem. In the case of a workload bigger\n> > than s_b, having to actually read the page again increases the cost of\n> > freezing later, even if the workload is just a bit bigger than s_b.\n>\n> That is true, but I don't think it means that there is no problem. It\n> could lead to a situation where, for a while, a table never needs any\n> significant freezing, because we always freeze aggressively.\n\nWhat do you mean with \"always freeze aggressively\" - do you mean 'aggressive'\nautovacuums? Or opportunistic freezing being aggressive? I don't know why the\nformer would be the case?\n\n\n> When it grows large enough, we suddenly stop freezing it aggressively, and\n> now it starts experiencing vacuums that do a whole bunch of work all at\n> once. A user who notices that is likely to be pretty confused about what\n> happened, and maybe not too happy when they find out.\n\nHm - isn't my proposal exactly the other way round? 
I'm proposing that a page\nis frozen more aggressively if not already in shared buffers - which will\nbecome more common once the table has grown \"large enough\"?\n\n(the remainder was written after that call)\n\n\nI think there were three main ideas that we discussed:\n\n1) We don't need to be accurate in the freezing decisions for individual\n pages, we \"just\" need to avoid the situation that over time we commonly\n freeze pages that will be updated again \"soon\".\n2) It might be valuable to adjust the \"should freeze page opportunistically\"\n based on feedback.\n3) We might need to classify the workload for a table and use different\n heuristics for different workloads.\n\n\nFor 2), one of the mechanisms we discussed was to collect information about\nthe \"age\" of a page when we \"unfreeze\" it. If we frequently unfreeze pages\nthat were recently frozen, we need to be less aggressive in opportunistic\nfreezing going forward. If that never happens, we can be more aggressive.\n\nThe basic idea for classifying the age of a page when unfreezing is to use\n\"insert_lsn - page_lsn\", pretty simple. We can convert that into time using\nthe averaged WAL generation rate. What's a bit harder is figuring out how to\nusefully aggregate the age across multiple \"unfreezes\".\n\nI was initially thinking of just using the mean, but Robert was rightly\nconcerned that the mean would be moved a lot when occasionally freezing\nvery old pages, potentially leading to opportunistically freezing young pages\ntoo aggressively. The median would be a better choice, but at least with the\nnaive algorithms we can't maintain that over time in the stats.\n\nOne way to deal with that would be to not track the average age in\nLSN-difference-bytes, but convert the value to some age metric at that\ntime. If we e.g. were to convert the byte-age into an approximate age in\ncheckpoints, with quadratic bucketing (e.g. 
0 -> current checkpoint, 1 -> 1\ncheckpoint ago, 2 -> 2 checkpoints ago, 3 -> 4 checkpoints ago, ...), using a mean\nof that age would probably be fine.\n\nOnce we have a metric like that, we can increase the aggressiveness of\nopportunistic freezing if recently frozen pages aren't frequently unfrozen, and\ndecrease aggressiveness when they are.\n\n\nFor 3), we could do something similar. If we made updates/deletes track how\nlong ago (again in LSN difference) the page was modified last, we should be\nable to quite effectively differentiate between a workload that only modifies\nrecent data, and one that doesn't have such locality.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 09:34:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-26 09:07:13 -0700, Peter Geoghegan wrote:\n> On Tue, Sep 26, 2023 at 8:19 AM Andres Freund <andres@anarazel.de> wrote:\n> > However, I'm not at all convinced doing this on a system wide level is a good\n> > idea. Databases do often contain multiple types of workloads at the same\n> > time. E.g., we want to freeze aggressively in a database that has the bulk of\n> > its size in archival partitions but has lots of unfrozen data in an active\n> > partition. And databases often have loads of data that's going to change\n> > frequently / isn't long lived, and we don't want to super aggressively freeze\n> > that, just because it's a large portion of the data.\n> \n> I didn't say that we should always have most of the data in the\n> database frozen, though. Just that we can reasonably be more lazy\n> about freezing the remainder of pages if we observe that most pages\n> are already frozen. How they got that way is another discussion.\n> \n> I also think that the absolute amount of debt (measured in physical\n> units such as unfrozen pages) should be kept under control. 
But that\n> isn't something that can ever be expected to work on the basis of a\n> simple threshold -- if only because autovacuum scheduling just doesn't\n> work that way, and can't really be adapted to work that way.\n\nI don't think doing this on a system wide basis with a metric like #unfrozen\npages is a good idea. It's quite common to have short lived data in some\ntables while also having long-lived data in other tables. Making opportunistic\nfreezing more aggressive in that situation will just hurt, without a benefit\n(potentially even slowing down the freezing of older data!). And even within a\nsingle table, making freezing more aggressive because there's a decent sized\npart of the table that is updated regularly and thus not frozen, doesn't make\nsense.\n\nIf we want to take global freeze debt into account, which I think is a good\nidea, we'll need a smarter way to represent the debt than just the number of\nunfrozen pages. I think we would need to track the age of unfrozen pages in\nsome way. If there are a lot of unfrozen pages with a recent xid, then it's\nfine, but if they are older and getting older, it's a problem and we need to\nbe more aggressive. The problem I see is how to track the age of unfrozen data -\nit'd be easy enough to track the mean(oldest-64bit-xid-on-page), but then we\nagain have the issue of rare outliers moving the mean too much...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:01:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 10:01 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-09-26 09:07:13 -0700, Peter Geoghegan wrote:\n> I don't think doing this on a system wide basis with a metric like #unfrozen\n> pages is a good idea. It's quite common to have short lived data in some\n> tables while also having long-lived data in other tables. 
Making opportunistic\n> freezing more aggressive in that situation will just hurt, without a benefit\n> (potentially even slowing down the freezing of older data!). And even within a\n> single table, making freezing more aggressive because there's a decent sized\n> part of the table that is updated regularly and thus not frozen, doesn't make\n> sense.\n\nI never said that #unfrozen pages should be the sole criterion, for\nanything. Just that it would influence the overall strategy, making\nthe system veer towards more aggressive freezing. It would complement\na more sophisticated algorithm that decides whether or not to freeze a\npage based on its individual characteristics.\n\nFor example, maybe the page-level algorithm would have a random\ncomponent. That could potentially be where the global (or at least\ntable level) view gets to influence things -- the random aspect is\nweighed using the global view of debt. That kind of thing seems like\nan interesting avenue of investigation.\n\n> If we want to take global freeze debt into account, which I think is a good\n> idea, we'll need a smarter way to represent the debt than just the number of\n> unfrozen pages. I think we would need to track the age of unfrozen pages in\n> some way. If there are a lot of unfrozen pages with a recent xid, then it's\n> fine, but if they are older and getting older, it's a problem and we need to\n> be more aggressive.\n\nTables like pgbench_history will have lots of unfrozen pages with a\nrecent XID that get scanned during every VACUUM. We should be freezing\nsuch pages at the earliest opportunity.\n\n> The problem I see is how to track the age of unfrozen data -\n> it'd be easy enough to track the mean(oldest-64bit-xid-on-page), but then we\n> again have the issue of rare outliers moving the mean too much...\n\nI think that XID age is mostly not very important compared to the\nabsolute amount of unfrozen pages, and the cost profile of freezing\nnow versus later. 
(XID age *is* important in emergencies, but that's\nmostly not what we're discussing right now.)\n\nTo be clear, that doesn't mean that XID age shouldn't play an\nimportant role in helping VACUUM to differentiate between pages that\nshould not be frozen and pages that should be frozen. But even there\nit should probably be treated as a cue. And, the relationship between\nthe XIDs on the page is probably more important than their absolute\nage (or relationship to XIDs that appear elsewhere more generally).\n\nFor example, if a page is filled with heap tuples whose XIDs all match\n(i.e. tuples that were all inserted by the same transaction), or XIDs\nthat are at least very close together, then that could make VACUUM\nmore enthusiastic about freezing now. OTOH if the XIDs are more\nheterogeneous (but never very old), or if some xmin fields were\nfrozen, then VACUUM should show much less enthusiasm for freezing if\nit's expensive.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:25:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 10:25:00 -0700, Peter Geoghegan wrote:\n> On Wed, Sep 27, 2023 at 10:01 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-09-26 09:07:13 -0700, Peter Geoghegan wrote:\n> > I don't think doing this on a system wide basis with a metric like #unfrozen\n> > pages is a good idea. It's quite common to have short lived data in some\n> > tables while also having long-lived data in other tables. Making opportunistic\n> > freezing more aggressive in that situation will just hurt, without a benefit\n> > (potentially even slowing down the freezing of older data!). 
And even within a\n> > single table, making freezing more aggressive because there's a decent sized\n> > part of the table that is updated regularly and thus not frozen, doesn't make\n> > sense.\n>\n> I never said that #unfrozen pages should be the sole criterion, for\n> anything. Just that it would influence the overall strategy, making\n> the system veer towards more aggressive freezing. It would complement\n> a more sophisticated algorithm that decides whether or not to freeze a\n> page based on its individual characteristics.\n>\n> For example, maybe the page-level algorithm would have a random\n> component. That could potentially be where the global (or at least\n> table level) view gets to influence things -- the random aspect is\n> weighed using the global view of debt. That kind of thing seems like\n> an interesting avenue of investigation.\n\nI don't disagree that we should do something in that direction - I just don't\nsee the raw number of unfrozen pages being useful in that regard. If you have\na database where no pages live long, we don't need to freeze\nopportunistically, yet the fraction of unfrozen pages will be huge.\n\n\n> > If we want to take global freeze debt into account, which I think is a good\n> > idea, we'll need a smarter way to represent the debt than just the number of\n> > unfrozen pages. I think we would need to track the age of unfrozen pages in\n> > some way. If there are a lot of unfrozen pages with a recent xid, then it's\n> > fine, but if they are older and getting older, it's a problem and we need to\n> > be more aggressive.\n> >\n> > Tables like pgbench_history will have lots of unfrozen pages with a\n> > recent XID that get scanned during every VACUUM. 
We should be freezing\n> such pages at the earliest opportunity.\n\nI think we ought to be able to freeze tables with as simple a workload as\npgbench_history has aggressively without taking a global freeze debt into\naccount.\n\n\n> > The problem I see is how to track the age of unfrozen data -\n> > it'd be easy enough to track the mean(oldest-64bit-xid-on-page), but then we\n> > again have the issue of rare outliers moving the mean too much...\n>\n> I think that XID age is mostly not very important compared to the\n> absolute amount of unfrozen pages, and the cost profile of freezing\n> now versus later. (XID age *is* important in emergencies, but that's\n> mostly not what we're discussing right now.)\n\nWe definitely *also* should take the number of unfrozen pages into account. I\njust don't think determining freeze debt primarily using the number of unfrozen\npages will be useful. The presence of unfrozen pages that are likely to be\nupdated again soon is not a problem and makes the simple metric pretty much\nuseless.\n\n\n> To be clear, that doesn't mean that XID age shouldn't play an\n> important role in helping VACUUM to differentiate between pages that\n> should not be frozen and pages that should be frozen.\n\nI think we need to take it into account to determine a useful freeze debt on a\ntable level (and potentially system wide too).\n\nAssuming we could compute it cheaply enough, if we had an approximate median\noldest-64bit-xid-on-page and the number of unfrozen pages, we could\ndifferentiate between tables that have lots of recent unfrozen pages (the\nmedian will be low) and pages with lots of unfrozen pages that are unlikely to\nbe updated again (the median will be high and growing). 
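(A trivial illustration, with made-up per-page oldest-xid ages, of why the median - unlike the mean - isn't dragged around by a few outlier pages:)

```python
import statistics

# Hypothetical oldest-xid ages for the unfrozen pages of two tables:
active = [3, 5, 8, 9, 12, 100_000]             # recent, plus one outlier page
archival = [80_000, 90_000, 120_000, 150_000]  # unfrozen pages just keep aging

print(statistics.mean(active))      # dragged way up by the single outlier
print(statistics.median(active))    # stays small: this table is fine
print(statistics.median(archival))  # large (and growing): be more aggressive
```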
Something like the\nmedian 64bit xid would be interesting because it'd not get \"invalidated\" if\nrelfrozenxid is increased.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 10:46:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 10:46 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't disagree that we should do something in that direction - I just don't\n> see the raw number of unfrozen pages being useful in that regard. If you have\n> a database where no pages live long, we don't need to freeze\n> opportunistically, yet the fraction of unfrozen pages will be huge.\n\nWe don't know how to reliably predict the future. We can do our best\nto ameliorate problems with such workloads, using a slew of different\nstrategies (including making the behaviors configurable, holding off\non freezing a second or a third time, etc). Such workloads are not\nvery common, and won't necessarily suffer too much. I think that\nthey're pretty much limited to queue-like tables.\n\nAny course of action will likely have some downsides.\nMelanie/you/whomever will need to make a trade-off, knowing that\nsomebody somewhere isn't going to be completely happy about it. I just\ndon't think that anybody will ever be able to come up with an\nalgorithm that can generalize well enough for things to not work out\nthat way. I really care about the problems in this area being\naddressed more comprehensively, so I'm certainly not going to be the\none that just refuses to accept a trade-off (within reason, of\ncourse).\n\n> > > If we want to take global freeze debt into account, which I think is a good\n> > > idea, we'll need a smarter way to represent the debt than just the number of\n> > > unfrozen pages. I think we would need to track the age of unfrozen pages in\n> > > some way. 
If there are a lot of unfrozen pages with a recent xid, then it's\n> > > fine, but if they are older and getting older, it's a problem and we need to\n> > > be more aggressive.\n> >\n> > Tables like pgbench_history will have lots of unfrozen pages with a\n> > recent XID that get scanned during every VACUUM. We should be freezing\n> > such pages at the earliest opportunity.\n>\n> I think we ought to be able to freeze tables with as simple a workload as\n> pgbench_history has aggressively without taking a global freeze debt into\n> account.\n\nAgreed.\n\n> We definitely *also* should take the number of unfrozen pages into account. I\n> just don't think determining freeze debt primarily using the number of unfrozen\n> pages will be useful. The presence of unfrozen pages that are likely to be\n> updated again soon is not a problem and makes the simple metric pretty much\n> useless.\n\nI never said \"primarily\".\n\n> > To be clear, that doesn't mean that XID age shouldn't play an\n> > important role in helping VACUUM to differentiate between pages that\n> > should not be frozen and pages that should be frozen.\n>\n> I think we need to take it into account to determine a useful freeze debt on a\n> table level (and potentially system wide too).\n\nIf we reach the stage where XID age starts to matter, we're no longer\ntalking about performance stability IMV. We're talking about avoiding\nwraparound, which is mostly a different problem.\n\nRecall that I started this whole line of discussion about debt in\nreaction to a point of Robert's about lazy freezing. My original point\nwas that (as a general rule) we can afford to be lazy when we're not\nrisking too much -- when the eventual cost of catching up (having\ngotten it wrong) isn't too painful (i.e. doesn't lead to a big balloon\npayment down the road). Laziness is the thing that requires\njustification. 
Putting off vital maintenance work for as long as\npossible only makes sense under fairly limited circumstances.\n\nA complicating factor here is the influence of the free space map. The\nway that that works (the total lack of hysteresis to discourage using\nspace from old pages) is probably something that makes the problems\nwith freezing harder to solve. Maybe that should be put in scope.\n\n> Assuming we could compute it cheaply enough, if we had an approximate median\n> oldest-64bit-xid-on-page and the number of unfrozen pages, we could\n> differentiate between tables that have lots of recent unfrozen pages (the\n> median will be low) and pages with lots of unfrozen pages that are unlikely to\n> be updated again (the median will be high and growing). Something like the\n> median 64bit xid would be interesting because it'd not get \"invalidated\" if\n> relfrozenxid is increased.\n\nI'm glad that you're mostly of the view that we should be freezing a\nlot more aggressively overall, but I think that you're still too\nfocussed on avoiding small problems. I understand why novel new\nproblems are generally more of a concern than established old\nproblems, but there needs to be a sense of proportion. Performance\nstability is incredibly important, and isn't zero cost.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:23:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 8, 2023 at 12:07 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-09-06 10:35:17 -0400, Robert Haas wrote:\n> > On Wed, Sep 6, 2023 at 1:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Yea, it'd be useful to have a reasonably approximate wall clock time for the\n> > > last modification of a page. We just don't have infrastructure for determining\n> > > that. 
We'd need an LSN->time mapping (xid->time wouldn't be particularly\n> > > useful, I think).\n> > >\n> > > A very rough approximate modification time can be computed by assuming an even\n> > > rate of WAL generation, and using the LSN at the time of the last vacuum and\n> > > the time of the last vacuum, to compute the approximate age.\n> > >\n> > > For a while I thought that'd not give us anything that just using LSNs gives\n> > > us, but I think it might allow coming up with a better cutoff logic: Instead\n> > > of using a cutoff like \"page LSN is older than 10% of the LSNs since the last\n> > > vacuum of the table\", it would allow us to approximate \"page has not been\n> > > modified in the last 15 seconds\" or such. I think that might help avoid\n> > > unnecessary freezing on tables with very frequent vacuuming.\n> >\n> > Yes. I'm uncomfortable with the last-vacuum-LSN approach mostly\n> > because of the impact on very frequently vacuumed tables, and\n> > secondarily because of the impact on very infrequently vacuumed\n> > tables.\n> >\n> > Downthread, I proposed using the RedoRecPtr of the latest checkpoint\n> > rather than the LSN of the previous vacuum. I still like that idea.\n>\n> Assuming that \"downthread\" references\n> https://postgr.es/m/CA%2BTgmoYb670VcDFbekjn2YQOKF9a7e-kBFoj2WJF1HtH7YPaWQ%40mail.gmail.com\n> could you sketch out the logic you're imagining a bit more?\n>\n>\n> > It's a value that we already have, with no additional bookkeeping. It\n> > varies over a much narrower range than the interval between vacuums on\n> > a table. The vacuum interval could be as short as tens of seconds or as\n> > long as years, while the checkpoint interval is almost always going to\n> > be between a few minutes at the low end and some tens of minutes at\n> > the high end, hours at the very most. 
That's quite appealing.\n>\n> The reason I was thinking of using the \"lsn at the end of the last vacuum\", is\n> that it seems to be more adaptive to the frequency of vacuuming.\n>\n> On the one hand, if a table is rarely autovacuumed because it is huge,\n> (InsertLSN-RedoRecPtr) might or might not be representative of the workload\n> over a longer time. On the other hand, if a page in a frequently vacuumed\n> table has an LSN from around the last vacuum (or even before), it should be\n> frozen, but will appear to be recent in RedoRecPtr based heuristics?\n>\n>\n> Perhaps we can mix both approaches. We can use the LSN and time of the last\n> vacuum to establish an LSN->time mapping that's reasonably accurate for a\n> relation. For infrequently vacuumed tables we can use the time between\n> checkpoints to establish a *more aggressive* cutoff for freezing than what a\n> percent-of-time-since-last-vacuum approach would provide. If e.g. a table gets\n> vacuumed every 100 hours and checkpoint timeout is 1 hour, no realistic\n> percent-of-time-since-last-vacuum setting will allow freezing, as all dirty\n> pages will be too new. To allow freezing a decent proportion of those, we\n> could allow freezing pages that lived longer than ~20%\n> time-between-recent-checkpoints.\n\nOne big sticking point for me (brought up elsewhere in this thread, but,\nAFAICT never resolved) is that it seems like the fact that we mark pages\nall-visible even when not freezing them means that no matter what\nheuristic we use, we won't have the opportunity to freeze the pages we\nmost want to freeze.\n\nGetting to the specific proposal here -- having two+ heuristics and\nusing them based on table characteristics:\n\nTake the append-only case. Let's say that we had a way to determine if\nan insert-only table should use the vacuum LSN heuristic or the older than X\ncheckpoints heuristic. 
Given the default\nautovacuum_vacuum_insert_threshold, every 1000 tuples inserted,\nautovacuum will vacuum the table. If you insert to the table often, then\nit is a frequently vacuumed table and, if I'm understanding the proposal\ncorrectly, you would want to use a vacuum LSN-based threshold to\ndetermine which pages are old enough to freeze. Using only the\npage-older-than-X%-of-LSNs-since-last-vacuum heuristic and assuming no\nconcurrent workloads, each autovacuum will leave X% of the pages\nunfrozen. However, those pages will likely be marked all visible. That\nmeans that the next non-aggressive autovacuum will not even scan those\npages (assuming that run of pages exceeds the skip pages threshold). So\nsome chunk of pages will get left behind on every autovacuum.\n\nOn the other hand, let's say that the table is not frequently inserted\nto, so we consider it an infrequently updated table and want to use the\ncheckpoint-based heuristic. If the page hasn't been modified for\nmultiple checkpoints, then the next modification will require an FPI.\nBecause there aren't dead tuples on the page, we can't expect pruning to\nemit the FPI. 
So, freezing the page will always emit an FPI in this\ncase.\n\nIt seems like the ideal freeze pattern for an insert-only table would be\nto freeze as soon as the page is full before any checkpoints which could\nforce you to emit an FPI.\n\n- Melanie\n\n\n", "msg_date": "Wed, 27 Sep 2023 14:36:23 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 11:36 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> One big sticking point for me (brought up elsewhere in this thread, but,\n> AFAICT never resolved) is that it seems like the fact that we mark pages\n> all-visible even when not freezing them means that no matter what\n> heuristic we use, we won't have the opportunity to freeze the pages we\n> most want to freeze.\n\nI think of it as being a bit like an absorbing state from a Markov\nchain. It's a perfect recipe for weird path dependent behavior, which\nseems to make your task just about impossible. It literally makes\nvacuuming more frequently lead to less useful work being performed! It\ndoes so *reliably*, in the simplest of cases.\n\nIdeally, you'd be designing an algorithm for a system where we could\nexpect to get approximately the desired outcome whenever we have\napproximately the right idea. A system where mistakes tend to \"cancel\neach other out\" over time. As long as you have \"set all-visible but\ndon't freeze\" behavior as your starting point, then ISTM that your\nalgorithm almost has to make no mistakes at all to really improve on\nwhat we have right now. As things stand, when your algorithm decides\nto not freeze (for pages that are still set all-visible in the VM),\nyou really can't afford to be wrong. 
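To put a number on the accumulation, here's a toy simulation (Python; the model and numbers are made up, and the real system is vastly more complicated, but the one-way accumulation is the point):

```python
def simulate_stuck_pages(pages_added_per_cycle, n_vacuums, freeze_fraction):
    """Toy model of an append-only table.  Each vacuum scans only pages
    that are not yet all-visible, freezes freeze_fraction of them, and
    marks the rest all-visible (but not frozen).  Once a page is
    all-visible-but-unfrozen, no later non-aggressive vacuum revisits
    it, so that count can only grow -- the "absorbing state"."""
    not_yet_all_visible = pages_added_per_cycle
    stuck = 0  # all-visible but never frozen
    for _ in range(n_vacuums):
        frozen = int(not_yet_all_visible * freeze_fraction)
        stuck += not_yet_all_visible - frozen
        # workload appends more pages before the next vacuum
        not_yet_all_visible = pages_added_per_cycle
    return stuck
```

Even freezing 90% of everything scanned (simulate_stuck_pages(100, 10, 0.9)) leaves 100 pages permanently all-visible-but-unfrozen after ten vacuums, and the count only ever grows.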
(I think that you get this\nalready, but it's a point worth emphasizing.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 11:57:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 12:34 PM Andres Freund <andres@anarazel.de> wrote:\n> What do you mean with \"always freeze aggressively\" - do you mean 'aggressive'\n> autovacuums? Or opportunistic freezing being aggressive? I don't know why the\n> former would be the case?\n\nI meant the latter.\n\n> > When it grows large enough, we suddenly stop freezing it aggressively, and\n> > now it starts experiencing vacuums that do a whole bunch of work all at\n> > once. A user who notices that is likely to be pretty confused about what\n> > happened, and maybe not too happy when they find out.\n>\n> Hm - isn't my proposal exactly the other way round? I'm proposing that a page\n> is frozen more aggressively if not already in shared buffers - which will\n> become more common once the table has grown \"large enough\"?\n\nOK, but it's counterintuitive either way, IMHO.\n\n> I think there were three main ideas that we discussed:\n>\n> 1) We don't need to be accurate in the freezing decisions for individual\n> pages, we \"just\" need to avoid the situation that over time we commonly\n> freeze pages that will be updated again \"soon\".\n> 2) It might be valuable to adjust the \"should freeze page opportunistically\"\n> based on feedback.\n> 3) We might need to classify the workload for a table and use different\n> heuristics for different workloads.\n\nI agree with all of that. Good summary.\n\n> One way to deal with that would be to not track the average age in\n> LSN-difference-bytes, but convert the value to some age metric at that\n> time. If we e.g. were to convert the byte-age into an approximate age in\n> checkpoints, with quadratic bucketing (e.g. 
0 -> current checkpoint, 1 -> 1\n> checkpoint, 2 -> 2 checkpoints ago, 3 -> 4 checkpoints ago, ...), using a mean\n> of that age would probably be fine.\n\nYes. I think it's possible that we could even get by with just two\nbuckets. Say current checkpoint and not. Or current-or-previous\ncheckpoint and not. And just look at what percentage of accesses fall\ninto this first bucket -- it should be small or we're doing it wrong.\nIt seems like the only thing we actually need to avoid is freezing the\nsame ages over and over again in a tight loop.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 15:25:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 2:36 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> It seems like the ideal freeze pattern for an insert-only table would be\n> to freeze as soon as the page is full before any checkpoints which could\n> force you to emit an FPI.\n\nYes. So imagine we have two freeze criteria:\n\n1. Do not ever opportunistically freeze pages.\n2. Opportunistically freeze pages whenever we can completely freeze the page.\n\nFor a table where the same rows are frequently updated or where the\ntable is frequently truncated, we want to apply (1). For an\ninsert-only table, we want to apply (2). Maybe I'm all wet here, but\nit's starting to seem to me that in almost all other cases, we also\nwant to apply (2). If that's true, then the only thing we need to do\nhere is identify the special cases where (1) is correct.\n\nIt'd be even better if we could adapt this behavior down to the page\nlevel - imagine a large mostly cold table with a hot tail. In a\nperfect world we apply (1) to the tail and (2) to the rest. 
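A page-level version of that decision might be sketched like so (rough Python pseudocode; every name here is invented, and spotting the hot tail via a recent-LSN cutoff is just one guess at how it could be approximated):

```python
def should_opportunistically_freeze(page_fully_freezable, page_lsn,
                                    hot_tail_lsn_cutoff):
    """Apply criterion (2) -- freeze whenever the whole page can be
    frozen -- except for pages modified very recently, which are assumed
    to be in the hot tail and get criterion (1): no opportunistic
    freezing at all."""
    if not page_fully_freezable:
        return False  # can't set the page all-frozen anyway
    if page_lsn >= hot_tail_lsn_cutoff:
        return False  # looks like part of the hot, still-churning tail
    return True
```

The hard part, of course, is choosing hot_tail_lsn_cutoff well.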
But this\nmight be too much of a stretch to actually get right.\n\n> One big sticking point for me (brought up elsewhere in this thread, but,\n> AFAICT never resolved) is that it seems like the fact that we mark pages\n> all-visible even when not freezing them means that no matter what\n> heuristic we use, we won't have the opportunity to freeze the pages we\n> most want to freeze.\n\nThe only solution to this problem that I can see is what Peter\nproposed earlier: if we're not prepared to freeze the page, then don't\nmark it all-visible either. This might be the right thing to do, and\nif it is, we could even go further and get rid of the two as separate\nconcepts completely. However, I think it would be OK to leave all of\nthat to one side for the moment, *especially* if we adopt some\nproposal that does a lot more opportunistic freezing than we do\ncurrently. Because then the problem just wouldn't come up nearly as\nmuch as it does now. One patch can't fix everything, and shouldn't\ntry.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 15:45:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 15:45:06 -0400, Robert Haas wrote:\n> > One big sticking point for me (brought up elsewhere in this thread, but,\n> > AFAICT never resolved) is that it seems like the fact that we mark pages\n> > all-visible even when not freezing them means that no matter what\n> > heuristic we use, we won't have the opportunity to freeze the pages we\n> > most want to freeze.\n> \n> The only solution to this problem that I can see is what Peter\n> proposed earlier: if we're not prepared to freeze the page, then don't\n> mark it all-visible either. 
This might be the right thing to do, and\n> if it is, we could even go further and get rid of the two as separate\n> concepts completely.\n\nI suspect that medium term the better approach would be to be much more\naggressive about setting all-visible, including as part of page-level\nvisibility checks, and to deal with the concern of vacuum not processing such\npages soon by not just looking at unmarked pages, but also a portion of the\nall-visible-but-not-frozen pages (the more all-visible-but-not-frozen pages\nthere are, the more of them each vacuum should process). I think all-visible\nis too important for index only scans, for us to be able to remove it, or\ndelay setting it until freezing makes sense.\n\nMy confidence in my gut feeling isn't all that high here, though.\n\n\n> However, I think it would be OK to leave all of\n> that to one side for the moment, *especially* if we adopt some\n> proposal that does a lot more opportunistic freezing than we do\n> currently. Because then the problem just wouldn't come up nearly as\n> much as it does now. One patch can't fix everything, and shouldn't\n> try.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 13:03:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 12:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > One big sticking point for me (brought up elsewhere in this thread, but,\n> > AFAICT never resolved) is that it seems like the fact that we mark pages\n> > all-visible even when not freezing them means that no matter what\n> > heuristic we use, we won't have the opportunity to freeze the pages we\n> > most want to freeze.\n>\n> The only solution to this problem that I can see is what Peter\n> proposed earlier: if we're not prepared to freeze the page, then don't\n> mark it all-visible either. 
This might be the right thing to do, and\n> if it is, we could even go further and get rid of the two as separate\n> concepts completely. However, I think it would be OK to leave all of\n> that to one side for the moment, *especially* if we adopt some\n> proposal that does a lot more opportunistic freezing than we do\n> currently. Because then the problem just wouldn't come up nearly as\n> much as it does now. One patch can't fix everything, and shouldn't\n> try.\n\nI guess that it might make sense to leave the issue of\nall-visible-only pages out of it for the time being. I would just\npoint out that that means you'll be limited to adding strictly\nopportunistic behaviors, where it just so happens that VACUUM stumbles\nupon pages where freezing early likely makes sense. Basically, an\nincremental improvement on the FPI thing -- not the sort of thing\nwhere I'd expect Melanie's current approach of optimizing a whole\ncross-section of representative workloads to really help.\n\nI have a separate concern about it that I'll raise shortly, in my\nresponse to Andres.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 13:04:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 1:03 PM Andres Freund <andres@anarazel.de> wrote:\n> I suspect that medium term the better approach would be to be much more\n> aggressive about setting all-visible, including as part of page-level\n> visibility checks, and to deal with the concern of vacuum not processing such\n> pages soon by not just looking at unmarked pages, but also a portion of the\n> all-visible-but-not-frozen pages (the more all-visible-but-not-frozen pages\n> there are, the more of them each vacuum should process). 
I think all-visible\n> is too important for index only scans, for us to be able to remove it, or\n> delay setting it until freezing makes sense.\n>\n> My confidence in my gut feeling isn't all that high here, though.\n\nI think that this is a bad idea. I see two main problems with\n\"splitting the difference\" like this:\n\n1. If we randomly scan some all-visible pages in non-aggressive\nVACUUMs, then we're sure to get FPIs in order to be able to freeze.\n\nAs a general rule, I think that we're better off gambling against\nfuture FPIs, and then pulling back if we go too far. The fact that we\nwent one VACUUM operation without the workload unsetting an\nall-visible page isn't that much of a signal about what might happen\nto that page.\n\n2. Large tables (i.e. the tables where it really matters) just don't\nhave that many VACUUM operations, relative to everything else. Who\nsays we'll get more than one opportunity per page with these tables,\neven with this behavior of scanning all-visible pages in\nnon-aggressive VACUUMs?\n\nBig append-only tables simply won't get the opportunity to catch up in\nthe next non-aggressive VACUUM if there simply isn't one.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 13:14:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 15:45:06 -0400, Robert Haas wrote:\n> > One big sticking point for me (brought up elsewhere in this thread, but,\n> > AFAICT never resolved) is that it seems like the fact that we mark pages\n> > all-visible even when not freezing them means that no matter what\n> > heuristic we use, we won't have the opportunity to freeze the pages we\n> > most want to freeze.\n> \n> The only solution to this problem that I can see is what Peter\n> proposed earlier: if we're not prepared to freeze the page, then don't\n> mark it all-visible either. 
I think all-visible\n> > is too important for index only scans, for us to be able to remove it, or\n> > delay setting it until freezing makes sense.\n> >\n> > My confidence in my gut feeling isn't all that high here, though.\n> \n> I think that this is a bad idea. I see two main problems with\n> \"splitting the difference\" like this:\n> \n> 1. If we randomly scan some all-visible pages in non-aggressive\n> VACUUMs, then we're sure to get FPIs in order to be able to freeze.\n> \n> As a general rule, I think that we're better off gambling against\n> future FPIs, and then pulling back if we go too far. The fact that we\n> went one VACUUM operation without the workload unsetting an\n> all-visible page isn't that much of a signal about what might happen\n> to that page.\n\nI think we can afford to be quite aggressive about opportunistically freezing\nwhen doing so wouldn't emit an FPI. I am much more concerned about cases where\nopportunistic freezing requires an FPI - it'll often *still* be the right\nchoice to freeze the page, but we need a way to prevent that from causing a\nlot of WAL in worse cases.\n\nI think the cases where no-fpi-required-freezing is a problem are small,\nfrequently updated tables, which are very frequently vacuumed.\n\n\n\n> 2. Large tables (i.e. the tables where it really matters) just don't\n> have that many VACUUM operations, relative to everything else.\n\nI think we need to make vacuums on large tables much more aggressive than they\nare now, independent of opportunistic freezing heuristics. It's idiotic that\non large tables we delay vacuuming until multi-pass vacuums are pretty much\nguaranteed. 
We don't have the stats to detect that, but if we had them, we\nshould trigger autovacuums when dead items + dead tuples get to a significant\nfraction of what can be tracked in m_w_m.\n\nThe current logic made some sense when we didn't have the VM, but now\nautovacuum scheduling is influenced by the portion of the table that that\nvacuum will never look at, which makes no sense. One can argue that that\nfrozen-portion is a proxy for the size of the indexes that would need to be\nscanned - but that also doesn't make much sense to me, because the number &\nsize of indexes depend a lot on the workload and because we skip index\nvacuums when not necessary.\n\nI guess we could just use (relation_size - skipped_pages) *\nautovacuum_vacuum_scale_factor as the threshold, but we don't have a cheap way\nto compute the number of frozen pages right now (we do have relallvisible for\nnon-aggressive vacuums).\n\n\n> Who says we'll get more than one opportunity per page with these tables,\n> even with this behavior of scanning all-visible pages in non-aggressive\n> VACUUMs? Big append-only tables simply won't get the opportunity to catch\n> up in the next non-aggressive VACUUM if there simply isn't one.\n\nI agree that we need to freeze pages in append only tables ASAP. I don't think\nthey're that hard a case to detect though. The harder case is the - IME very\ncommon - case of tables that are largely immutable but have a moving tail\nthat's hotly updated.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 13:45:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-09-27 13:14:41 -0700, Peter Geoghegan wrote:\n> > As a general rule, I think that we're better off gambling against\n> > future FPIs, and then pulling back if we go too far. 
The fact that we\n> > went one VACUUM operation without the workload unsetting an\n> > all-visible page isn't that much of a signal about what might happen\n> > to that page.\n>\n> I think we can afford to be quite aggressive about opportunistically freezing\n> when doing so wouldn't emit an FPI.\n\nI agree. This part is relatively easy -- it is more or less v2 of the\nFPI thing. The only problem that I see is that it just isn't that\ncompelling on its own -- it just doesn't seem ambitious enough.\n\n> I am much more concerned about cases where\n> opportunistic freezing requires an FPI - it'll often *still* be the right\n> choice to freeze the page, but we need a way to prevent that from causing a\n> lot of WAL in worse cases.\n\nWhat about my idea of holding back when some tuples are already frozen\nfrom before? Admittedly that's still a fairly raw idea, but something\nalong those lines seems promising.\n\nIf you limit yourself to what I've called the easy cases, then you\ncan't really expect to make a dent in the problems that we see with\nlarge tables -- where FPIs are pretty much the norm rather than the\nexception. Again, that's fine, but I'd be disappointed if you and\nMelanie can't do better than that for 17.\n\n> > 2. Large tables (i.e. the tables where it really matters) just don't\n> > have that many VACUUM operations, relative to everything else.\n>\n> I think we need to make vacuums on large tables much more aggressive than they\n> are now, independent of opportunistic freezing heuristics. It's idiotic that\n> on large tables we delay vacuuming until multi-pass vacuums are pretty much\n> guaranteed.\n\nNot having to do all of the freezing at once will often still make\nsense in cases where we \"lose\". It's hard to precisely describe how to\nassess such things (what's the break even point?), but that makes it\nno less true. 
Constantly losing by a small amount is usually better\nthan massive drops in performance.\n\n> The current logic made some sense when we didn't have the VM, but now\n> autovacuum scheduling is influenced by the portion of the table that that\n> vacuum will never look at, which makes no sense.\n\nYep:\n\nhttps://www.postgresql.org/message-id/CAH2-Wz=MGFwJEpEjVzXwEjY5yx=UuNPzA6Bt4DSMasrGLUq9YA@mail.gmail.com\n\n> > Who says we'll get more than one opportunity per page with these tables,\n> > even with this behavior of scanning all-visible pages in non-aggressive\n> > VACUUMs? Big append-only tables simply won't get the opportunity to catch\n> > up in the next non-aggressive VACUUM if there simply isn't one.\n>\n> I agree that we need to freeze pages in append only tables ASAP. I don't think\n> they're that hard a case to detect though. The harder case is the - IME very\n> common - case of tables that are largely immutable but have a moving tail\n> that's hotly updated.\n\nDoing well with such \"moving tail of updates\" tables is exactly what\ndoing well on TPC-C requires. It's easy to detect that a table is\neither one of these two things. Which of the two it is specifically\npresents greater difficulties. So I don't see strict append-only\nversus \"append and update each row once\" as all that different,\npractically speaking, from the point of view of VACUUM.\n\nOften, the individual pages from TPC-C's order lines table look very\nmuch like pgbench_history style pages would -- just because VACUUM is\neither ahead or behind the current \"hot tail\" position with almost all\npages that it scans, due to the current phase of the moon. 
This is\npartly why I place so much emphasis on teaching VACUUM to understand\nthat what it sees in each page might well have a lot to do with when\nit happened to show up, as opposed to something about the\nworkload/table itself.\n\nBTW, if you're worried about the \"hot tail\" case in particular then\nyou should definitely put the FSM stuff in scope here -- it's a huge\npart of the overall problem, since it makes the pages take so much\nlonger to \"settle\" than you might expect when just considering the\nworkload abstractly.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 14:26:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 2:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Sep 27, 2023 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think we need to make vacuums on large tables much more aggressive than they\n> > are now, independent of opportunistic freezing heuristics. It's idiotic that\n> > on large tables we delay vacuuming until multi-pass vacuums are pretty much\n> > guaranteed.\n>\n> Not having to do all of the freezing at once will often still make\n> sense in cases where we \"lose\".\n\nOne more thing on this, and the subject of large table that keep\ngetting larger (including those with a \"hot tail\" of updates):\n\nSince autovacuum runs against such tables at geometric intervals (as\ndetermined by autovacuum_vacuum_insert_scale_factor), the next VACUUM\nis always going to be longer and more expensive than this VACUUM,\nforever (ignoring the influence of aggressive mode for a second). 
This\nwould even be true if we didn't have the related problem of\nautovacuum_vacuum_insert_scale_factor not accounting for the fact that\nwhen VACUUM starts and when VACUUM ends aren't exactly the same thing\nin large tables [1] -- that aspect just makes the problem even worse.\n\nBasically, the whole \"wait and see\" approach makes zero sense here\nbecause we really do need to be aggressive about freezing just to keep\nup with the workload. The number of pages we'll scan in the next\nVACUUM will always be significantly larger, even if we're very\naggressive about freezing (theoretically it might not be, but then\nwhat VACUUM does doesn't matter that much either way). Time is very\nmuch not on our side here. So we need to anticipate what happens next\nwith the workload, and how that affects VACUUM in the future -- not\njust how VACUUM affects the workload. (VACUUM is just another part of\nthe workload, in fact.)\n\n[1] https://www.postgresql.org/message-id/CAH2-Wzn=bZ4wynYB0hBAeF4kGXGoqC=PZVKHeerBU-je9AQF=g@mail.gmail.com\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 14:44:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Sep 27, 2023 at 12:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > One way to deal with that would be to not track the average age in\n> > LSN-difference-bytes, but convert the value to some age metric at that\n> > time. If we e.g. were to convert the byte-age into an approximate age in\n> > checkpoints, with quadratic bucketing (e.g. 0 -> current checkpoint, 1 -> 1\n> > checkpoint, 2 -> 2 checkpoints ago, 3 -> 4 checkpoints ago, ...), using a mean\n> > of that age would probably be fine.\n>\n> Yes. I think it's possible that we could even get by with just two\n> buckets. Say current checkpoint and not. 
Or current-or-previous\n> checkpoint and not. And just look at what percentage of accesses fall\n> into this first bucket -- it should be small or we're doing it wrong.\n> It seems like the only thing we actually need to avoid is freezing the\n> same ages over and over again in a tight loop.\n\nAt the risk of seeming too execution-focused, I want to try and get more\nspecific. Here is a description of an example implementation to test my\nunderstanding:\n\nIn table-level stats, save two numbers: younger_than_cpt/older_than_cpt\nstoring the number of instances of unfreezing a page which is either\nyounger or older than the start of the most recent checkpoint at the\ntime of its unfreezing\n\nUpon update or delete (and insert?), if the page being modified is\nfrozen and\n if insert LSN - RedoRecPtr > insert LSN - old page LSN\n page is younger, younger_than_cpt += 1\n\n otherwise, older_than_cpt += 1\n\nThe ratio of younger/total and older/total can be used to determine how\naggressive opportunistic freezing will be.\n\nThis has the downside of counting most unfreezings directly after a\ncheckpoint in the older_than_cpt bucket. That is: older_than_cpt !=\nlonger_frozen_duration at certain times in the checkpoint cycle.\n\nNow, I'm trying to imagine how this would interact in a meaningful way\nwith opportunistic freezing behavior during vacuum.\n\nYou would likely want to combine it with one of the other heuristics we\ndiscussed.\n\nFor example:\nFor a table with only 20% younger unfreezings, when vacuuming that page,\n\n if insert LSN - RedoRecPtr < insert LSN - page LSN\n page is older than the most recent checkpoint start, so freeze it\n regardless of whether or not it would emit an FPI\n\nWhat aggressiveness levels should there be? What should change at each\nlevel? 
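(For concreteness, the bookkeeping described above might be sketched like this -- Python rather than C, all names invented; not actual Postgres code:)

```python
class FreezeStats:
    """Per-table counters for unfreezings (the first modification of a
    frozen page), split by whether the page's last freeze happened
    before or after the start of the most recent checkpoint."""
    def __init__(self):
        self.younger_than_cpt = 0
        self.older_than_cpt = 0

    def record_unfreeze(self, insert_lsn, redo_rec_ptr, old_page_lsn):
        # insert_lsn - redo_rec_ptr: WAL distance back to the current
        # checkpoint's start; insert_lsn - old_page_lsn: the page's age.
        # A smaller page age means the page was frozen after the
        # checkpoint started, i.e. it stayed frozen only briefly.
        if insert_lsn - redo_rec_ptr > insert_lsn - old_page_lsn:
            self.younger_than_cpt += 1
        else:
            self.older_than_cpt += 1

    def younger_ratio(self):
        total = self.younger_than_cpt + self.older_than_cpt
        return self.younger_than_cpt / total if total else 0.0
```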
What criteria should pages have to meet to be subject to the\naggressiveness level?\n\nI have some ideas, but I'd like to try an algorithm along these lines\nwith an updated work queue workload and the insert-only workload. And I\nwant to make sure I understand the proposal first.\n\n- Melanie\n\n\n", "msg_date": "Wed, 27 Sep 2023 19:09:41 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 4:09 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> At the risk of seeming too execution-focused, I want to try and get more\n> specific. Here is a description of an example implementation to test my\n> understanding:\n>\n> In table-level stats, save two numbers: younger_than_cpt/older_than_cpt\n> storing the number of instances of unfreezing a page which is either\n> younger or older than the start of the most recent checkpoint at the\n> time of its unfreezing\n\nCan you define \"unfreeze\"? I don't know if this newly invented term\nrefers to unsetting a page that was marked all-frozen following (say)\nan UPDATE, or if it refers to choosing to not freeze when the option\nwas available (in the sense that it was possible to do it and fully\nmark the page all-frozen in the VM). Or something else.\n\nI also find the term \"opportunistic freezing\" confusing. What's\nopportunistic about it? 
It seems as if the term has been used as a\nsynonym of \"freezing not triggered by min_freeze_age\" on this thread,\nor \"freezing that is partly conditioned on setting the page all-frozen\nin the VM\", but I'm not even sure of that.\n\nThe choice to freeze or not freeze pretty much always relies on\nguesswork about what'll happen to the page in the future, no?\nObviously we wouldn't even apply the FPI trigger criteria if we could\nsomehow easily determine that it won't work out (to some degree that's\nwhat conditioning it on being able to set the all-frozen VM bit\nactually does).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 16:38:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 7:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Sep 27, 2023 at 4:09 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > At the risk of seeming too execution-focused, I want to try and get more\n> > specific. Here is a description of an example implementation to test my\n> > understanding:\n> >\n> > In table-level stats, save two numbers: younger_than_cpt/older_than_cpt\n> > storing the number of instances of unfreezing a page which is either\n> > younger or older than the start of the most recent checkpoint at the\n> > time of its unfreezing\n>\n> Can you define \"unfreeze\"? I don't know if this newly invented term\n> refers to unsetting a page that was marked all-frozen following (say)\n> an UPDATE, or if it refers to choosing to not freeze when the option\n> was available (in the sense that it was possible to do it and fully\n> mark the page all-frozen in the VM). 
Or something else.\n\nBy \"unfreeze\", I mean unsetting a page's all-frozen bit in the\nvisibility map when modifying the page for the first time after it was\nlast frozen.\n\nI would probably call choosing not to freeze when the option is\navailable \"no freeze\". 
It seems more useful in a narrative context as a way of\ncharacterizing a situation than it does when categorizing different\nbehaviors.\n\nI want a way to express \"freeze when freeze min age doesn't require it\"\n\n- Melanie\n\n\n", "msg_date": "Wed, 27 Sep 2023 20:20:22 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 5:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Sep 27, 2023 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > I am much more concerned about cases where\n> > opportunistic freezing requires an FPI - it'll often *still* be the right\n> > choice to freeze the page, but we need a way to prevent that from causing a\n> > lot of WAL in worse cases.\n>\n> What about my idea of holding back when some tuples are already frozen\n> from before? Admittedly that's still a fairly raw idea, but something\n> along those lines seems promising.\n\nOriginally, when you proposed this, the idea was to avoid freezing a\npage that has some tuples already frozen because that would mean we\nhave frozen it and it was unfrozen after.\n\nBut, today, I was thinking about what heuristics to combine with one\nthat considers the average amount of time pages in a table spend\nfrozen [1], and I think it might be interesting to use the \"some\ntuples on the page are frozen\" test in the opposite way than it was\noriginally proposed.\n\nTake a case where you insert a row and then update it once and repeat\nforever. Let's say you freeze the page before you've filled it up,\nand, on the next update, the page is unfrozen. Most of the tuples on\nthe page will still be frozen. 
The next time the page is vacuumed, the\nfact that the page has a lot of frozen tuples actually means that it\nis probably worth freezing the remaining few tuples and marking the\npage frozen.\n\nBasically, I'm wondering if we can use the page-is-partially-frozen\ntest to catch cases where we are willing to freeze a page a couple of\ntimes to make sure it gets frozen.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_Y0nLmQ%3DYS1c2ORzLi7bu3eWjdx%2B32BuFc0Tho2o7E3rw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 27 Sep 2023 20:42:05 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 5:20 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > Can you define \"unfreeze\"? I don't know if this newly invented term\n> > refers to unsetting a page that was marked all-frozen following (say)\n> > an UPDATE, or if it refers to choosing to not freeze when the option\n> > was available (in the sense that it was possible to do it and fully\n> > mark the page all-frozen in the VM). Or something else.\n>\n> By \"unfreeze\", I mean unsetting a page all frozen in the visibility\n> map when modifying the page for the first time after it was last\n> frozen.\n\nI see. So I guess that Andres meant that you'd track that within all\nbackends, using pgstats infrastructure (when he summarized your call\nearlier today)? And that that information would be an important input\nfor VACUUM, as opposed to something that it maintained itself?\n\n> I would probably call choosing not to freeze when the option is\n> available \"no freeze\". 
I have been thinking of what to call it because\n> I want to add some developer stats for myself indicating why a page\n> that was freezable was not frozen.\n\nI think that having that sort of information available via custom\ninstrumentation (just for the performance validation side) makes a lot\nof sense.\n\nISTM that the concept of \"unfreezing\" a page is equivalent to\n\"opening\" the page that was \"closed\" at some point (by VACUUM). It's\nnot limited to freezing per se -- it's \"closed for business until\nfurther notice\", which is a slightly broader concept (and one not\nunique to Postgres). You don't just need to be concerned about updates\nand deletes -- inserts are also a concern.\n\nI would be sure to look out for new inserts that \"unfreeze\" pages, too\n-- ideally you'd have instrumentation that caught that, in order to\nget a general sense of the extent of the problem in each of your\nchosen representative workloads. This is particularly likely to be a\nconcern when there is enough space on a heap page to fit one more heap\ntuple, that's smaller than most other tuples. The FSM will \"helpfully\"\nmake sure of it. This problem isn't rare at all, unfortunately.\n\n> > The choice to freeze or not freeze pretty much always relies on\n> > guesswork about what'll happen to the page in the future, no?\n> > Obviously we wouldn't even apply the FPI trigger criteria if we could\n> > somehow easily determine that it won't work out (to some degree that's\n> > what conditioning it on being able to set the all-frozen VM bit\n> > actually does).\n>\n> I suppose you are thinking of \"opportunistic\" as freezing whenever we\n> aren't certain it is the right thing to do simply because we have the\n> opportunity to do it?\n\nI have heard the term \"opportunistic freezing\" used to refer to\nfreezing that takes place outside of VACUUM before now. You know,\nsomething perfectly analogous to pruning in VACUUM versus\nopportunistic pruning. 
(I knew that you can't have meant that -- my\npoint is that the terminology in this area has problems.)\n\n> I want a way to express \"freeze when freeze min age doesn't require it\"\n\nThat makes sense when you consider where we are right now, but it'll\nsound odd in a world where freezing via min_freeze_age is the\nexception rather than the rule. If anything, it would make more sense\nif the traditional min_freeze_age trigger criteria was the type of\nfreezing that needed its own adjective.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 17:43:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\nOn 2023-09-27 17:43:04 -0700, Peter Geoghegan wrote:\n> On Wed, Sep 27, 2023 at 5:20 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > > Can you define \"unfreeze\"? I don't know if this newly invented term\n> > > refers to unsetting a page that was marked all-frozen following (say)\n> > > an UPDATE, or if it refers to choosing to not freeze when the option\n> > > was available (in the sense that it was possible to do it and fully\n> > > mark the page all-frozen in the VM). Or something else.\n> >\n> > By \"unfreeze\", I mean unsetting a page all frozen in the visibility\n> > map when modifying the page for the first time after it was last\n> > frozen.\n>\n> I see. So I guess that Andres meant that you'd track that within all\n> backends, using pgstats infrastructure (when he summarized your call\n> earlier today)?\n\nThat call was just between Robert and me (and not dedicated just to this\ntopic, fwiw).\n\nYes, I was thinking of tracking that in pgstat. 
I can imagine occasionally\nrolling it over into pg_class, to better deal with crashes / failovers, but am\nfairly agnostic on whether that's really useful / necessary.\n\n\n> And that that information would be an important input for VACUUM, as opposed\n> to something that it maintained itself?\n\nYes. If the ratio of opportunistically frozen pages (which I'd define as pages\nthat were frozen not because they strictly needed to) vs the number of\nunfrozen pages increases, we need to make opportunistic freezing less\naggressive and vice versa.\n\n\n> ISTM that the concept of \"unfreezing\" a page is equivalent to\n> \"opening\" the page that was \"closed\" at some point (by VACUUM). It's\n> not limited to freezing per se -- it's \"closed for business until\n> further notice\", which is a slightly broader concept (and one not\n> unique to Postgres). You don't just need to be concerned about updates\n> and deletes -- inserts are also a concern.\n>\n> I would be sure to look out for new inserts that \"unfreeze\" pages, too\n> -- ideally you'd have instrumentation that caught that, in order to\n> get a general sense of the extent of the problem in each of your\n> chosen representative workloads. This is particularly likely to be a\n> concern when there is enough space on a heap page to fit one more heap\n> tuple, that's smaller than most other tuples. The FSM will \"helpfully\"\n> make sure of it. This problem isn't rare at all, unfortunately.\n\nI'm not as convinced as you are that this is a problem / that the solution\nwon't cause more problems than it solves. Users are concerned when free space\ncan't be used - you don't have to look further than the discussion in the last\nweeks about adding the ability to disable HOT to fight bloat.\n\nI do agree that the FSM code tries way too hard to fit things onto early pages\n- it e.g. 
can slow down concurrent copy workloads by 3-4x due to contention in\nthe FSM - and that it has more size classes than necessary, but I don't think\njust closing frozen pages against further insertions of small tuples will\nbe free of its own set of issues.\n\nI think at the very least there'd need to be something causing pages to reopen\nonce the aggregate unused space in the table reaches some threshold.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:07:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 6:07 PM Andres Freund <andres@anarazel.de> wrote:\n> > I would be sure to look out for new inserts that \"unfreeze\" pages, too\n> > -- ideally you'd have instrumentation that caught that, in order to\n> > get a general sense of the extent of the problem in each of your\n> > chosen representative workloads. This is particularly likely to be a\n> > concern when there is enough space on a heap page to fit one more heap\n> > tuple, that's smaller than most other tuples. The FSM will \"helpfully\"\n> > make sure of it. This problem isn't rare at all, unfortunately.\n>\n> I'm not as convinced as you are that this is a problem / that the solution\n> won't cause more problems than it solves. Users are concerned when free space\n> can't be used - you don't have to look further than the discussion in the last\n> weeks about adding the ability to disable HOT to fight bloat.\n\nAll that I said was that it would be a good idea to keep an eye out\nfor it during performance validation work.\n\n> I do agree that the FSM code tries way too hard to fit things onto early pages\n> - it e.g. 
can slow down concurrent copy workloads by 3-4x due to contention in\n> the FSM - and that it has more size classes than necessary, but I don't think\n> just closing frozen pages against further insertions of small tuples will\n> be free of its own set of issues.\n\nOf course it'll cause other issues. Of course it's very subtle.\n\n> I think at the very least there'd need to be something causing pages to reopen\n> once the aggregate unused space in the table reaches some threshold.\n\nOf course that's true. ISTM that you might well need some kind of\nhysteresis to avoid pages ping-ponging. If it isn't sticky, it might\nnever settle, or take way too long to settle.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:19:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 5:42 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Sep 27, 2023 at 5:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > What about my idea of holding back when some tuples are already frozen\n> > from before? Admittedly that's still a fairly raw idea, but something\n> > along those lines seems promising.\n>\n> Originally, when you proposed this, the idea was to avoid freezing a\n> page that has some tuples already frozen because that would mean we\n> have frozen it and it was unfrozen after.\n\nRight.\n\n> But, today, I was thinking about what heuristics to combine with one\n> that considers the average amount of time pages in a table spend\n> frozen [1], and I think it might be interesting to use the \"some\n> tuples on the page are frozen\" test in the opposite way than it was\n> originally proposed.\n>\n> Take a case where you insert a row and then update it once and repeat\n> forever. Let's say you freeze the page before you've filled it up,\n> and, on the next update, the page is unfrozen. 
Most of the tuples on\n> the page will still be frozen. The next time the page is vacuumed, the\n> fact that the page has a lot of frozen tuples actually means that it\n> is probably worth freezing the remaining few tuples and marking the\n> page frozen.\n\nAssuming that you can get away with not writing an FPI, yeah,\nprobably. Otherwise, maybe, less clear.\n\n> Basically, I'm wondering if we can use the page-is-partially-frozen\n> test to catch cases where we are willing to freeze a page a couple of\n> times to make sure it gets frozen.\n\nThat doesn't seem like the opposite of my suggestion (except maybe in\na mechanical sort of way). To me this sounds like a refinement of the\nsame idea (not surprising that it would need to be refined).\n\nYou seem to be suggesting that we would freeze if either no tuples\nwere frozen at all, or if almost all tuples were already frozen -- but\nnot if the page was \"somewhere in between\". I think I see where you're\ngoing with that.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:21:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On 2023-09-27 19:09:41 -0400, Melanie Plageman wrote:\n> On Wed, Sep 27, 2023 at 3:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Sep 27, 2023 at 12:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > One way to deal with that would be to not track the average age in\n> > > LSN-difference-bytes, but convert the value to some age metric at that\n> > > time. If we e.g. were to convert the byte-age into an approximate age in\n> > > checkpoints, with quadratic bucketing (e.g. 0 -> current checkpoint, 1 -> 1\n> > > checkpoint, 2 -> 2 checkpoints ago, 3 -> 4 checkpoints ago, ...), using a mean\n> > > of that age would probably be fine.\n> >\n> > Yes. I think it's possible that we could even get by with just two\n> > buckets. Say current checkpoint and not. 
Or current-or-previous\n> > checkpoint and not. And just look at what percentage of accesses fall\n> > into this first bucket -- it should be small or we're doing it wrong.\n> > It seems like the only thing we actually need to avoid is freezing the\n> > same ages over and over again in a tight loop.\n>\n> At the risk of seeming too execution-focused, I want to try and get more\n> specific.\n\nI think that's a good intuition :)\n\n> Here is a description of an example implementation to test my\n> understanding:\n>\n> In table-level stats, save two numbers: younger_than_cpt/older_than_cpt\n> storing the number of instances of unfreezing a page which is either\n> younger or older than the start of the most recent checkpoint at the\n> time of its unfreezing\n\n> This has the downside of counting most unfreezings directly after a\n> checkpoint in the older_than_cpt bucket. That is: older_than_cpt !=\n> longer_frozen_duration at certain times in the checkpoint cycle.\n\nYea - I don't think just using before/after checkpoint is a good measure. As\nyou say, it'd be quite jumpy around checkpoints - even though the freezing\nbehaviour hasn't materially changed. I think using the *distance* between\ncheckpoints would be a more reliable measure, i.e. if (insert_lsn - page_lsn)\n< recent_average_lsn_diff_between_checkpoints, then it's recently modified,\notherwise not.\n\nOne problem with using checkpoints \"distances\" to control things is\nforced/immediate checkpoints. 
The fact that a base backup was started (and\nthus a checkpoint completed much earlier than it would have otherwise)\nshouldn't make our system assume that the overall behaviour is quite different\ngoing forward.\n\n\n> Now, I'm trying to imagine how this would interact in a meaningful way\n> with opportunistic freezing behavior during vacuum.\n>\n> You would likely want to combine it with one of the other heuristics we\n> discussed.\n>\n> For example:\n> For a table with only 20% younger unfreezings, when vacuuming that page,\n\nFwiw, I wouldn't say that unfreezing 20% of recently frozen pages is a low\nvalue.\n\n\n> if insert LSN - RedoRecPtr < insert LSN - page LSN\n> page is older than the most recent checkpoint start, so freeze it\n> regardless of whether or not it would emit an FPI\n>\n> What aggressiveness levels should there be? What should change at each\n> level? What criteria should pages have to meet to be subject to the\n> aggressiveness level?\n\nI'm thinking something very roughly along these lines could make sense:\n\npage_lsn_age = insert_lsn - page_lsn;\n\nif (dirty && !fpi)\n{\n /*\n * If we can freeze without an FPI, be quite aggressive about\n * opportunistically freezing. We just need to prevent freezing\n * when the table is constantly being rewritten. It's ok to make mistakes\n * initially - the rate of unfreezes will quickly stop us from making\n * mistakes as often.\n */\n#define NO_FPI_FREEZE_FACTOR 10.0\n if (page_lsn_age >\n average_lsn_bytes_per_checkpoint * (1 - recent_unfreeze_ratio) * NO_FPI_FREEZE_FACTOR)\n freeze = true;\n}\nelse\n{\n /*\n * Freezing would emit an FPI and/or dirty the page, making freezing quite\n * a bit more costly. 
Be more hesitant about freezing recently modified\n * data, unless it's very rare that we unfreeze recently modified data.\n * For insert-only/mostly tables, unfreezes should be rare, so we'll still\n * freeze most of the time.\n */\n#define FPI_FREEZE_FACTOR 1\n if (page_lsn_age >\n average_lsn_bytes_per_checkpoint * (1 - recent_unfreeze_ratio) * FPI_FREEZE_FACTOR)\n freeze = true;\n}\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 27 Sep 2023 18:35:00 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > if insert LSN - RedoRecPtr < insert LSN - page LSN\n> > page is older than the most recent checkpoint start, so freeze it\n> > regardless of whether or not it would emit an FPI\n> >\n> > What aggressiveness levels should there be? What should change at each\n> > level? What criteria should pages have to meet to be subject to the\n> > aggressiveness level?\n>\n> I'm thinking something very roughly along these lines could make sense:\n>\n> page_lsn_age = insert_lsn - page_lsn;\n\nWhile there is no reason to not experiment here, I have my doubts\nabout what you've sketched out. Most importantly, it doesn't have\nanything to say about the cost of not freezing -- just the cost of\nfreezing. But isn't the main problem *not* freezing when we could and\nshould have? (Of course the cost of freezing is very relevant, but\nit's still secondary.)\n\nBut even leaving that aside, I just don't get why this will work with\nthe case that you yourself emphasized earlier on: a workload with\ninserts plus \"hot tail\" updates. If you run TPC-C according to spec,\nthere is about 12 or 14 hours between the initial inserts into the\norders and order lines table (by a new order transaction), and the\nsubsequent updates (from the delivery transaction). 
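For reference, the heuristic quoted above can be restated as a small self-contained toy function. Everything in it is a sketch: the name should_opportunistically_freeze is invented, avg_lsn_bytes_per_checkpoint and recent_unfreeze_ratio stand in for statistics that nothing in Postgres currently tracks, and the factors are copied verbatim from the pseudocode rather than tuned.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy restatement of the quoted heuristic: freeze a page once its LSN age
 * (measured in bytes of WAL) exceeds a threshold derived from the average
 * WAL distance between checkpoints, discounted by how often recently
 * frozen pages have been unfrozen.  As in the pseudocode, the factor is
 * 10.0 when the page is already dirty and freezing would emit no FPI,
 * and 1.0 otherwise.
 */
static bool
should_opportunistically_freeze(uint64_t insert_lsn, uint64_t page_lsn,
                                bool dirty, bool would_emit_fpi,
                                double avg_lsn_bytes_per_checkpoint,
                                double recent_unfreeze_ratio)
{
    double  page_lsn_age = (double) (insert_lsn - page_lsn);
    double  factor = (dirty && !would_emit_fpi) ? 10.0 : 1.0;

    return page_lsn_age >
        avg_lsn_bytes_per_checkpoint * (1.0 - recent_unfreeze_ratio) * factor;
}
```

Mechanically, a higher recent_unfreeze_ratio lowers the required LSN age (freezing mistakes have been rare, so freeze sooner), while a larger factor raises it.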
When I run the\nbenchmark, I usually don't stick with the spec (it's rather limiting\non modern hardware), so it's more like 2 - 4 hours before each new\norder is delivered (meaning updated in those two big tables). Either\nway, it's a fairly long time relative to everything else.\n\nWon't the algorithm that you've sketched always think that\n\"unfreezing\" pages doesn't affect recently frozen pages with such a\nworkload? Isn't the definition of \"recently frozen\" that emerges from\nthis algorithm not in any way related to the order delivery time, or\nanything like that? You know, rather like vacuum_freeze_min_age.\n\nSeparately, at one point you also said \"Yes. If the ratio of\nopportunistically frozen pages (which I'd define as pages that were\nfrozen not because they strictly needed to) vs the number of unfrozen\npages increases, we need to make opportunistic freezing less\naggressive and vice versa\".\n\nCan we expect a discount for freezing that happened to be very cheap\nanyway, when that doesn't work out?\n\nWhat about a page that we would have had to have frozen anyway (based\non the conventional vacuum_freeze_min_age criteria) not too long after\nit was frozen by this new mechanism, that nevertheless became unfrozen\nsome time later? That is, a page where \"the unfreezing\" cannot\nreasonably be blamed on the initial so-called opportunistic freezing,\nbecause really it was a total accident involving when VACUUM showed\nup? You know, just like we'd expect with the TPC-C tables.\n\nAside: \"unfrozen pages\" seems to refer to pages that were frozen, and\nbecame unfrozen. Not pages that are simply frozen. Lots of\nopportunities for confusion here.\n\nI'm not saying that it's wrong to freeze like this in the specific case of\nTPC-C. 
But do you really need to invent all this complicated\ninfrastructure, just to avoid freezing the same pages again in a tight\nloop?\n\nOn a positive note, I like that what you've laid out freezes eagerly\nwhen an FPI won't result -- this much we can all agree on. I guess\nthat that part is becoming uncontroversial.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 27 Sep 2023 21:03:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Sep 27, 2023 at 7:09 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> At the risk of seeming too execution-focused, I want to try and get more\n> specific. Here is a description of an example implementation to test my\n> understanding:\n>\n> In table-level stats, save two numbers: younger_than_cpt/older_than_cpt\n> storing the number of instances of unfreezing a page which is either\n> younger or older than the start of the most recent checkpoint at the\n> time of its unfreezing\n\nWhen I first read this, I thought this sounded right, except for the\nnaming which doesn't mention freezing, but on further thought, this\nfeels like it might not be the right thing. Why do I care how many\nsuper-old pages (regardless of how exactly we define super-old) I\nunfroze? I think what I care about is more like: how often do I have\nto unfreeze a recently-frozen page in order to complete a DML\noperation?\n\nFor example, consider a table with a million pages, 10 of which are\nfrozen. 1 was frozen a long time ago, 9 were frozen recently, and the\nother 999,990 are unfrozen. I update every page in the table. Well,\nnow older_than_cpt = 1 and younger_than_cpt = 9, so the ratio of the\ntwo is terrible: 90% of the frozen pages I encountered were\nrecently-frozen. But that's irrelevant. What matters is that if I had\nfrozen less aggressively, I could have saved myself 9 unfreeze\noperations across a million page modifications. 
So actually I'm doing\ngreat: only 0.0009% of my page modifications required an unfreeze that\nI maybe should have avoided and didn't.\n\nLikewise, if all of the pages had been frozen, but only 9 were frozen\nrecently, I'm doing exactly equally great. But if the number of\nrecently frozen pages were higher, then I'd be doing less great. So I\nthink comparing the number of young pages unfrozen to the number of\nold pages unfrozen is the wrong test, and comparing it instead to the\nnumber of pages modified is a better test.\n\nBut I don't think it's completely right, either. Imagine a table with\n100 recently frozen pages. I modify the table and unfreeze one of\nthem. So, 100% of the pages that I unfroze were recently-frozen. Does\nthis mean that we should back off and freeze less aggressively? Not\nreally. It seems like I actually got this case 99% right. So another\nidea could be to look at what percentage of frozen pages I end up\nun-freezing shortly thereafter. That gets this example right. But it\ngets the earlier example with a million pages wrong.\n\nMaybe there's something to combining these ideas. If the percentage of\npage modifications (of clean pages, say, so that we don't skew the\nstats as much when a page is modified many times in a row) that have\nto un-freeze a recently-frozen page is low, then that means there's no\nreal performance penalty for whatever aggressive freezing we're doing.\nEven if we un-freeze a zillion pages, if that happens over the course\nof modifying 100 zillion pages, it's not material. On the flip side,\nif the percentage of frozen pages that get unfrozen soon thereafter is\nlow, then it feels worth doing even if most page modifications end up\nun-freezing a recently-frozen page, because on net we're still ending\nup with a lot of extra stuff frozen.\n\nSaid differently, we're freezing too aggressively if un-freezing is\nboth adding a significant expense (i.e. 
the foreground work we're\ndoing often requires un-freezing pages) and ineffective (i.e. we don't\nend up with frozen pages left over). If it has one of those problems,\nit's still OK, but if it has both, we need to back off.\n\nMaybe that's still not quite right, but it's the best I've got this morning.\n\n> You would likely want to combine it with one of the other heuristics we\n> discussed.\n>\n> For example:\n> For a table with only 20% younger unfreezings, when vacuuming that page,\n>\n> if insert LSN - RedoRecPtr < insert LSN - page LSN\n> page is older than the most recent checkpoint start, so freeze it\n> regardless of whether or not it would emit an FPI\n>\n> What aggressiveness levels should there be? What should change at each\n> level? What criteria should pages have to meet to be subject to the\n> aggressiveness level?\n\nMy guess is that, when we're being aggressive, we should be aiming to\nfreeze all pages that can be completely frozen with no additional\ncriterion. Any additional criterion that we add here is likely to mess\nup the insert-only table case, and I'm not really sure what it saves\nus from. On the other hand, when we're being non-aggressive, we might\nstill sometimes want to freeze in situations where we historically\nhaven't. Consider the following three examples:\n\n1. pgbench_accounts table, standard run\n2. pgbench_accounts table, highly skewed run\n3. slowly growing table with a hot tail. records are inserted, updated\nheavily for a while, thereafter updated only very occasionally.\n\nAggressive freezing could misfire in all of these cases, because all\nof them have some portion of the table where rows are being updated\nvery frequently. But in the second and third cases, there's also a\ncold portion of the table where aggressive freezing is still OK.\nUnless we could figure out how to get stats with page-level\ngranularity, which sounds infeasible, I think cases (2) and (3) will\nbe hard to get completely right. 
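The "expensive and ineffective" back-off test described earlier in this message can be sketched as a toy predicate. All of it is illustrative: the function name, both counters, and the cutoff values are invented here, and neither statistic exists in pgstats today.

```c
#include <stdbool.h>

/*
 * Toy version of the combined test: freezing is judged too aggressive only
 * when (a) a meaningful share of page modifications had to un-freeze a
 * recently frozen page, AND (b) few of the pages we froze stayed frozen.
 * If either problem is absent, we keep freezing.  Cutoffs are invented.
 */
static bool
freezing_too_aggressive(long recent_unfreezes, long page_modifications,
                        long pages_frozen, long pages_still_frozen)
{
    const double expense_cutoff = 0.05; /* >5% of modifications paid an unfreeze */
    const double keep_cutoff = 0.50;    /* <50% of frozen pages stayed frozen */
    double  expense;
    double  kept;

    if (page_modifications == 0 || pages_frozen == 0)
        return false;

    expense = (double) recent_unfreezes / page_modifications;
    kept = (double) pages_still_frozen / pages_frozen;

    return expense > expense_cutoff && kept < keep_cutoff;
}
```

With the million-page example above (9 recent unfreezes across a million modifications, no frozen pages surviving), the expense term is far below any plausible cutoff, so no back-off is triggered; with the 100-page example (one unfreeze, 99 of 100 pages still frozen), the effectiveness term is what keeps us from backing off.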
However, if we opportunistically\nfroze any page that hadn't been modified in the last checkpoint cycle,\nor any page that hadn't been modified in the last 2 checkpoint cycles,\nor any page we had to read into shared_buffers (following an earlier\nsuggestion from Andres), or any page that hadn't been modified in the\nlast 5 minutes, it seems like we could get a bit of extra freezing\nthat we're not getting today without too many downsides. We probably\nneed whatever criterion we use here to be pretty lenient to avoid\nfreezing the hot pages, which probably means we'll sometimes fail to\nfreeze pages that are actually cold but we don't quite realize it. But\nthat could still be better than today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Sep 2023 09:47:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Sep 28, 2023 at 12:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> But isn't the main problem *not* freezing when we could and\n> should have? (Of course the cost of freezing is very relevant, but\n> it's still secondary.)\n\nPerhaps this is all in how you look at it, but I don't see it this\nway. It's easy to see how to solve the \"not freezing\" problem: just\nfreeze everything as often as possible. If that were cheap enough that\nwe could just do it, then we'd just do it and be done here. The\nproblem is that, at least in my opinion, that seems too expensive in\nsome cases. I'm starting to believe that those cases are narrower than\nI once thought, but I think they do exist. So now, I'm thinking that\nmaybe the main problem is identifying when you've got such a case, so\nthat you know when you need to be less aggressive.\n\n> Won't the algorithm that you've sketched always think that\n> \"unfreezing\" pages doesn't affect recently frozen pages with such a\n> workload? 
Isn't the definition of \"recently frozen\" that emerges from\n> this algorithm not in any way related to the order delivery time, or\n> anything like that? You know, rather like vacuum_freeze_min_age.\n\nFWIW, I agree that vacuum_freeze_min_age sucks. I have been reluctant\nto endorse changes in this area mostly because I fear replacing one\nbad idea with another, not because I think that what we have now is\nparticularly good. It's better to be wrong in the same way in every\nrelease than to have every release be equally wrong but in a different\nway.\n\nAlso, I think the question of what \"recently frozen\" means is a good\none, but I'm not convinced that it ought to relate to the order\ndelivery time. If we insert into a table and 12-14 hours go by before\nit's updated, it doesn't seem particularly bad to me if we froze that\ndata meanwhile (regardless of what metric drove that freezing). Same\nthing if it's 2-4 hours. What seems bad to me is if we're constantly\nupdating the table and vacuum comes sweeping through and freezing\neverything to no purpose over and over again and then it gets\nun-frozen a few seconds or minutes later.\n\nNow maybe that's the wrong idea. After all, as a percentage, the\noverhead is the same either way, regardless of whether we're talking\nabout WAL volume or CPU cycles. But somehow it feels worse to make the\nsame mistakes every few minutes or potentially even tens of seconds\nthan it does to make the same mistakes every few hours. The absolute\ncost is a lot higher.\n\n> On a positive note, I like that what you've laid out freezes eagerly\n> when an FPI won't result -- this much we can all agree on. I guess\n> that that part is becoming uncontroversial.\n\nI don't think that we're going to be able to get away with freezing\nrows in a small, frequently-updated table just because no FPI will\nresult. I think Melanie's results show that the cost is not\nnegligible. 
But Andres's pseudocode algorithm, although it is more\naggressive in that case, doesn't necessarily seem bad to me, because\nit still has some ability to hold off freezing in such cases if our\nstatistics show that it isn't working out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Sep 2023 10:55:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 29, 2023 at 7:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Sep 28, 2023 at 12:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > But isn't the main problem *not* freezing when we could and\n> > should have? (Of course the cost of freezing is very relevant, but\n> > it's still secondary.)\n>\n> Perhaps this is all in how you look at it, but I don't see it this\n> way. It's easy to see how to solve the \"not freezing\" problem: just\n> freeze everything as often as possible. If that were cheap enough that\n> we could just do it, then we'd just do it and be done here. The\n> problem is that, at least in my opinion, that seems too expensive in\n> some cases. I'm starting to believe that those cases are narrower than\n> I once thought, but I think they do exist. So now, I'm thinking that\n> maybe the main problem is identifying when you've got such a case, so\n> that you know when you need to be less aggressive.\n\nAssuming your concern is more or less limited to those cases where the\nsame page could be frozen an unbounded number of times (once or almost\nonce per VACUUM), then I think we fully agree. We ought to converge on\nthe right behavior over time, but it's particularly important that we\nnever converge on the wrong behavior instead.\n\n> > Won't the algorithm that you've sketched always think that\n> > \"unfreezing\" pages doesn't affect recently frozen pages with such a\n> > workload? 
Isn't the definition of \"recently frozen\" that emerges from\n> > this algorithm not in any way related to the order delivery time, or\n> > anything like that? You know, rather like vacuum_freeze_min_age.\n>\n> FWIW, I agree that vacuum_freeze_min_age sucks. I have been reluctant\n> to endorse changes in this area mostly because I fear replacing one\n> bad idea with another, not because I think that what we have now is\n> particularly good. It's better to be wrong in the same way in every\n> release than to have every release be equally wrong but in a different\n> way.\n>\n> Also, I think the question of what \"recently frozen\" means is a good\n> one, but I'm not convinced that it ought to relate to the order\n> delivery time.\n\nI don't think it should, either.\n\nThe TPC-C scenario is partly interesting because it isn't actually\nobvious what the most desirable behavior is, even assuming that you\nhad perfect information, and were not subject to practical\nconsiderations about the complexity of your algorithm. There doesn't\nseem to be perfect clarity on what the goal should actually be in such\nscenarios -- it's not like the problem is just that we can't agree on\nthe best way to accomplish those goals with this specific workload.\n\nIf performance/efficiency and performance stability are directly in\ntension (as they sometimes are), how much do you want to prioritize\none or the other? It's not an easy question to answer. It's a value\njudgement as much as anything else.\n\n> If we insert into a table and 12-14 hours go by before\n> it's updated, it doesn't seem particularly bad to me if we froze that\n> data meanwhile (regardless of what metric drove that freezing). Same\n> thing if it's 2-4 hours. 
What seems bad to me is if we're constantly\n> updating the table and vacuum comes sweeping through and freezing\n> everything to no purpose over and over again and then it gets\n> un-frozen a few seconds or minutes later.\n\nRight -- we agree here.\n\nI even think that it makes sense to freeze pages knowing for sure that\nthe pages will be unfrozen on that sort of timeline (at least with a\nlarge and ever growing table like this). It may technically be less\nefficient, but not when you consider how everything else is affected\nby the disruptive impact of freezing a great deal of stuff all at\nonce. (Of course it's also true that we don't really know what will\nhappen, which is all the more reason to freeze eagerly.)\n\n> Now maybe that's the wrong idea. After all, as a percentage, the\n> overhead is the same either way, regardless of whether we're talking\n> about WAL volume or CPU cycles. But somehow it feels worse to make the\n> same mistakes every few minutes or potentially even tens of seconds\n> than it does to make the same mistakes every few hours. The absolute\n> cost is a lot higher.\n\nI agree.\n\nAnother problem with the algorithm that Andres sketched is that it\nsupposes that vacuum_freeze_min_age means something relevant -- that's\nhow we decide whether or not freezing should count as \"opportunistic\".\nBut there really isn't that much difference between (say) an XID age\nof 25 million and 50 million. At least not with a table like the TPC-C\ntables, where VACUUMs are naturally big operations that take place\nrelatively infrequently.\n\nAssume a default vacuum_freeze_min_age of 50 million. How can it make\nsense to deem freezing a page \"opportunistic\" when its oldest XID has\nonly attained an age of 25 million, if the subsequent unfreezing\nhappens when that same XID would have attained an age of 75 million,\nhad we not frozen it? 
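To put rough numbers on that point (a toy illustration only -- the constant is just the default vacuum_freeze_min_age, and the function name is invented, not code from any patch):

```python
# Illustrative only: under a vacuum_freeze_min_age-style cutoff, whether a
# freeze counts as "opportunistic" depends entirely on when VACUUM happens
# to scan the page, not on anything about the page's actual lifecycle.
VACUUM_FREEZE_MIN_AGE = 50_000_000  # PostgreSQL's default

def classify_freeze(xid_age_at_vacuum: int) -> str:
    # Freezing a page whose oldest XID is younger than the cutoff is
    # deemed "opportunistic"; at or past the cutoff it is "required".
    if xid_age_at_vacuum < VACUUM_FREEZE_MIN_AGE:
        return "opportunistic"
    return "required"

# Same page, same workload -- only VACUUM's arrival time differs:
print(classify_freeze(25_000_000))  # VACUUM shows up early -> "opportunistic"
print(classify_freeze(75_000_000))  # VACUUM shows up late  -> "required"
```

The label flips based purely on VACUUM's timing, which is the arbitrariness being objected to here.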
And if you agree that it doesn't make sense, how\ncan we compensate for this effect, as a practical matter?\n\nEven if you're willing to assume that vacuum_freeze_min_age isn't just\nan arbitrary threshold, this still seems wrong. vacuum_freeze_min_age\nis applied by VACUUM, at the point that it scans pages. If VACUUM were\ninfinitely fast, and new VACUUMs were launched constantly, then\nvacuum_freeze_min_age (and this bucketing scheme) might make more\nsense. But, you know, they're not. So whether or not VACUUM (with\nAndres' algorithm) deems a page that it has frozen to have been\nopportunistically frozen or not is greatly influenced by factors that\ncouldn't possibly be relevant.\n\n> > On a positive note, I like that what you've laid out freezes eagerly\n> > when an FPI won't result -- this much we can all agree on. I guess\n> > that that part is becoming uncontroversial.\n>\n> I don't think that we're going to be able to get away with freezing\n> rows in a small, frequently-updated table just because no FPI will\n> result. I think Melanie's results show that the cost is not\n> negligible. But Andres's pseudocode algorithm, although it is more\n> aggressive in that case, doesn't necessarily seem bad to me, because\n> it still has some ability to hold off freezing in such cases if our\n> statistics show that it isn't working out.\n\nOkay then. I guess it's more accurate to say that we'll have a strong\nbias in the direction of freezing when an FPI won't result, though not\nan infinitely strong bias. 
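As a purely illustrative sketch of what such a strong-but-not-infinite bias could look like (the thresholds and names here are invented for illustration, not taken from Andres's actual pseudocode):

```python
# A toy sketch of a freezing decision that is strongly -- but not
# infinitely -- biased toward freezing when no full-page image (FPI)
# would result. The 0.9 / 0.1 thresholds are invented placeholders.
def should_freeze(fpi_required: bool, recent_unfreeze_rate: float) -> bool:
    """recent_unfreeze_rate: fraction of this table's recently frozen
    pages that were modified ("unfrozen") again soon afterward."""
    if not fpi_required:
        # Freeze eagerly, unless statistics show it keeps backfiring.
        return recent_unfreeze_rate < 0.9
    # An FPI makes freezing much more expensive, so demand stronger
    # evidence that the page will actually stay frozen.
    return recent_unfreeze_rate < 0.1

print(should_freeze(False, 0.5))  # cheap freeze: tolerate some waste
print(should_freeze(True, 0.5))   # expensive freeze: needs better odds
```

The point is only the shape: eagerness when freezing is cheap, plus an escape hatch driven by observed unfreezes, rather than a hard always/never rule.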
We'll at least have something that can be\nthought of as an improved version of the FPI thing for 17, I think --\nwhich is definitely significant progress.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 08:56:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 29, 2023 at 11:57 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Assuming your concern is more or less limited to those cases where the\n> same page could be frozen an unbounded number of times (once or almost\n> once per VACUUM), then I think we fully agree. We ought to converge on\n> the right behavior over time, but it's particularly important that we\n> never converge on the wrong behavior instead.\n\nI think that more or less matches my current thinking on the subject.\nA caveat might be: If it were once per two vacuums rather than once\nper vacuum, that might still be an issue. But I agree with the idea\nthat the case that matters is *repeated* wasteful freezing. I don't\nthink freezing is expensive enough that individual instances of\nmistaken freezing are worth getting too stressed about, but as you\nsay, the overall pattern does matter.\n\n> The TPC-C scenario is partly interesting because it isn't actually\n> obvious what the most desirable behavior is, even assuming that you\n> had perfect information, and were not subject to practical\n> considerations about the complexity of your algorithm. There doesn't\n> seem to be perfect clarity on what the goal should actually be in such\n> scenarios -- it's not like the problem is just that we can't agree on\n> the best way to accomplish those goals with this specific workload.\n>\n> If performance/efficiency and performance stability are directly in\n> tension (as they sometimes are), how much do you want to prioritize\n> one or the other? It's not an easy question to answer. 
It's a value\n> judgement as much as anything else.\n\nI think that's true. For me, the issue is what a user is practically\nlikely to notice and care about. I submit that on a\nnot-particularly-busy system, it would probably be fine to freeze\naggressively in almost every situation, because you're only incurring\ncosts you can afford to pay. On a busy system, it's more important to\nbe right, or at least not too badly wrong. But even on a busy system,\nI think that when the time between data being written and being frozen\nis more than a few tens of minutes, it's very doubtful that anyone is\ngoing to notice the contribution that freezing makes to the overall\nworkload. They're much more likely to notice an annoying autovacuum\nthan they are to notice a bit of excess freezing that ends up getting\nreversed. But when you start cranking the time between writing data\nand freezing it down into the single-digit numbers of minutes, and\neven more if you push down to tens of seconds or less, now I think\npeople are going to care more about useless freezing work than about\nlong-term autovacuum risks. Because now their database is really busy\nso they care a lot about performance, and seemingly most of the data\ninvolved is ephemeral anyway.\n\n> Even if you're willing to assume that vacuum_freeze_min_age isn't just\n> an arbitrary threshold, this still seems wrong. vacuum_freeze_min_age\n> is applied by VACUUM, at the point that it scans pages. If VACUUM were\n> infinitely fast, and new VACUUMs were launched constantly, then\n> vacuum_freeze_min_age (and this bucketing scheme) might make more\n> sense. But, you know, they're not. 
So whether or not VACUUM (with\n> Andres' algorithm) deems a page that it has frozen to have been\n> opportunistically frozen or not is greatly influenced by factors that\n> couldn't possibly be relevant.\n\nI'm not totally sure that I'm understanding what you're concerned\nabout here, but I *think* that the issue you're worried about here is:\nif we have various rules that can cause freezing, let's say X Y and Z,\nand we adjust the aggressiveness of rule X based on the performance of\nrule Y, that would be stupid and might suck.\n\nAssuming that the previous sentence is a correct framing, let's take X\nto be \"freezing based on the page LSN age\" and Y to be \"freezing based\non vacuum_freeze_min_age\". I think the problem scenario here would be\nif it turns out that, under some set of circumstances, Y freezes more\naggressively than X. For example, suppose the user runs VACUUM FREEZE,\neffectively setting vacuum_freeze_min_age=0 for that operation. If the\ntable is being modified at all, it's likely to suffer a bunch of\nunfreezing right afterward, which could cause us to decide to make\nfuture vacuums freeze less aggressively. That's not necessarily what\nwe want, because evidently the user, at least at that moment in time,\nthought that previous freezing hadn't been aggressive enough. They\nmight be surprised to find that flash-freezing the table inhibited\nfuture automatic freezing.\n\nOr suppose that they just have a very high XID consumption rate\ncompared to the rate of modifications to this particular table, such\nthat criteria related to vacuum_freeze_min_age tend to be satisfied a\nlot, and thus vacuums tend to freeze a lot no matter what the page LSN\nage is. This scenario actually doesn't seem like a problem, though. 
In\nthis case the freezing criterion based on page LSN age is already not\ngetting used, so it doesn't really matter whether we tune it up or\ndown or whatever.\n\nThe earlier scenario, where the user ran VACUUM FREEZE, is weirder,\nbut it doesn't sound that horrible, either. I did stop to wonder if we\nshould just remove vacuum_freeze_min_age entirely, but I don't really\nsee how to make that work. If we just always froze everything, then I\nguess we wouldn't need that value, because we would have effectively\nhard-coded it to zero. But if not, we need some kind of backstop to\nmake sure that XID age eventually triggers freezing even if nothing\nelse does, and vacuum_freeze_min_age is that thing.\n\nSo I agree there could maybe be some kind of problem in this area, but\nI'm not quite seeing it.\n\n> Okay then. I guess it's more accurate to say that we'll have a strong\n> bias in the direction of freezing when an FPI won't result, though not\n> an infinitely strong bias. We'll at least have something that can be\n> thought of as an improved version of the FPI thing for 17, I think --\n> which is definitely significant progress.\n\nI do kind of wonder whether we're going to care about the FPI thing in\nthe end. I don't mind if we do. But I wonder if it will prove\nnecessary, or even desirable. Andres's algorithm requires a greater\nLSN age to trigger freezing when an FPI is required than when one\nisn't. But Melanie's test results seem to me to show that using a\nsmall LSN distance freezes too much on pgbench_accounts-type workloads\nand using a large one freezes too little on insert-only workloads. So\nI'm currently feeling a lot of skepticism about how useful it is to\nvary the LSN-distance threshold as a way of controlling the behavior.\nMaybe that intuition is misplaced, or maybe it will turn out that we\ncan use the FPI criterion in some more satisfying way than using it to\nfrob the LSN distance. 
But if the algorithm does an overall good job\nguessing whether pages are likely to be modified again soon, then why\ncare about whether an FPI is required? And if it doesn't, is caring\nabout FPIs good enough to save us?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:27:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 29, 2023 at 11:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Even if you're willing to assume that vacuum_freeze_min_age isn't just\n> > an arbitrary threshold, this still seems wrong. vacuum_freeze_min_age\n> > is applied by VACUUM, at the point that it scans pages. If VACUUM were\n> > infinitely fast, and new VACUUMs were launched constantly, then\n> > vacuum_freeze_min_age (and this bucketing scheme) might make more\n> > sense. But, you know, they're not. So whether or not VACUUM (with\n> > Andres' algorithm) deems a page that it has frozen to have been\n> > opportunistically frozen or not is greatly influenced by factors that\n> > couldn't possibly be relevant.\n>\n> I'm not totally sure that I'm understanding what you're concerned\n> about here, but I *think* that the issue you're worried about here is:\n> if we have various rules that can cause freezing, let's say X Y and Z,\n> and we adjust the aggressiveness of rule X based on the performance of\n> rule Y, that would be stupid and might suck.\n\nYour summary is pretty close. There are a couple of specific nuances\nto it, though:\n\n1. Anything that uses XID age or even LSN age necessarily depends on\nwhen VACUUM shows up, which itself depends on many other random\nthings.\n\nWith small to medium sized tables that don't really grow, it's perhaps\nreasonable to expect this to not matter. 
But with tables like the\nTPC-C order/order lines table, or even pgbench_history, the next\nVACUUM operation will reliably be significantly longer and more\nexpensive than the last one, forever (ignoring the influence of\naggressive mode, and assuming typical autovacuum settings). So VACUUMs\nget bigger and less frequent as the table grows.\n\nAs the table continues to grow, at some point we reach a stage where\nmany XIDs encountered by VACUUM will be significantly older than\nvacuum_freeze_min_age, while others will be significantly younger. And\nso whether we apply the vacuum_freeze_min_age rule (or some other age\nbased rule) is increasingly a matter of random happenstance (i.e. is\nmore and more due to when VACUUM happens to show up), and has less and\nless to do with what the workload signals we should do. This is a\nmoving target, but (if I'm not mistaken) under the scheme described by\nAndres we're not even trying to compensate for that.\n\nSeparately, I have a practical concern:\n\n2. It'll be very hard to independently track the effectiveness of\nrules X, Y, and Z as a practical matter, because the application of\neach rule quite naturally influences the application of every other\nrule over time. They simply aren't independent things in any practical\nsense.\n\nEven if this wasn't an issue, I can't think of a reasonable cost\nmodel. Is it good or bad if \"opportunistic freezing\" results in\nunfreezing 50% of the time? AFAICT that's an *extremely* complicated\nquestion. You cannot just interpolate from the 0% case (definitely\ngood) and the 100% case (definitely bad) and expect to get a sensible\nanswer. You can't split the difference -- even if we allow ourselves\nto ignore tricky value judgement type questions.\n\n> Assuming that the previous sentence is a correct framing, let's take X\n> to be \"freezing based on the page LSN age\" and Y to be \"freezing based\n> on vacuum_freeze_min_age\". 
I think the problem scenario here would be\n> if it turns out that, under some set of circumstances, Y freezes more\n> aggressively than X. For example, suppose the user runs VACUUM FREEZE,\n> effectively setting vacuum_freeze_min_age=0 for that operation. If the\n> table is being modified at all, it's likely to suffer a bunch of\n> unfreezing right afterward, which could cause us to decide to make\n> future vacuums freeze less aggressively. That's not necessarily what\n> we want, because evidently the user, at least at that moment in time,\n> thought that previous freezing hadn't been aggressive enough. They\n> might be surprised to find that flash-freezing the table inhibited\n> future automatic freezing.\n\nI didn't think of that one myself, but it's a great example.\n\n> Or suppose that they just have a very high XID consumption rate\n> compared to the rate of modifications to this particular table, such\n> that criteria related to vacuum_freeze_min_age tend to be satisfied a\n> lot, and thus vacuums tend to freeze a lot no matter what the page LSN\n> age is. This scenario actually doesn't seem like a problem, though. In\n> this case the freezing criterion based on page LSN age is already not\n> getting used, so it doesn't really matter whether we tune it up or\n> down or whatever.\n\nIt would have to be a smaller table, which I'm relatively unconcerned about.\n\n> > Okay then. I guess it's more accurate to say that we'll have a strong\n> > bias in the direction of freezing when an FPI won't result, though not\n> > an infinitely strong bias. We'll at least have something that can be\n> > thought of as an improved version of the FPI thing for 17, I think --\n> > which is definitely significant progress.\n>\n> I do kind of wonder whether we're going to care about the FPI thing in\n> the end. I don't mind if we do. But I wonder if it will prove\n> necessary, or even desirable. 
Andres's algorithm requires a greater\n> LSN age to trigger freezing when an FPI is required than when one\n> isn't. But Melanie's test results seem to me to show that using a\n> small LSN distance freezes too much on pgbench_accounts-type workloads\n> and using a large one freezes too little on insert-only workloads. So\n> I'm currently feeling a lot of skepticism about how useful it is to\n> vary the LSN-distance threshold as a way of controlling the behavior.\n\nI'm skeptical of varying the LSN distance, but I'm not skeptical of\nthe idea of caring about FPIs in general.\n\nI wonder how much truly useful work VACUUM performed for\npgbench_accounts during Melanie's performance evaluation -- leaving\nfreezing aside. For the \"too much freezing for pgbench_accounts\" case,\nwhere master performed better than the patch, would it have been\npossible to do even better than that by simply turning off autovacuum?\nOr at least increasing the scale factor that triggers autovacuuming?\n(The answer will depend to some extent on heap fill factor.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 14:58:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 29, 2023 at 11:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that's true. For me, the issue is what a user is practically\n> likely to notice and care about. I submit that on a\n> not-particularly-busy system, it would probably be fine to freeze\n> aggressively in almost every situation, because you're only incurring\n> costs you can afford to pay. On a busy system, it's more important to\n> be right, or at least not too badly wrong. 
But even on a busy system,\n> I think that when the time between data being written and being frozen\n> is more than a few tens of minutes, it's very doubtful that anyone is\n> going to notice the contribution that freezing makes to the overall\n> workload. They're much more likely to notice an annoying autovacuum\n> than they are to notice a bit of excess freezing that ends up getting\n> reversed. But when you start cranking the time between writing data\n> and freezing it down into the single-digit numbers of minutes, and\n> even more if you push down to tens of seconds or less, now I think\n> people are going to care more about useless freezing work than about\n> long-term autovacuum risks. Because now their database is really busy\n> so they care a lot about performance, and seemingly most of the data\n> involved is ephemeral anyway.\n\nI think that that's all true.\n\nAs I believe I pointed out at least once in the past, my thinking\nabout the practical requirements from users was influenced by this\npaper/talk:\n\nhttps://www.microsoft.com/en-us/research/video/cost-performance-in-modern-data-stores-how-data-cashing-systems-succeed/\n\nFor the types of applications that Postgres is a natural fit for, most\nof the data is cold -- very cold (\"freezing\" cold, even). If you have\na 50GB pgbench_accounts table, Postgres will always perform\nsuboptimally. But so will SQL Server, Oracle, and MySQL.\n\nWhile pgbench makes a fine stress-test, for the most part its workload\nis highly unrealistic. And yet we seem to think that it's just about\nthe most important benchmark of all. If we're not willing to get over\neven small regressions in pgbench, I fear we'll never make significant\nprogress in this area. It's at least partly a cultural problem IMV.\n\nThe optimal freezing strategy for pgbench_accounts is to never freeze,\neven once. 
It doesn't matter if you vary the distributions of the\naccounts table updates -- it's still inevitable that each and every\nrow will get updated before too long. In fact, it's not just freezing\nthat should always be avoided here -- same applies with pruning by\nVACUUM.\n\nAs I suggested in my last email, the lesson offered by\nthe pgbench_accounts table seems to be \"never VACUUM at all, except\nperhaps to advance relfrozenxid\" (which shouldn't actually require any\nfreezing of even one page). If you haven't tuned heap fill factor, then\nyou might want to VACUUM a bit, at first. But, overall, vacuuming is\nbad. That is the logical though absurd conclusion. It completely flies\nin the face of practical experience.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Sep 2023 17:49:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Fri, Sep 29, 2023 at 8:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> While pgbench makes a fine stress-test, for the most part its workload\n> is highly unrealistic. And yet we seem to think that it's just about\n> the most important benchmark of all. If we're not willing to get over\n> even small regressions in pgbench, I fear we'll never make significant\n> progress in this area. It's at least partly a cultural problem IMV.\n\nI think it's true that the fact that pgbench does what pgbench does\nmakes us think more about that workload than about some other, equally\nplausible workload. It's the test we have, so we end up running it a\nlot. If we had some other test, we'd run that one.\n\nBut I don't think I buy that it's \"highly unrealistic.\" It is true\nthat pgbench with default options is limited only by the speed of the\ndatabase, while real update-heavy workloads are typically limited by\nsomething external, like the number of users hitting the web site that\nthe database is backing. 
It's also true that real workloads tend to\ninvolve some level of non-uniform access. But I'm not sure that either\nof those things really matter that much in the end.\n\nThe problem I have with the external rate-limiting argument is that it\nignores hardware selection, architectural decision-making, and\nworkload growth. Sure, people are unlikely to stand up a database that\ncan do 10,000 transactions per second and hit it with a workload that\nrequires doing 20,000 transactions per second, because they're going\nto find out in testing that it doesn't work. Then they will either buy\nmore hardware or rearchitect the system to reduce the required number\nof transactions per second or give up on using PostgreSQL. So when\nthey do put it into production, it's unlikely to be overloaded on day\none. But that's just because all of the systems that would have been\noverloaded on day one never make it to day one. They get killed off or\nchanged before they get there. So it isn't like higher maximum\nthroughput wouldn't have been beneficial. And over time, systems that\ninitially started out not running maxed out can and, in my experience,\nfairly often do end up maxed out because once you've got the thing in\nproduction, it's hard to change anything, and load often does grow\nover time.\n\nAs for non-uniform access, that is real and does matter, but there are\ncertainly installations with tables where no rows survive long enough\nto need freezing, either because the table is regularly emptied, or\njust because the update load is high enough to hit all the rows fairly\nquickly.\n\nMaybe I'm misunderstanding your point here, in which case all of the\nabove may be irrelevant. But my feeling is that we can't simply ignore\ncases where all/many rows are short-lived and say, well, those are\nunrealistic, so let's just freeze everything super-aggressively and\nthat should be fine. I don't think that's OK. 
We can (and I think we\nshould) treat that situation as a special case rather than as the\ntypical case, but I think it would be a bad idea to dismiss it as a\ncase that we don't need to worry about at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 09:25:47 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Oct 2, 2023 at 6:26 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think it's true that the fact that pgbench does what pgbench does\n> makes us think more about that workload than about some other, equally\n> plausible workload. It's the test we have, so we end up running it a\n> lot. If we had some other test, we'd run that one.\n\nI find pgbench very useful as a stress-test. It usually represents the\nmost extreme possible adversarial case, which puts certain patches in\ncontext, helping to paint a complete picture.\n\n> As for non-uniform access, that is real and does matter, but there are\n> certainly installations with tables where no rows survive long enough\n> to need freezing, either because the table is regularly emptied, or\n> just because the update load is high enough to hit all the rows fairly\n> quickly.\n\nThat's true, but I think that they're all queue-like tables. Not large\ntables where there just isn't any chance of any row/page not being\nupdating after a fairly short period of time.\n\n> Maybe I'm misunderstanding your point here, in which case all of the\n> above may be irrelevant. But my feeling is that we can't simply ignore\n> cases where all/many rows are short-lived and say, well, those are\n> unrealistic, so let's just freeze everything super-aggressively and\n> that should be fine. I don't think that's OK. 
We can (and I think we\n> should) treat that situation as a special case rather than as the\n> typical case, but I think it would be a bad idea to dismiss it as a\n> case that we don't need to worry about at all.\n\nI think that my point about pgbench being generally unrepresentative\nmight have overshadowed the more important point about the VACUUM\nrequirements of pgbench_accounts in particular. Those are particularly\nunrealistic.\n\nIf no vacuuming against pgbench_accounts is strictly better than some\nvacuuming (unless it's just to advance relfrozenxid, which can't be\navoided), then to what extent is Melanie's freezing patch obligated to\nnot make the situation worse? I'm not saying that it doesn't matter at\nall; just that there needs to be a sense of proportion. If even\npruning (by VACUUM) is kinda, sorta, useless, then it seems like we\nmight need to revisit basic assumptions. (Or maybe the problem itself\ncan just be avoided for the most part; whatever works.)\n\nI have a patch that provides a simple way of tracking whether each\nPRUNE record was generated by VACUUM or by opportunistic pruning (also\nsomething Melanie is involved with):\n\nhttps://commitfest.postgresql.org/44/4288/\n\nAlthough it's really hard (maybe impossible) to track right now, I've\nnoticed that the vast majority of all PRUNE records are from\nopportunistic pruning with just about every write-heavy benchmark,\nincluding pgbench and TPC-C. And that's just going on the raw count of\neach type of record, not the utility provided by each pruning\noperation. Admittedly \"utility\" is a very hard thing to measure here,\nbut it might still matter. 
VACUUM might be doing some pruning, but\nonly pruning targeting pages that never particularly \"need to be\npruned\" by VACUUM itself (since pruning could happen whenever via\nopportunistic pruning anyway).\n\nWhatever your priorities might be, workload-wise, it could still be\nuseful to recognize useless freezing as part of a broader problem of\nuseless (or at least dubious) work performed by VACUUM -- especially\nwith pgbench-like workloads. The utility of freezing is a lot easier\nto measure than the utility of pruning, but why should we assume that\npruning isn't already just as much of a problem? (Maybe that's not a\nproblem that particularly interests you right now; I'm bringing it up\nbecause it seems possible that putting it in scope could somehow\nclarify what to do about freezing.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 2 Oct 2023 08:36:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Oct 2, 2023 at 11:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> If no vacuuming against pgbench_accounts is strictly better than some\n> vacuuming (unless it's just to advance relfrozenxid, which can't be\n> avoided), then to what extent is Melanie's freezing patch obligated to\n> not make the situation worse? I'm not saying that it doesn't matter at\n> all; just that there needs to be a sense of proportion.\n\nI agree. I think Melanie's previous test results showed no measurable\nperformance regression but a significant regression in WAL volume. I\ndon't remember how much of a regression it was, but it was enough to\nmake me think that the extra WAL volume could probably be turned into\na performance loss by adjusting the test scenario (a bit less\nhardware, or a bandwidth-constrained standby connection, etc.). I\nthink I'd be OK to accept a small amount of additional WAL, though. 
Do\nyou see it differently?\n\n> Whatever your priorities might be, workload-wise, it could still be\n> useful to recognize useless freezing as part of a broader problem of\n> useless (or at least dubious) work performed by VACUUM -- especially\n> with pgbench-like workloads. The utility of freezing is a lot easier\n> to measure than the utility of pruning, but why should we assume that\n> pruning isn't already just as much of a problem? (Maybe that's not a\n> problem that particularly interests you right now; I'm bringing it up\n> because it seems possible that putting it in scope could somehow\n> clarify what to do about freezing.)\n\nI wonder how much useless work we're doing in this area today. I mean,\nmost pruning in that workload gets done by HOT rather than by vacuum,\nbecause vacuum simply can't keep up, but I don't think it's worthless\nwork: if it wasn't done in the background by VACUUM, it would happen\nin the foreground anyway very soon after. I don't have a good feeling\nfor how much index cleanup we end up doing in a pgbench workload, but\nit seems to me that if we don't do index cleanup at least now and\nthen, we'll lose the ability to recycle line pointers in the wake of\nnon-HOT updates, and that seems bad. 
Maybe that's rare in pgbench\nspecifically, or maybe it isn't, but it seems like you'd only have to\nchange the workload a tiny bit for that to become a real problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 16:25:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Mon, Oct 2, 2023 at 4:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Oct 2, 2023 at 11:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If no vacuuming against pgbench_accounts is strictly better than some\n> > vacuuming (unless it's just to advance relfrozenxid, which can't be\n> > avoided), then to what extent is Melanie's freezing patch obligated to\n> > not make the situation worse? I'm not saying that it doesn't matter at\n> > all; just that there needs to be a sense of proportion.\n>\n> I agree. I think Melanie's previous test results showed no measurable\n> performance regression but a significant regression in WAL volume. I\n> don't remember how much of a regression it was, but it was enough to\n> make me think that the extra WAL volume could probably be turned into\n> a performance loss by adjusting the test scenario (a bit less\n> hardware, or a bandwidth-constrained standby connection, etc.). I\n> think I'd be OK to accept a small amount of additional WAL, though. Do\n> you see it differently?\n\nThat's pretty much how I see it too.\n\n> I wonder how much useless work we're doing in this area today. 
I mean,\n> most pruning in that workload gets done by HOT rather than by vacuum,\n> because vacuum simply can't keep up, but I don't think it's worthless\n> work: if it wasn't done in the background by VACUUM, it would happen\n> in the foreground anyway very soon after.\n\nYeah, but it probably wouldn't be as efficient, since only VACUUM uses\na somewhat older OldestXmin/prune cutoff.\n\nAs much as anything else, I'm arguing that we should treat the general\nproblem of useless work as relevant in the context of the performance\nvalidation work/benchmarks.\n\n> I don't have a good feeling\n> for how much index cleanup we end up doing in a pgbench workload, but\n> it seems to me that if we don't do index cleanup at least now and\n> then, we'll lose the ability to recycle line pointers in the wake of\n> non-HOT updates, and that seems bad. Maybe that's rare in pgbench\n> specifically, or maybe it isn't, but it seems like you'd only have to\n> change the workload a tiny bit for that to become a real problem.\n\nThere's no question that pruning that's required to go ahead in order\nfor index cleanup to take place isn't really ever something that we'd\nwant to skip.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 4 Oct 2023 08:16:33 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Hi,\n\n\nRobert, Melanie and I spent an evening discussing this topic around\npgconf.nyc. 
Here are, mildly revised, notes from that:\n\nFirst a few random points that didn't fit with the sketch of an approach\nbelow:\n\n- Are unlogged tables a problem for using LSN based heuristics for freezing?\n\n We concluded, no, not a problem, because aggressively freezing does not\n increase overhead meaningfully, as we would already dirty both the heap and VM\n page to set the all-visible flag.\n\n- \"Unfreezing\" pages that were frozen hours / days ago isn't too bad and can\n be desirable.\n\n The main thing we are worried about is repeated freezing / unfreezing of\n pages within a relatively short time period.\n\n- Computing an average \"modification distance\" as I (Andres) proposed for\n each page is complicated / \"fuzzy\"\n\n The main problem is that it's not clear how to come up with a good number\n for workloads that have many more inserts into new pages than modifications\n of existing pages.\n\n It's also hard to use an average for this kind of thing, e.g. in cases where\n new pages are frequently updated but some old data is updated too, it's\n easy for the updates to the old data to completely skew the average, even\n though that shouldn't prevent us from freezing.\n\n- We also discussed an idea by Robert to track the number of times we need to\n dirty a page when unfreezing and to compare that to the number of pages\n dirtied overall (IIRC), but I don't think we really came to a conclusion\n around that - and I didn't write down anything so this is purely from\n memory.\n\n\nA rough sketch of a freezing heuristic:\n\n- We concluded that to intelligently control opportunistic freezing we need\n statistics about the number of freezes and unfreezes\n\n - We should track page freezes / unfreezes in shared memory stats on a\n per-relation basis\n\n - To use such statistics to control heuristics, we need to turn them into\n rates. 
For that we need to keep snapshots of absolute values at certain\n times (when vacuuming), allowing us to compute a rate.\n\n - If we snapshot some stats, we need to limit the amount of data this occupies:\n\n - evict based on wall clock time (we don't care about unfreezing pages\n frozen a month ago)\n\n - \"thin out\" data when exceeding a limited amount of stats per relation,\n using random sampling or such\n\n - need a smarter approach than just keeping the N last vacuums, as there are\n situations where a table is (auto-) vacuumed at a high frequency\n\n\n - only looking at recent-ish table stats is fine, because we\n - a) don't want to look at too old data, as we need to deal with changing\n workloads\n\n - b) if there aren't recent vacuums, falsely freezing is of bounded cost\n\n - shared memory stats being lost on crash-restart/failover might be a problem\n\n - we certainly don't want to immediately store these stats in a table, due\n to the xid consumption that'd imply\n\n\n- Attributing \"unfreezes\" to specific vacuums would be powerful:\n\n - \"Number of pages frozen during vacuum\" and \"Number of pages unfrozen that\n were frozen during the same vacuum\" provide the numerator / denominator for\n an \"error rate\"\n\n - We can perform this attribution by comparing the page LSN with recorded\n start/end LSNs of recent vacuums\n\n - If the freezing error rate of recent vacuums is low, freeze more\n aggressively. This is important to deal with insert-mostly workloads.\n\n - If old data is \"unfrozen\", that's fine, we can ignore such unfreezes when\n controlling \"freezing aggressiveness\"\n\n - Ignoring unfreezing of old pages is important to e.g. deal with\n workloads that delete old data\n\n - This approach could provide \"goals\" for opportunistic freezing in a\n somewhat understandable way. E.g. 
Possibly the\nattendees of this mini summit also ran out of steam (and tea).\n\n\nWe had a few \"disagreements\" or \"unresolved issues\":\n\n- How aggressive should we be when we have no stats?\n\n- Should the freezing heuristic take into account whether freezing would\n require an FPI? Or whether page was not in s_b, or ...\n\n\nI likely mangled this substantially, both when taking notes during the lively\ndiscussion, and when revising them to make them a bit more readable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Oct 2023 17:43:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Thanks for these notes.\n\nOn Wed, Oct 11, 2023 at 8:43 PM Andres Freund <andres@anarazel.de> wrote:\n> - We also discussed an idea by Robert to track the number of times we need to\n> dirty a page when unfreezing and to compare that to the number of pages\n> dirtied overall (IIRC), but I don't think we really came to a conclusion\n> around that - and I didn't write down anything so this is purely from\n> memory.\n\nSee http://postgr.es/m/CA+TgmoYcWisxLBL-pXu13OevThLOXm20oJqjNRZtKkhXsY92XA@mail.gmail.com\nfor my on-list discussion of this.\n\n> - Attributing \"unfreezes\" to specific vacuums would be powerful:\n>\n> - \"Number of pages frozen during vacuum\" and \"Number of pages unfrozen that\n> were frozen during the same vacuum\" provides numerator / denominator for\n> an \"error rate\"\n\nI want to highlight the denominator issue here. I think we all have\nthe intuition that if we count the number of times that a\nrecently-frozen page gets unfrozen, and that's a big number, that's\nbad, and a sign that we need to freeze less aggressively. But a lot of\nthe struggle has been around answering the question \"big compared to\nwhat\". A lot of the obvious candidates fail to behave nicely in corner\ncases, as discussed in the above email. 
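To make that numerator / denominator concrete, here is a rough sketch in Python of the attribution step (all names and structures are hypothetical, not from any posted patch): an unfreeze is charged to the vacuum whose recorded LSN span contains the page's LSN, which yields a per-vacuum error rate.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class VacuumFreezeStats:
    start_lsn: int           # LSN at which this vacuum started
    end_lsn: int             # LSN at which this vacuum ended
    pages_frozen: int = 0    # "number of pages frozen during vacuum"
    pages_unfrozen: int = 0  # of those, pages later modified again

    @property
    def error_rate(self) -> float:
        # unfreezes of this vacuum's pages / pages this vacuum froze
        return self.pages_unfrozen / self.pages_frozen if self.pages_frozen else 0.0

def attribute_unfreeze(vacuums: list, page_lsn: int) -> None:
    """Charge an unfreeze to the vacuum whose [start_lsn, end_lsn) range
    contains the page's LSN; pages frozen by vacuums we no longer track
    (i.e. old data) are deliberately not counted against anyone."""
    starts = [v.start_lsn for v in vacuums]  # vacuums kept sorted by start_lsn
    i = bisect_right(starts, page_lsn) - 1
    if i >= 0 and page_lsn < vacuums[i].end_lsn:
        vacuums[i].pages_unfrozen += 1
```

A page whose LSN falls between two tracked spans is simply not charged to any vacuum in this sketch.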
I think this is one of the\nbetter candidates so far proposed, possibly the best.\n\n> - This approach could provide \"goals\" for opportunistic freezing in a\n> somewhat understandable way. E.g. aiming to rarely unfreeze data that has\n> been frozen within 1h/1d/...\n\nThis strikes me as another important point. Making the behavior\nunderstandable to users is going to be important, because sometimes\nwhatever system we might craft will misbehave, and then people are\ngoing to need to be able to understand why it's misbehaving and how to\ntune/fix it so it works.\n\n> Around this point my laptop unfortunately ran out of battery. Possibly the\n> attendees of this mini summit also ran out of steam (and tea).\n\nWhen the tea is gone, there's little point in continuing.\n\n> I likely mangled this substantially, both when taking notes during the lively\n> discussion, and when revising them to make them a bit more readable.\n\nI think it's quite a good summary, actually. Thanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:26:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Oct 11, 2023 at 8:43 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Robert, Melanie and I spent an evening discussing this topic around\n> pgconf.nyc. Here are, mildly revised, notes from that:\n\nThanks for taking notes!\n\n> The main thing we are worried about is repeated freezing / unfreezing of\n> pages within a relatively short time period.\n>\n> - Computing an average \"modification distance\" as I (Andres) proposed efor\n> each page is complicated / \"fuzzy\"\n>\n> The main problem is that it's not clear how to come up with a good number\n> for workloads that have many more inserts into new pages than modifications\n> of existing pages.\n>\n> It's also hard to use average for this kind of thing, e.g. 
in cases where\n> new pages are frequently updated, but also some old data is updated, it's\n> easy for the updates to the old data to completely skew the average, even\n> though that shouldn't prevent us from freezing.\n>\n> - We also discussed an idea by Robert to track the number of times we need to\n> dirty a page when unfreezing and to compare that to the number of pages\n> dirtied overall (IIRC), but I don't think we really came to a conclusion\n> around that - and I didn't write down anything so this is purely from\n> memory.\n\nI was under the impression that we decided we still had to consider\nthe number of clean pages dirtied as well as the number of pages\nunfrozen. The number of pages frozen and unfrozen over a time period\ngives us some idea of if we are freezing the wrong pages -- but it\ndoesn't tell us if we are freezing the right pages. A riff on an\nearlier example by Robert:\n\nWhile vacuuming a relation, we freeze 100 pages. During the same time\nperiod, we modify 1,000,000 previously clean pages. Of these 1,000,000\npages modified, 90 were frozen. So we unfroze 90% of the pages frozen\nduring this time. Does this mean we should back off of trying to\nfreeze any pages in the relation?\n\n> A rough sketch of a freezing heuristic:\n...\n> - Attributing \"unfreezes\" to specific vacuums would be powerful:\n>\n> - \"Number of pages frozen during vacuum\" and \"Number of pages unfrozen that\n> were frozen during the same vacuum\" provides numerator / denominator for\n> an \"error rate\"\n>\n> - We can perform this attribution by comparing the page LSN with recorded\n> start/end LSNs of recent vacuums\n\nWhile implementing a rough sketch of this, I realized I had a question\nabout this.\n\nvacuum 1 starts at lsn 10 and ends at lsn 200. It froze 100 pages.\nvacuum 2 then starts at lsn 600.\n5 frozen pages with page lsn > 10 and < 200 were updated. We count\nthose in vacuum 1's stats. 3 frozen pages with page lsn > 200 and <\n600 were updated. 
Do we count those somewhere?\n\n> - This approach could provide \"goals\" for opportunistic freezing in a\n> somewhat understandable way. E.g. aiming to rarely unfreeze data that has\n> been frozen within 1h/1d/...\n\nSimilar to the above question, if we are tracking pages frozen and\nunfrozen during a time period, if there are many vacuums in quick\nsuccession, we might care if a page was frozen by one vacuum and then\nunfrozen during a subsequent vacuum if not too much time has passed.\n\n- Melanie\n\n\n", "msg_date": "Thu, 12 Oct 2023 11:50:19 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Oct 12, 2023 at 11:50 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I was under the impression that we decided we still had to consider\n> the number of clean pages dirtied as well as the number of pages\n> unfrozen. The number of pages frozen and unfrozen over a time period\n> gives us some idea of if we are freezing the wrong pages -- but it\n> doesn't tell us if we are freezing the right pages. A riff on an\n> earlier example by Robert:\n>\n> While vacuuming a relation, we freeze 100 pages. During the same time\n> period, we modify 1,000,000 previously clean pages. Of these 1,000,000\n> pages modified, 90 were frozen. So we unfroze 90% of the pages frozen\n> during this time. Does this mean we should back off of trying to\n> freeze any pages in the relation?\n\nI didn't think we decided the thing your first sentence says you\nthought we decided ... but I also didn't think of this example. That\nsaid, it might be fine to back off freezing in this case because we\nweren't doing enough of it to make any real difference in the first\nplace. 
Maybe there's a more moderate example where it feels like a\nbigger problem?\n\n> > - \"Number of pages frozen during vacuum\" and \"Number of pages unfrozen that\n> > were frozen during the same vacuum\" provides numerator / denominator for\n> > an \"error rate\"\n> >\n> > - We can perform this attribution by comparing the page LSN with recorded\n> > start/end LSNs of recent vacuums\n>\n> While implementing a rough sketch of this, I realized I had a question\n> about this.\n>\n> vacuum 1 starts at lsn 10 and ends at lsn 200. It froze 100 pages.\n> vacuum 2 then starts at lsn 600.\n> 5 frozen pages with page lsn > 10 and < 200 were updated. We count\n> those in vacuum 1's stats. 3 frozen pages with page lsn > 200 and <\n> 600 were updated. Do we count those somewhere?\n\nHow did those pages get frozen when no VACUUM was running?\n\nThe LSN of the frozen page just prior to unfreezing it should be from\nthe operation that froze it, which should be some VACUUM. I think the\ncase you're talking about could happen if we did on-access freezing,\nbut today I believe we don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 13:28:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Oct 11, 2023 at 8:43 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> A rough sketch of a freezing heuristic:\n>\n> - We concluded that to intelligently control opportunistic freezing we need\n> statistics about the number of freezes and unfreezes\n>\n> - We should track page freezes / unfreezes in shared memory stats on a\n> per-relation basis\n>\n> - To use such statistics to control heuristics, we need to turn them into\n> rates. 
For that we need to keep snapshots of absolute values at certain\n> times (when vacuuming), allowing us to compute a rate.\n>\n> - If we snapshot some stats, we need to limit the amount of data that occupies\n>\n> - evict based on wall clock time (we don't care about unfreezing pages\n> frozen a month ago)\n>\n> - \"thin out\" data when exceeding limited amount of stats per relation\n> using random sampling or such\n>\n> - need a smarter approach than just keeping N last vacuums, as there are\n> situations where a table is (auto-) vacuumed at a high frequency\n>\n>\n> - only looking at recent-ish table stats is fine, because we\n> - a) don't want to look at too old data, as we need to deal with changing\n> workloads\n>\n> - b) if there aren't recent vacuums, falsely freezing is of bounded cost\n>\n> - shared memory stats being lost on crash-restart/failover might be a problem\n>\n> - we certainly don't want to immediate store these stats in a table, due\n> to the xid consumption that'd imply\n>\n>\n> - Attributing \"unfreezes\" to specific vacuums would be powerful:\n>\n> - \"Number of pages frozen during vacuum\" and \"Number of pages unfrozen that\n> were frozen during the same vacuum\" provides numerator / denominator for\n> an \"error rate\"\n>\n> - We can perform this attribution by comparing the page LSN with recorded\n> start/end LSNs of recent vacuums\n>\n> - If the freezing error rate of recent vacuums is low, freeze more\n> aggressively. This is important to deal with insert mostly workloads.\n>\n> - If old data is \"unfrozen\", that's fine, we can ignore such unfreezes when\n> controlling \"freezing aggressiveness\"\n>\n> - Ignoring unfreezing of old pages is important to e.g. deal with\n> workloads that delete old data\n>\n> - This approach could provide \"goals\" for opportunistic freezing in a\n> somewhat understandable way. E.g. 
aiming to rarely unfreeze data that has\n> been frozen within 1h/1d/...\n\nI have taken a stab at implementing the freeze stats tracking\ninfrastructure we would need to drive a heuristic based on error rate.\n\nAttached is a series of patches which adds a ring buffer of vacuum\nfreeze statistics to each table's stats (PgStat_StatTabEntry).\n\nIt also introduces a guc: target_page_freeze_duration. This is the time\nin seconds which we would like pages to stay frozen for. The aim is to\nadjust opportunistic freeze behavior such that pages stay frozen for at\nleast target_page_freeze_duration seconds.\n\nWhen a table is vacuumed the next entry is initialized in the ring\nbuffer (PgStat_StatTabEntry->frz_buckets[frz_current]). If all of the\nbuckets are in use, the two oldest buckets both ending either before or\nafter now - target_page_freeze_duration are combined.\n\nWhen vacuum freezes a page, we increment the freeze counter in the\ncurrent PgStat_Frz entry in the ring buffer and update the counters used\nto calculate the max, min, and average page age.\n\nWhen a frozen page is modified, we increment \"unfreezes\" in the bucket\nspanning the page freeze LSN. We also update the counters used to\ncalculate the min, max, and average frozen duration. Because we have\nboth the LSNs and time elapsed during the vacuum which froze the page,\nwe can derive the approximate time at which the page was frozen (using\nlinear interpolation) and use that to speculate how long it remained\nfrozen. If we are unfreezing it sooner than target_page_freeze_duration,\nthis is counted as an \"early unfreeze\". Using early unfreezes and page\nfreezes, we can calculate an error rate.\n\nThe guc, opp_freeze_algo, remains to allow us to test different freeze\nheuristics during development. I included a dummy algorithm (algo 2) to\ndemonstrate what we could do with the data. If the error rate is above a\nhard-coded threshold and the page is older than the average page age, it\nis frozen. 
This is missing an \"aggressiveness\" knob.\n\nThere are two workloads I think we can focus on. The first is a variant\nof our former workload I. The second workload, J, is the pgbench\nbuilt-in simple-update workload.\n\nI2. Work queue\n WL 1: 32 clients inserting a single row, updating an indexed\n column in another row twice, then deleting the last updated row\n pgbench scale 100\n WL 2: 2 clients, running the TPC-B like built-in workload\n rate-limited to 10,000 TPS\n\nJ. simple-update\n pgbench scale 450\n WL 1: 16 clients running built-in simple-update workload\n\nThe simple-update workload works well because it only updates\npgbench_accounts and inserts into pgbench_history. This gives us an\ninsert-only table and a table with uniform data modification. The former\nshould be frozen aggressively, the latter should be frozen as little as\npossible.\n\nThe convenience function pg_stat_get_table_vacuums(tablename) returns\nall of the collected vacuum freeze statistics for all of the vacuums of\nthe specified table (the contents of the ring buffer in\nPgStat_StatTabEntry).\n\nTo start with, I ran workloads I2 and J with the criteria from master\nand with the criteria that we freeze any page that is all frozen and all\nvisible. 
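As a sketch of the early-unfreeze classification described above (hypothetical names; the real counters live in PgStat_StatTabEntry), the freeze time is recovered by linear interpolation over the freezing vacuum's LSN and wall-clock span, assuming WAL was generated at a roughly uniform rate during that vacuum:

```python
def estimate_freeze_time(vac_start_lsn, vac_end_lsn,
                         vac_start_time, vac_end_time, page_lsn):
    """Approximate when a page was frozen by linearly interpolating its
    LSN across the (LSN, wall clock) span of the vacuum that froze it."""
    if vac_end_lsn == vac_start_lsn:
        return vac_start_time
    frac = (page_lsn - vac_start_lsn) / (vac_end_lsn - vac_start_lsn)
    return vac_start_time + frac * (vac_end_time - vac_start_time)

def is_early_unfreeze(freeze_time, unfreeze_time, target_page_freeze_duration):
    # The unfreeze only counts as an error if the page stayed frozen for
    # less than the target duration.
    return (unfreeze_time - freeze_time) < target_page_freeze_duration
```

The interpolation is only as accurate as the uniform-WAL-rate assumption, but it is cheap and needs no per-page timestamps.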
Having run the benchmarks for 40 minutes with the\ntarget_page_freeze_duration set to 300 seconds, the following were my\nerror and efficacy results (I am calling efficacy the % of pages frozen at the\nend of the benchmark run).\n\nI2 (work queue)\n+-------+---------+---------+-----------+--------+----------+----------+\n| table | algo | freezes |early unfrz| error | #frzn end| %frzn end|\n+-------+---------+---------+-----------+--------+----------+----------+\n| queue | av + af | 274,459 | 272,794 | 99% | 906 | 75% |\n| queue | master | 274,666 | 272,798 | 99% | 908 | 56% |\n+-------+---------+---------+-----------+--------+----------+----------+\n\nJ (simple-update)\n\n+-----------+-------+---------+-----------+-------+----------+---------+\n| pgbench | algo | freezes |early unfrz| error |#frzn end |%frzn end|\n| tab | | | | | | |\n+-----------+-------+---------+-----------+-------+----------+---------+\n| accounts |av + af| 258,482 | 258,482 | 100% | 0 | 0% |\n| history |av + af| 287,362 | 357 | 0% | 287,005 | 86% |\n| accounts | master| 0 | 0 | | 0 | 0% |\n| history | master| 0 | 0 | | 0 | 0% |\n+-----------+-------+---------+-----------+-------+----------+---------+\n\nThe next step is to devise different heuristics and measure their\nefficacy. IMO, the goal of the algorithm is to freeze pages in a\nrelation such that we drive early unfreezes/freezes -> 0 and pages\nfrozen/number of pages of a certain age -> 1.\n\nWe have already agreed to consider early unfreezes/freezes to be the\n\"error\". I have been thinking of pages frozen/number of pages of a\ncertain age as efficacy.\n\nAn alternative measure of efficacy is the average page freeze duration.\nI keep track of this as well as the min and max page freeze durations.\n\nI think we will also want some measure of magnitude in order to\ndetermine how much more or less aggressively we should freeze. 
To this\nend, I track the total number of pages scanned by a vacuum, the total\nnumber of pages in the relation at the end of the vacuum, and the total\nnumber of frozen pages at the beginning and end of the vacuum.\n\nBesides error, efficacy, and magnitude, we need a way to compare a\nfreeze candidate page to the relation-level metrics. We had settled on\nusing the page's age -- how long it has been since the page was last\nmodified. I track the average, min, and max page ages at the time of\nfreezing. The difference between the min and max gives us some idea how\nreliable the average page age may be.\n\nFor example, if we have high error, low variation (max - min page age),\nand high magnitude (high # pages frozen), freezing was uniformly\nineffective.\n\nRight now, I don't have an \"aggressiveness\" knob. Vacuum has the average\nerror rate over the past vacuums, the average page age calculated over\nthe past vacuums, and the target freeze duration (from the user set\nguc). We need some kind of aggressiveness knob, but I'm not sure if it\nis a page age threshold or something else.\n\n- Melanie", "msg_date": "Wed, 8 Nov 2023 21:23:01 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Nov 8, 2023 at 9:23 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> The next step is to devise different heuristics and measure their\n> efficacy. IMO, the goal of the algorithm it is to freeze pages in a\n> relation such that we drive early unfreezes/freezes -> 0 and pages\n> frozen/number of pages of a certain age -> 1.\n\nAttached is a patchset with an adaptive freezing algorithm that works\nwell. 
It keeps track of the pages' unmodified duration after having been\nset all-visible and uses that to predict how likely a page is to be\nmodified given its age.\n\nEach time an all-visible page is modified by an update, delete, or tuple\nlock, if the page was all-visible for less than target_freeze_duration\n(a new guc that specifies the minimum amount of time you would like data\nto stay frozen), it is counted as an \"early unset\" and the duration that\nit was unmodified is added into an accumulator. Before each relation is\nvacuumed, we calculate the mean and standard deviation of these\ndurations using the accumulator. This is used to calculate a page LSN\nthreshold demarcating pages with a < 5% likelihood of being modified\nbefore target_freeze_duration. We don't freeze pages younger than this\nthreshold.\n\nI've done away with the ring buffer of vacuums and the idea of\nattributing an \"unset\" to the vacuum that set it all visible. Instead,\nusing an \"accumulator\", I keep a running sum of the page ages, along\nwith the cardinality of the accumulated set and the sum of the squared\npage ages. This data structure allows us to extract the mean and standard\ndeviation, at any time, from an arbitrary number of values in constant\nspace, and is used to model the pattern of these unsets as a normal\ndistribution that we can use to try and predict whether or not a page\nwill be modified.\n\nValues can be \"removed\" from the accumulator by simply decrementing its\ncardinality and decreasing the sum and sum of squares by a value that will\nnot change the mean and standard deviation of the overall distribution.\nTo adapt to a table's changing access patterns, we'll need to remove\nvalues from this accumulator over time, but this patch doesn't yet\ndecide when to do this. 
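A minimal sketch of such an accumulator, in Python for brevity (the patch itself is C): count, sum, and sum of squares are enough to recover the mean and standard deviation at any time, and a value can be "removed" without shifting either statistic by shrinking both sums proportionally.

```python
import math

class Accumulator:
    """Constant-space running statistics over observed durations."""
    def __init__(self):
        self.n = 0         # cardinality of the accumulated set
        self.sum = 0.0     # running sum of values
        self.sum_sq = 0.0  # running sum of squared values

    def insert(self, x):
        self.n += 1
        self.sum += x
        self.sum_sq += x * x

    def mean(self):
        return self.sum / self.n if self.n else 0.0

    def stddev(self):
        if self.n == 0:
            return 0.0
        var = self.sum_sq / self.n - self.mean() ** 2
        return math.sqrt(max(var, 0.0))  # clamp floating-point noise

    def forget_one(self):
        """Drop one value while leaving mean and stddev unchanged:
        scale both sums down by (n-1)/n, then decrement n."""
        if self.n <= 1:
            self.n, self.sum, self.sum_sq = 0, 0.0, 0.0
            return
        self.sum_sq -= self.sum_sq / self.n
        self.sum -= self.sum / self.n
        self.n -= 1
```

forget_one() preserves the mean and standard deviation because both sums shrink by exactly their per-value average, so sum/n and sum_sq/n are unchanged.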
Even without such removal of values,\nthe distribution recorded in the accumulator will eventually skew toward\nmore recent data, albeit at a slower rate.\n\nThis algorithm is able to predict when pages will be modified before\ntarget_freeze_threshold as long as their modification pattern fits a\nnormal distribution -- this accommodates access patterns where, after\nsome period of time, pages become less likely to be modified the older\nthey are. As an example, however, access patterns where modification\ntimes are bimodal aren't handled well by this model (imagine that half\nyour modifications are to very new data and the other half are to data\nthat is much older but still younger than target_freeze_duration). If\nthis is a common access pattern, the normal distribution could easily be\nswapped out for another distribution. The current accumulator is capable\nof tracking a distribution's first two moments of central tendency (the\nmean and standard deviation). We could track more if we wanted to use a\nfancier distribution.\n\nWe also must consider what to do when we have few unsets, e.g. with an\ninsert-only workload. When there are very few unsets (I chose 30 which\nthe internet says is the approximate minimum number required for the\ncentral limit theorem to hold), we can choose to always freeze freezable\npages; above this limit, the calculated page threshold is used. However,\nwe may not want to make predictions based on 31 values either. 
To avoid\nthis \"cliff\", we could modify the algorithm a bit to skew the mean and\nstandard deviation of the distribution using a confidence interval based\non the number of values we've collected.\n\nThe goal is to keep pages frozen for at least target_freeze_duration.\ntarget_freeze_duration is in seconds and pages only have a last\nmodification LSN, so target_freeze_duration must be converted to LSNs.\nTo accomplish this, I've added an LSNTimeline data structure, containing\nXLogRecPtr, TimestampTz pairs stored with decreasing precision as they\nage. When we need to translate the guc value to LSNs, we linearly\ninterpolate it on this timeline. For the time being, the global\nLSNTimeline is in PgStat_WalStats and is only updated by vacuum. There\nis no reason it can't be updated with some other cadence and/or by some\nother process (nothing about it is inherently tied to vacuum). The\ncached translated value of target_freeze_duration is stored in each\ntable's stats. This is arbitrary as it is not a table-level stat.\nHowever, it needs to be located somewhere that is accessible on\nupdate/delete. We may want to recalculate it more often than once per\ntable vacuum, especially in case of long-running vacuums.\n\nTo benchmark this new heuristic (I'm calling it algo 6 since it is the\n6th I've proposed on this thread), I've used a modified subset of my\noriginal workloads:\n\nWorkloads\n\nC. Shifting hot set\n 32 clients inserting multiple rows and then updating an indexed\n column of a row on a different page containing data they formerly\n inserted. Only recent data is updated.\n\nH. Append only table\n 32 clients, each inserting a single row at a time\n\nI. 
Work queue\n 32 clients, each inserting a row, then updating an indexed column in\n 2 different rows in nearby pages, then deleting 1 of the updated rows\n\n\nWorkload C:\n\nAlgo | Table | AVs | Page Freezes | Pages Frozen | % Frozen\n-----|----------|-----|--------------|--------------|---------\n 6 | hot_tail | 14 | 2,111,824 | 1,643,217 | 53.4%\n M | hot_tail | 16 | 241,605 | 3,921 | 0.1%\n\n\nAlgo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO time | TPS\n-----|--------|------------------|------------|------------|--------\n 6 | 193 | 5,473,949 | 12,793,574 | 14,870 | 28,397\n M | 207 | 5,213,763 | 20,362,315 | 46,190 | 28,461\n\nAlgorithm 6 freezes all of the cold data and doesn't freeze the current\nworking set. The notable thing is how much this reduces overall system\nI/O. On master, autovacuum is doing more than 3x the I/O and the rest of\nthe system is doing more than 1.5x the I/O. I suspect freezing data when\nit is initially vacuumed is saving future vacuums from having to evict\npages of the working set and read in cold data.\n\n\nWorkload H:\n\nAlgo | Table | AVs | Page Freezes | Pages Frozen | % frozen\n-----|-----------|-----|--------------|--------------|---------\n 6 | hthistory | 22 | 668,448 | 668,003 | 87%\n M | hthistory | 22 | 0 | 0 | 0%\n\n\nAlgo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO time | TPS\n-----|--------|------------------|-----------|------------|--------\n 6 | 14 | 725,103 | 725,575 | 1 | 43,535\n M | 13 | 693,945 | 694,417 | 1 | 43,522\n\nThe insert-only table is mostly frozen at the end. There is more I/O\ndone but not at the expense of TPS. 
This is exactly what we want.\n\n\nWorkload I:\n\nAlgo | Table | AVs | Page Freezes | Pages Frozen | % Frozen\n-----|-----------|-----|--------------|--------------|---------\n 6 | workqueue | 234 | 0 | 4,416 | 78%\n M | workqueue | 234 | 0 | 4,799 | 87%\n\n\nAlgo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO Time | TPS\n-----|--------|------------------|-----------|------------|--------\n 6 | 74 | 64,345 | 64,813 | 1 | 36,366\n M | 73 | 63,468 | 63,936 | 1 | 36,145\n\nWhat we want is for the work queue table to freeze as little as\npossible, because we will be constantly modifying the data. Both on\nmaster and with algorithm 6 we do not freeze tuples on any pages. You\nwill notice, however, that much of the table is set frozen in the VM at\nthe end. This is because we set pages all frozen in the VM if they are\ntechnically all frozen even if we do not freeze tuples on the page. This\nis inexpensive and not under the control of the freeze heuristic.\n\nOverall, the behavior of this new adaptive freezing method seems to be\nexactly what we want. The next step is to decide how many values to\nremove from the accumulator and benchmark cases where old data is\ndeleted.\n\nI'd be delighted to receive any feedback, ideas, questions, or review.\n\n- Melanie", "msg_date": "Fri, 8 Dec 2023 23:11:54 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On 12/8/23 23:11, Melanie Plageman wrote:\n> On Wed, Nov 8, 2023 at 9:23 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>> The next step is to devise different heuristics and measure their\n>> efficacy. IMO, the goal of the algorithm it is to freeze pages in a\n>> relation such that we drive early unfreezes/freezes -> 0 and pages\n>> frozen/number of pages of a certain age -> 1.\n> \n> Attached is a patchset with an adaptive freezing algorithm that works\n> well. 
It keeps track of the pages' unmodified duration after having been\n> set all-visible and uses that to predict how likely a page is to be\n> modified given its age.\n> \n> Each time an all-visible page is modified by an update, delete, or tuple\n> lock, if the page was all-visible for less than target_freeze_duration\n> (a new guc that specifies the minimum amount of time you would like data\n> to stay frozen), it is counted as an \"early unset\" and the duration that\n> it was unmodified is added into an accumulator. Before each relation is\n> vacuumed, we calculate the mean and standard deviation of these\n> durations using the accumulator. This is used to calculate a page LSN\n> threshold demarcating pages with a < 5% likelihood of being modified\n> before target_freeze_duration. We don't freeze pages younger than this\n> threshold.\n> \n> I've done away with the ring buffer of vacuums and the idea of\n> attributing an \"unset\" to the vacuum that set it all visible. Instead,\n> using an \"accumulator\", I keep a running sum of the page ages, along\n> with the cardinality of the accumulated set and the sum squared of page\n> ages. This data structure allows us to extract the mean and standard\n> deviation, at any time, from an arbitrary number of values in constant\n> space; and is used to model the pattern of these unsets as a normal\n> distribution that we can use to try and predict whether or not a page\n> will be modified.\n> \n> Values can be \"removed\" from the accumulator by simply decrementing its\n> cardinality and decreasing the sum and sum squared by a value that will\n> not change the mean and standard deviation of the overall distribution.\n> To adapt to a table's changing access patterns, we'll need to remove\n> values from this accumulator over time, but this patch doesn't yet\n> decide when to do this. 
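(Editorial aside, for illustration only: the constant-space accumulator described above might look like the following Python sketch. The class and method names are invented here -- the actual patch is C code inside PostgreSQL -- this just demonstrates the running count / sum / sum-of-squares bookkeeping and a removal that preserves the mean and standard deviation.)

```python
import math

class UnsetAccumulator:
    """Hypothetical constant-space accumulator over page ages."""

    def __init__(self):
        self.n = 0           # cardinality of the accumulated set
        self.total = 0.0     # running sum of page ages
        self.total_sq = 0.0  # running sum of squared page ages

    def add(self, age):
        self.n += 1
        self.total += age
        self.total_sq += age * age

    def mean(self):
        return self.total / self.n if self.n else 0.0

    def stddev(self):
        if self.n == 0:
            return 0.0
        var = self.total_sq / self.n - self.mean() ** 2
        return math.sqrt(max(var, 0.0))

    def remove_one(self):
        # Decrement cardinality and subtract amounts chosen so the mean
        # and standard deviation do not change: the mean itself from the
        # sum, and E[X^2] = variance + mean^2 from the sum of squares.
        if self.n <= 1:
            self.__init__()
            return
        m, s = self.mean(), self.stddev()
        self.n -= 1
        self.total -= m
        self.total_sq -= s * s + m * m
```

Subtracting the mean from the sum and E[X^2] from the sum of squares leaves both moments unchanged, which is the "removal" trick described in the quoted text.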
A simple solution may be to cap the cardinality\n> of the accumulator to the greater of 1% of the table size, or some fixed\n> number of values (perhaps 200?). Even without such removal of values,\n> the distribution recorded in the accumulator will eventually skew toward\n> more recent data, albeit at a slower rate.\n> \n> This algorithm is able to predict when pages will be modified before\n> target_freeze_duration as long as their modification pattern fits a\n> normal distribution -- this accommodates access patterns where, after\n> some period of time, pages become less likely to be modified the older\n> they are. As an example, however, access patterns where modification\n> times are bimodal aren't handled well by this model (imagine that half\n> your modifications are to very new data and the other half are to data\n> that is much older but still younger than target_freeze_duration). If\n> this is a common access pattern, the normal distribution could easily be\n> swapped out for another distribution. The current accumulator is capable\n> of tracking a distribution's first two moments of central tendency (the\n> mean and standard deviation). We could track more if we wanted to use a\n> fancier distribution.\n> \n> We also must consider what to do when we have few unsets, e.g. with an\n> insert-only workload. When there are very few unsets (I chose 30 which\n> the internet says is the approximate minimum number required for the\n> central limit theorem to hold), we can choose to always freeze freezable\n> pages; above this limit, the calculated page threshold is used. However,\n> we may not want to make predictions based on 31 values either. 
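(Again illustrative only, with invented names: under a normal model, the "< 5% likelihood" cutoff described above works out to roughly mean + 1.645 standard deviations -- 1.645 being the one-sided z-score for the 95th percentile -- and below ~30 observations this sketch falls back to freezing all freezable pages, per the preceding paragraph.)

```python
Z_95_ONE_SIDED = 1.645   # one-sided z-score for the 95th percentile
MIN_UNSETS = 30          # below this, always freeze freezable pages

def freeze_age_cutoff(n_unsets, mean_age, stddev_age):
    """Age above which a page is modeled as having < 5% chance of
    being modified before target_freeze_duration (hypothetical)."""
    if n_unsets < MIN_UNSETS:
        return 0.0  # too few observations: freeze everything freezable
    return mean_age + Z_95_ONE_SIDED * stddev_age

def should_freeze(page_age, n_unsets, mean_age, stddev_age):
    # "We don't freeze pages younger than this threshold."
    return page_age >= freeze_age_cutoff(n_unsets, mean_age, stddev_age)
```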
To avoid\n> this \"cliff\", we could modify the algorithm a bit to skew the mean and\n> standard deviation of the distribution using a confidence interval based\n> on the number of values we've collected.\n> \n> The goal is to keep pages frozen for at least target_freeze_duration.\n> target_freeze_duration is in seconds and pages only have a last\n> modification LSN, so target_freeze_duration must be converted to LSNs.\n> To accomplish this, I've added an LSNTimeline data structure, containing\n> XLogRecPtr, TimestampTz pairs stored with decreasing precision as they\n> age. When we need to translate the guc value to LSNs, we linearly\n> interpolate it on this timeline. For the time being, the global\n> LSNTimeline is in PgStat_WalStats and is only updated by vacuum. There\n> is no reason it can't be updated with some other cadence and/or by some\n> other process (nothing about it is inherently tied to vacuum). The\n> cached translated value of target_freeze_duration is stored in each\n> table's stats. This is arbitrary as it is not a table-level stat.\n> However, it needs to be located somewhere that is accessible on\n> update/delete. We may want to recalculate it more often than once per\n> table vacuum, especially in case of long-running vacuums.\n> \n> To benchmark this new heuristic (I'm calling it algo 6 since it is the\n> 6th I've proposed on this thread), I've used a modified subset of my\n> original workloads:\n> \n> Workloads\n> \n> C. Shifting hot set\n> 32 clients inserting multiple rows and then updating an indexed\n> column of a row on a different page containing data they formerly\n> inserted. Only recent data is updated.\n> \n> H. Append only table\n> 32 clients, each inserting a single row at a time\n> \n> I. 
Work queue\n> 32 clients, each inserting a row, then updating an indexed column in\n> 2 different rows in nearby pages, then deleting 1 of the updated rows\n> \n> \n> Workload C:\n> \n> Algo | Table | AVs | Page Freezes | Pages Frozen | % Frozen\n> -----|----------|-----|--------------|--------------|---------\n> 6 | hot_tail | 14 | 2,111,824 | 1,643,217 | 53.4%\n> M | hot_tail | 16 | 241,605 | 3,921 | 0.1%\n> \n> \n> Algo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO time | TPS\n> -----|--------|------------------|------------|------------|--------\n> 6 | 193 | 5,473,949 | 12,793,574 | 14,870 | 28,397\n> M | 207 | 5,213,763 | 20,362,315 | 46,190 | 28,461\n> \n> Algorithm 6 freezes all of the cold data and doesn't freeze the current\n> working set. The notable thing is how much this reduces overall system\n> I/O. On master, autovacuum is doing more than 3x the I/O and the rest of\n> the system is doing more than 1.5x the I/O. I suspect freezing data when\n> it is initially vacuumed is saving future vacuums from having to evict\n> pages of the working set and read in cold data.\n> \n> \n> Workload H:\n> \n> Algo | Table | AVs | Page Freezes | Pages Frozen | % frozen\n> -----|-----------|-----|--------------|--------------|---------\n> 6 | hthistory | 22 | 668,448 | 668,003 | 87%\n> M | hthistory | 22 | 0 | 0 | 0%\n> \n> \n> Algo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO time | TPS\n> -----|--------|------------------|-----------|------------|--------\n> 6 | 14 | 725,103 | 725,575 | 1 | 43,535\n> M | 13 | 693,945 | 694,417 | 1 | 43,522\n> \n> The insert-only table is mostly frozen at the end. There is more I/O\n> done but not at the expense of TPS. 
This is exactly what we want.\n> \n> \n> Workload I:\n> \n> Algo | Table | AVs | Page Freezes | Pages Frozen | % Frozen\n> -----|-----------|-----|--------------|--------------|---------\n> 6 | workqueue | 234 | 0 | 4,416 | 78%\n> M | workqueue | 234 | 0 | 4,799 | 87%\n> \n> \n> Algo | WAL GB | Cptr Bgwr Writes | Other r/w | AV IO Time | TPS\n> -----|--------|------------------|-----------|------------|--------\n> 6 | 74 | 64,345 | 64,813 | 1 | 36,366\n> M | 73 | 63,468 | 63,936 | 1 | 36,145\n> \n> What we want is for the work queue table to freeze as little as\n> possible, because we will be constantly modifying the data. Both on\n> master and with algorithm 6 we do not freeze tuples on any pages. You\n> will notice, however, that much of the table is set frozen in the VM at\n> the end. This is because we set pages all frozen in the VM if they are\n> technically all frozen even if we do not freeze tuples on the page. This\n> is inexpensive and not under the control of the freeze heuristic.\n> \n> Overall, the behavior of this new adaptive freezing method seems to be\n> exactly what we want. The next step is to decide how many values to\n> remove from the accumulator and benchmark cases where old data is\n> deleted.\n> \n> I'd be delighted to receive any feedback, ideas, questions, or review.\n\n\nThis is well thought out, well described, and a fantastic improvement in \nmy view -- well done!\n\nI do think we will need to consider distributions other than normal, but \nI don't know offhand what they will be.\n\nHowever, even if we assume a more-or-less normal distribution, we should \nconsider using subgroups in a way similar to Statistical Process \nControl[1]. The reasoning is explained in this quote:\n\n The Math Behind Subgroup Size\n\n The Central Limit Theorem (CLT) plays a pivotal role here. 
According\n to CLT, as the subgroup size (n) increases, the distribution of the\n sample means will approximate a normal distribution, regardless of\n the shape of the population distribution. Therefore, as your\n subgroup size increases, your control chart limits will narrow,\n making the chart more sensitive to special cause variation and more\n prone to false alarms.\n\n\n[1] \nhttps://www.qualitygurus.com/rational-subgrouping-in-statistical-process-control/\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 9 Dec 2023 09:24:01 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "Great results.\n\nOn Sat, Dec 9, 2023 at 5:12 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Values can be \"removed\" from the accumulator by simply decrementing its\n> cardinality and decreasing the sum and sum squared by a value that will\n> not change the mean and standard deviation of the overall distribution.\n> To adapt to a table's changing access patterns, we'll need to remove\n> values from this accumulator over time, but this patch doesn't yet\n> decide when to do this. A simple solution may be to cap the cardinality\n> of the accumulator to the greater of 1% of the table size, or some fixed\n> number of values (perhaps 200?). Even without such removal of values,\n> the distribution recorded in the accumulator will eventually skew toward\n> more recent data, albeit at a slower rate.\n\nI think we're going to need something here. 
Otherwise, after 6 months\nof use, changing a table's perceived access pattern will be quite\ndifficult.\n\nI think one challenge here is to find something that doesn't decay too\noften and end up with cases where it basically removes all the data.\n\nAs long as you avoid that, I suspect that the algorithm might not be\nterribly sensitive to other kinds of changes. If you decay after 200\nvalues or 2000 or 20,000, it will only affect how fast we can change\nour notion of the access pattern, and my guess would be that any of\nthose values would produce broadly acceptable results, with some\ndifferences in the details. If you decay after 200,000,000 values or\nnot at all, then I think there will be problems.\n\n> This algorithm is able to predict when pages will be modified before\n> target_freeze_duration as long as their modification pattern fits a\n> normal distribution -- this accommodates access patterns where, after\n> some period of time, pages become less likely to be modified the older\n> they are. As an example, however, access patterns where modification\n> times are bimodal aren't handled well by this model (imagine that half\n> your modifications are to very new data and the other half are to data\n> that is much older but still younger than target_freeze_duration). If\n> this is a common access pattern, the normal distribution could easily be\n> swapped out for another distribution. The current accumulator is capable\n> of tracking a distribution's first two moments of central tendency (the\n> mean and standard deviation). We could track more if we wanted to use a\n> fancier distribution.\n\nI think it might be fine to ignore this problem. What we're doing\nright now is way worse.\n\n> We also must consider what to do when we have few unsets, e.g. with an\n> insert-only workload. 
When there are very few unsets (I chose 30 which\n> the internet says is the approximate minimum number required for the\n> central limit theorem to hold), we can choose to always freeze freezable\n> pages; above this limit, the calculated page threshold is used. However,\n> we may not want to make predictions based on 31 values either. To avoid\n> this \"cliff\", we could modify the algorithm a bit to skew the mean and\n> standard deviation of the distribution using a confidence interval based\n> on the number of values we've collected.\n\nI think some kind of smooth transition might be a good idea, but\nI also don't know how much it matters. I think the important thing is\nthat we freeze aggressively unless there's a clear signal telling us\nnot to do that. Absence of evidence should cause us to tend toward\naggressive freezing. Otherwise, we get insert-only tables wrong.\n\n> The goal is to keep pages frozen for at least target_freeze_duration.\n> target_freeze_duration is in seconds and pages only have a last\n> modification LSN, so target_freeze_duration must be converted to LSNs.\n> To accomplish this, I've added an LSNTimeline data structure, containing\n> XLogRecPtr, TimestampTz pairs stored with decreasing precision as they\n> age. When we need to translate the guc value to LSNs, we linearly\n> interpolate it on this timeline. For the time being, the global\n> LSNTimeline is in PgStat_WalStats and is only updated by vacuum. There\n> is no reason it can't be updated with some other cadence and/or by some\n> other process (nothing about it is inherently tied to vacuum). The\n> cached translated value of target_freeze_duration is stored in each\n> table's stats. This is arbitrary as it is not a table-level stat.\n> However, it needs to be located somewhere that is accessible on\n> update/delete. 
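(A hypothetical sketch of the linear interpolation described just above. The real LSNTimeline stores XLogRecPtr/TimestampTz pairs in C with decreasing precision as they age; here a plain sorted list of (time, lsn) pairs stands in, just to show the interpolation step.)

```python
import bisect

def estimate_lsn_at(timeline, target_time):
    """timeline: list of (time, lsn) pairs sorted by time, oldest first.
    Linearly interpolate the LSN at target_time, clamping at the ends."""
    times = [t for t, _ in timeline]
    i = bisect.bisect_left(times, target_time)
    if i == 0:
        return timeline[0][1]            # before the oldest entry
    if i == len(timeline):
        return timeline[-1][1]           # after the newest entry
    (t0, l0), (t1, l1) = timeline[i - 1], timeline[i]
    frac = (target_time - t0) / (t1 - t0)
    return l0 + frac * (l1 - l0)

def cutoff_lsn(timeline, now, target_freeze_duration):
    # The LSN a page must predate to be considered older than
    # target_freeze_duration, per the translation described above.
    return estimate_lsn_at(timeline, now - target_freeze_duration)
```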
We may want to recalculate it more often than once per\n> table vacuum, especially in case of long-running vacuums.\n\nThis part sounds like it isn't quite baked yet. The idea of the data\nstructure seems fine, but updating it once per vacuum sounds fairly\nunprincipled to me? Don't we want the updates to happen on a somewhat\nregular wall clock cadence?\n\n> To benchmark this new heuristic (I'm calling it algo 6 since it is the\n> 6th I've proposed on this thread), I've used a modified subset of my\n> original workloads:\n>\n> [ everything worked perfectly ]\n\nHard to beat that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Dec 2023 18:24:25 +0100", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Sat, Dec 9, 2023 at 9:24 AM Joe Conway <mail@joeconway.com> wrote:\n>\n> On 12/8/23 23:11, Melanie Plageman wrote:\n> >\n> > I'd be delighted to receive any feedback, ideas, questions, or review.\n>\n>\n> This is well thought out, well described, and a fantastic improvement in\n> my view -- well done!\n\nThanks, Joe! That means a lot! I see work done by hackers on the\nmailing list a lot that makes me think, \"hey, that's\ncool/clever/awesome!\" but I don't give that feedback. I appreciate you\ndoing that!\n\n> I do think we will need to consider distributions other than normal, but\n> I don't know offhand what they will be.\n\nAgreed. I plan to test with another distribution. Though, the exercise\nof determining which ones are useful is probably more challenging.\nI imagine we will have to choose one distribution (as opposed to\nsupporting different distributions and choosing based on data access\npatterns for a table). 
Though, even with a normal distribution, I\nthink it should be an improvement.\n\n> However, even if we assume a more-or-less normal distribution, we should\n> consider using subgroups in a way similar to Statistical Process\n> Control[1]. The reasoning is explained in this quote:\n>\n> The Math Behind Subgroup Size\n>\n> The Central Limit Theorem (CLT) plays a pivotal role here. According\n> to CLT, as the subgroup size (n) increases, the distribution of the\n> sample means will approximate a normal distribution, regardless of\n> the shape of the population distribution. Therefore, as your\n> subgroup size increases, your control chart limits will narrow,\n> making the chart more sensitive to special cause variation and more\n> prone to false alarms.\n\nI haven't read anything about statistical process control until you\nmentioned this. I read the link you sent and also googled around a\nbit. I was under the impression that the more samples we have, the\nbetter. But, it seems like this may not be the assumption in\nstatistical process control?\n\nIt may help us to get more specific. I'm not sure what the\nrelationship between \"unsets\" in my code and subgroup members would\nbe. The article you linked suggests that each subgroup should be of\nsize 5 or smaller. Translating that to my code, were you imagining\nsubgroups of \"unsets\" (each time we modify a page that was previously\nall-visible)?\n\nThanks for the feedback!\n\n- Melanie\n\n\n", "msg_date": "Thu, 21 Dec 2023 10:56:09 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On 12/21/23 10:56, Melanie Plageman wrote:\n> On Sat, Dec 9, 2023 at 9:24 AM Joe Conway <mail@joeconway.com> wrote:\n>> However, even if we assume a more-or-less normal distribution, we should\n>> consider using subgroups in a way similar to Statistical Process\n>> Control[1]. 
The reasoning is explained in this quote:\n>>\n>> The Math Behind Subgroup Size\n>>\n>> The Central Limit Theorem (CLT) plays a pivotal role here. According\n>> to CLT, as the subgroup size (n) increases, the distribution of the\n>> sample means will approximate a normal distribution, regardless of\n>> the shape of the population distribution. Therefore, as your\n>> subgroup size increases, your control chart limits will narrow,\n>> making the chart more sensitive to special cause variation and more\n>> prone to false alarms.\n> \n> I haven't read anything about statistical process control until you\n> mentioned this. I read the link you sent and also googled around a\n> bit. I was under the impression that the more samples we have, the\n> better. But, it seems like this may not be the assumption in\n> statistical process control?\n> \n> It may help us to get more specific. I'm not sure what the\n> relationship between \"unsets\" in my code and subgroup members would\n> be. The article you linked suggests that each subgroup should be of\n> size 5 or smaller. Translating that to my code, were you imagining\n> subgroups of \"unsets\" (each time we modify a page that was previously\n> all-visible)?\n\nBasically, yes.\n\nIt might not make sense, but I think we could test the theory by \nplotting a histogram of the raw data, and then also plot a histogram \nbased on sub-grouping every 5 sequential values in your accumulator.\n\nIf the former does not look very normal (I would guess most workloads it \nwill be skewed with a long tail) and the latter looks to be more normal, \nthen it would say we were on the right track.\n\nThere are statistical tests for \"normalness\" that could be applied too \n(<quickly looks> e.g. 
\nhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC6350423/#sec2-13title ) \nwhich would be a more rigorous approach, but the quick look at histograms \nmight be sufficiently convincing.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 11:36:43 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Wed, Dec 13, 2023 at 12:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Great results.\n\nThanks!\n\n> On Sat, Dec 9, 2023 at 5:12 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Values can be \"removed\" from the accumulator by simply decrementing its\n> > cardinality and decreasing the sum and sum squared by a value that will\n> > not change the mean and standard deviation of the overall distribution.\n> > To adapt to a table's changing access patterns, we'll need to remove\n> > values from this accumulator over time, but this patch doesn't yet\n> > decide when to do this. A simple solution may be to cap the cardinality\n> > of the accumulator to the greater of 1% of the table size, or some fixed\n> > number of values (perhaps 200?). Even without such removal of values,\n> > the distribution recorded in the accumulator will eventually skew toward\n> > more recent data, albeit at a slower rate.\n>\n> I think we're going to need something here. Otherwise, after 6 months\n> of use, changing a table's perceived access pattern will be quite\n> difficult.\n>\n> I think one challenge here is to find something that doesn't decay too\n> often and end up with cases where it basically removes all the data.\n>\n> As long as you avoid that, I suspect that the algorithm might not be\n> terribly sensitive to other kinds of changes. 
If you decay after 200\n> values or 2000 or 20,000, it will only affect how fast we can change\n> our notion of the access pattern, and my guess would be that any of\n> those values would produce broadly acceptable results, with some\n> differences in the details. If you decay after 200,000,000 values or\n> not at all, then I think there will be problems.\n\nI'll add the decay logic and devise a benchmark that will exercise it.\nI can test at least one or two of these ideas.\n\n> > The goal is to keep pages frozen for at least target_freeze_duration.\n> > target_freeze_duration is in seconds and pages only have a last\n> > modification LSN, so target_freeze_duration must be converted to LSNs.\n> > To accomplish this, I've added an LSNTimeline data structure, containing\n> > XLogRecPtr, TimestampTz pairs stored with decreasing precision as they\n> > age. When we need to translate the guc value to LSNs, we linearly\n> > interpolate it on this timeline. For the time being, the global\n> > LSNTimeline is in PgStat_WalStats and is only updated by vacuum. There\n> > is no reason it can't be updated with some other cadence and/or by some\n> > other process (nothing about it is inherently tied to vacuum). The\n> > cached translated value of target_freeze_duration is stored in each\n> > table's stats. This is arbitrary as it is not a table-level stat.\n> > However, it needs to be located somewhere that is accessible on\n> > update/delete. We may want to recalculate it more often than once per\n> > table vacuum, especially in case of long-running vacuums.\n>\n> This part sounds like it isn't quite baked yet. The idea of the data\n> structure seems fine, but updating it once per vacuum sounds fairly\n> unprincipled to me? Don't we want the updates to happen on a somewhat\n> regular wall clock cadence?\n\nYes, this part was not fully baked. I actually discussed this with\nAndres at PGConf EU last week and he suggested that background writer\nupdate the LSNTimeline. 
He also suggested I propose the LSNTimeline in\na new thread. I could add a pageinspect function returning the\nestimated time of last page modification given the page LSN (so it is\nproposed with a user).\n\n- Melanie\n\n\n", "msg_date": "Thu, 21 Dec 2023 15:58:17 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Dec 21, 2023 at 10:56 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Agreed. I plan to test with another distribution. Though, the exercise\n> of determining which ones are useful is probably more challenging.\n> I imagine we will have to choose one distribution (as opposed to\n> supporting different distributions and choosing based on data access\n> patterns for a table). Though, even with a normal distribution, I\n> think it should be an improvement.\n\nOur current algorithm isn't adaptive at all, so I like our chances of\ncoming out ahead. It won't surprise me if somebody finds a case where\nthere is a regression, but if we handle some common and important\ncases correctly (e.g. append-only, update-everything-nonstop) then I\nthink we're probably ahead even if there are some cases where we do\nworse. 
It does depend on how much worse they are, and how realistic\nthey are, but we don't want to be too fearful here: we know what we're\ndoing right now isn't too great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Dec 2023 16:07:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Eager page freeze criteria clarification" }, { "msg_contents": "On Thu, Dec 21, 2023 at 3:58 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Dec 13, 2023 at 12:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Sat, Dec 9, 2023 at 5:12 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > The goal is to keep pages frozen for at least target_freeze_duration.\n> > > target_freeze_duration is in seconds and pages only have a last\n> > > modification LSN, so target_freeze_duration must be converted to LSNs.\n> > > To accomplish this, I've added an LSNTimeline data structure, containing\n> > > XLogRecPtr, TimestampTz pairs stored with decreasing precision as they\n> > > age. When we need to translate the guc value to LSNs, we linearly\n> > > interpolate it on this timeline. For the time being, the global\n> > > LSNTimeline is in PgStat_WalStats and is only updated by vacuum. There\n> > > is no reason it can't be updated with some other cadence and/or by some\n> > > other process (nothing about it is inherently tied to vacuum). The\n> > > cached translated value of target_freeze_duration is stored in each\n> > > table's stats. This is arbitrary as it is not a table-level stat.\n> > > However, it needs to be located somewhere that is accessible on\n> > > update/delete. We may want to recalculate it more often than once per\n> > > table vacuum, especially in case of long-running vacuums.\n> >\n> > This part sounds like it isn't quite baked yet. The idea of the data\n> > structure seems fine, but updating it once per vacuum sounds fairly\n> > unprincipled to me? 
Don't we want the updates to happen on a somewhat\n> > regular wall clock cadence?\n>\n> Yes, this part was not fully baked. I actually discussed this with\n> Andres at PGConf EU last week and he suggested that background writer\n> update the LSNTimeline. He also suggested I propose the LSNTimeline in\n> a new thread. I could add a pageinspect function returning the\n> estimated time of last page modification given the page LSN (so it is\n> proposed with a user).\n\nI've rebased this over top of the revised LSNTimeline functionality\nproposed separately in [1].\nIt is also registered for the current commitfest.\n\nI plan to add the decay logic and benchmark it this week.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_Z7tR7D1=DR=xWQDefYk_nu_gxgW88X0HtxN6AsK-8_gA@mail.gmail.com", "msg_date": "Tue, 2 Jan 2024 13:47:07 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Eager page freeze criteria clarification" } ]
[ { "msg_contents": "The PostgreSQL contributors team has been looking over the community\nactivity and, over the first half of this year, has been recognizing\nnew contributors to be listed on\n\nhttps://www.postgresql.org/community/contributors/\n\nNew PostgreSQL Contributors:\n\n Ajin Cherian\n Alexander Kukushkin\n Alexander Lakhin\n Dmitry Dolgov\n Hou Zhijie\n Ilya Kosmodemiansky\n Melanie Plageman\n Michael Banck\n Michael Brewer\n Paul Jungwirth\n Peter Smith\n Vigneshwaran C\n\nNew PostgreSQL Major Contributors:\n\n Julien Rouhaud\n Stacey Haysler\n Steve Singer\n\nThank you all for contributing to PostgreSQL to make it such a great project!\n\nFor the contributors team,\nChristoph\n\n\n", "msg_date": "Fri, 28 Jul 2023 17:29:38 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "New PostgreSQL Contributors" }, { "msg_contents": "On 7/28/23 17:29, Christoph Berg wrote:\n\n> New PostgreSQL Contributors:\n> \n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n> \n> New PostgreSQL Major Contributors:\n> \n> Stacey Haysler\n> Steve Singer\n\nCongratulations to all, and well deserved!\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 28 Jul 2023 19:19:21 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" }, { "msg_contents": "On Fri, Jul 28, 2023 at 07:19:21PM +0200, Vik Fearing wrote:\n> On 7/28/23 17:29, Christoph Berg wrote:\n> \n>> New PostgreSQL Contributors:\n>> \n>> Ajin Cherian\n>> Alexander Kukushkin\n>> Alexander Lakhin\n>> Dmitry Dolgov\n>> Hou Zhijie\n>> Ilya Kosmodemiansky\n>> Melanie Plageman\n>> Michael Banck\n>> Michael Brewer\n>> Paul Jungwirth\n>> Peter Smith\n>> Vigneshwaran C\n>> \n>> New PostgreSQL Major Contributors:\n>> \n>> Stacey Haysler\n>> Steve Singer\n> 
\n> Congratulations to all, and well deserved!\n\n+1, congrats!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Jul 2023 10:40:34 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" }, { "msg_contents": "Bingo work! Thanks to all the contributors!\n\nBest,\nDianjin Wang\n\n\nOn Fri, Jul 28, 2023 at 11:29 PM Christoph Berg <myon@debian.org> wrote:\n\n> The PostgreSQL contributors team has been looking over the community\n> activity and, over the first half of this year, has been recognizing\n> new contributors to be listed on\n>\n> https://www.postgresql.org/community/contributors/\n>\n> New PostgreSQL Contributors:\n>\n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n>\n> New PostgreSQL Major Contributors:\n>\n> Julien Rouhaud\n> Stacey Haysler\n> Steve Singer\n>\n> Thank you all for contributing to PostgreSQL to make it such a great\n> project!\n>\n> For the contributors team,\n> Christoph\n>\n>\n>\n", "msg_date": "Mon, 31 Jul 2023 09:34:10 +0800", "msg_from": "Dianjin Wang <wangdianjin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" },
{ "msg_contents": "Congrats!\n\nOn Fri, Jul 28, 2023 at 10:29 PM Christoph Berg <myon@debian.org> wrote:\n\n> The PostgreSQL contributors team has been looking over the community\n> activity and, over the first half of this year, has been recognizing\n> new contributors to be listed on\n>\n> https://www.postgresql.org/community/contributors/\n>\n> New PostgreSQL Contributors:\n>\n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n>\n> New PostgreSQL Major Contributors:\n>\n> Julien Rouhaud\n> Stacey Haysler\n> Steve Singer\n>\n> Thank you all for contributing to PostgreSQL to make it such a great\n> project!\n>\n> For the contributors team,\n> Christoph\n>\n>\n>\n", "msg_date": "Mon, 31 Jul 2023 10:18:30 +0700", "msg_from": "Chris Travers <chris@orioledata.com>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" },
{ "msg_contents": "On Fri, Jul 28, 2023 at 8:59 PM Christoph Berg <myon@debian.org> wrote:\n\n> The PostgreSQL contributors team has been looking over the community\n> activity and, over the first half of this year, has been recognizing\n> new contributors to be listed on\n>\n> https://www.postgresql.org/community/contributors/\n>\n> New PostgreSQL Contributors:\n>\n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n>\n> New PostgreSQL Major Contributors:\n>\n> Julien Rouhaud\n> Stacey Haysler\n> Steve Singer\n>\n\nMany congratulations !!\n\nRegards,\nAmul\n", "msg_date": "Mon, 31 Jul 2023 10:07:08 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" },
{ "msg_contents": "Hi,\n\nOn Fri, Jul 28, 2023 at 8:59 PM Christoph Berg <myon@debian.org> wrote:\n\n> The PostgreSQL contributors team has been looking over the community\n> activity and, over the first half of this year, has been recognizing\n> new contributors to be listed on\n>\n> https://www.postgresql.org/community/contributors/\n>\n> New PostgreSQL Contributors:\n>\n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n>\n> New PostgreSQL Major Contributors:\n>\n> Julien Rouhaud\n> Stacey Haysler\n> Steve Singer\n>\n> Congratulations to all the new contributors!\n\nThank you,\nRahila syed\n", "msg_date": "Mon, 31 Jul 2023 10:21:39 +0530", "msg_from": "Rahila Syed <rahilasyed90@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New
PostgreSQL Contributors" }, { "msg_contents": "On Fri, 28 Jul 2023 at 17:29, Christoph Berg <myon@debian.org> wrote:\n>\n> The PostgreSQL contributors team has been looking over the community\n> activity and, over the first half of this year, has been recognizing\n> new contributors to be listed on\n>\n> https://www.postgresql.org/community/contributors/\n>\n> New PostgreSQL Contributors:\n>\n> Ajin Cherian\n> Alexander Kukushkin\n> Alexander Lakhin\n> Dmitry Dolgov\n> Hou Zhijie\n> Ilya Kosmodemiansky\n> Melanie Plageman\n> Michael Banck\n> Michael Brewer\n> Paul Jungwirth\n> Peter Smith\n> Vigneshwaran C\n>\n> New PostgreSQL Major Contributors:\n>\n> Julien Rouhaud\n> Stacey Haysler\n> Steve Singer\n>\n> Thank you all for contributing to PostgreSQL to make it such a great project!\n\n+1, thank you all, and congratulations!\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 31 Jul 2023 14:39:06 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New PostgreSQL Contributors" } ]
[ { "msg_contents": "Hi,\n\nThe pg_leftmost_one_pos32 function with msvc compiler,\nrelies on Assert to guarantee the correct result.\n\nBut msvc documentation [1] says clearly that\nundefined behavior can occur.\n\nFix by checking the result of the function to avoid\na possible undefined behavior.\n\npatch attached.\n\nbest regards,\nRanier Vilela\n\n[1]\nhttps://learn.microsoft.com/en-us/cpp/intrinsics/bitscanreverse-bitscanreverse64?view=msvc-170", "msg_date": "Sat, 29 Jul 2023 09:37:19 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" }, { "msg_contents": "On Sat, Jul 29, 2023 at 7:37 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n>\n> The pg_leftmost_one_pos32 function with msvc compiler,\n> relies on Assert to guarantee the correct result.\n>\n> But msvc documentation [1] says clearly that\n> undefined behavior can occur.\n\nIt seems that we should have \"Assert(word != 0);\" at the top, which matches\nthe other platforms anyway, so I'll add that.\n\n> Fix by checking the result of the function to avoid\n> a possible undefined behavior.\n\nNo other platform does this, and that is by design. I'm also not\nparticularly impressed by the unrelated cosmetic changes.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Sun, 30 Jul 2023 15:07:25 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> It seems that we should have \"Assert(word != 0);\" at the top, which matches\n> the other platforms anyway, so I'll add that.\n\nThat's basically equivalent to the existing Assert(non_zero).\nI think it'd be okay to drop that one and instead have\nthe same Assert condition as other platforms, but having both\nwould be redundant.\n\nI agree that adding the non-Assert test that Ranier wants\nis entirely pointless.  If a caller did manage to violate\nthe asserted-for condition (and we don't have asserts on),\nreturning zero is not better than returning an unspecified\nvalue.  If anything it might be worse, since it might not\nlead to obvious misbehavior.\n\nOn the whole, there's not anything wrong with the code as-is.\nA case could be made for making the MSVC asserts more like\nother platforms, but it's quite cosmetic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jul 2023 10:45:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" }, { "msg_contents": "On Sun, Jul 30, 2023 at 9:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > It seems that we should have \"Assert(word != 0);\" at the top, which\nmatches\n> > the other platforms anyway, so I'll add that.\n>\n> That's basically equivalent to the existing Assert(non_zero).\n> I think it'd be okay to drop that one and instead have\n> the same Assert condition as other platforms, but having both\n> would be 
redundant.\n\nWorks for me, so done that way for both forward and reverse variants. Since\nthe return value is no longer checked in any builds, I thought about\nremoving the variable containing it, but it seems best to leave it behind\nfor clarity since these are not our functions.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 31 Jul 2023 14:56:15 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Sun, Jul 30, 2023 at 9:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That's basically equivalent to the existing Assert(non_zero).\n>> I think it'd be okay to drop that one and instead have\n>> the same Assert condition as other platforms, but having both\n>> would be redundant.\n\n> Works for me, so done that way for both forward and reverse variants. 
Since\n> the return value is no longer checked in any builds, I thought about\n> removing the variable containing it, but it seems best to leave it behind\n> for clarity since these are not our functions.\n\nHmm, aren't you risking \"variable is set but not used\" warnings?\nPersonally I'd have made these like\n\n (void) _BitScanReverse(&result, word);\n\nAnybody trying to understand this code is going to have to look up\nthe man page for _BitScanReverse anyway, so I'm not sure that\nkeeping the variable adds much readability.\n\nHowever, if no version of MSVC actually issues such a warning,\nit's moot.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jul 2023 06:57:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" }, { "msg_contents": "On Mon, Jul 31, 2023 at 5:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n\n> > Works for me, so done that way for both forward and reverse variants.\nSince\n> > the return value is no longer checked in any builds, I thought about\n> > removing the variable containing it, but it seems best to leave it\nbehind\n> > for clarity since these are not our functions.\n>\n> Hmm, aren't you risking \"variable is set but not used\" warnings?\n> Personally I'd have made these like\n>\n> (void) _BitScanReverse(&result, word);\n\nI'd reasoned that such a warning would have showed up in non-assert builds\nalready, but I neglected to consider that those could have different\nwarning settings. For the time being drongo shows green, at least, but\nwe'll see.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 31 Jul 2023 18:55:16 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Avoid undefined behavior with msvc compiler\n (src/include/port/pg_bitutils.h)" } ]
[ { "msg_contents": "Hello postgres hackers,\n Recently I encountered an issue: pg_upgrade fails when dealing with in-place tablespace. As we know, pg_upgrade uses pg_dumpall to dump objects and pg_restore to restore them. The problem seems to be that pg_dumpall is dumping in-place tablespace as relative path, which can't be restored.\n Here is the error message of pg_upgrade:\npsql:/home/postgres/postgresql/src/bin/pg_upgrade/tmp_check/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230729T210058.329/dump/pg_upgrade_dump_globals.sql:36: ERROR: tablespace location must be an absolute path\n To help reproduce the failure, I have attached a tap test. The test also fails with tablespace regression, and it change the default value of allow_in_place_tablespaces to true only for debug, so it may not be fit for production. However, it is enough to reproduce this failure.\n I have also identified a solution for this problem, which I have included in the patch. The solution has two modifications:\n 1) Make the function pg_tablespace_location returns path \"\" with in-place tablespace, rather than relative path. Because the path of the in-place tablespace is always 'pg_tblspc/<oid>'.\n 2) Only check the tablespace with an absolute path in pg_upgrade.\n There are also other solutions, such as supporting the creation of relative-path tablespace in function CreateTableSpace. But do we really need relative-path tablespace? I think in-place tablespace is enough by now. 
My solution may be more lightweight and harmless.\n Thank you for your attention to this matter.\nBest regards,\nRui Zhao", "msg_date": "Sat, 29 Jul 2023 23:10:22 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?cGdfdXBncmFkZSBmYWlscyB3aXRoIGluLXBsYWNlIHRhYmxlc3BhY2U=?=" }, { "msg_contents": "On Mon, Jul 31, 2023 at 5:36 PM Rui Zhao <xiyuan.zr@alibaba-inc.com> wrote:\n>\n> Hello postgres hackers,\n> Recently I encountered an issue: pg_upgrade fails when dealing with in-place tablespace. As we know, pg_upgrade uses pg_dumpall to dump objects and pg_restore to restore them. The problem seems to be that pg_dumpall is dumping in-place tablespace as relative path, which can't be restored.\n>\n> Here is the error message of pg_upgrade:\n> psql:/home/postgres/postgresql/src/bin/pg_upgrade/tmp_check/t_002_pg_upgrade_new_node_data/pgdata/pg_upgrade_output.d/20230729T210058.329/dump/pg_upgrade_dump_globals.sql:36: ERROR: tablespace location must be an absolute path\n>\n> To help reproduce the failure, I have attached a tap test. The test also fails with tablespace regression, and it change the default value of allow_in_place_tablespaces to true only for debug, so it may not be fit for production. However, it is enough to reproduce this failure.\n> I have also identified a solution for this problem, which I have included in the patch. The solution has two modifications:\n> 1) Make the function pg_tablespace_location returns path \"\" with in-place tablespace, rather than relative path. Because the path of the in-place tablespace is always 'pg_tblspc/<oid>'.\n> 2) Only check the tablespace with an absolute path in pg_upgrade.\n>\n> There are also other solutions, such as supporting the creation of relative-path tablespace in function CreateTableSpace. But do we really need relative-path tablespace? I think in-place tablespace is enough by now. 
My solution may be more lightweight and harmless.\n>\n> Thank you for your attention to this matter.\n>\n> Best regards,\n> Rui Zhao\n\nSeems like allow_in_place_tablespaces is a developer only guc, and it\nis not for end user usage.\ncheck this commit 7170f2159fb21b62c263acd458d781e2f3c3f8bb\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 31 Jul 2023 23:26:13 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "On Sat, Jul 29, 2023 at 11:10:22PM +0800, Rui Zhao wrote:\n> 2) Only check the tablespace with an absolute path in pg_upgrade.\n> There are also other solutions, such as supporting the creation of\n> relative-path tablespace in function CreateTableSpace. But do we\n> really need relative-path tablespace? I think in-place tablespace\n> is enough by now. My solution may be more lightweight and\n> harmless.\n\n+ /* The path of the in-place tablespace is always pg_tblspc/<oid>. */\n if (!S_ISLNK(st.st_mode))\n- PG_RETURN_TEXT_P(cstring_to_text(sourcepath));\n+ PG_RETURN_TEXT_P(cstring_to_text(\"\"));\n\nI don't think that your solution is the correct move. Having\npg_tablespace_location() return the physical location of the\ntablespace is very useful because that's the location where the\nphysical files are, and client applications don't need to know the\nlogic behind the way a path is built.\n\n- \" spcname != 'pg_global'\");\n+ \" spcname != 'pg_global' AND \"\n+ \" pg_catalog.pg_tablespace_location(oid) ~ '^/'\");\nThat may not work on Windows where the driver letter is appended at\nthe beginning of the path, no? 
There is is_absolute_path() to do this\njob in a more portable way.\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 09:57:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "> I don't think that your solution is the correct move. Having\n> pg_tablespace_location() return the physical location of the\n> tablespace is very useful because that's the location where the\n> physical files are, and client applications don't need to know the\n> logic behind the way a path is built.\nI understand your concern. I suggest defining the standard behavior of the in-place tablespace to match that of the pg_default and pg_global tablespaces. The location of the pg_default and pg_global tablespaces is not a concern because they have default paths. Similarly, the location of the in-place tablespace is not a concern because they are all pg_tblspc/<oid>.\nAnother reason is that, up until now, we have been creating in-place tablespaces using the following command:\nCREATE TABLESPACE space_test LOCATION ''\nHowever, when using pg_dump to back up this in-place tablespace, it is dumped with a different path:\nCREATE TABLESPACE space_test LOCATION 'pg_tblspc/<oid>'\nThis can be confusing because it does not match the initial path that we created. Additionally, pg_restore cannot restore this in-place tablespace because the CREATE TABLESPACE command only supports an empty path for creating in-place tablespaces.\n> That may not work on Windows where the driver letter is appended at\n> the beginning of the path, no? There is is_absolute_path() to do this\n> job in a more portable way.\nYou are correct. I have made the necessary modifications to ensure compatibility with in-place tablespaces. 
Please find the updated code attached.\n--\nBest regards,\nRui Zhao\n------------------------------------------------------------------\nFrom:Michael Paquier <michael@paquier.xyz>\nSent At:2023 Aug. 1 (Tue.) 08:57\nTo:Mark <xiyuan.zr@alibaba-inc.com>\nCc:pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject:Re: pg_upgrade fails with in-place tablespace\nOn Sat, Jul 29, 2023 at 11:10:22PM +0800, Rui Zhao wrote:\n> 2) Only check the tablespace with an absolute path in pg_upgrade.\n> There are also other solutions, such as supporting the creation of\n> relative-path tablespace in function CreateTableSpace. But do we\n> really need relative-path tablespace? I think in-place tablespace\n> is enough by now. My solution may be more lightweight and\n> harmless.\n+ /* The path of the in-place tablespace is always pg_tblspc/<oid>. */\n if (!S_ISLNK(st.st_mode))\n- PG_RETURN_TEXT_P(cstring_to_text(sourcepath));\n+ PG_RETURN_TEXT_P(cstring_to_text(\"\"));\nI don't think that your solution is the correct move. Having\npg_tablespace_location() return the physical location of the\ntablespace is very useful because that's the location where the\nphysical files are, and client applications don't need to know the\nlogic behind the way a path is built.\n- \" spcname != 'pg_global'\");\n+ \" spcname != 'pg_global' AND \"\n+ \" pg_catalog.pg_tablespace_location(oid) ~ '^/'\");\nThat may not work on Windows where the driver letter is appended at\nthe beginning of the path, no? 
There is is_absolute_path() to do this\njob in a more portable way.\n--\nMichael", "msg_date": "Wed, 02 Aug 2023 22:38:00 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IHBnX3VwZ3JhZGUgZmFpbHMgd2l0aCBpbi1wbGFjZSB0YWJsZXNwYWNl?=" }, { "msg_contents": "On Wed, Aug 02, 2023 at 10:38:00PM +0800, Rui Zhao wrote:\n> However, when using pg_dump to back up this in-place tablespace, it\n> is dumped with a different path: \n> CREATE TABLESPACE space_test LOCATION 'pg_tblspc/<oid>'\n> This can be confusing because it does not match the initial path\n> that we created. Additionally, pg_restore cannot restore this\n> in-place tablespace because the CREATE TABLESPACE command only\n> supports an empty path for creating in-place tablespaces.\n\nYes, the dump part is a different issue. Using a relative path is not\nfine when restoring the tablespace.\n\n> You are correct. I have made the necessary modifications to ensure\n> compatibility with in-place tablespaces. Please find the updated\n> code attached. \n\n+ /* Allow to create in-place tablespace. */\n+ if (!is_absolute_path(spclocation))\n+ appendPQExpBuffer(buf, \"SET allow_in_place_tablespaces = true;\\n\");\n+\n appendPQExpBuffer(buf, \"CREATE TABLESPACE %s\", fspcname);\n appendPQExpBuffer(buf, \" OWNER %s\", fmtId(spcowner));\n\nAs you say, This change in pg_dumpall.c takes care of an issue related\nto the dumps of in-place tablespaces, not only pg_upgrade. See for\nexample on HEAD:\n$ psql -c \"CREATE TABLESPACE popo LOCATION ''\"\n$ pg_dumpall | grep TABLESPACE\nCREATE TABLESPACE popo OWNER postgres LOCATION 'pg_tblspc/16385';\n\nAnd restoring this dump would be incorrect. 
I think that we'd better\nbe doing something like that for the dump part, as of:\n@@ -1286,7 +1286,12 @@ dumpTablespaces(PGconn *conn)\n appendPQExpBuffer(buf, \" OWNER %s\", fmtId(spcowner));\n \n appendPQExpBufferStr(buf, \" LOCATION \");\n- appendStringLiteralConn(buf, spclocation, conn);\n+\n+ /* In-place tablespaces use a relative path */\n+ if (is_absolute_path(spclocation))\n+ appendStringLiteralConn(buf, spclocation, conn);\n+ else\n+ appendPQExpBufferStr(buf, \"'';\\n\");\n\nI don't have a good feeling about enforcing allow_in_place_tablespaces\nin the connection creating the tablespace, as it can be useful to let\nthe restore of the dump fail if this GUC is disabled in the new\ncluster, so as one can check that no in-place tablespaces are left\naround on the new cluster restoring the contents of the dump.\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 10:57:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "Thank you for your reply. 
I have implemented the necessary changes in accordance with your suggestions.\nOn Tue, Aug 08, 2023 at 09:57:00AM +0800, Michael Paquier wrote:\n> I don't have a good feeling about enforcing allow_in_place_tablespaces\n> in the connection creating the tablespace, as it can be useful to let\n> the restore of the dump fail if this GUC is disabled in the new\n> cluster, so as one can check that no in-place tablespaces are left\n> around on the new cluster restoring the contents of the dump.\nI have only enabled allow_in_place_tablespaces to true during a binary upgrade.\nPlease review my lastest patch.\n--\nBest regards,\nRui Zhao", "msg_date": "Tue, 08 Aug 2023 12:35:38 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IHBnX3VwZ3JhZGUgZmFpbHMgd2l0aCBpbi1wbGFjZSB0YWJsZXNwYWNl?=" }, { "msg_contents": "On Tue, Aug 08, 2023 at 12:35:38PM +0800, Rui Zhao wrote:\n> I have only enabled allow_in_place_tablespaces to true during a binary upgrade.\n\nThe pg_dump part should be OK to allow the restore of in-place\ntablespaces, so I'll get that fixed first, but I think that we should\nadd a regression test as well.\n\nI'll try to look at the binary upgrade path and pg_upgrade after this\nis done.\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 16:32:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "On Tue, Aug 08, 2023 at 04:32:45PM +0900, Michael Paquier wrote:\n> The pg_dump part should be OK to allow the restore of in-place\n> tablespaces, so I'll get that fixed first, but I think that we should\n> add a regression test as well.\n\nI have added a test for that in 002_pg_dump.pl, and applied the fix\nfor pg_dumpall.\n\n> I'll try to look at the binary upgrade path and pg_upgrade after this\n> is done.\n\n+ /* No need to check in-place tablespace. 
*/\n snprintf(query, sizeof(query),\n \"SELECT pg_catalog.pg_tablespace_location(oid) AS spclocation \"\n \"FROM pg_catalog.pg_tablespace \"\n \"WHERE spcname != 'pg_default' AND \"\n- \" spcname != 'pg_global'\");\n+ \" spcname != 'pg_global' AND \"\n+ \" pg_catalog.pg_tablespace_location(oid) != 'pg_tblspc/' || oid\");\n\nThis does not really explain the reason why in-place tablespaces need\nto be skipped (in short they don't need a separate creation or check\nlike the others in create_script_for_old_cluster_deletion because they\nare part of the data folder). Anyway, the more I think about it, the\nless excited I get about the need to support pg_upgrade with in-place\ntablespaces, especially regarding the fact that the patch blindly\nenforces allows_in_place_tablespaces, assuming that it is OK to do so.\nSo what about the case where one would want to be warned if these are\nstill laying around when doing upgrades? And what's the actual use\ncase for supporting that? There is something else that we could do\nhere: add a pre-run check to make pg_upgrade fail gracefully if we\nfind in-place tablespaces in the old cluster.\n\n+# Test with in-place tablespace.\n+$oldnode->append_conf('postgresql.conf', 'allow_in_place_tablespaces = on');\n+$oldnode->reload;\n+$oldnode->safe_psql('postgres', \"CREATE TABLESPACE space_test LOCATION ''\");\n\nWhile on it, note that this breaks the case of cross-version upgrades\nwhere the old cluster uses binaries where in-place tablespaces are not\nsupported. 
 So this requires an extra version check.\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 10:19:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "I have found the patch and upon review, I believe the following code can be improved.\n+ /*\n+ * In-place tablespaces use a relative path, and need to be dumped\n+ * with an empty string as location.\n+ */\n+ if (is_absolute_path(spclocation))\n+ appendStringLiteralConn(buf, spclocation, conn);\n+ else\n+ appendStringLiteralConn(buf, \"\", conn);\nI believe that utilizing appendPQExpBufferStr(buf, \"''\"); would be better and more meaningful than using appendStringLiteralConn(buf, \"\", conn); in this scenario.\nI apologize for this wrong usage. Please help me fix it.\nI will try to respond to pg_upgrade after my deep dive.\n--\nBest regards,\nRui Zhao\n", "msg_date": "Wed, 09 Aug 2023 11:47:58 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IHBnX3VwZ3JhZGUgZmFpbHMgd2l0aCBpbi1wbGFjZSB0YWJsZXNwYWNl?=" }, { "msg_contents": "On Wed, Aug 09, 2023 at 11:47:58AM +0800, Rui Zhao wrote:\n> I believe that utilizing appendPQExpBufferStr(buf, \"''\"); would be better and more meaningful than using appendStringLiteralConn(buf, \"\", conn); in this scenario.\n> I apologize for this wrong usage. Please help me fix it.\n> I will try to respond to pg_upgrade after my deep dive.\n\nThat was my original suggestion in [1], and what you've added gives\nexactly the same result.\n\n[1]: https://www.postgresql.org/message-id/ZNGhF6iBpUKgNimI%40paquier.xyz\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 13:53:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "On Wed, Aug 09, 2023 at 09:20 AM +0800, Michael Paquier wrote:\n> This does not really explain the reason why in-place tablespaces need\n> to be skipped (in short they don't need a separate creation or check\n> like the others in create_script_for_old_cluster_deletion because they\n> are part of the data folder). Anyway, the more I think about it, the\n> less excited I get about the need to support pg_upgrade with in-place\n> tablespaces, especially regarding the fact that the patch blindly\n> enforces allows_in_place_tablespaces, assuming that it is OK to do so.\n> So what about the case where one would want to be warned if these are\n> still laying around when doing upgrades? And what's the actual use\n> case for supporting that? There is something else that we could do 
There is something else that we could do\n> here: add a pre-run check to make pg_upgrade fail gracefully if we\n> find in-place tablespaces in the old cluster.\nI have implemented the changes you suggested in our previous discussion. I have added the necessary code to ensure that pg_upgrade fails gracefully with in-place tablespaces and reports a hint to let the check pass.\nThank you for your guidance and support. Please review my latest patch.\n--\nBest regards,\nRui Zhao", "msg_date": "Fri, 18 Aug 2023 22:47:04 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IHBnX3VwZ3JhZGUgZmFpbHMgd2l0aCBpbi1wbGFjZSB0YWJsZXNwYWNl?=" }, { "msg_contents": "On Fri, Aug 18, 2023 at 10:47:04PM +0800, Rui Zhao wrote:\n> I have implemented the changes you suggested in our previous\n> discussion. I have added the necessary code to ensure that\n> pg_upgrade fails gracefully with in-place tablespaces and reports a\n> hint to let the check pass. \n\n+\t/*\n+\t * No need to check in-place tablespaces when allow_in_place_tablespaces is\n+\t * on because they are part of the data folder.\n+\t */\n+\tif (allow_in_place_tablespaces)\n+\t\tsnprintf(query, sizeof(query),\n+\t\t\t\t \"%s AND pg_catalog.pg_tablespace_location(oid) != 'pg_tblspc/' || oid\", query);\n\nI am not sure to follow the meaning of this logic. There could be\nin-place tablespaces that got created with the GUC enabled during a\nsession, while the GUC has disabled the GUC. Shouldn't this stuff be\nmore restrictive and not require a lookup at the GUC, only looking at\nthe paths returned by pg_tablespace_location()?\n--\nMichael", "msg_date": "Sat, 19 Aug 2023 13:45:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "On Sat, Aug 19, 2023 at 12:45 PM +0800, Michael Paquier wrote:\n> I am not sure to follow the meaning of this logic. 
There could be\n> in-place tablespaces that got created with the GUC enabled during a\n> session, while the GUC has since been disabled. Shouldn't this stuff be\n> more restrictive and not require a lookup at the GUC, only looking at\n> the paths returned by pg_tablespace_location()?\nThere are two cases when we check in-place tablespaces:\n1. The GUC allow_in_place_tablespaces is turned on in the old node.\nIn this case, we assume that the caller is aware of what he is doing\nand allow the check to pass.\n2. The GUC allow_in_place_tablespaces is turned off in the old node.\nIn this case, we will fail the check if we find any in-place tablespaces.\nHowever, we provide an option to pass the check by adding\n--old-options=\"-c allow_in_place_tablespaces=on\" in pg_upgrade.\nThis option will turn on allow_in_place_tablespaces like the GUC does.\nSo I look up the GUC and check if it is turned on.\nWe add this check of in-place tablespaces to deal with the situation\nwhere one would want to be warned if there are in-place tablespaces\nwhen doing upgrades. I think we can take it a step further by providing\nan option that allows the check to pass if the caller explicitly adds\n--old-options=\"-c allow_in_place_tablespaces=on\".\nPlease refer to the TAP test I have included for a better understanding\nof my suggestion.\n--\nBest regards,\nRui Zhao", "msg_date": "Sat, 19 Aug 2023 20:11:28 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "Could you please provide more thoughts about this issue?\npg_upgrade is the final issue of in-place tablespace and we are on the cusp of success.\n--\nBest regards,\nRui Zhao", "msg_date": "Sun, 27 Aug 2023 12:32:02 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade fails with in-place tablespace" }, { "msg_contents": "On Sat, Aug 19, 2023 at 08:11:28PM +0800, Rui Zhao wrote:\n> Please refer to the TAP test I have included for a better understanding\n> of my suggestion.\n\nSure, but it seems to me that my original question is not really\nanswered: what's your use case for being able to support in-place\ntablespaces in pg_upgrade? 
The original use case behind such\ntablespaces is to ease the introduction of tests with primaries and\nstandbys, which is not something that really applies to pg_upgrade,\nno? Perhaps there is meaning in having more TAP tests with\ntablespaces and a combination of primary/standbys, still having\nin-place tablespaces does not really make much sense to me because, as\nthese are in the data folder, we don't use them to test the path\nre-creation logic.\n\nI think that we should just add a routine in check.c that scans\npg_tablespace, reporting all the non-absolute paths found with their\nassociated tablespace names.\n--\nMichael", "msg_date": "Fri, 1 Sep 2023 13:58:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace[" }, { "msg_contents": "Thank you for your response. It is evident that there is a need\nfor these features in our system.\nFirstly, our customers express their desire to utilize tablespaces\nfor table management, without necessarily being concerned about\nthe directory location of these tablespaces.\nSecondly, currently PG only supports absolute-path tablespaces, but\nin-place tablespaces are very likely to become popular in the future.\nTherefore, it is essential to incorporate support for in-place\ntablespaces in the pg_upgrade feature. I intend to implement\nthis functionality in our system to accommodate our customers'\nrequirements.\nIt would be highly appreciated if the official PG could also\nincorporate support for this feature.\n--\nBest regards,\nRui Zhao", "msg_date": "Tue, 05 Sep 2023 10:28:44 +0800", "msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade fails with in-place tablespace[" }, { "msg_contents": "On Tue, Sep 05, 2023 at 10:28:44AM +0800, Rui Zhao wrote:\n> It would be highly appreciated if the official PG could also\n> incorporate support for this feature.\n\nThis is a development option, and documented as such. You may also\nwant to be aware of the fact that tablespaces in the data folder\nitself are discouraged, because they don't really make sense, and that\nwe generate a WARNING if trying to do that since 33cb8ff6aa11.\n\nI don't really have more to add, but others interested in this\ntopic are of course free to share their opinions and/or comments.\n--\nMichael", "msg_date": "Tue, 5 Sep 2023 14:56:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace[" }, { "msg_contents": "On Mon, Sep 4, 2023 at 11:29 PM Rui Zhao <xiyuan.zr@alibaba-inc.com> wrote:\n> Thank you for your response. 
It is evident that there is a need\n> for these features in our system.\n> Firstly, our customers express their desire to utilize tablespaces\n> for table management, without necessarily being concerned about\n> the directory location of these tablespaces.\n\nThat's not the purpose of that feature. I'm not sure that we should be\nworrying about users who want to use features for a purpose other than\nthe one for which they were designed.\n\n> Secondly, currently PG only supports absolute-path tablespaces, but\n> in-place tablespaces are very likely to become popular in the future.\n\nThat's not really a reason. If you said why this was going to happen,\nthat might be a reason, but just asserting that it will happen isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Sep 2023 16:08:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade fails with in-place tablespace[" } ]
[ { "msg_contents": "Improve performance of \"SET search_path\".\n\nMotivation:\n\nCreating functions with a \"SET search_path\" clause is safer and more\nsecure because the function behavior doesn't change based on the\ncaller's search_path setting.\n\nSetting search_path in the function declaration is especially important\nfor SECURITY DEFINER functions[1], but even SECURITY INVOKER functions\ncan be executed more like SECURITY DEFINER in some contexts (e.g.\nREINDEX executing an index function). Also, it's just error-prone to\ndepend on the caller's search_path unless there's a specific reason you\nwant to do that.\n\nUnfortunately, adding a \"SET search_path\" clause to functions slows\nthem down. The attached patches close the performance gap\nsubstantially.\n\nChanges:\n\n0001: Transform the settings in proconfig into a List for faster\nprocessing. This is simple and speeds up any proconfig setting.\n\n0002: Introduce CheckIdentifierString(), which is a faster version of\nSplitIdentifierString() that only validates, and can be used in\ncheck_search_path().\n\n0003: Cache of previous search_path settings. The key is the raw\nnamespace_search_path string and the role OID, and it caches the\ncomputed OID list. Changes to the search_path setting or the role can\nretrieve the cached OID list as long as nothing else invalidates the\ncache (changes to the temp schema or a syscache invalidation of\npg_namespace or pg_role).\n\nOne behavior change in 0003 is that retrieving a cached OID list\ndoesn't call InvokeNamespaceSearchHook(). It would be easy enough to\ndisable caching when a hook exists, but I didn't see a reason to expect\nthat \"SET search_path\" must invoke that hook each time. Invoking it\nwhen computing for the first time, or after a real invalidation, seemed\nfine to me. 
Feedback on that is welcome.\n\nTest:\n\n CREATE SCHEMA a;\n CREATE SCHEMA b;\n CREATE TABLE big(i) AS SELECT generate_series(1,20000000);\n VACUUM big; CHECKPOINT;\n CREATE FUNCTION inc(int) RETURNS INT\n LANGUAGE plpgsql\n AS $$ begin return $1+1; end; $$;\n CREATE FUNCTION inc_ab(int) RETURNS INT\n LANGUAGE plpgsql SET search_path = a, b\n AS $$ begin return $1+1; end; $$;\n\n -- baseline\n EXPLAIN ANALYZE SELECT inc(i) FROM big;\n\n -- test query\n EXPLAIN ANALYZE SELECT inc_ab(i) FROM big;\n\nResults:\n\n baseline: 4.3s\n test query:\n without patch: 14.7s\n 0001: 13.6s\n 0001,0002: 10.4s\n 0001,0002,0003: 8.6s\n\nTimings were inconsistent for me so I took the middle of three runs.\n\nIt's a lot faster than without the patch. It's still 2X worse than not\nspecifying any search_path (baseline), but I think it brings it into\n\"usable\" territory for more use cases.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/docs/current/sql-createfunction.html#SQL-CREATEFUNCTION-SECURITY", "msg_date": "Sat, 29 Jul 2023 08:59:01 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Faster \"SET search_path\"" }, { "msg_contents": "On Sat, 29 Jul 2023 at 11:59, Jeff Davis <pgsql@j-davis.com> wrote:\n\nUnfortunately, adding a \"SET search_path\" clause to functions slows\n> them down. The attached patches close the performance gap\n> substantially.\n>\n> Changes:\n>\n> 0001: Transform the settings in proconfig into a List for faster\n> processing. This is simple and speeds up any proconfig setting.\n>\n> 0002: Introduce CheckIdentifierString(), which is a faster version of\n> SplitIdentifierString() that only validates, and can be used in\n> check_search_path().\n>\n> 0003: Cache of previous search_path settings. The key is the raw\n> namespace_search_path string and the role OID, and it caches the\n> computed OID list. 
Changes to the search_path setting or the role can\n> retrieve the cached OID list as long as nothing else invalidates the\n> cache (changes to the temp schema or a syscache invalidation of\n> pg_namespace or pg_role).\n>\n\nI'm glad to see this work. Something related to consider, not sure if this\nis helpful: can the case of the caller's search_path happening to be the\nsame as the SET search_path setting be optimized? Essentially, \"just\"\nobserve efficiently (somehow) that no change is needed, and skip changing\nit? I ask because substantially all my functions are written using \"SET\nsearch_path FROM CURRENT\", and then many of them call each other. As a\nresult, in my use I would say that the common case is a function being\ncalled by another function, where both have the same search_path setting.\nSo ideally, the search_path would not be changed at all when entering and\nexiting the callee.", "msg_date": "Sat, 29 Jul 2023 12:44:00 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Sat, Jul 29, 2023 at 08:59:01AM -0700, Jeff Davis wrote:\n> 0001: Transform the settings in proconfig into a List for faster\n> processing. This is simple and speeds up any proconfig setting.\n\nThis looks straightforward.  It might be worth combining the common parts\nof ProcessGUCArray() and TransformGUCArray() if there is a clean way to do\nso.\n\n> 0002: Introduce CheckIdentifierString(), which is a faster version of\n> SplitIdentifierString() that only validates, and can be used in\n> check_search_path().\n\nThis also looks straightforward.  And I'd make the same comment as above\nfor SplitIdentifierString() and CheckIdentifierString().  Perhaps we could\nhave folks set SplitIdentifierString's namelist argument to NULL to\nindicate that it only needs to validate.\n\n> 0003: Cache of previous search_path settings. The key is the raw\n> namespace_search_path string and the role OID, and it caches the\n> computed OID list. Changes to the search_path setting or the role can\n> retrieve the cached OID list as long as nothing else invalidates the\n> cache (changes to the temp schema or a syscache invalidation of\n> pg_namespace or pg_role).\n\nAt a glance, this looks pretty reasonable, too.\n\n> One behavior change in 0003 is that retrieving a cached OID list\n> doesn't call InvokeNamespaceSearchHook(). 
It would be easy enough to\n> disable caching when a hook exists, but I didn't see a reason to expect\n> that \"SET search_path\" must invoke that hook each time. Invoking it\n> when computing for the first time, or after a real invalidation, seemed\n> fine to me. Feedback on that is welcome.\n\nCould a previously cached list be used to circumvent a hook that was just\nreconfigured to ERROR for certain namespaces? If something like that is\npossible, then we might need to disable the cache when there is a hook.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 29 Jul 2023 21:51:05 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Sat, Jul 29, 2023 at 11:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Unfortunately, adding a \"SET search_path\" clause to functions slows\n> them down. The attached patches close the performance gap\n> substantially.\n\nI haven't reviewed the code but I like the concept a lot. This is badly needed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 31 Jul 2023 12:55:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Sat, 2023-07-29 at 12:44 -0400, Isaac Morland wrote:\n> Essentially, \"just\" observe efficiently (somehow) that no change is\n> needed, and skip changing it?\n\nI gave this a try and it speeds things up some more.\n\nThere might be a surprise factor with an optimization like that,\nthough. 
If someone becomes accustomed to the function running fast,\nthen changing the search_path in the caller could slow things down a\nlot and it would be hard for the user to understand what happened.\n\nAlso, there are a few implementation complexities because I think we\nneed to still trigger the same invalidation that happens in\nassign_search_path().\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 31 Jul 2023 22:28:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Sat, 2023-07-29 at 21:51 -0700, Nathan Bossart wrote:\n> On Sat, Jul 29, 2023 at 08:59:01AM -0700, Jeff Davis wrote:\n> > 0001: Transform the settings in proconfig into a List for faster\n> > processing. This is simple and speeds up any proconfig setting.\n> \n> This looks straightforward.  It might be worth combining the common\n> parts\n> of ProcessGUCArray() and TransformGUCArray() if there is a clean way\n> to do\n> so.\n\nI changed the former to call the latter to share code. A few extra\nallocations in the ProcessGUCArray() path, but it doesn't look like\nthat would matter.\n\n> > 0002: Introduce CheckIdentifierString(), which is a faster version\n> > of\n> > SplitIdentifierString() that only validates, and can be used in\n> > check_search_path().\n> \n> This also looks straightforward.  And I'd make the same comment as\n> above\n> for SplitIdentifierString() and CheckIdentifierString().  Perhaps we\n> could\n> have folks set SplitIdentifierString's namelist argument to NULL to\n> indicate that it only needs to validate.\n\nSplitIdentifierString() modifies its input, and it doesn't seem like a\nnet win in readability if the API sometimes modifies its input\n(requiring a copy in the caller) and sometimes does not. 
I'm open to\nsuggestion about a refactor here, but I didn't see a clean way to do\nso.\n\n> \n> > One behavior change in 0003 is that retrieving a cached OID list\n> > doesn't call InvokeNamespaceSearchHook(). It would be easy enough\n> > to\n> > disable caching when a hook exists, but I didn't see a reason to\n> > expect\n> > that \"SET search_path\" must invoke that hook each time. Invoking it\n> > when computing for the first time, or after a real invalidation,\n> > seemed\n> > fine to me. Feedback on that is welcome.\n> \n> Could a previously cached list be used to circumvent a hook that was\n> just\n> reconfigured to ERROR for certain namespaces?  If something like that\n> is\n> possible, then we might need to disable the cache when there is a\n> hook.\n\nI changed it to move the hook so that it's called after retrieving from\nthe cache. Most of the speedup is still there with no behavior change.\nIt also means I had to move the deduplication to happen after\nretrieving from the cache, which doesn't seem great but in practice the\nsearch path list is not very long so I don't think it will make much\ndifference. (Arguably, we can do the deduplication before caching, but\nI didn't see a big difference so I didn't want to split hairs on the\nsemantics.)\n\nNew timings:\n\n baseline: 4.2s\n test query:\n without patch: 13.1s\n 0001: 11.7s\n 0001+0002: 10.5s\n 0001+0002+0003: 8.1s\n\nNew patches attached.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 01 Aug 2023 16:59:33 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, Aug 01, 2023 at 04:59:33PM -0700, Jeff Davis wrote:\n> +\t\tList\t\t*pair\t = lfirst(lc);\n> +\t\tchar\t\t*name\t = linitial(pair);\n> +\t\tchar\t\t*value\t = lsecond(pair);\n\nThis is definitely a nitpick, but this List of Lists business feels strange\nto me for some reason. 
There might be some code that does this, but I\ntypically see this done with two separate lists and forboth().  I doubt it\nmakes terribly much difference in the end, and I really don't have a strong\nopinion either way, but it seemed like something that would inevitably come\nup in this thread.  Is there any particular reason you chose to do it this\nway?\n\n> SplitIdentifierString() modifies its input, and it doesn't seem like a\n> net win in readability if the API sometimes modifies its input\n> (requiring a copy in the caller) and sometimes does not.  I'm open to\n> suggestion about a refactor here, but I didn't see a clean way to do\n> so.\n\nThat's fine by me.\n\n> +static inline uint32\n> +nspcachekey_hash(SearchPathCacheKey *key)\n> +{\n> +\tconst unsigned char\t*bytes = (const unsigned char *)key->searchPath;\n> +\tint\t\t\t\t\t blen = strlen(key->searchPath);\n> +\n> +\treturn hash_bytes(bytes, blen) ^ hash_uint32(key->roleid);\n> +}\n\nAny reason not to use hash_combine() here?\n\n> I changed it to move the hook so that it's called after retrieving from\n> the cache. Most of the speedup is still there with no behavior change.\n> It also means I had to move the deduplication to happen after\n> retrieving from the cache, which doesn't seem great but in practice the\n> search path list is not very long so I don't think it will make much\n> difference. (Arguably, we can do the deduplication before caching, but\n> I didn't see a big difference so I didn't want to split hairs on the\n> semantics.)\n\nI think you are right.  This adds some complexity, but I don't have\nanything else to propose at the moment.\n\n> -\t\t/* Now safe to assign to state variables. */\n> -\t\tlist_free(baseSearchPath);\n> -\t\tbaseSearchPath = newpath;\n> -\t\tbaseCreationNamespace = firstNS;\n> -\t\tbaseTempCreationPending = temp_missing;\n> \t}\n> \n> +\t/* Now safe to assign to state variables. 
*/\n> +\tlist_free(baseSearchPath);\n> +\tbaseSearchPath = finalPath;\n> +\tbaseCreationNamespace = firstNS;\n> +\tbaseTempCreationPending = temp_missing;\n\nI'm not following why this logic was moved.\n\n> New timings:\n>\n> baseline: 4.2s\n> test query:\n> without patch: 13.1s\n> 0001: 11.7s\n> 0001+0002: 10.5s\n> 0001+0002+0003: 8.1s\n\nNice. It makes sense that 0003 would provide the most benefit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 21:52:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, Jul 31, 2023 at 10:28:31PM -0700, Jeff Davis wrote:\n> On Sat, 2023-07-29 at 12:44 -0400, Isaac Morland wrote:\n>> Essentially, \"just\" observe efficiently (somehow) that no change is\n>> needed, and skip changing it?\n> \n> I gave this a try and it speeds things up some more.\n> \n> There might be a surprise factor with an optimization like that,\n> though. If someone becomes accustomed to the function running fast,\n> then changing the search_path in the caller could slow things down a\n> lot and it would be hard for the user to understand what happened.\n\nI wonder if this is a good enough reason to _not_ proceed with this\noptimization. 
At the moment, I'm on the fence about it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 22:07:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Wed, 2 Aug 2023 at 01:07, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Mon, Jul 31, 2023 at 10:28:31PM -0700, Jeff Davis wrote:\n> > On Sat, 2023-07-29 at 12:44 -0400, Isaac Morland wrote:\n> >> Essentially, \"just\" observe efficiently (somehow) that no change is\n> >> needed, and skip changing it?\n> >\n> > I gave this a try and it speeds things up some more.\n> >\n> > There might be a surprise factor with an optimization like that,\n> > though. If someone becomes accustomed to the function running fast,\n> > then changing the search_path in the caller could slow things down a\n> > lot and it would be hard for the user to understand what happened.\n>\n> I wonder if this is a good enough reason to _not_ proceed with this\n> optimization. At the moment, I'm on the fence about it.\n>\n\nSpeaking as someone who uses a lot of stored procedures, many of which call\none another, I definitely want this optimization.\n\nI don’t think the fact that an optimization might suddenly not work in a\ncertain situation is a reason not to optimize. What would our query planner\nlook like if we took that approach? Many people regularly fix performance\nproblems by making inscrutable adjustments to queries, sometimes after\nhaving accidentally ruined performance by modifying the query. Instead, we\nshould try to find ways of making the performance more transparent. 
We\nalready have some features for this, but maybe more can be done.", "msg_date": "Wed, 2 Aug 2023 01:14:27 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, 2023-08-01 at 22:07 -0700, Nathan Bossart wrote:\n> I wonder if this is a good enough reason to _not_ proceed with this\n> optimization.  At the moment, I'm on the fence about it.\n\nI was wondering the same thing. 
It's something that could reasonably be\nexplained to users; it's not what I'd call \"magical\", it's just\navoiding an unnecessary SET. But I could certainly see it as a cause of\nconfusion: \"why is this query faster for user foo than user bar?\".\n\nAnother concern is that the default search_path is \"$foo, public\" but\nthe safe search_path is \"pg_catalog, pg_temp\". So if we automatically\nswitch to the safe search path for functions in index expressions (as I\nthink we should do, at least ideally), then they'd be slow by default.\nWe'd need to start advising people to set their search_path to\n\"pg_catalog, pg_temp\" to speed things up.\n\nI'm not opposed to this optimization, but I'm not sure about it either.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 01 Aug 2023 22:53:11 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Wed, 2023-08-02 at 01:14 -0400, Isaac Morland wrote:\n> I don’t think the fact that an optimization might suddenly not work\n> in a certain situation is a reason not to optimize. What would our\n> query planner look like if we took that approach?\n\n...\n\n> Instead, we should try to find ways of making the performance more\n> transparent. We already have some features for this, but maybe more\n> can be done.\n\nExactly; for the planner we have EXPLAIN. We could try to expose GUC\nswitching costs that way.\n\nOr maybe users can start thinking of search_path as a performance-\nrelated setting to be careful with.\n\nOr we could try harder to optimize the switching costs so that it's not\nso bad. 
I just started and I already saw some nice speedups; with a few\nof the easy fixes done, the remaining opportunities should be more\nvisible in the profiler.\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Tue, 01 Aug 2023 23:06:56 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, 2023-08-01 at 21:52 -0700, Nathan Bossart wrote:\n> I\n> typically see this done with two ѕeparate lists and forboth().\n\nAgreed, done.\n\n> \n> Any reason not to use hash_combine() here?\n\nThank you, fixed.\n\n> > I changed it to move the hook so that it's called after retrieving\n> > from\n> > the cache.\n...\n> I think you are right.  This adds some complexity, but I don't have\n> anything else to propose at the moment.\n\nI reworked it a bit to avoid allocations in most cases, and it only\nreprocesses the oidlist and runs the hooks when necessary (and even\nthen, still benefits from the cached OIDs that have already passed the\nACL checks).\n\nNow I'm seeing timings around 7.1s, which is starting to look really\nnice.\n\n> \n> I'm not following why this logic was moved.\n\nThere was previously a comment:\n\n \"We want to detect the case where the effective value of the base\nsearch path variables didn't change. As long as we're doing so, we can\navoid copying the OID list unnecessarily.\"\n\nBut with v2 the copy had already happened by that point. 
In v3, there\nis no copy at all.\n> \n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 02 Aug 2023 23:50:28 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Wed, 2023-08-02 at 01:14 -0400, Isaac Morland wrote:\n> > > On Sat, 2023-07-29 at 12:44 -0400, Isaac Morland wrote:\n> > > > Essentially, \"just\" observe efficiently (somehow) that no\n> > > > change is\n> > > > needed, and skip changing it?\n\n...\n\n> Speaking as someone who uses a lot of stored procedures, many of\n> which call one another, I definitely want this optimization.\n\nIf we try to avoid the set_config_option() entirely, we'd have to\nintroduce the invalidation logic into fmgr.c somehow, and there would\nbe a performance cliff for users who change their session's\nsearch_path.\n\nInstead, in v4 (attached) I introduced a fast path (patch 0004,\nexperimental) for setting search_path that uses the same low-level\ninvalidation logic in namespace.c, but avoids most of the overhead of\nset_config_option().\n\n(Aside: part of the reason set_config_option() is slow is because of\nthe lookup in guc_hashtab. That's slower than some other hash lookups\nbecause it does case-folding, which needs to be done in both the hash\nfunction and also the match function. The hash lookups in\nSearchPathCache are significantly faster. I also have a patch to change\nguc_hashtab to simplehash, but I didn't see much difference.)\n\nNew timings (with session's search_path set to a, b):\n\n inc() plain function: 4.1s\n inc_secdef() SECURITY DEFINER: 4.3s\n inc_wm() SET work_mem = '4GB': 6.3s\n inc_ab() SET search_path = a, b: 5.4s\n inc_bc() SET search_path = b, c: 5.6s\n\nI reported the numbers a bit differently here. All numbers are using v4\nof the patch. 
The plain function is a baseline; SECURITY DEFINER is for\nmeasuring the overhead of the fmgr_security_definer wrapper; inc_ab()\nis for setting the search_path to the same value it's already set to;\nand inc_bc() is for setting the search_path to a new value.\n\nThere's still a difference between inc() (with no proconfigs) and\ninc_ab(), so you can certainly argue that some or all of that can be\navoided if the search_path is unchanged. For instance, adding the new\nGUC level causes AtEOXact_GUC to be run, and that does some work. Maybe\nit's even possible to avoid the fmgr_security_definer wrapper entirely\n(though it would be tricky to do so).\n\nBut in general I'd prefer to optimize cases that are going to work\nnicely by default for a lot of users (especially switching to a safe\nsearch path), without the need for obscure knowledge about the\nperformance implications of the session search_path. And to the extent\nthat we do optimize for the pre-existing search_path, I'd like to\nunderstand the inherent overhead of changing the search path versus\nincidental overhead.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Mon, 07 Aug 2023 12:57:27 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "0001 and 0002 LGTM. 0003 is looking pretty good, too, but I think we\nshould get some more eyes on it, given the complexity.\n\nOn Mon, Aug 07, 2023 at 12:57:27PM -0700, Jeff Davis wrote:\n> (Aside: part of the reason set_config_option() is slow is because of\n> the lookup in guc_hashtab. That's slower than some other hash lookups\n> because it does case-folding, which needs to be done in both the hash\n> function and also the match function. The hash lookups in\n> SearchPathCache are significantly faster. 
I also have a patch to change\n> guc_hashtab to simplehash, but I didn't see much difference.)\n\nI wonder if it'd be faster to downcase first and then pass it to\nhash_bytes(). This would require modifying the key, which might be a\ndeal-breaker, but maybe there's some way to improve find_option() for all\nGUCs.\n\n> But in general I'd prefer to optimize cases that are going to work\n> nicely by default for a lot of users (especially switching to a safe\n> search path), without the need for obscure knowledge about the\n> performance implications of the session search_path. And to the extent\n> that we do optimize for the pre-existing search_path, I'd like to\n> understand the inherent overhead of changing the search path versus\n> incidental overhead.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 15:39:08 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, 2023-08-07 at 15:39 -0700, Nathan Bossart wrote:\n> 0001 and 0002 LGTM.\n\nI'll probably go ahead with 0001 soon. Simple and effective.\n\nBut for 0002, I was thinking about trying to optimize so\ncheck_search_path() only gets called once at the beginning, rather than\nfor each function invocation. It's a bit of a hack right now, but it\nshould be correct because it's just doing a syntactic validity check\nand that doesn't depend on catalog contents. If I can find a\nreasonably-nice way to do this, then there's no reason to optimize the\ncheck itself, and we wouldn't have to maintain that separate parsing\ncode.\n\nI might just avoid guc.c entirely and directly set\nnamespace_search_path and baseSearchPathValid=false. The main thing we\nlose there is the GUC stack (AtEOXact_GUC()), but there's already a\nPG_TRY/PG_FINALLY block in fmgr_security_definer, so we can use that to\nchange it back safely. (I think? 
I need to convince myself that it's\nsafe to skip the work in guc.c, at least for the special case of\nsearch_path, and that it won't be too fragile.)\n\nI started down this road and it gets even closer in performance to the\nplain function call. I don't know if we'll ever call it zero cost, but\nsafety has a cost sometimes.\n\n>   0003 is looking pretty good, too, but I think we\n> should get some more eyes on it, given the complexity.\n\nI'm happy with 0003 and I'm fairly sure we want it, but I agree that\nanother set of eyes would be nice so I'll give it a bit more time. I\ncould also improve the testing; any suggestions here are welcome.\n\n> I wonder if it'd be faster to downcase first and then pass it to\n> hash_bytes().  This would require modifying the key, which might be a\n> deal-breaker, but maybe there's some way to improve find_option() for\n> all\n> GUCs.\n\nI thought about that and I don't think modifying the argument is a good\nAPI.\n\nFor the match function, I added a fast path for strcmp()==0 with a\nfallback to case folding (on the assumption that the case is consistent\nin most cases, especially built-in ones). For the hash function, I used\na stack buffer and downcased the first K bytes into that, then did\nhash_bytes(buff, K). Those improved things a bit but it wasn't an\nexciting amount so I didn't post it. I made a patch to port guc_hashtab\nto simplehash because it seems like a better fit than hsearch.h; but\nagain, I didn't see exciting numbers from doing so, so I didn't post\nit.\n\n> \nMaybe we could use perfect hashing for the built-in GUCs, and fall back\nto simplehash for the custom GUCs?\n\nRegardless, I'm mostly interested in search_path because I want to get\nto the point where we can recommend that users specify it for every\nfunction that depends on it. 
A special case to facilitate that without\ncompromising (much) on performance seems reasonable to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 07 Aug 2023 16:24:16 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, Aug 7, 2023 at 7:24 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I might just avoid guc.c entirely and directly set\n> namespace_search_path and baseSearchPathValid=false. The main thing we\n> lose there is the GUC stack (AtEOXact_GUC()), but there's already a\n> PG_TRY/PG_FINALLY block in fmgr_security_definer, so we can use that to\n> change it back safely. (I think? I need to convince myself that it's\n> safe to skip the work in guc.c, at least for the special case of\n> search_path, and that it won't be too fragile.)\n\nI suspect that dodging the GUC stack machinery is not a very good\nidea. The timing of when TRY/CATCH blocks are called is different from\nwhen subtransactions are aborted, and that distinction has messed me\nup more than once when trying to code various things. Specifically,\nchanges that happen in a CATCH block happen before we actually reach\nAbort(Sub)Transaction, which sort of makes it sound like you'd be\nreverting back to the old value too early. 
The GUC stack is also\npretty complicated because of the SET/SAVE and LOCAL/non-LOCAL stuff.\nI can't say there isn't a way to make something like what you're\ntalking about here work, but I bet it will be difficult to get all of\nthe corner cases right, and I suspect that trying to speed up the\nexisting mechanism is a better plan than trying to go around it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Aug 2023 13:04:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, 2023-08-15 at 13:04 -0400, Robert Haas wrote:\n> I suspect that dodging the GUC stack machinery is not a very good\n> idea. The timing of when TRY/CATCH blocks are called is different\n> from\n> when subtransactions are aborted, and that distinction has messed me\n> up more than once when trying to code various things.\n\n...\n\n> I can't say there isn't a way to make something like what you're\n> talking about here work, but I bet it will be difficult to get all of\n> the corner cases right, and I suspect that trying to speed up the\n> existing mechanism is a better plan than trying to go around it.\n\nThe SearchPathCache that I implemented (0003) is an example of speeding\nup the existing mechanism, and it already offers a huge speedup. I'll\nfocus on getting that in first. That patch has had some review and I'm\ntempted to commit it soon, but Nathan has requested another set of eyes\non it.\n\nTo bring the overhead closer to zero we need to somehow avoid repeating\nso much work in guc.c, though. If we don't go around it, another\napproach would be to separate GUC setting into two phases: one that\ndoes the checks, and one that performs the final change. 
All the real\nwork (hash lookup, guc_strdup, and check_search_path) is done in the\n\"check\" phase.\n\nIt's a bit painful to reorganize the code in guc.c, though, so that\nmight need to happen in a few parts and will depend on how great the\ndemand is.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 16 Aug 2023 15:09:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Wed, 2023-08-16 at 15:09 -0700, Jeff Davis wrote:\n> To bring the overhead closer to zero we need to somehow avoid\n> repeating\n> so much work in guc.c, though. If we don't go around it, another\n> approach would be to separate GUC setting into two phases: one that\n> does the checks, and one that performs the final change. All the real\n> work (hash lookup, guc_strdup, and check_search_path) is done in the\n> \"check\" phase.\n> \n> It's a bit painful to reorganize the code in guc.c, though, so that\n> might need to happen in a few parts and will depend on how great the\n> demand is.\n\nI looked into this, and one problem is that there are two different\nmemory allocators at work. In guc.c, the new value is guc_strdup'd, and\nneeds to be saved until it replaces the old value. But if the caller\n(in fmgr.c) saves the state in fmgr_security_definer_cache, it would be\nsaving a guc_strdup'd value, and we'd need to be careful about freeing\nthat if (and only if) the final set step of the GUC doesn't happen. Or,\nwe'd need to somehow use pstrdup() instead, and somehow tell guc.c not\nto guc_free() it.\n\nThose are obviously solvable, but I don't see a clean way to do so.\nPerhaps it's still better than circumventing guc.c entirely (because\nthat approach probably suffers from the same problem, in addition to\nthe problem you raised), but I'm not sure if it's worth it to take on\nthis level of complexity. 
Suggestions welcome.\n\nRight now I'm still intending to get the cache in place, which doesn't\nhave these downsides and provides a nice speedup.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 22 Aug 2023 14:45:04 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, 2023-08-07 at 15:39 -0700, Nathan Bossart wrote:\n> 0003 is looking pretty good, too, but I think we\n> should get some more eyes on it, given the complexity.\n\nAttached rebased version of 0003.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Tue, 12 Sep 2023 13:55:22 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, 2023-09-12 at 13:55 -0700, Jeff Davis wrote:\n> On Mon, 2023-08-07 at 15:39 -0700, Nathan Bossart wrote:\n> > 0003 is looking pretty good, too, but I think we\n> > should get some more eyes on it, given the complexity.\n> \n> Attached rebased version of 0003.\n\nIs someone else planning to look at 0003, or should I just proceed? 
It\nseems to be clearly wanted, and I think it's better to get it in this\n'fest than to wait.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 15 Sep 2023 11:31:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Fri, 2023-09-15 at 11:31 -0700, Jeff Davis wrote:\n> On Tue, 2023-09-12 at 13:55 -0700, Jeff Davis wrote:\n> > On Mon, 2023-08-07 at 15:39 -0700, Nathan Bossart wrote:\n> > > 0003 is looking pretty good, too, but I think we\n> > > should get some more eyes on it, given the complexity.\n> > \n> > Attached rebased version of 0003.\n> \n> Is someone else planning to look at 0003, or should I just proceed?\n> It\n> seems to be clearly wanted, and I think it's better to get it in this\n> 'fest than to wait.\n\nAttaching a new version, as well as some additional optimizations.\n\nChanges:\n\n* I separated it into more small functions, and generally refactored\nquite a bit trying to make it easier to review.\n\n* The new version is more careful about what might happen during an\nOOM, or in weird cases such as a huge number of distinct search_path\nstrings.\n\n0003: Cache for recomputeNamespacePath.\n0004: Use the same cache to optimize check_search_path().\n0005: Optimize cache for repeated lookups of the same value.\n\nApplying the same tests as described in the first message[1], the new\nnumbers are:\n\n baseline: 4.4s\n test query:\n without patch: 12.3s\n 0003: 8.8s\n 0003,0004: 7.4s\n 0003,0004,0005: 7.0s\n\nThis brings the slowdown from 180% on master down to about 60%. Still\nnot where we want to be exactly, but more tolerable.\n\nThe profile shows a few more areas worth looking at, so I suppose a bit\nmore effort could bring it down further. 
find_option(), for instance,\nis annoyingly slow because it does case folding.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/04c8592dbd694e4114a3ed87139a7a04e4363030.camel%40j-davis.com", "msg_date": "Thu, 19 Oct 2023 19:01:36 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Thu, 2023-10-19 at 19:01 -0700, Jeff Davis wrote:\n> 0003: Cache for recomputeNamespacePath.\n\nCommitted with some further simplification around the OOM handling.\nInstead of using MCXT_ALLOC_NO_OOM, it just temporarily sets the cache\ninvalid while copying the string, and sets it valid again before\nreturning. That way, if an OOM occurs during the string copy, the cache\nwill be reset.\n\nAlso some minor cleanup.\n\n> 0004: Use the same cache to optimize check_search_path().\n> 0005: Optimize cache for repeated lookups of the same value.\n\nI intend to commit these fairly soon, as well.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 14 Nov 2023 20:13:25 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, 2023-11-14 at 20:13 -0800, Jeff Davis wrote:\n> On Thu, 2023-10-19 at 19:01 -0700, Jeff Davis wrote:\n> > 0003: Cache for recomputeNamespacePath.\n> \n> Committed with some further simplification around the OOM handling.\n\nWhile I considered OOM during hash key initialization, I missed some\nother potential out-of-memory hazards. 
Attached a fixup patch 0003,\nwhich re-introduces one list copy but it simplifies things\nsubstantially in addition to being safer around OOM conditions.\n\n> > 0004: Use the same cache to optimize check_search_path().\n> > 0005: Optimize cache for repeated lookups of the same value.\n\nAlso attached new versions of these patches.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 16 Nov 2023 16:46:10 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Thu, 2023-11-16 at 16:46 -0800, Jeff Davis wrote:\n> While I considered OOM during hash key initialization, I missed some\n> other potential out-of-memory hazards. Attached a fixup patch 0003,\n> which re-introduces one list copy but it simplifies things\n> substantially in addition to being safer around OOM conditions.\n\nCommitted 0003 fixup.\n\n> > > 0004: Use the same cache to optimize check_search_path().\n\nCommitted 0004.\n\n> > > 0005: Optimize cache for repeated lookups of the same value.\n\nWill commit 0005 soon.\n\nI also attached a trivial 0006 patch that uses SH_STORE_HASH. I wasn't\nable to show much benefit, though, even when there's a bucket\ncollision. Perhaps there just aren't enough elements to matter -- I\nsuppose there would be a benefit if there are lots of unique\nsearch_path strings, but that doesn't seem very plausible to me. If\nsomeone thinks it's worth committing, then I will, but I don't see any\nreal upside or downside.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 20 Nov 2023 17:13:33 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, 2023-11-20 at 17:13 -0800, Jeff Davis wrote:\n> Will commit 0005 soon.\n\nCommitted.\n\n> I also attached a trivial 0006 patch that uses SH_STORE_HASH. I\n> wasn't\n> able to show much benefit, though, even when there's a bucket\n> collision. 
Perhaps there just aren't enough elements to matter -- I\n> suppose there would be a benefit if there are lots of unique\n> search_path strings, but that doesn't seem very plausible to me. If\n> someone thinks it's worth committing, then I will, but I don't see\n> any\n> real upside or downside.\n\nI tried again by forcing a hash table with ~25 entries and 13\ncollisions, and even then, SH_STORE_HASH didn't make a difference in my\ntest. Maybe a microbenchmark would show a difference, but I didn't see\nmuch reason to commit 0006. (There's also no downside, so I was tempted\nto commit it just so I wouldn't have to put more thought into whether\nit's a problem or not.)\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 05 Dec 2023 15:55:11 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Tue, Nov 14, 2023 at 08:13:25PM -0800, Jeff Davis wrote:\n> On Thu, 2023-10-19 at 19:01 -0700, Jeff Davis wrote:\n> > 0003: Cache for recomputeNamespacePath.\n> \n> Committed\n\nCommit f26c236 wrote:\n> Add cache for recomputeNamespacePath().\n> \n> When search_path is changed to something that was previously set, and\n> no invalidation happened in between, use the cached list of namespace\n> OIDs rather than recomputing them. This avoids syscache lookups and\n> ACL checks.\n\n> --- a/src/backend/catalog/namespace.c\n> +++ b/src/backend/catalog/namespace.c\n\n> @@ -109,11 +110,13 @@\n> * activeSearchPath is always the actually active path; it points to\n> * to baseSearchPath which is the list derived from namespace_search_path.\n> *\n> - * If baseSearchPathValid is false, then baseSearchPath (and other\n> - * derived variables) need to be recomputed from namespace_search_path.\n> - * We mark it invalid upon an assignment to namespace_search_path or receipt\n> - * of a syscache invalidation event for pg_namespace. 
The recomputation\n> - * is done during the next lookup attempt.\n> + * If baseSearchPathValid is false, then baseSearchPath (and other derived\n> + * variables) need to be recomputed from namespace_search_path, or retrieved\n> + * from the search path cache if there haven't been any syscache\n> + * invalidations. We mark it invalid upon an assignment to\n> + * namespace_search_path or receipt of a syscache invalidation event for\n> + * pg_namespace or pg_authid. The recomputation is done during the next\n\nYou're caching the result of object_aclcheck(NamespaceRelationId, ...), so\npg_auth_members changes and pg_database changes also need to invalidate this\ncache. (pg_database affects the ACL_CREATE_TEMP case in\npg_namespace_aclmask_ext() and affects ROLE_PG_DATABASE_OWNER membership.)\n\n\n", "msg_date": "Sun, 30 Jun 2024 15:30:47 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Sun, 2024-06-30 at 15:30 -0700, Noah Misch wrote:\n> You're caching the result of object_aclcheck(NamespaceRelationId,\n> ...), so\n> pg_auth_members changes\n\nThank you for the report.\n\nQuestion: to check for changes to pg_auth_members, it seems like either\nAUTHMEMROLEMEM or AUTHMEMMEMROLE work, and to use both would be\nredundant. Am I missing something, or should I just pick one\narbitrarily (or by some convention)?\n\n> and pg_database changes also need to invalidate this\n> cache.  (pg_database affects the ACL_CREATE_TEMP case in\n> pg_namespace_aclmask_ext()\n\nI am having trouble finding an actual problem with ACL_CREATE_TEMP.\nsearch_path ACL checks are normally bypassed for the temp namespace, so\nit can only happen when the actual temp namespace name (e.g.\npg_temp_NNN) is explicitly included. 
In that case, the mask is\nACL_USAGE, so the two branches in pg_namespace_aclmask_ext() are\nequivalent, right?\n\nThis question is not terribly important for the fix, because\ninvalidating for each pg_database change is still necessary as you\npoint out in here:\n\n> and affects ROLE_PG_DATABASE_OWNER membership.)\n\nAnother good catch, thank you.\n\nPatch attached. Contains a bit of cleanup and is intended for 17+18.\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 08 Jul 2024 16:39:21 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Faster \"SET search_path\"" }, { "msg_contents": "On Mon, Jul 08, 2024 at 04:39:21PM -0700, Jeff Davis wrote:\n> On Sun, 2024-06-30 at 15:30 -0700, Noah Misch wrote:\n> > You're caching the result of object_aclcheck(NamespaceRelationId,\n> > ...), so\n> > pg_auth_members changes\n> \n> Thank you for the report.\n> \n> Question: to check for changes to pg_auth_members, it seems like either\n> AUTHMEMROLEMEM or AUTHMEMMEMROLE work, and to use both would be\n> redundant. Am I missing something, or should I just pick one\n> arbitrarily (or by some convention)?\n\nOne suffices. initialize_acl() is the only other place that invals on this\ncatalog, so consider it a fine convention to mimic.\n\n> > and pg_database changes also need to invalidate this\n> > cache. (pg_database affects the ACL_CREATE_TEMP case in\n> > pg_namespace_aclmask_ext()\n> \n> I am having trouble finding an actual problem with ACL_CREATE_TEMP.\n> search_path ACL checks are normally bypassed for the temp namespace, so\n> it can only happen when the actual temp namespace name (e.g.\n> pg_temp_NNN) is explicitly included. 
In that case, the mask is\n> ACL_USAGE, so the two branches in pg_namespace_aclmask_ext() are\n> equivalent, right?\n\nI don't know.\n\n> This question is not terribly important for the fix, because\n> invalidating for each pg_database change is still necessary as you\n> point out\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:20:11 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Faster \"SET search_path\"" } ]
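The caching scheme hashed out in the thread above — resolve a search_path string once, reuse the resolved schema list until a relevant catalog changes — can be summarized with a small sketch. This is an illustrative model, not PostgreSQL source: the class name, the `resolve` callback, and the single coarse validity flag are invented for the sketch. The flag stands in for `baseSearchPathValid` being cleared on any invalidation of the catalogs the thread identifies (pg_namespace, pg_authid, pg_auth_members, pg_database).

```python
# Illustrative sketch (not PostgreSQL source) of a search_path cache in
# the spirit of the thread's SearchPathCache: entries are keyed by
# (path string, role), and any relevant catalog change flips one cheap
# validity flag; the cache is flushed lazily on the next lookup.

class SearchPathCache:
    def __init__(self, resolve):
        self._resolve = resolve  # expensive: parse the path, ACL-check schemas
        self._cache = {}         # (path, role) -> list of schema OIDs
        self._valid = True

    def invalidate(self):
        # Called on any pg_namespace / pg_authid / pg_auth_members /
        # pg_database change (per the fix discussed above).
        self._valid = False

    def lookup(self, path, role):
        if not self._valid:
            self._cache.clear()
            self._valid = True
        key = (path, role)
        if key not in self._cache:
            self._cache[key] = self._resolve(path, role)
        return self._cache[key]
```

With this shape, repeatedly executing `SET search_path` to a previously seen value costs one dictionary lookup instead of syscache lookups and ACL checks, which is the effect the benchmarks in the thread measure.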
[ { "msg_contents": "Hi,\n\nWhile testing something I made the checkpointer process intentionally crash as\nsoon as it started up. The odd thing I observed on macOS is that we start a\n*new* checkpointer before shutting down:\n\n2023-07-29 14:32:39.241 PDT [65031] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n2023-07-29 14:32:39.244 PDT [65031] DEBUG: reaping dead processes\n2023-07-29 14:32:39.244 PDT [65031] LOG: checkpointer process (PID 65032) was terminated by signal 11: Segmentation fault: 11\n2023-07-29 14:32:39.244 PDT [65031] LOG: terminating any other active server processes\n2023-07-29 14:32:39.244 PDT [65031] DEBUG: sending SIGQUIT to process 65034\n2023-07-29 14:32:39.245 PDT [65031] DEBUG: sending SIGQUIT to process 65033\n2023-07-29 14:32:39.245 PDT [65031] DEBUG: reaping dead processes\n2023-07-29 14:32:39.245 PDT [65035] LOG: process 65035 taking over ProcSignal slot 126, but it's not empty\n2023-07-29 14:32:39.245 PDT [65031] DEBUG: reaping dead processes\n2023-07-29 14:32:39.245 PDT [65031] LOG: shutting down because restart_after_crash is off\n\nNote that a new process (65035) is started after the crash has been\nobserved. 
I added logging to StartChildProcess(), and the process that's\nstarted is another checkpointer.\n\nI could not initially reproduce this on linux.\n\nAfter a fair bit of confusion, I figured out the reason: On macOS it takes a\nbit longer for the startup process to finish, which means we're still in\nPM_STARTUP state when we see that crash, instead of PM_RECOVERY or PM_RUN or\n...\n\nThe problem is that unfortunately HandleChildCrash() doesn't change pmState\nwhen in PM_STARTUP:\n\n\t/* We now transit into a state of waiting for children to die */\n\tif (pmState == PM_RECOVERY ||\n\t\tpmState == PM_HOT_STANDBY ||\n\t\tpmState == PM_RUN ||\n\t\tpmState == PM_STOP_BACKENDS ||\n\t\tpmState == PM_SHUTDOWN)\n\t\tpmState = PM_WAIT_BACKENDS;\n\nOnce I figured that out, I put a sleep(1) in StartupProcessMain(), and the\nproblem reproduces on linux as well.\n\nI haven't fully dug through the history, this looks to be a quite old problem.\n\n\nArguably we might also be missing PM_SHUTDOWN_2, but I can't really see a bad\nconsequence of that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 29 Jul 2023 14:51:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Postmaster doesn't correctly handle crashes in PM_STARTUP state" } ]
[ { "msg_contents": "Hi there, hope to find you well.\n\nI'm attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\n\nThe problem is as follows: postgres streams A, B, D, G, K, I, P logical replication events, upon exit signal we stop consuming new events at LSN K, and we wait 30s for out-of-order events. Let's say that we only got A, (and K ofc) so in the following 30s, we get B, D, however, for whatever reason, G never arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that G was missing, so we assumed that it all arrived, committing K to postgres slot and shutdown. In the next run, our worker will start receiving data from K forward, and G is lost forever...\nMeanwhile postgres moves forward with archiving and we can't go back to check if we lost anything. And even if we could, would be extremely inefficient.\n\nIn sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\n\nIt seems very odd to me that none of the open-source CDC projects that I looked into care about this. They always assume that the next LSN received is... well the next one, and commit that one, so upon restart, they are vulnerable to the same issue. So... either I'm missing something... or we have a generalized assumption causing data loss under certain conditions all over.\n\nAm I missing any postgres mechanism that will allow me to at least detect that I'm missing data?\n\nThanks in advance for any clues on how to deal with this. 
It has been driving me nuts.\n\nRegards,\nJosé Neves", "msg_date": "Sat, 29 Jul 2023 23:07:24 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9_Neves?= <rafaneves3@msn.com>", "msg_from_op": true, "msg_subject": "CDC/ETL system on top of logical replication with pgoutput, custom\n client" }, { "msg_contents": "On Mon, Jul 31, 2023 at 3:06 PM José Neves <rafaneves3@msn.com> wrote:\n>\n> Hi there, hope to find you well.\n>\n> I'm attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\n>\n> The problem is as follows: postgres streams A, B, D, G, K, I, P logical replication events, upon exit signal we stop consuming new events at LSN K, and we wait 30s for out-of-order events. Let's say that we only got A, (and K ofc) so in the following 30s, we get B, D, however, for whatever reason, G never arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that G was missing, so we assumed that it all arrived, committing K to postgres slot and shutdown. In the next run, our worker will start receiving data from K forward, and G is lost forever...\n> Meanwhile postgres moves forward with archiving and we can't go back to check if we lost anything. 
And even if we could, would be extremely inefficient.\n>\n> In sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\n>\n\nAs per my understanding, we stream the data in the commit LSN order\nand for a particular transaction, all the changes are per their LSN\norder. Now, it is possible that for a parallel transaction, we send\nsome changes from a prior LSN after sending the commit of another\ntransaction. Say we have changes as follows:\n\nT-1\nchange1 LSN1-1000\nchange2 LSN2- 2000\ncommit LSN3- 3000\n\nT-2\nchange1 LSN1-500\nchange2 LSN2-1500\ncommit LSN3-4000\n\nIn such a case, all the changes including the commit of T-1 are sent\nand then all the changes including the commit of T-2 are sent. 
So, one\ncan say that some of the changes from T-2 from prior LSN arrived after\nT-1's commit but that shouldn't be a problem because if restart\nhappens after we received partial T-2, we should receive the entire\nT-2.\n\nIt is possible that you are seeing something else but if so then\nplease try to share a more concrete example.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 31 Jul 2023 19:01:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi Amit, thanks for the reply.\r\n\r\nIn our worker (custom pg replication client), we care only about INSERT, UPDATE, and DELETE operations, which - sure - may be part of the issue.\r\nI can only replicate this with production-level load, not easy to get a real example, but as I'm understanding the issue (and building upon your exposition), we are seeing the following:\r\n\r\nT-1\r\nINSERT LSN1-1000\r\nUPDATE LSN2-2000\r\nUPDATE LSN3-3000\r\nCOMMIT LSN4-4000\r\n\r\nT-2\r\nINSERT LSN1-500\r\nUPDATE LSN2-1500\r\nUPDATE LSN3-2500\r\nCOMMIT LSN4-5500\r\n\r\nIf we miss LSN3-3000, let's say due to a bad network, and we already received all other LSNs, we will commit to Postgres LSN4-5500 before restarting. LSN3-3000 will never be reattempted. And there are a couple of issues with this scenario:\r\n1. We have no way to match LSN operations with the respective commit, as they have unordered offsets. Assuming that all of them were received in order, we would commit all data with the commit message LSN4-4000 as other events would match the transaction start and end LSN interval of it.\r\n2. Still we have no way to verify that we got all data for a given transaction; we will never notice that we missed LSN3-3000 of the first transaction until we look at and analyze the resulting data.\r\n\r\nSo the question: how can we prevent our worker from committing LSN4-5500 without receiving LSN3-3000? 
Do we even have enough information out of pgoutput to do that?\r\nPS.: when I say bad network, my suspicion is that this situation may be caused by network saturation during high-QPS periods. Data will still arrive eventually, but by that time our worker is no longer listening.\r\n\r\nThanks again. Regards,\r\nJosé Neves\r\n\r\n\r\n\r\n\r\n________________________________\r\nFrom: Amit Kapila <amit.kapila16@gmail.com>\r\nSent: 31 July 2023 14:31\r\nTo: José Neves <rafaneves3@msn.com>\r\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nSubject: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\r\n\r\nOn Mon, Jul 31, 2023 at 3:06 PM José Neves <rafaneves3@msn.com> wrote:\r\n>\r\n> Hi there, hope to find you well.\r\n>\r\n> I'm attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\r\n>\r\n> The problem is as follows: postgres streams A, B, D, G, K, I, P logical replication events, upon exit signal we stop consuming new events at LSN K, and we wait 30s for out-of-order events. Let's say that we only got A, (and K ofc) so in the following 30s, we get B, D, however, for whatever reason, G never arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that G was missing, so we assumed that it all arrived, committing K to postgres slot and shutdown. In the next run, our worker will start receiving data from K forward, and G is lost forever...\r\n> Meanwhile postgres moves forward with archiving and we can't go back to check if we lost anything. 
And even if we could, would be extremely inefficient.\r\n>\r\n> In sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\r\n>\r\n\r\nAs per my understanding, we stream the data in the commit LSN order\r\nand for a particular transaction, all the changes are per their LSN\r\norder. Now, it is possible that for a parallel transaction, we send\r\nsome changes from a prior LSN after sending the commit of another\r\ntransaction. Say we have changes as follows:\r\n\r\nT-1\r\nchange1 LSN1-1000\r\nchange2 LSN2- 2000\r\ncommit LSN3- 3000\r\n\r\nT-2\r\nchange1 LSN1-500\r\nchange2 LSN2-1500\r\ncommit LSN3-4000\r\n\r\nIn such a case, all the changes including the commit of T-1 are sent\r\nand then all the changes including the commit of T-2 are sent. 
So, one\r\ncan say that some of the changes from T-2 from prior LSN arrived after\r\nT-1's commit but that shouldn't be a problem because if restart\r\nhappens after we received partial T-2, we should receive the entire\r\nT-2.\n\r\nIt is possible that you are seeing something else but if so then\r\nplease try to share a more concrete example.\n\r\n--\r\nWith Regards,\r\nAmit Kapila.\r\n", "msg_date": "Mon, 31 Jul 2023 14:16:22 +0000", "msg_from": "=?utf-8?B?Sm9zw6kgTmV2ZXM=?= <rafaneves3@msn.com>", "msg_from_op": false, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "On Sat, Jul 29, 2023, at 8:07 PM, José Neves wrote:\n> I'm attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\n\nCan you provide a test case to show this issue? Did you try in a newer version?\n\n> The problem is as follows: postgres streams *A, B, D, G, K, I, P *logical replication events, upon exit signal we stop consuming new events at LSN *K,* and we wait 30s for out-of-order events. Let's say that we only got *A*, (and *K* ofc) so in the following 30s, we get *B, D*, however, for whatever reason, *G* never arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that *G* was missing, so we assumed that it all arrived, committing *K *to postgres slot and shutdown. In the next run, our worker will start receiving data from *K* forward, and *G* is lost forever...\n> Meanwhile postgres moves forward with archiving and we can't go back to check if we lost anything. And even if we could, would be extremely inefficient.\n\nLogical decoding provides the changes to output plugin at commit time. You\nmentioned the logical replication events but didn't say which are part of the\nsame transaction. 
Let's say A, B, D and K are changes from the same transaction\nand G, I and P are changes from another transaction. The first transaction will\nbe available when it processes K. The second transaction will be provided when\nthe logical decoding processes P.\n\nYou didn't say how your consumer is working. Are you sure your consumer doesn't\nget the second transaction? If your consumer is advancing the replication slot\n*after* receiving K (using pg_replication_slot_advance), it is doing it wrong.\nAnother common problem with consumers is that they use\npg_logical_slot_get_changes() but crash *before* using the data; in this\ncase, the data is lost.\n\nIt is hard to say where the problem is if you didn't provide enough information\nabout the consumer logic and the WAL information (pg_waldump output) around the\ntime you detect the data loss.\n\n> In sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\n> \n> It seems very weird to me that none of the open-source CDC projects that I looked into care about this. They always assume that the next LSN received is... well the next one, and commit that one, so upon restart, they are vulnerable to the same issue. So... either I'm missing something... or we have a generalized assumption causing data loss under certain conditions all over.\n\nLet me illustrate the current behavior. 
Let's say there are 3 concurrent\ntransactions.\n\nSession A\n==========\n\neuler=# SELECT pg_create_logical_replication_slot('repslot1', 'wal2json');\npg_create_logical_replication_slot \n------------------------------------\n(repslot1,0/369DF088)\n(1 row)\n\neuler=# create table foo (a int primary key);\nCREATE TABLE\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(1, 2);\nINSERT 0 2\n\nSession B\n==========\n\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(11, 12);\nINSERT 0 2\n\nSession C\n==========\n\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(21, 22);\nINSERT 0 2\n\nSession A\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(3);\nINSERT 0 1\n\nSession B\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(13);\nINSERT 0 1\n\nSession C\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(23);\nINSERT 0 1\neuler=*# COMMIT;\nCOMMIT\n\nSession B\n==========\n\neuler=*# COMMIT;\nCOMMIT\n\nSession A\n==========\n\neuler=*# COMMIT;\nCOMMIT\n\nThe output is:\n\neuler=# SELECT * FROM pg_logical_slot_peek_changes('repslot1', NULL, NULL, 'format-version', '2', 'include-types', '0');\n lsn | xid | data \n------------+--------+------------------------------------------------------------------------------------\n0/369E4800 | 454539 | {\"action\":\"B\"}\n0/36A05088 | 454539 | {\"action\":\"C\"}\n0/36A05398 | 454542 | {\"action\":\"B\"}\n0/36A05398 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":21}]}\n0/36A05418 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":22}]}\n0/36A05658 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":23}]}\n0/36A057C0 | 454542 | {\"action\":\"C\"}\n0/36A05258 | 454541 | {\"action\":\"B\"}\n0/36A05258 | 454541 | 
{\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":11}]}\n0/36A052D8 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":12}]}\n0/36A05598 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":13}]}\n0/36A057F0 | 454541 | {\"action\":\"C\"}\n0/36A050C0 | 454540 | {\"action\":\"B\"}\n0/36A050C0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":1}]}\n0/36A051A0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":2}]}\n0/36A054D8 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":3}]}\n0/36A05820 | 454540 | {\"action\":\"C\"}\n(17 rows)\n\nSince session C committed first, it is the first transaction available to output\nplugin (wal2json). Transaction 454541 is the next one that is available because\nit committed after session C (transaction 454542) and the first transaction\nthat started (session A) is the last one available. You can also notice that\nthe first transaction (454540) is the last one available.\n\nYour consumer cannot rely on LSN position or xid to track the progress.\nInstead, Postgres provides a replication progress mechanism [1] to do it.\n\n\n[1] https://www.postgresql.org/docs/current/replication-origins.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Mon, 31 Jul 2023 11:27:46 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi Euler, thank you for your reply.\n\nYour output is exactly how mine doesn't look like, I don't have such an order - that is - not only under heavy load.\nConditions in which this occurs make it challenging to provide detailed information, and will also take a while to trigger. 
I've sent a previous email, from a previous debugging session, explaining what my output looks like.\n\nI can gather more information if need be, but I was interested in this bit:\n> Instead, Postgres provides a replication progress mechanism [1] to do it.\nIt's not 100% clear to me what that would look like at the code level; can you provide a high-level algorithm for how such code would work? For reference, our implementation - to the bones - is very similar to this: https://adam-szpilewicz.pl/cdc-replication-from-postgresql-using-go-golang\n\nThanks for your help. Regards,\nJosé Neves\n\n________________________________\nFrom: Euler Taveira <euler@eulerto.com>\nSent: 31 July 2023 15:27\nTo: José Neves <rafaneves3@msn.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\nSubject: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\n\nOn Sat, Jul 29, 2023, at 8:07 PM, José Neves wrote:\nI'm attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\n\nCan you provide a test case to show this issue? Did you try in a newer version?\n\nThe problem is as follows: postgres streams A, B, D, G, K, I, P logical replication events, upon exit signal we stop consuming new events at LSN K, and we wait 30s for out-of-order events. Let's say that we only got A, (and K ofc) so in the following 30s, we get B, D, however, for whatever reason, G never arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that G was missing, so we assumed that it all arrived, committing K to postgres slot and shutdown. In the next run, our worker will start receiving data from K forward, and G is lost forever...\nMeanwhile postgres moves forward with archiving and we can't go back to check if we lost anything. 
And even if we could, would be extremely inefficient.\n\nLogical decoding provides the changes to output plugin at commit time. You\nmentioned the logical replication events but didn't say which are part of the\nsame transaction. Let's say A, B, D and K are changes from the same transaction\nand G, I and P are changes from another transaction. The first transaction will\nbe available when it processes K. The second transaction will be provided when\nthe logical decoding processes P.\n\nYou didn't say how your consumer is working. Are you sure your consumer doesn't\nget the second transaction? If your consumer is advancing the replication slot\n*after* receiving K (using pg_replication_slot_advance), it is doing it wrong.\nAnother common problem with consumer is that it uses\npg_logical_slot_get_changes() but *before* using the data it crashes; in this\ncase, the data is lost.\n\nIt is hard to say where the problem is if you didn't provide enough information\nabout the consumer logic and the WAL information (pg_waldump output) around the\ntime you detect the data loss.\n\nIn sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\n\nIt seems very either to me that none of the open-source CDC projects that I looked into care about this. They always assume that the next LSN received is... well the next one, and commit that one, so upon restart, they are vulnerable to the same issue. So... either I'm missing something... or we have a generalized assumption causing data loss under certain conditions all over.\n\nLet me illustrate the current behavior. 
Let's say there are 3 concurrent\ntransactions.\n\nSession A\n==========\n\neuler=# SELECT pg_create_logical_replication_slot('repslot1', 'wal2json');\npg_create_logical_replication_slot\n------------------------------------\n(repslot1,0/369DF088)\n(1 row)\n\neuler=# create table foo (a int primary key);\nCREATE TABLE\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(1, 2);\nINSERT 0 2\n\nSession B\n==========\n\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(11, 12);\nINSERT 0 2\n\nSession C\n==========\n\neuler=# BEGIN;\nBEGIN\neuler=*# INSERT INTO foo (a) SELECT generate_series(21, 22);\nINSERT 0 2\n\nSession A\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(3);\nINSERT 0 1\n\nSession B\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(13);\nINSERT 0 1\n\nSession C\n==========\n\neuler=*# INSERT INTO foo (a) VALUES(23);\nINSERT 0 1\neuler=*# COMMIT;\nCOMMIT\n\nSession B\n==========\n\neuler=*# COMMIT;\nCOMMIT\n\nSession A\n==========\n\neuler=*# COMMIT;\nCOMMIT\n\nThe output is:\n\neuler=# SELECT * FROM pg_logical_slot_peek_changes('repslot1', NULL, NULL, 'format-version', '2', 'include-types', '0');\n lsn | xid | data\n------------+--------+------------------------------------------------------------------------------------\n0/369E4800 | 454539 | {\"action\":\"B\"}\n0/36A05088 | 454539 | {\"action\":\"C\"}\n0/36A05398 | 454542 | {\"action\":\"B\"}\n0/36A05398 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":21}]}\n0/36A05418 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":22}]}\n0/36A05658 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":23}]}\n0/36A057C0 | 454542 | {\"action\":\"C\"}\n0/36A05258 | 454541 | {\"action\":\"B\"}\n0/36A05258 | 454541 | 
{\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":11}]}\n0/36A052D8 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":12}]}\n0/36A05598 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":13}]}\n0/36A057F0 | 454541 | {\"action\":\"C\"}\n0/36A050C0 | 454540 | {\"action\":\"B\"}\n0/36A050C0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":1}]}\n0/36A051A0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":2}]}\n0/36A054D8 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":3}]}\n0/36A05820 | 454540 | {\"action\":\"C\"}\n(17 rows)\n\nSince session C committed first, it is the first transaction available to output\nplugin (wal2json). Transaction 454541 is the next one that is available because\nit committed after session C (transaction 454542) and the first transaction\nthat started (session A) is the last one available. You can also notice that\nthe first transaction (454540) is the last one available.\n\nYour consumer cannot rely on LSN position or xid to track the progress.\nInstead, Postgres provides a replication progress mechanism [1] to do it.\n\n\n[1] https://www.postgresql.org/docs/current/replication-origins.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\n\n\n\n\n\n\n\n\nHi Euler, thank you for your reply.\n\nYour output is exactly how mine doesn't look like, I don't have such an order - that is - not only under heavy load.\n\nConditions in which this occurs make it challenging to provide detailed information, and will also take a while to trigger.  
I've\n sent a previous email explaining how my output looks like, from a previous debug.\n\n\n\n\nI can gather more information if needs be, but I was interested in this bit:\n\n\n> Instead,\n Postgres provides a replication progress mechanism [1] to do it.\n\n\nIt's\n not 100% clear to me how that would look like at the code level, can you provide a high-level algorithm on how such code would work? For reference, our implementation - to the bones - is very similar to this: https://adam-szpilewicz.pl/cdc-replication-from-postgresql-using-go-golang\n\n\n\n\nThanks for your help. Regards,\nJosé Neves\n\n\n\n\n\n\nDe: Euler Taveira <euler@eulerto.com>\nEnviado: 31 de julho de 2023 15:27\nPara: José Neves <rafaneves3@msn.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\n \n\n\n\nOn Sat, Jul 29, 2023, at 8:07 PM, José Neves wrote:\n\n\n\nI'm\n attempting to develop a CDC on top of Postgres, currently using 12, the last minor, with a custom client, and I'm running into issues with data loss caused by out-of-order logical replication messages.\n\n\n\n\nCan you provide a test case to show this issue? Did you try in a newer version?\n\n\n\n\n\nThe\n problem is as follows: postgres streams A,\n B, D, G, K, I, P logical\n replication events, upon exit signal we stop consuming new events at LSN K, and we\n wait 30s for out-of-order events. Let's say that we only got A,\n (and K ofc) so in the following 30s, we get B,\n D, however, for whatever reason, G never\n arrived. As with pgoutput-based logical replication we have no way to calculate the next LSN, we have no idea that G was\n missing, so we assumed that it all arrived, committing K to\n postgres slot and shutdown. In the next run, our worker will start receiving data from K forward,\n and G is lost forever...\n\n\nMeanwhile\n postgres moves forward with archiving and we can't go back to check if we lost anything. 
And even if we could, would be extremely inefficient.\n\n\n\n\nLogical decoding provides the changes to output plugin at commit time. You\n\nmentioned the logical replication events but didn't say which are part of the\n\nsame transaction. Let's say A, B, D and K are changes from the same transaction\n\nand G, I and P are changes from another transaction. The first transaction will\n\nbe available when it processes K. The second transaction will be provided when\n\nthe logical decoding processes P.\n\n\n\nYou didn't say how your consumer is working. Are you sure your consumer doesn't\n\nget the second transaction? If your consumer is advancing the replication slot\n\n*after* receiving K (using pg_replication_slot_advance), it is doing it wrong.\n\nAnother common problem with consumer is that it uses\n\npg_logical_slot_get_changes() but *before* using the data it crashes; in this\n\ncase, the data is lost.\n\n\n\nIt is hard to say where the problem is if you didn't provide enough information\n\nabout the consumer logic and the WAL information (pg_waldump output) around the\n\ntime you detect the data loss.\n\n\n\n\n\nIn sum, the issue comes from the fact that postgres will stream events with unordered LSNs on high transactional systems, and that pgoutput doesn't have access to enough information to calculate the next or last LSN, so we have no way to check if we receive\n all the data that we are supposed to receive, risking committing an offset that we shouldn't as we didn't receive yet preceding data.\n\n\n\n\n\nIt\n seems very either to me that none of the open-source CDC projects that I looked into care about this. They always assume that the next LSN received is... well the next one, and commit that one, so upon restart, they are vulnerable to the same issue. So...\n either I'm missing something... or we have a generalized assumption causing data loss under certain conditions all over.\n\n\n\n\nLet me illustrate the current behavior. 
Let's say there are 3 concurrent\n\ntransactions.\n\n\n\nSession A\n\n==========\n\n\n\neuler=# SELECT pg_create_logical_replication_slot('repslot1', 'wal2json');\n\npg_create_logical_replication_slot \n\n------------------------------------\n\n(repslot1,0/369DF088)\n\n(1 row)\n\n\n\neuler=# create table foo (a int primary key);\n\nCREATE TABLE\n\neuler=# BEGIN;\n\nBEGIN\n\neuler=*# INSERT INTO foo (a) SELECT generate_series(1, 2);\n\nINSERT 0 2\n\n\n\nSession B\n\n==========\n\n\n\neuler=# BEGIN;\n\nBEGIN\n\neuler=*# INSERT INTO foo (a) SELECT generate_series(11, 12);\n\nINSERT 0 2\n\n\n\nSession C\n\n==========\n\n\n\neuler=# BEGIN;\n\nBEGIN\n\neuler=*# INSERT INTO foo (a) SELECT generate_series(21, 22);\n\nINSERT 0 2\n\n\n\nSession A\n\n==========\n\n\n\neuler=*# INSERT INTO foo (a) VALUES(3);\n\nINSERT 0 1\n\n\n\nSession B\n\n==========\n\n\n\neuler=*# INSERT INTO foo (a) VALUES(13);\n\nINSERT 0 1\n\n\n\nSession C\n\n==========\n\n\n\neuler=*# INSERT INTO foo (a) VALUES(23);\n\nINSERT 0 1\n\neuler=*# COMMIT;\n\nCOMMIT\n\n\n\nSession B\n\n==========\n\n\n\neuler=*# COMMIT;\n\nCOMMIT\n\n\n\nSession A\n\n==========\n\n\n\neuler=*# COMMIT;\n\nCOMMIT\n\n\n\nThe output is:\n\n\n\neuler=# SELECT * FROM pg_logical_slot_peek_changes('repslot1', NULL, NULL, 'format-version', '2', 'include-types', '0');\n\n    lsn     |  xid   |                                        data                                        \n\n------------+--------+------------------------------------------------------------------------------------\n\n0/369E4800 | 454539 | {\"action\":\"B\"}\n\n0/36A05088 | 454539 | {\"action\":\"C\"}\n\n0/36A05398 | 454542 | {\"action\":\"B\"}\n\n0/36A05398 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":21}]}\n\n0/36A05418 | 454542 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":22}]}\n\n0/36A05658 | 454542 | 
{\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":23}]}\n\n0/36A057C0 | 454542 | {\"action\":\"C\"}\n\n0/36A05258 | 454541 | {\"action\":\"B\"}\n\n0/36A05258 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":11}]}\n\n0/36A052D8 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":12}]}\n\n0/36A05598 | 454541 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":13}]}\n\n0/36A057F0 | 454541 | {\"action\":\"C\"}\n\n0/36A050C0 | 454540 | {\"action\":\"B\"}\n\n0/36A050C0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":1}]}\n\n0/36A051A0 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":2}]}\n\n0/36A054D8 | 454540 | {\"action\":\"I\",\"schema\":\"public\",\"table\":\"foo\",\"columns\":[{\"name\":\"a\",\"value\":3}]}\n\n0/36A05820 | 454540 | {\"action\":\"C\"}\n\n(17 rows)\n\n\n\nSince session C committed first, it is the first transaction available to output\n\nplugin (wal2json). Transaction 454541 is the next one that is available because\n\nit committed after session C (transaction 454542) and the first transaction\n\nthat started (session A) is the last one available. 
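The ordering rule this wal2json example demonstrates - whole transactions become visible in commit order, while changes inside each transaction keep their own LSN order - can be sketched with a toy model. This is illustrative Python only (not wal2json or pgoutput code; the data layout is made up):

```python
# Toy model of logical decoding output: transactions are emitted in
# commit-LSN order; within one transaction, changes keep their LSN order.
def decoded_stream(transactions):
    """transactions: list of {'changes': [(lsn, op), ...], 'commit_lsn': int}."""
    out = []
    for txn in sorted(transactions, key=lambda t: t['commit_lsn']):
        out.append(('BEGIN',))
        out.extend(sorted(txn['changes']))  # LSN order inside the transaction
        out.append(('COMMIT', txn['commit_lsn']))
    return out
```

So the transaction that started first but committed last (session A above) is emitted last, even though its change LSNs are the smallest in the batch.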
Your consumer cannot rely on LSN position or xid to track the progress.\nInstead, Postgres provides a replication progress mechanism [1] to do it.\n\n[1] https://www.postgresql.org/docs/current/replication-origins.html\n\n--\nEuler Taveira\nEDB   https://www.enterprisedb.com/", "msg_date": "Mon, 31 Jul 2023 18:41:31 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9_Neves?= <rafaneves3@msn.com>", "msg_from_op": true, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi,\n\nOn 2023-07-31 14:16:22 +0000, José Neves wrote:\n> Hi Amit, thanks for the reply.\n>\n> In our worker (custom pg replication client), we care only about INSERT,\n> UPDATE, and DELETE operations, which - sure - may be part of the issue.\n\nThat seems likely. Postgres streams out changes in commit order, not in order\nof the changes having been made (that'd not work due to rollbacks etc). If you\njust disregard transactions entirely, you'll get something bogus after\nretries.\n\nYou don't need to store the details for each commit in the target system, just\nup to which LSN you have processed *commit records*. E.g. if you have received\nand safely stored up to commit 0/1000, you need to remember that.\n\n\nAre you using the 'streaming' mode / option to pgoutput?\n\n\n> 1. 
We have no way to match LSN operations with the respective commit, as\n> they have unordered offsets.\n\nNot sure what you mean with \"unordered offsets\"?\n\n\n> Assuming that all of them were received in order, we would commit all data with the commit message LSN4-4000 as other events would match the transaction start and end LSN interval of it.\n\nLogical decoding sends out changes in a deterministic order and you won't see\nout of order data when using TCP (the entire connection can obviously fail\nthough).\n\nAndres\n\n\n", "msg_date": "Mon, 31 Jul 2023 13:39:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi Andres, thanks for your reply.\n\nOk, if I understood you correctly, I start to see where my logic is faulty. Just to make sure that I got it right, taking the following example again:\n\nT-1\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\n\nT-2\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nWhere data will arrive in this order:\n\nINSERT LSN1-500\nINSERT LSN1-1000\nUPDATE LSN2-1500\nUPDATE LSN2-2000\nUPDATE LSN3-2500\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\nCOMMIT LSN4-5500\n\nYou are saying that the LSN3-3000 will never be missing, either the entire connection will fail at that point, or all should be received in the expected order (which is different from the \"numeric order\" of LSNs). 
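The commit-record-based progress tracking suggested in this thread can be sketched as a toy simulation. This is illustrative Python only (not real pgoutput message handling; the message tuples are invented): progress advances only when a whole transaction, ending in its commit record, has been applied, so replaying the stream after a crash skips transactions that were already applied.

```python
# Simulated consumer: ('B',) begins a txn, ('I'/'U'/'D', lsn, row) is a change,
# ('C', commit_lsn) commits. `progress` is the last durably applied commit LSN.
def apply_stream(messages, progress):
    """Apply decoded messages; return (applied_changes, new_progress)."""
    applied = []
    pending = []
    for msg in messages:
        kind = msg[0]
        if kind == 'B':
            pending = []                    # start collecting a new transaction
        elif kind in ('I', 'U', 'D'):
            pending.append(msg)
        elif kind == 'C':
            commit_lsn = msg[1]
            if commit_lsn <= progress:      # already applied: skip the replay
                continue
            applied.extend(pending)         # apply the whole transaction...
            progress = commit_lsn           # ...then, and only then, advance
    return applied, progress
```

Per-change LSNs (like the 500 and 1500 of T-2 above) never touch `progress`; only commit LSNs do, which is why interleaved change LSNs cause no loss.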
If the connection is down, upon restart, I will receive the entire T-1 transaction again (well, all example data again).\nIn addition to that, if I commit LSN4-4000, even tho that LSN has a \"bigger numeric value\" than the ones representing INSERT and UPDATE events on T-2, I will be receiving the entire T-2 transaction again, as the LSN4-5500 is still uncommitted.\nThis makes sense to me, but just to be extra clear, I will never receive a transaction commit before receiving all other events for that transaction.\nAre these statements correct?\n\n>Are you using the 'streaming' mode / option to pgoutput?\nNo.\n\n>Not sure what you mean with \"unordered offsets\"?\nOrdered: EB53/E0D88188, EB53/E0D88189, EB53/E0D88190\nUnordered: EB53/E0D88190, EB53/E0D88188, EB53/E0D88189\n\nExtra question: When I get a begin message, I get a transaction starting at LSN-1000, and a transaction ending at LSN-2000. But as the example above shows, I can have data points from other transactions with LSNs in that interval. I have no way to identify to which transaction they belong, correct?\n\nThanks again. Regards,\nJosé Neves\n________________________________\nDe: Andres Freund <andres@anarazel.de>\nEnviado: 31 de julho de 2023 21:39\nPara: José Neves <rafaneves3@msn.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\n\nHi,\n\nOn 2023-07-31 14:16:22 +0000, José Neves wrote:\n> Hi Amit, thanks for the reply.\n>\n> In our worker (custom pg replication client), we care only about INSERT,\n> UPDATE, and DELETE operations, which - sure - may be part of the issue.\n\nThat seems likely. Postgres streams out changes in commit order, not in order\nof the changes having been made (that'd not work due to rollbacks etc). 
If you\njust disregard transactions entirely, you'll get something bogus after\nretries.\n\nYou don't need to store the details for each commit in the target system, just\nup to which LSN you have processed *commit records*. E.g. if you have received\nand safely stored up to commit 0/1000, you need to remember that.\n\n\nAre you using the 'streaming' mode / option to pgoutput?\n\n\n> 1. We have no way to match LSN operations with the respective commit, as\n> they have unordered offsets.\n\nNot sure what you mean with \"unordered offsets\"?\n\n\n> Assuming that all of them were received in order, we would commit all data with the commit message LSN4-4000 as other events would match the transaction start and end LSN interval of it.\n\nLogical decoding sends out changes in a deterministic order and you won't see\nout of order data when using TCP (the entire connection can obviously fail\nthough).\n\nAndres", "msg_date": "Mon, 31 Jul 2023 21:25:06 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9_Neves?= <rafaneves3@msn.com>", "msg_from_op": true, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi,\n\nOn 2023-07-31 21:25:06 +0000, José Neves wrote:\n> Ok, if I understood you correctly, I start to see where my logic is faulty. Just to make sure that I got it right, taking the following example again:\n>\n> T-1\n> INSERT LSN1-1000\n> UPDATE LSN2-2000\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n>\n> T-2\n> INSERT LSN1-500\n> UPDATE LSN2-1500\n> UPDATE LSN3-2500\n> COMMIT LSN4-5500\n>\n> Where data will arrive in this order:\n>\n> INSERT LSN1-500\n> INSERT LSN1-1000\n> UPDATE LSN2-1500\n> UPDATE LSN2-2000\n> UPDATE LSN3-2500\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n> COMMIT LSN4-5500\n\nNo, they won't arrive in that order. 
They will arrive as\n\nBEGIN\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\nBEGIN\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nBecause T1 committed before T2. Changes are only streamed out at commit /\nprepare transaction (*). Within a transaction, they however *will* be ordered\nby LSN.\n\n(*) Unless you use streaming mode, in which case it'll all be more\ncomplicated, as you'll also receive changes for transactions that might still\nabort.\n\n\n> You are saying that the LSN3-3000 will never be missing, either the entire\n> connection will fail at that point, or all should be received in the\n> expected order (which is different from the \"numeric order\" of LSNs).\n\nI'm not quite sure what you mean with the \"different from the numeric order\"\nbit...\n\n\n> If the connection is down, upon restart, I will receive the entire T-1\n> transaction again (well, all example data again).\n\nYes, unless you already acknowledged receipt up to LSN4-4000 and/or are only\nasking for newer transactions when reconnecting.\n\n\n> In addition to that, if I commit LSN4-4000, even tho that LSN has a \"bigger\n> numeric value\" than the ones representing INSERT and UPDATE events on T-2, I\n> will be receiving the entire T-2 transaction again, as the LSN4-5500 is\n> still uncommitted.\n\nI don't quite know what you mean with \"commit LSN4-4000\" here.\n\n\n> This makes sense to me, but just to be extra clear, I will never receive a\n> transaction commit before receiving all other events for that transaction.\n\nCorrect.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 31 Jul 2023 16:21:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi Andres.\n\nOwh, I see the error of my way... 
:(\n\nBy ignoring commits, and committing individual operation LSNs, I was effectively rolling back the subscription. In the previous example, if I committed the LSN of the first insert of the second transaction (LSN1-500), I was basically telling Postgres to send everything again, including the already processed T1.\n\n> what you mean with the \"different from the numeric order\"\nI'm probably lacking terminology. I mean that LSN4-5500 > LSN4-4000 > LSN3-3000 > LSN3-2500...\n\nBut, if I'm understanding correctly, I can only rely on the incremental sequence to be true for the commit events. Which explains my pain.\nThe world makes sense again.\n\nThank you very much. Will try to implement this new logic, and hopefully not bug again with this issue.\nRegards,\nJosé Neves\n________________________________\nDe: Andres Freund <andres@anarazel.de>\nEnviado: 1 de agosto de 2023 00:21\nPara: José Neves <rafaneves3@msn.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\n\nHi,\n\nOn 2023-07-31 21:25:06 +0000, José Neves wrote:\n> Ok, if I understood you correctly, I start to see where my logic is faulty. Just to make sure that I got it right, taking the following example again:\n>\n> T-1\n> INSERT LSN1-1000\n> UPDATE LSN2-2000\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n>\n> T-2\n> INSERT LSN1-500\n> UPDATE LSN2-1500\n> UPDATE LSN3-2500\n> COMMIT LSN4-5500\n>\n> Where data will arrive in this order:\n>\n> INSERT LSN1-500\n> INSERT LSN1-1000\n> UPDATE LSN2-1500\n> UPDATE LSN2-2000\n> UPDATE LSN3-2500\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n> COMMIT LSN4-5500\n\nNo, they won't arrive in that order. They will arive as\n\nBEGIN\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\nBEGIN\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nBecause T1 committed before T2. 
Changes are only streamed out at commit /\nprepare transaction (*). Within a transaction, they however *will* be ordered\nby LSN.\n\n(*) Unless you use streaming mode, in which case it'll all be more\ncomplicated, as you'll also receive changes for transactions that might still\nabort.\n\n\n> You are saying that the LSN3-3000 will never be missing, either the entire\n> connection will fail at that point, or all should be received in the\n> expected order (which is different from the \"numeric order\" of LSNs).\n\nI'm not quite sure what you mean with the \"different from the numeric order\"\nbit...\n\n\n> If the connection is down, upon restart, I will receive the entire T-1\n> transaction again (well, all example data again).\n\nYes, unless you already acknowledged receipt up to LSN4-4000 and/or are only\nasking for newer transactions when reconnecting.\n\n\n> In addition to that, if I commit LSN4-4000, even tho that LSN has a \"bigger\n> numeric value\" than the ones representing INSERT and UPDATE events on T-2, I\n> will be receiving the entire T-2 transaction again, as the LSN4-5500 is\n> still uncommitted.\n\nI don't quite know what you mean with \"commit LSN4-4000\" here.\n\n\n> This makes sense to me, but just to be extra clear, I will never receive a\n> transaction commit before receiving all other events for that transaction.\n\nCorrect.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 1 Aug 2023 09:13:47 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9_Neves?= <rafaneves3@msn.com>", "msg_from_op": true, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi there, hope to find you all well.\n\nA follow-up on this. 
Indeed, a new commit-based approach solved my missing data issues.\nBut, getting back to the previous examples, how are server times expected to be logged for the xlogs containing these records?\n\nWith these 2 transactions:\nT-1\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\n\nT-2\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nArriving this way:\nBEGIN\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\n\nBEGIN\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nAre server times for them expected to be:\n\nBEGIN\nINSERT LSN1-1000 - 2\nUPDATE LSN2-2000 - 4\nUPDATE LSN3-3000 - 6\nCOMMIT LSN4-4000 - 7\n\nBEGIN\nINSERT LSN1-500 - 1\nUPDATE LSN2-1500 - 3\nUPDATE LSN3-2500 - 5\nCOMMIT LSN4-5500 - 8\n\nOr:\n\nBEGIN\nINSERT LSN1-1000 - 1\nUPDATE LSN2-2000 - 2\nUPDATE LSN3-3000 - 3\nCOMMIT LSN4-4000 - 4\n\nBEGIN\nINSERT LSN1-500 - 5\nUPDATE LSN2-1500 - 6\nUPDATE LSN3-2500 - 7\nCOMMIT LSN4-5500 - 8\n\nI'm asking because altho I'm no longer missing data, I have a second async process that can fail (publishing data to an event messaging service), and therefore there is a possibility of data duplication. Worst I've to split large transactions as message sizes are limited. Would be nice if I could rely on server time ts to discard duplicated data...\n\nThanks.\nRegards,\nJosé Neves\n________________________________\nDe: José Neves <rafaneves3@msn.com>\nEnviado: 1 de agosto de 2023 10:13\nPara: Andres Freund <andres@anarazel.de>\nCc: Amit Kapila <amit.kapila16@gmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nAssunto: RE: CDC/ETL system on top of logical replication with pgoutput, custom client\n\nHi Andres.\n\nOwh, I see the error of my way... :(\n\nBy ignoring commits, and committing individual operation LSNs, I was effectively rolling back the subscription. 
In the previous example, if I committed the LSN of the first insert of the second transaction (LSN1-500), I was basically telling Postgres to send everything again, including the already processed T1.\n\n> what you mean with the \"different from the numeric order\"\nI'm probably lacking terminology. I mean that LSN4-5500 > LSN4-4000 > LSN3-3000 > LSN3-2500...\n\nBut, if I'm understanding correctly, I can only rely on the incremental sequence to be true for the commit events. Which explains my pain.\nThe world makes sense again.\n\nThank you very much. Will try to implement this new logic, and hopefully not bug again with this issue.\nRegards,\nJosé Neves\n________________________________\nDe: Andres Freund <andres@anarazel.de>\nEnviado: 1 de agosto de 2023 00:21\nPara: José Neves <rafaneves3@msn.com>\nCc: Amit Kapila <amit.kapila16@gmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\n\nHi,\n\nOn 2023-07-31 21:25:06 +0000, José Neves wrote:\n> Ok, if I understood you correctly, I start to see where my logic is faulty. Just to make sure that I got it right, taking the following example again:\n>\n> T-1\n> INSERT LSN1-1000\n> UPDATE LSN2-2000\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n>\n> T-2\n> INSERT LSN1-500\n> UPDATE LSN2-1500\n> UPDATE LSN3-2500\n> COMMIT LSN4-5500\n>\n> Where data will arrive in this order:\n>\n> INSERT LSN1-500\n> INSERT LSN1-1000\n> UPDATE LSN2-1500\n> UPDATE LSN2-2000\n> UPDATE LSN3-2500\n> UPDATE LSN3-3000\n> COMMIT LSN4-4000\n> COMMIT LSN4-5500\n\nNo, they won't arrive in that order. They will arive as\n\nBEGIN\nINSERT LSN1-1000\nUPDATE LSN2-2000\nUPDATE LSN3-3000\nCOMMIT LSN4-4000\nBEGIN\nINSERT LSN1-500\nUPDATE LSN2-1500\nUPDATE LSN3-2500\nCOMMIT LSN4-5500\n\nBecause T1 committed before T2. Changes are only streamed out at commit /\nprepare transaction (*). 
Within a transaction, they however *will* be ordered\nby LSN.\n\n(*) Unless you use streaming mode, in which case it'll all be more\ncomplicated, as you'll also receive changes for transactions that might still\nabort.\n\n\n> You are saying that the LSN3-3000 will never be missing, either the entire\n> connection will fail at that point, or all should be received in the\n> expected order (which is different from the \"numeric order\" of LSNs).\n\nI'm not quite sure what you mean with the \"different from the numeric order\"\nbit...\n\n\n> If the connection is down, upon restart, I will receive the entire T-1\n> transaction again (well, all example data again).\n\nYes, unless you already acknowledged receipt up to LSN4-4000 and/or are only\nasking for newer transactions when reconnecting.\n\n\n> In addition to that, if I commit LSN4-4000, even tho that LSN has a \"bigger\n> numeric value\" than the ones representing INSERT and UPDATE events on T-2, I\n> will be receiving the entire T-2 transaction again, as the LSN4-5500 is\n> still uncommitted.\n\nI don't quite know what you mean with \"commit LSN4-4000\" here.\n\n\n> This makes sense to me, but just to be extra clear, I will never receive a\n> transaction commit before receiving all other events for that transaction.\n\nCorrect.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 6 Aug 2023 14:24:45 +0000", "msg_from": "=?iso-8859-1?Q?Jos=E9_Neves?= <rafaneves3@msn.com>", "msg_from_op": true, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "On Sun, Aug 6, 2023 at 7:54 PM José Neves <rafaneves3@msn.com> wrote:\n>\n> A follow-up on this. 
Indeed, a new commit-based approach solved my missing data issues.\n> But, getting back to the previous examples, how are server times expected to be logged for the xlogs containing these records?\n>\n\nI think it should be based on commit_time because as far as I see we\ncan only get that on the client.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Aug 2023 10:29:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi Amit.\r\n\r\nHumm, that's... challenging. I faced some issues after \"the fix\" because I had a couple of transactions with 25k updates, and I had to split it to be able to push to our event messaging system, as our max message size is 10MB. Relying on commit time would mean that all transaction operations will have the same timestamp. If something goes wrong while my worker is pushing that transaction data chunks, I will duplicate some data in the next run, so... this wouldn't allow me to deal with data duplication.\r\nIs there any other way that you see to deal with it?\r\n\r\nRight now I only see an option, which is to store all processed LSNs on the other side of the ETL. I'm trying to avoid that overhead.\r\n\r\nThanks.\r\nRegards,\r\nJosé Neves\r\n________________________________\r\nDe: Amit Kapila <amit.kapila16@gmail.com>\r\nEnviado: 7 de agosto de 2023 05:59\r\nPara: José Neves <rafaneves3@msn.com>\r\nCc: Andres Freund <andres@anarazel.de>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\r\n\r\nOn Sun, Aug 6, 2023 at 7:54 PM José Neves <rafaneves3@msn.com> wrote:\r\n>\r\n> A follow-up on this. 
Indeed, a new commit-based approach solved my missing data issues.\r\n> But, getting back to the previous examples, how are server times expected to be logged for the xlogs containing these records?\r\n>\r\n\r\nI think it should be based on commit_time because as far as I see we\r\ncan only get that on the client.\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.", "msg_date": "Mon, 7 Aug 2023 08:16:54 +0000", "msg_from": "=?utf-8?B?Sm9zw6kgTmV2ZXM=?= <rafaneves3@msn.com>", "msg_from_op": false, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "On Mon, Aug 7, 2023 at 1:46 PM José Neves <rafaneves3@msn.com> wrote:\n>\n> Humm, that's... challenging. I faced some issues after \"the fix\" because I had a couple of transactions with 25k updates, and I had to split it to be able to push to our event messaging system, as our max message size is 10MB. Relying on commit time would mean that all transaction operations will have the same timestamp. If something goes wrong while my worker is pushing that transaction data chunks, I will duplicate some data in the next run, so... this wouldn't allow me to deal with data duplication.\n> Is there any other way that you see to deal with it?\n>\n> Right now I only see an option, which is to store all processed LSNs on the other side of the ETL. 
I'm trying to avoid that overhead.\n>\n\nSorry, I don't understand your system enough to give you suggestions\nbut if you have any questions related to how logical replication works\nthen I might be able to help.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Aug 2023 19:07:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Hi there, hope to find you well.\r\n\r\nI have a follow-up question to this already long thread.\r\n\r\nUpon deploying my PostgreSQL logical replication fed application on a stale database, I ended up running out of space, as the replication slot is being held back till the next time that we receive a data-changing event, and we advance to that new LSN offset.\r\nI think that the solution for this is to advance our LSN offset every time a keep-alive message is received ('k' // 107).\r\nMy doubt is, can the keep-alive messages be received in between open transaction events? I think not, but I would like to get your input to be extra sure as if this happens, and I commit that offset, I may introduce again faulty logic leading to data loss.\r\n\r\nIn sum, something like this wouldn't happen:\r\nBEGIN LSN001\r\nINSERT LSN002\r\nKEEP LIVE LSN003\r\nUPDATE LSN004\r\nCOMMIT LSN005\r\n\r\nCorrect? 
It has to be either:\r\n\r\nKEEP LIVE LSN001\r\nBEGIN LSN002\r\nINSERT LSN003\r\nUPDATE LSN004\r\nCOMMIT LSN005\r\n\r\nOr:\r\nBEGIN LSN001\r\nINSERT LSN002\r\nUPDATE LSN004\r\nCOMMIT LSN005\r\nKEEP LIVE LSN006\r\n\r\nLSNXXX are mere representations of LSN offsets.\r\n\r\nThank you again.\r\nRegards,\r\nJosé Neves\r\n\r\n________________________________\r\nDe: Amit Kapila <amit.kapila16@gmail.com>\r\nEnviado: 8 de agosto de 2023 14:37\r\nPara: José Neves <rafaneves3@msn.com>\r\nCc: Andres Freund <andres@anarazel.de>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\r\n\r\nOn Mon, Aug 7, 2023 at 1:46 PM José Neves <rafaneves3@msn.com> wrote:\r\n>\r\n> Humm, that's... challenging. I faced some issues after \"the fix\" because I had a couple of transactions with 25k updates, and I had to split it to be able to push to our event messaging system, as our max message size is 10MB. Relying on commit time would mean that all transaction operations will have the same timestamp. If something goes wrong while my worker is pushing that transaction data chunks, I will duplicate some data in the next run, so... this wouldn't allow me to deal with data duplication.\r\n> Is there any other way that you see to deal with it?\r\n>\r\n> Right now I only see an option, which is to store all processed LSNs on the other side of the ETL. 
I'm trying to avoid that overhead.\r\n>\r\n\r\nSorry, I don't understand your system enough to give you suggestions\r\nbut if you have any questions related to how logical replication work\r\nthen I might be able to help.\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.", "msg_date": "Tue, 24 Oct 2023 15:21:33 +0000", "msg_from": "=?utf-8?B?Sm9zw6kgTmV2ZXM=?= <rafaneves3@msn.com>", "msg_from_op": false, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "On Tue, Oct 24, 2023 at 8:53 PM José Neves <rafaneves3@msn.com> wrote:\n>\n> Hi there, hope to find you well.\n>\n> I have a follow-up question to this already long thread.\n>\n> Upon deploying my PostgreSQL logical replication fed application on a stale database, I ended up running out of space, as the replication slot is being held back till the next time that we receive a data-changing event, and we advance to that new LSN offset.\n> I think that the solution for this is to advance our LSN offset every time a keep-alive message is received ('k' // 107).\n> My doubt is, can the keep-alive messages be received in between open transaction events? 
I think not, but I would like to get your input to be extra sure as if this happens, and I commit that offset, I may introduce again faulty logic leading to data loss.\n>\n> In sum, something like this wouldn't happen:\n> BEGIN LSN001\n> INSERT LSN002\n> KEEP LIVE LSN003\n> UPDATE LSN004\n> COMMIT LSN005\n>\n\nIf the downstream acknowledges receipt of LSN003 and saves it locally\nand crashes, upon restart the upstream will resend all the\ntransactions that committed after LSN003 including the one ended at\nLSN005. So this is safe.\n\n--\nBest Wishes,\nAshutosh\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:12:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "Ok, I see. In that situation is safe indeed, as the offset is lower than the current transaction commit.\r\nBut I think that I asked the wrong question. I guess that the right question is: Can we receive a keep-alive message with an LSN offset bigger than the commit of the open or following transactions?\r\nSomething like:\r\n\r\nBEGIN LSN001\r\nINSERT LSN002\r\nKEEP LIVE LSN006\r\nUPDATE LSN004\r\nCOMMIT LSN005\r\n\r\nOr:\r\n\r\nKEEP LIVE LSN006\r\nBEGIN LSN001\r\nINSERT LSN002\r\nUPDATE LSN004\r\nCOMMIT LSN005\r\nKEEP LIVE LSN007\r\n\r\nOr is the sequence ensured not only between commits but also with keep-alive messaging?\r\n________________________________\r\nDe: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\r\nEnviado: 25 de outubro de 2023 11:42\r\nPara: José Neves <rafaneves3@msn.com>\r\nCc: Amit Kapila <amit.kapila16@gmail.com>; Andres Freund <andres@anarazel.de>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nAssunto: Re: CDC/ETL system on top of logical replication with pgoutput, custom client\r\n\r\nOn Tue, Oct 24, 2023 at 8:53 PM José Neves <rafaneves3@msn.com> wrote:\r\n>\r\n> Hi there, hope to find you well.\r\n>\r\n> I have a 
follow-up question to this already long thread.\r\n>\r\n> Upon deploying my PostgreSQL logical replication fed application on a stale database, I ended up running out of space, as the replication slot is being held back till the next time that we receive a data-changing event, and we advance to that new LSN offset.\r\n> I think that the solution for this is to advance our LSN offset every time a keep-alive message is received ('k' // 107).\r\n> My doubt is, can the keep-alive messages be received in between open transaction events? I think not, but I would like to get your input to be extra sure as if this happens, and I commit that offset, I may introduce again faulty logic leading to data loss.\r\n>\r\n> In sum, something like this wouldn't happen:\r\n> BEGIN LSN001\r\n> INSERT LSN002\r\n> KEEP LIVE LSN003\r\n> UPDATE LSN004\r\n> COMMIT LSN005\r\n>\r\n\r\nIf the downstream acknowledges receipt of LSN003 and saves it locally\r\nand crashes, upon restart the upstream will resend all the\r\ntransactions that committed after LSN003 including the one ended at\r\nLSN005. So this is safe.\r\n\r\n--\r\nBest Wishes,\r\nAshutosh", "msg_date": "Wed, 25 Oct 2023 10:53:47 +0000", "msg_from": "=?utf-8?B?Sm9zw6kgTmV2ZXM=?= <rafaneves3@msn.com>", "msg_from_op": false, "msg_subject": "RE: CDC/ETL system on top of logical replication with pgoutput,\n custom client" }, { "msg_contents": "On Wed, Oct 25, 2023 at 4:23 PM José Neves <rafaneves3@msn.com> wrote:\n>\n> Ok, I see. In that situation is safe indeed, as the offset is lower than the current transaction commit.\n> But I think that I asked the wrong question. I guess that the right question is: Can we receive a keep-alive message with an LSN offset bigger than the commit of the open or following transactions?\n> Something like:\n>\n> BEGIN LSN001\n> INSERT LSN002\n> KEEP LIVE LSN006\n> UPDATE LSN004\n> COMMIT LSN005\n>\n> Or:\n>\n> KEEP LIVE LSN006\n> BEGIN LSN001\n> INSERT LSN002\n> UPDATE LSN004\n> COMMIT LSN005\n> KEEP LIVE LSN007\n>\n\nAFAIU the code in walsender this isn't possible. Keep alive sends the\nLSN of the last WAL record it read (sentPtr). Upon reading a commit\nWAL record, the whole transaction is decoded. 
Till that point sentPtr\nis not updated.\n\nPlease take a look at XLogSendLogical(void) and the places where\nWalSndKeepaliveIfNecessary() is called.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:43:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CDC/ETL system on top of logical replication with pgoutput,\n custom client" } ]
[ { "msg_contents": "Hi hackers,\n\nBACKGROUND:\n\nThe logical replication has different worker \"types\" (e.g. apply\nleader, apply parallel, tablesync).\n\nThey all use a common structure called LogicalRepWorker, but at times\nit is necessary to know what \"type\" of worker a given LogicalRepWorker\nrepresents.\n\nOnce, there were just apply workers and tablesync workers - these were\neasily distinguished because only tablesync workers had a valid\n'relid' field. Next, parallel-apply workers were introduced - these\nare distinguishable from the apply leaders by the value of\n'leader_pid' field.\n\n\nPROBLEM:\n\nIMO, deducing the worker's type by examining multiple different field\nvalues seems a dubious way to do it. This maybe was reasonable enough\nwhen there were only 2 types, but as more get added it becomes\nincreasingly complicated.\n\nSOLUTION:\n\nAFAIK none of the complications is necessary anyway; the worker type\nis already known at launch time, and it never changes during the life\nof the process.\n\nThe attached Patch 0001 introduces a new enum 'type' field, which is\nassigned during the launch.\n\n~\n\nThis change not only simplifies the code, but it also permits other\ncode optimizations, because now we can switch on the worker enum type,\ninstead of using cascading if/else. (see Patch 0002).\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 31 Jul 2023 11:46:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Jul 31, 2023 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PROBLEM:\n>\n> IMO, deducing the worker's type by examining multiple different field\n> values seems a dubious way to do it. This maybe was reasonable enough\n> when there were only 2 types, but as more get added it becomes\n> increasingly complicated.\n\n+1 for being more explicit in worker types. 
I also think that we must\nmove away from am_{parallel_apply, tablesync,\nleader_apply}_worker(void) to Is{ParallelApply, TableSync,\nLeaderApply}Worker(MyLogicalRepWorker), just to be a little more\nconsistent and less confusion around different logical replication\nworker type related functions.\n\n> AFAIK none of the complications is necessary anyway; the worker type\n> is already known at launch time, and it never changes during the life\n> of the process.\n>\n> The attached Patch 0001 introduces a new enum 'type' field, which is\n> assigned during the launch.\n>\n> This change not only simplifies the code, but it also permits other\n> code optimizations, because now we can switch on the worker enum type,\n> instead of using cascading if/else. (see Patch 0002).\n\nSome comments:\n1.\n+/* Different kinds of workers */\n+typedef enum LogicalRepWorkerType\n+{\n+ LR_WORKER_TABLESYNC,\n+ LR_WORKER_APPLY,\n+ LR_WORKER_APPLY_PARALLEL\n+} LogicalRepWorkerType;\n\nCan these names be readable? How about something like\nLR_TABLESYNC_WORKER, LR_APPLY_WORKER, LR_PARALLEL_APPLY_WORKER?\n\n2.\n-#define isParallelApplyWorker(worker) ((worker)->leader_pid != InvalidPid)\n+#define isLeaderApplyWorker(worker) ((worker)->type == LR_WORKER_APPLY)\n+#define isParallelApplyWorker(worker) ((worker)->type ==\nLR_WORKER_APPLY_PARALLEL)\n+#define isTablesyncWorker(worker) ((worker)->type == LR_WORKER_TABLESYNC)\n\nCan the above start with \"Is\" instead of \"is\" similar to\nIsLogicalWorker and IsLogicalParallelApplyWorker?\n\n3.\n+ worker->type =\n+ OidIsValid(relid) ? LR_WORKER_TABLESYNC :\n+ is_parallel_apply_worker ? 
LR_WORKER_APPLY_PARALLEL :\n+ LR_WORKER_APPLY;\n\nPerhaps, an if-else is better for readability?\n\nif (OidIsValid(relid))\n worker->type = LR_WORKER_TABLESYNC;\nelse if (is_parallel_apply_worker)\n worker->type = LR_WORKER_APPLY_PARALLEL;\nelse\n worker->type = LR_WORKER_APPLY;\n\n4.\n+/* Different kinds of workers */\n+typedef enum LogicalRepWorkerType\n+{\n+ LR_WORKER_TABLESYNC,\n+ LR_WORKER_APPLY,\n+ LR_WORKER_APPLY_PARALLEL\n+} LogicalRepWorkerType;\n\nHave a LR_WORKER_UNKNOWN = 0 and set it in logicalrep_worker_cleanup()?\n\n5.\n+ case LR_WORKER_APPLY:\n+ return (rel->state == SUBREL_STATE_READY ||\n+ (rel->state == SUBREL_STATE_SYNCDONE &&\n+ rel->statelsn <= remote_final_lsn));\n\nNot necessary, but a good idea to have a default: clause and error out\nsaying wrong logical replication worker type?\n\n6.\n+ case LR_WORKER_APPLY_PARALLEL:\n+ /*\n+ * Skip for parallel apply workers because they only\noperate on tables\n+ * that are in a READY state. See pa_can_start() and\n+ * should_apply_changes_for_rel().\n+ */\n+ break;\n\nI'd better keep this if-else as-is instead of a switch case with\nnothing for parallel apply worker.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 31 Jul 2023 15:25:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Jul 31, 2023 at 3:25 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 31, 2023 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PROBLEM:\n> >\n> > IMO, deducing the worker's type by examining multiple different field\n> > values seems a dubious way to do it. 
This maybe was reasonable enough\n> > when there were only 2 types, but as more get added it becomes\n> > increasingly complicated.\n>\n> +1 for being more explicit in worker types.\n>\n\n+1. BTW, do we need the below functions (am_tablesync_worker(),\nam_leader_apply_worker()) after this work?\nstatic inline bool\n am_tablesync_worker(void)\n {\n- return OidIsValid(MyLogicalRepWorker->relid);\n+ return isTablesyncWorker(MyLogicalRepWorker);\n }\n\n static inline bool\n am_leader_apply_worker(void)\n {\n- return (!am_tablesync_worker() &&\n- !isParallelApplyWorker(MyLogicalRepWorker));\n+ return isLeaderApplyWorker(MyLogicalRepWorker);\n }\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 31 Jul 2023 18:41:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "Thanks for your detailed code review.\n\nMost comments are addressed in the attached v2 patches. Details inline below:\n\nOn Mon, Jul 31, 2023 at 7:55 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Jul 31, 2023 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PROBLEM:\n> >\n> > IMO, deducing the worker's type by examining multiple different field\n> > values seems a dubious way to do it. This maybe was reasonable enough\n> > when there were only 2 types, but as more get added it becomes\n> > increasingly complicated.\n>\n> +1 for being more explicit in worker types. I also think that we must\n> move away from am_{parallel_apply, tablesync,\n> leader_apply}_worker(void) to Is{ParallelApply, TableSync,\n> LeaderApply}Worker(MyLogicalRepWorker), just to be a little more\n> consistent and less confusion around different logical replication\n> worker type related functions.\n>\n\nDone. 
I converged everything to the macro-naming style as suggested,\nbut I chose not to pass MyLogicalRepWorker as a parameter for the\nnormal (am_xxx case) otherwise I felt it would make the calling code\nunnecessarily verbose.\n\n> Some comments:\n> 1.\n> +/* Different kinds of workers */\n> +typedef enum LogicalRepWorkerType\n> +{\n> + LR_WORKER_TABLESYNC,\n> + LR_WORKER_APPLY,\n> + LR_WORKER_APPLY_PARALLEL\n> +} LogicalRepWorkerType;\n>\n> Can these names be readable? How about something like\n> LR_TABLESYNC_WORKER, LR_APPLY_WORKER, LR_PARALLEL_APPLY_WORKER?\n\nDone. Renamed similar to your suggestion.\n\n>\n> 2.\n> -#define isParallelApplyWorker(worker) ((worker)->leader_pid != InvalidPid)\n> +#define isLeaderApplyWorker(worker) ((worker)->type == LR_WORKER_APPLY)\n> +#define isParallelApplyWorker(worker) ((worker)->type ==\n> LR_WORKER_APPLY_PARALLEL)\n> +#define isTablesyncWorker(worker) ((worker)->type == LR_WORKER_TABLESYNC)\n>\n> Can the above start with \"Is\" instead of \"is\" similar to\n> IsLogicalWorker and IsLogicalParallelApplyWorker?\n>\n\nDone as suggested.\n\n> 3.\n> + worker->type =\n> + OidIsValid(relid) ? LR_WORKER_TABLESYNC :\n> + is_parallel_apply_worker ? 
LR_WORKER_APPLY_PARALLEL :\n> + LR_WORKER_APPLY;\n>\n> Perhaps, an if-else is better for readability?\n>\n> if (OidIsValid(relid))\n> worker->type = LR_WORKER_TABLESYNC;\n> else if (is_parallel_apply_worker)\n> worker->type = LR_WORKER_APPLY_PARALLEL;\n> else\n> worker->type = LR_WORKER_APPLY;\n>\n\nDone as suggested.\n\n> 4.\n> +/* Different kinds of workers */\n> +typedef enum LogicalRepWorkerType\n> +{\n> + LR_WORKER_TABLESYNC,\n> + LR_WORKER_APPLY,\n> + LR_WORKER_APPLY_PARALLEL\n> +} LogicalRepWorkerType;\n>\n> Have a LR_WORKER_UNKNOWN = 0 and set it in logicalrep_worker_cleanup()?\n>\n\nDone as suggested.\n\n> 5.\n> + case LR_WORKER_APPLY:\n> + return (rel->state == SUBREL_STATE_READY ||\n> + (rel->state == SUBREL_STATE_SYNCDONE &&\n> + rel->statelsn <= remote_final_lsn));\n>\n> Not necessary, but a good idea to have a default: clause and error out\n> saying wrong logical replication worker type?\n>\n\nNot done. Switching on the enum gives a compiler warning if the case\nis not handled. All enums are handled.\n\n> 6.\n> + case LR_WORKER_APPLY_PARALLEL:\n> + /*\n> + * Skip for parallel apply workers because they only\n> operate on tables\n> + * that are in a READY state. See pa_can_start() and\n> + * should_apply_changes_for_rel().\n> + */\n> + break;\n>\n> I'd better keep this if-else as-is instead of a switch case with\n> nothing for parallel apply worker.\n>\n\nNot done. IIUC using a switch, the compiler can optimize the code to a\nsingle \"jump\" thereby saving the extra type-check the HEAD code is\ndoing. 
Admittedly, it’s only a nanosecond saving, so it is no problem\nto revert this change, but why waste any CPU time unless there is a\nreason to?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 2 Aug 2023 12:31:56 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Jul 31, 2023 at 11:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> +1. BTW, do we need the below functions (am_tablesync_worker(),\n> am_leader_apply_worker()) after this work?\n> static inline bool\n> am_tablesync_worker(void)\n> {\n> - return OidIsValid(MyLogicalRepWorker->relid);\n> + return isTablesyncWorker(MyLogicalRepWorker);\n> }\n>\n> static inline bool\n> am_leader_apply_worker(void)\n> {\n> - return (!am_tablesync_worker() &&\n> - !isParallelApplyWorker(MyLogicalRepWorker));\n> + return isLeaderApplyWorker(MyLogicalRepWorker);\n> }\n>\n\nThe am_xxx functions are removed now in the v2-0001 patch. See [1].\n\nThe replacement set of macros (the ones with no arg) are not strictly\nnecessary, except I felt it would make the code unnecessarily verbose\nif we insist to pass MyLogicalRepWorker everywhere from the callers in\nworker.c / tablesync.c / applyparallelworker.c.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 2 Aug 2023 12:40:04 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 8:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> The am_xxx functions are removed now in the v2-0001 patch. 
See [1].\n>\n> The replacement set of macros (the ones with no arg) are not strictly\n> necessary, except I felt it would make the code unnecessarily verbose\n> if we insist to pass MyLogicalRepWorker everywhere from the callers in\n> worker.c / tablesync.c / applyparallelworker.c.\n>\n\nAgreed but having a dual set of macros is also not clean. Can we\nprovide only a unique set of inline functions instead?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Aug 2023 08:30:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Jul 31, 2023 at 10:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> BACKGROUND:\n>\n> The logical replication has different worker \"types\" (e.g. apply\n> leader, apply parallel, tablesync).\n>\n> They all use a common structure called LogicalRepWorker, but at times\n> it is necessary to know what \"type\" of worker a given LogicalRepWorker\n> represents.\n>\n> Once, there were just apply workers and tablesync workers - these were\n> easily distinguished because only tablesync workers had a valid\n> 'relid' field. Next, parallel-apply workers were introduced - these\n> are distinguishable from the apply leaders by the value of\n> 'leader_pid' field.\n>\n>\n> PROBLEM:\n>\n> IMO, deducing the worker's type by examining multiple different field\n> values seems a dubious way to do it. 
This maybe was reasonable enough\n> when there were only 2 types, but as more get added it becomes\n> increasingly complicated.\n>\n> SOLUTION:\n>\n> AFAIK none of the complications is necessary anyway; the worker type\n> is already known at launch time, and it never changes during the life\n> of the process.\n>\n> The attached Patch 0001 introduces a new enum 'type' field, which is\n> assigned during the launch.\n>\n> ~\n>\n> This change not only simplifies the code, but it also permits other\n> code optimizations, because now we can switch on the worker enum type,\n> instead of using cascading if/else. (see Patch 0002).\n>\n> Thoughts?\n\nI can see the problem you stated and it's true that the worker's type\nnever changes during its lifetime. But I'm not sure we really need to\nadd a new variable to LogicalRepWorker since we already have enough\ninformation there to distinguish the worker's type.\n\nCurrently, we deduce the worker's type by checking some fields that\nnever change such as relid and leader_pid. It's fine to me as long as\nwe encapsulate the deduction of worker's type by macros (or inline\nfunctions). 
Even with the proposed patch (0001 patch), we still need a\nsimilar logic to determine the worker's type.\n\nIf we want to change both process_syncing_tables() and\nshould_apply_changes_for_rel() to use the switch statement instead of\nusing cascading if/else, I think we can have a function, say\nLogicalRepWorkerType(), that returns LogicalRepWorkerType of the given\nworker.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:34:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 3:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 31, 2023 at 10:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > BACKGROUND:\n> >\n> > The logical replication has different worker \"types\" (e.g. apply\n> > leader, apply parallel, tablesync).\n> >\n> > They all use a common structure called LogicalRepWorker, but at times\n> > it is necessary to know what \"type\" of worker a given LogicalRepWorker\n> > represents.\n> >\n> > Once, there were just apply workers and tablesync workers - these were\n> > easily distinguished because only tablesync workers had a valid\n> > 'relid' field. Next, parallel-apply workers were introduced - these\n> > are distinguishable from the apply leaders by the value of\n> > 'leader_pid' field.\n> >\n> >\n> > PROBLEM:\n> >\n> > IMO, deducing the worker's type by examining multiple different field\n> > values seems a dubious way to do it. 
This maybe was reasonable enough\n> > when there were only 2 types, but as more get added it becomes\n> > increasingly complicated.\n> >\n> > SOLUTION:\n> >\n> > AFAIK none of the complications is necessary anyway; the worker type\n> > is already known at launch time, and it never changes during the life\n> > of the process.\n> >\n> > The attached Patch 0001 introduces a new enum 'type' field, which is\n> > assigned during the launch.\n> >\n> > ~\n> >\n> > This change not only simplifies the code, but it also permits other\n> > code optimizations, because now we can switch on the worker enum type,\n> > instead of using cascading if/else. (see Patch 0002).\n> >\n> > Thoughts?\n>\n> I can see the problem you stated and it's true that the worker's type\n> never changes during its lifetime. But I'm not sure we really need to\n> add a new variable to LogicalRepWorker since we already have enough\n> information there to distinguish the worker's type.\n>\n> Currently, we deduce the worker's type by checking some fields that\n> never change such as relid and leader_piid. It's fine to me as long as\n> we encapcel the deduction of worker's type by macros (or inline\n> functions). Even with the proposed patch (0001 patch), we still need a\n> similar logic to determine the worker's type.\n\nThanks for the feedback.\n\nI agree that the calling code will not look very different\nwith/without this patch 0001, because everything is hidden by the\nmacros/functions. But those HEAD macros/functions are also hiding the\ninefficiencies of the type deduction -- e.g. IMO it is quite strange\nthat we can only recognize the worker is an \"apply worker\" by\neliminating the other 2 possibilities.\n\nAs further background, the idea for this patch came from considering\nanother thread to \"re-use\" workers [1]. For example, when thinking\nabout re-using tablesync workers the HEAD am_tablesync_worker()\nfunction begins to fall apart -- ie. 
it does not let you sensibly\ndisassociate a tablesync worker from its relid (which you might want\nto do if tablesync workers were \"pooled\") because in the HEAD code any\ntablesync worker without a relid is (by definition) NOT recognized as\na tablesync worker anymore!\n\nThose above issues just disappear when a type field is added.\n\n>\n> If we want to change both process_syncing_tables() and\n> should_apply_changes_for_rel() to use the switch statement instead of\n> using cascading if/else, I think we can have a function, say\n> LogicalRepWorkerType(), that returns LogicalRepWorkerType of the given\n> worker.\n>\n\nThe point of that \"switch\" in patch 0002 was to demonstrate that with\na worker-type enum we can write more *efficient* code in some places\nthan the current cascading if/else. I think making a \"switch\" just for\nthe sake of it, by adding more functions that hide yet more work, is\ngoing in the opposite direction.\n\n------\n[1] reuse workers -\nhttps://www.postgresql.org/message-id/flat/CAGPVpCTq%3DrUDd4JUdaRc1XUWf4BrH2gdSNf3rtOMUGj9rPpfzQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 2 Aug 2023 16:27:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 1:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 2, 2023 at 8:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > The am_xxx functions are removed now in the v2-0001 patch. See [1].\n> >\n> > The replacement set of macros (the ones with no arg) are not strictly\n> > necessary, except I felt it would make the code unnecessarily verbose\n> > if we insist to pass MyLogicalRepWorker everywhere from the callers in\n> > worker.c / tablesync.c / applyparallelworker.c.\n> >\n>\n> Agreed but having a dual set of macros is also not clean. 
Can we\n> provide only a unique set of inline functions instead?\n>\n\nWe can't use the same names for both with/without-parameter functions\nbecause there is no function overloading in C. In patch v3-0001 I've\nreplaced the \"dual set of macros\", with a single inline function of a\ndifferent name, and one set of space-saving macros.\n\nPSA v3\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 2 Aug 2023 16:43:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 12:14 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> We can't use the same names for both with/without-parameter functions\n> because there is no function overloading in C. In patch v3-0001 I've\n> replaced the \"dual set of macros\", with a single inline function of a\n> different name, and one set of space-saving macros.\n>\n> PSA v3\n\nQuick thoughts:\n\n1.\n+typedef enum LogicalRepWorkerType\n+{\n+ TYPE_UNKNOWN = 0,\n+ TYPE_TABLESYNC_WORKER,\n+ TYPE_APPLY_WORKER,\n+ TYPE_PARALLEL_APPLY_WORKER\n+} LogicalRepWorkerType;\n\n-1 for these names starting with prefix TYPE_, in fact LR_ looked better.\n\n2.\n+is_worker_type(LogicalRepWorker *worker, LogicalRepWorkerType type)\n\nThis function name looks too generic, an element of logical\nreplication is better in the name.\n\n3.\n+#define IsLeaderApplyWorker() is_worker_type(MyLogicalRepWorker,\nTYPE_APPLY_WORKER)\n+#define IsParallelApplyWorker() is_worker_type(MyLogicalRepWorker,\nTYPE_PARALLEL_APPLY_WORKER)\n+#define IsTablesyncWorker() is_worker_type(MyLogicalRepWorker,\nTYPE_TABLESYNC_WORKER)\n\nMy thoughts were around removing am_XXX_worker() and\nIsXXXWorker(&worker) functions and just have IsXXXWorker(&worker)\nalone with using IsXXXWorker(MyLogicalRepWorker) in places where\nam_XXX_worker() is used. 
If implementing this leads to something like\nthe above with is_worker_type, -1 from me.\n\n> Done. I converged everything to the macro-naming style as suggested,\n> but I chose not to pass MyLogicalRepWorker as a parameter for the\n> normal (am_xxx case) otherwise I felt it would make the calling code\n> unnecessarily verbose.\n\nWith is_worker_type, the calling code now looks even more verbose. I\ndon't think IsLeaderApplyWorker(MyLogicalRepWorker) is a bad idea.\nThis unifies multiple worker type functions into one and makes code\nsimple.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:45:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 2:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Aug 2, 2023 at 12:14 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > We can't use the same names for both with/without-parameter functions\n> > because there is no function overloading in C. 
In patch v3-0001 I've\n> > replaced the \"dual set of macros\", with a single inline function of a\n> > different name, and one set of space-saving macros.\n> >\n> > PSA v3\n>\n> Quick thoughts:\n>\n> 1.\n> +typedef enum LogicalRepWorkerType\n> +{\n> + TYPE_UNKNOWN = 0,\n> + TYPE_TABLESYNC_WORKER,\n> + TYPE_APPLY_WORKER,\n> + TYPE_PARALLEL_APPLY_WORKER\n> +} LogicalRepWorkerType;\n>\n> -1 for these names starting with prefix TYPE_, in fact LR_ looked better.\n>\n> 2.\n> +is_worker_type(LogicalRepWorker *worker, LogicalRepWorkerType type)\n>\n> This function name looks too generic, an element of logical\n> replication is better in the name.\n>\n> 3.\n> +#define IsLeaderApplyWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_APPLY_WORKER)\n> +#define IsParallelApplyWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_PARALLEL_APPLY_WORKER)\n> +#define IsTablesyncWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_TABLESYNC_WORKER)\n>\n> My thoughts were around removing am_XXX_worker() and\n> IsXXXWorker(&worker) functions and just have IsXXXWorker(&worker)\n> alone with using IsXXXWorker(MyLogicalRepWorker) in places where\n> am_XXX_worker() is used. If implementing this leads to something like\n> the above with is_worker_type, -1 from me.\n>\n\nWhat I was suggesting to Peter was to have something like:\nstatic inline bool\nam_tablesync_worker(void)\n{\nreturn (MyLogicalRepWorker->type == TYPE_TABLESYNC_WORKER);\n}\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:56:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "Hi,\n\nPeter Smith <smithpb2250@gmail.com>, 2 Ağu 2023 Çar, 09:27 tarihinde şunu\nyazdı:\n\n> On Wed, Aug 2, 2023 at 3:35 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >\n> > I can see the problem you stated and it's true that the worker's type\n> > never changes during its lifetime. 
But I'm not sure we really need to\n> > add a new variable to LogicalRepWorker since we already have enough\n> > information there to distinguish the worker's type.\n> >\n> > Currently, we deduce the worker's type by checking some fields that\n> > never change such as relid and leader_piid. It's fine to me as long as\n> > we encapcel the deduction of worker's type by macros (or inline\n> > functions). Even with the proposed patch (0001 patch), we still need a\n> > similar logic to determine the worker's type.\n>\n> Thanks for the feedback.\n>\n> I agree that the calling code will not look very different\n> with/without this patch 0001, because everything is hidden by the\n> macros/functions. But those HEAD macros/functions are also hiding the\n> inefficiencies of the type deduction -- e.g. IMO it is quite strange\n> that we can only recognize the worker is an \"apply worker\" by\n> eliminating the other 2 possibilities.\n>\n\nIsn't the apply worker type still decided by eliminating the other choices\neven with the patch?\n\n /* Prepare the worker slot. */\n+ if (OidIsValid(relid))\n+ worker->type = TYPE_TABLESYNC_WORKER;\n+ else if (is_parallel_apply_worker)\n+ worker->type = TYPE_PARALLEL_APPLY_WORKER;\n+ else\n+ worker->type = TYPE_APPLY_WORKER;\n\n\n> 3.\n> +#define IsLeaderApplyWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_APPLY_WORKER)\n> +#define IsParallelApplyWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_PARALLEL_APPLY_WORKER)\n> +#define IsTablesyncWorker() is_worker_type(MyLogicalRepWorker,\n> TYPE_TABLESYNC_WORKER)\n>\n> My thoughts were around removing am_XXX_worker() and\n> IsXXXWorker(&worker) functions and just have IsXXXWorker(&worker)\n> alone with using IsXXXWorker(MyLogicalRepWorker) in places where\n> am_XXX_worker() is used. If implementing this leads to something like\n> the above with is_worker_type, -1 from me.\n>\n\nI agree that having both is_worker_type and IsXXXWorker makes the code more\nconfusing. 
Especially both type check ways are used in the same\nfile/function as follows:\n\n logicalrep_worker_detach(void)\n> {\n> /* Stop the parallel apply workers. */\n> - if (am_leader_apply_worker())\n> + if (IsLeaderApplyWorker())\n> {\n> List *workers;\n> ListCell *lc;\n> @@ -749,7 +756,7 @@ logicalrep_worker_detach(void)\n> {\n> LogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n>\n> - if (isParallelApplyWorker(w))\n> + if (is_worker_type(w, TYPE_PARALLEL_APPLY_WORKER))\n> logicalrep_worker_stop_internal(w, SIGTERM);\n> }\n\n\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft\n", "msg_date": "Wed, 2 Aug 2023 14:57:35 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 2, 2023 at 12:14 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n>\n> We can't use the same names for both with/without-parameter functions\n> because there is no function overloading in C. In patch v3-0001 I've\n> replaced the \"dual set of macros\", with a single inline function of a\n> different name, and one set of space-saving macros.\n>\n\nI think it's a good idea to add worker type field. Trying to deduce it\nbased on the contents of the structure isn't good. RelOptInfo, for example,\nhas RelOptKind. But RelOptInfo has become really large with members\nrequired by all RelOptKinds crammed under the same structure. If we can\navoid that here at the beginning itself, that will be great. May be we\nshould create a union of type specific members while we are introducing the\ntype. This will also provide some protection against unrelated members\nbeing (mis)used in the future.\n\n-- \nBest Wishes,\nAshutosh Bapat\n", "msg_date": "Thu, 3 Aug 2023 14:20:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "Thank you for the feedback received so far. 
Sorry, I have become busy\nlately with other work.\n\nIIUC there is a general +1 for the idea, but I still have some\njuggling necessary to make the functions/macros of patch 0001\nacceptable to everybody.\n\nThe difficulties arise from:\n- no function overloading in C\n- ideally, we prefer the same names for everything (e.g. instead of\ndual-set macros)\n- but launcher code calls need to pass param (other code always uses\nMyLogicalRepWorker)\n- would be nice (although not mandatory) to not have to always pass\nMyLogicalRepWorker as a param.\n\nAnyway, I will work towards finding some compromise on this next week.\nCurrently, I feel the best choice is to go with what Bharath suggested\nin the first place -- just always pass the parameter (including\npassing MyLogicalRepWorker). I will think more about it...\n\n------\n\nMeanwhile, here are some replies to other feedback received:\n\nAshutosh -- \"May be we should create a union of type specific members\" [1].\n\nYes, I was wondering this myself, but I won't pursue this yet until\ngetting over the hurdle of finding acceptable functions/macros for\npatch 0001. Hopefully, we can come back to this idea.\n\n~~\n\nMelih -- \"Isn't the apply worker type still decided by eliminating\nthe other choices even with the patch?\".\n\nAFAIK the only case of that now is the one-time check in the\nlogicalrep_worker_launch() function. IMO, that is quite a different\nproposition to the HEAD macros that have to make that deduction\n10,000s of times in executing code. Actually, even the caller knows\nexactly the kind of worker it wants to launch so we can pass the\nLogicalRepWorkerType directly to logicalrep_worker_launch() and\neliminate even this one-time-check. I can code it that way in the next\npatch version.\n\n~~\n\nBharath -- \"-1 for these names starting with prefix TYPE_, in fact LR_\nlooked better.\"\n\nHmmm. I'm not sure what is best. 
Of the options below I prefer\n\"WORKER_TYPE_xxx\", but I don't mind so long as there is a consensus.\n\nLR_APPLY_WORKER\nLR_PARALLEL_APPLY_WORKER\nLR_TABLESYNC_WORKER\n\nTYPE_APPLY_WORKER\nTYPE_PARALLEL_APPLY_WORKER\nTYPE_TABLESYNC_WORKER\n\nWORKER_TYPE_APPLY\nWORKER_TYPE_PARALLEL_APPLY\nWORKER_TYPE_TABLESYNC\n\nAPPLY_WORKER\nPARALLEL_APPLY_WORKER\nTABLESYNC_WORKER\n\nAPPLY\nPARALLEL_APPLY\nTABLESYNC\n\n------\n[1] Ashutosh - https://www.postgresql.org/message-id/CAExHW5uALiimrdpdO0vwuDivD99TW%2B_9vvfFsErVNzq1ehYV9Q%40mail.gmail.com\n[2] Melih - https://www.postgresql.org/message-id/CAGPVpCRZ-zEOa2Lkvw%2BiTU3NhJzfbGwH1dU7Htreup--8e5nxg%40mail.gmail.com\n[3] Bharath - https://www.postgresql.org/message-id/CALj2ACVro6oCsTg_gpYu%2BV_LPMSgk4wjmSPDk8d5thArWNRoRQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 4 Aug 2023 17:45:07 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "PSA v4 patches.\n\nOn Fri, Aug 4, 2023 at 5:45 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thank you for the feedback received so far. Sorry, I have become busy\n> lately with other work.\n>\n> IIUC there is a general +1 for the idea, but I still have some\n> juggling necessary to make the functions/macros of patch 0001\n> acceptable to everybody.\n>\n> The difficulties arise from:\n> - no function overloading in C\n> - ideally, we prefer the same names for everything (e.g. 
instead of\n> dual-set macros)\n> - but launcher code calls need to pass param (other code always uses\n> MyLogicalRepWorker)\n> - would be nice (although no mandatory) to not have to always pass\n> MyLogicalRepWorker as a param.\n>\n> Anyway, I will work towards finding some compromise on this next week.\n> Currently, I feel the best choice is to go with what Bharath suggested\n> in the first place -- just always pass the parameter (including\n> passing MyLogicalRepWorker). I will think more about it...\n\nv4-0001 uses only 3 simple inline functions. Callers always pass\nparameters as Bharath had suggested.\n\n>\n> ------\n>\n> Meanwhile, here are some replies to other feedback received:\n>\n> Ashutosh -- \"May be we should create a union of type specific members\" [1].\n>\n> Yes, I was wondering this myself, but I won't pursue this yet until\n> getting over the hurdle of finding acceptable functions/macros for\n> patch 0001. Hopefully, we can come back to this idea.\n>\n\nTo be explored later.\n\n> ~~\n>\n> Mellih -- \"Isn't the apply worker type still decided by eliminating\n> the other choices even with the patch?\".\n>\n> AFAIK the only case of that now is the one-time check in the\n> logicalrep_worker_launch() function. IMO, that is quite a different\n> proposition to the HEAD macros that have to make that deduction\n> 10,000s ot times in executing code. Actually, even the caller knows\n> exactly the kind of worker it wants to launch so we can pass the\n> LogicalRepWorkerType directly to logicalrep_worker_launch() and\n> eliminate even this one-time-check. I can code it that way in the next\n> patch version.\n>\n\nNow even the one-time checking to assign the worker type is removed.\nThe callers know the LogicalReWorkerType they want, so v4-0001 just\npasses the type into logicalrep_worker_launch()\n\n> ~~\n>\n> Barath -- \"-1 for these names starting with prefix TYPE_, in fact LR_\n> looked better.\"\n>\n> Hmmm. I'm not sure what is best. 
Of the options below I prefer\n> \"WORKER_TYPE_xxx\", but I don't mind so long as there is a consensus.\n>\n> LR_APPLY_WORKER\n> LR_PARALLEL_APPLY_WORKER\n> LR_TABLESYNC_WORKER\n>\n> TYPE_APPLY_WORKERT\n> TYPE_PARALLEL_APPLY_WORKER\n> TYPE_TABLESYNC_WORKER\n>\n> WORKER_TYPE_APPLY\n> WORKER_TYPE_PARALLEL_APPLY\n> WORKER_TYPE_TABLESYNC\n>\n> APPLY_WORKER\n> PARALLEL_APPLY_WORKER\n> TABLESYNC_WORKER\n>\n> APPLY\n> PARALLEL_APPLY\n> TABLESYNC\n>\n\nFor now, in v4-0001 these are called WORKERTYPE_xxx. Please do propose\nbetter names if these are no good.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 8 Aug 2023 18:09:20 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Tue, Aug 8, 2023 at 1:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 4, 2023 at 5:45 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> v4-0001 uses only 3 simple inline functions. Callers always pass\n> parameters as Bharath had suggested.\n>\n\n*\n- Assert(am_leader_apply_worker());\n+ Assert(is_leader_apply_worker(MyLogicalRepWorker));\n...\n- if (am_leader_apply_worker())\n+ if (is_leader_apply_worker(MyLogicalRepWorker))\n\nPassing everywhere MyLogicalRepWorker not only increased the code\nchange footprint but doesn't appear any better to me. Instead, let\nam_parallel_apply_worker() keep calling\nisParallelApplyWorker(MyLogicalRepWorker) as it is doing now. 
I feel\neven if you or others feel that is a better idea, we can debate it\nseparately after the main patch is done because as far as I understand\nthat is not the core idea of this proposal.\n\n* If you do the above then there won't be a need to change the\nvariable name is_parallel_apply_worker in logicalrep_worker_launch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Aug 2023 11:48:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Wed, Aug 9, 2023 at 4:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 8, 2023 at 1:39 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Aug 4, 2023 at 5:45 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > v4-0001 uses only 3 simple inline functions. Callers always pass\n> > parameters as Bharath had suggested.\n> >\n>\n> *\n> - Assert(am_leader_apply_worker());\n> + Assert(is_leader_apply_worker(MyLogicalRepWorker));\n> ...\n> - if (am_leader_apply_worker())\n> + if (is_leader_apply_worker(MyLogicalRepWorker))\n>\n> Passing everywhere MyLogicalRepWorker not only increased the code\n> change footprint but doesn't appear any better to me. Instead, let\n> am_parallel_apply_worker() keep calling\n> isParallelApplyWorker(MyLogicalRepWorker) as it is doing now. I feel\n> even if you or others feel that is a better idea, we can debate it\n> separately after the main patch is done because as far as I understand\n> that is not the core idea of this proposal.\n\nRight, those changes were not really core. Reverted as suggested. 
PSA v5.\n\n>\n> * If you do the above then there won't be a need to change the\n> variable name is_parallel_apply_worker in logicalrep_worker_launch.\n>\n\nDone.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 10 Aug 2023 12:20:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Thu, Aug 10, 2023 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> >\n> > * If you do the above then there won't be a need to change the\n> > variable name is_parallel_apply_worker in logicalrep_worker_launch.\n> >\n>\n> Done.\n>\n\nI don't think the addition of two new macros isTablesyncWorker() and\nisLeaderApplyWorker() adds much value, so removed those and ran\npgindent. I am planning to commit this patch early next week unless\nyou or others have any comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 11 Aug 2023 15:03:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Fri, Aug 11, 2023 at 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 10, 2023 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > >\n> > > * If you do the above then there won't be a need to change the\n> > > variable name is_parallel_apply_worker in logicalrep_worker_launch.\n> > >\n> >\n> > Done.\n> >\n>\n> I don't think the addition of two new macros isTablesyncWorker() and\n> isLeaderApplyWorker() adds much value, so removed those and ran\n> pgindent. I am planning to commit this patch early next week unless\n> you or others have any comments.\n>\n\nThanks for considering this patch fit for pushing.\n\nActually, I recently found 2 more overlooked places in the launcher.c\ncode which can benefit from using the isTablesyncWorker(w) macro that\nwas removed in patch v6-0001.\n\nI have posted another v7. 
(v7-0001 is identical to v6-0001). The\nv7-0002 patch has the isTablesyncWorker changes. I think wherever\npossible it is better to check the worker-type via macro instead of\ndeducing it by fields like 'relid', and patch v7-0002 makes the code\nmore consistent with other nearby isParallelApplyWorker checks in\nlauncher.c\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 11 Aug 2023 20:10:35 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Fri, Aug 11, 2023 at 3:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 11, 2023 at 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 10, 2023 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > >\n> > > > * If you do the above then there won't be a need to change the\n> > > > variable name is_parallel_apply_worker in logicalrep_worker_launch.\n> > > >\n> > >\n> > > Done.\n> > >\n> >\n> > I don't think the addition of two new macros isTablesyncWorker() and\n> > isLeaderApplyWorker() adds much value, so removed those and ran\n> > pgindent. I am planning to commit this patch early next week unless\n> > you or others have any comments.\n> >\n>\n> Thanks for considering this patch fit for pushing.\n>\n> Actually, I recently found 2 more overlooked places in the launcher.c\n> code which can benefit from using the isTablesyncWorker(w) macro that\n> was removed in patch v6-0001.\n>\n\n@@ -1301,7 +1301,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)\n worker_pid = worker.proc->pid;\n\n values[0] = ObjectIdGetDatum(worker.subid);\n- if (OidIsValid(worker.relid))\n+ if (isTablesyncWorker(&worker))\n values[1] = ObjectIdGetDatum(worker.relid);\n\nI don't see this as a good fit for using isTablesyncWorker(). 
If we\nwere returning worker_type then using it would be okay.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Aug 2023 16:43:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Fri, Aug 11, 2023 at 9:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 11, 2023 at 3:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Aug 11, 2023 at 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 10, 2023 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > * If you do the above then there won't be a need to change the\n> > > > > variable name is_parallel_apply_worker in logicalrep_worker_launch.\n> > > > >\n> > > >\n> > > > Done.\n> > > >\n> > >\n> > > I don't think the addition of two new macros isTablesyncWorker() and\n> > > isLeaderApplyWorker() adds much value, so removed those and ran\n> > > pgindent. I am planning to commit this patch early next week unless\n> > > you or others have any comments.\n> > >\n> >\n> > Thanks for considering this patch fit for pushing.\n> >\n> > Actually, I recently found 2 more overlooked places in the launcher.c\n> > code which can benefit from using the isTablesyncWorker(w) macro that\n> > was removed in patch v6-0001.\n> >\n>\n> @@ -1301,7 +1301,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)\n> worker_pid = worker.proc->pid;\n>\n> values[0] = ObjectIdGetDatum(worker.subid);\n> - if (OidIsValid(worker.relid))\n> + if (isTablesyncWorker(&worker))\n> values[1] = ObjectIdGetDatum(worker.relid);\n>\n> I don't see this as a good fit for using isTablesyncWorker(). 
If we\n> were returning worker_type then using it would be okay.\n\nYeah, I also wasn't very sure about that one, except it seems\nanalogous to the existing code immediately below it, where you could\nsay the same thing:\n if (isParallelApplyWorker(&worker))\n values[3] = Int32GetDatum(worker.leader_pid);\n\nWhatever you think is best for that one is fine by me.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sat, 12 Aug 2023 09:31:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Sat, Aug 12, 2023 at 5:01 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 11, 2023 at 9:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Aug 11, 2023 at 3:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Fri, Aug 11, 2023 at 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Aug 10, 2023 at 7:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > >\n> > > > > >\n> > > > > > * If you do the above then there won't be a need to change the\n> > > > > > variable name is_parallel_apply_worker in logicalrep_worker_launch.\n> > > > > >\n> > > > >\n> > > > > Done.\n> > > > >\n> > > >\n> > > > I don't think the addition of two new macros isTablesyncWorker() and\n> > > > isLeaderApplyWorker() adds much value, so removed those and ran\n> > > > pgindent. 
I am planning to commit this patch early next week unless\n> > > > you or others have any comments.\n> > > >\n> > >\n> > > Thanks for considering this patch fit for pushing.\n> > >\n> > > Actually, I recently found 2 more overlooked places in the launcher.c\n> > > code which can benefit from using the isTablesyncWorker(w) macro that\n> > > was removed in patch v6-0001.\n> > >\n> >\n> > @@ -1301,7 +1301,7 @@ pg_stat_get_subscription(PG_FUNCTION_ARGS)\n> > worker_pid = worker.proc->pid;\n> >\n> > values[0] = ObjectIdGetDatum(worker.subid);\n> > - if (OidIsValid(worker.relid))\n> > + if (isTablesyncWorker(&worker))\n> > values[1] = ObjectIdGetDatum(worker.relid);\n> >\n> > I don't see this as a good fit for using isTablesyncWorker(). If we\n> > were returning worker_type then using it would be okay.\n>\n> Yeah, I also wasn't very sure about that one, except it seems\n> analogous to the existing code immediately below it, where you could\n> say the same thing:\n> if (isParallelApplyWorker(&worker))\n> values[3] = Int32GetDatum(worker.leader_pid);\n>\n\nFair point. I think it is better to keep the code consistent. 
So, I'll\nmerge your changes and push the patch early next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 12 Aug 2023 09:33:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "The main patch for adding the worker type enum has been pushed [1].\n\nHere is the remaining (rebased) patch for changing some previous\ncascading if/else to switch on the LogicalRepWorkerType enum instead.\n\nPSA v8.\n\n------\n[1] https://github.com/postgres/postgres/commit/2a8b40e3681921943a2989fd4ec6cdbf8766566c\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 14 Aug 2023 16:38:29 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Aug 14, 2023 at 12:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> The main patch for adding the worker type enum has been pushed [1].\n>\n> Here is the remaining (rebased) patch for changing some previous\n> cascading if/else to switch on the LogicalRepWorkerType enum instead.\n>\n\nI see this as being useful if we plan to add more worker types. 
Does\nanyone else see this remaining patch as an improvement?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Aug 2023 08:50:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Fri, Aug 18, 2023 at 8:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 14, 2023 at 12:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > The main patch for adding the worker type enum has been pushed [1].\n> >\n> > Here is the remaining (rebased) patch for changing some previous\n> > cascading if/else to switch on the LogicalRepWorkerType enum instead.\n> >\n>\n> I see this as being useful if we plan to add more worker types. Does\n> anyone else see this remaining patch as an improvement?\n>\n\nI feel it does give a tad bit more clarity for cases where we have\n'else' part with no clear comments or relevant keywords. As an\nexample, in function 'should_apply_changes_for_rel' , we have:\n else\n return (rel->state == SUBREL_STATE_READY ||\n (rel->state == SUBREL_STATE_SYNCDONE &&\n rel->statelsn <= remote_final_lsn));\n\nIt is difficult to figure out which worker is this if I do not know\nthe concept completely; 'case WORKERTYPE_APPLY' makes it better for\nthe reader to understand.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 18 Aug 2023 09:19:10 +0530", "msg_from": "shveta malik <shveta.malik@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Friday, August 18, 2023 11:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Aug 14, 2023 at 12:08 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > The main patch for adding the worker type enum has been pushed [1].\r\n> >\r\n> > Here is the remaining (rebased) patch for changing some previous\r\n> > cascading if/else to switch on the LogicalRepWorkerType enum instead.\r\n> >\r\n> \r\n> I see 
this as being useful if we plan to add more worker types. Does anyone else\r\n> see this remaining patch as an improvement?\r\n\r\n+1\r\n\r\nI have one comment for the new error message.\r\n\r\n+\t\tcase WORKERTYPE_UNKNOWN:\r\n+\t\t\tereport(ERROR, errmsg_internal(\"should_apply_changes_for_rel: Unknown worker type\"));\r\n\r\nI think reporting an ERROR in this case is fine. However, I would suggest\r\nrefraining from mentioning the function name in the error message, as\r\nrecommended in the error style document [1]. Also, it appears we could use\r\nelog() here.\r\n\r\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Fri, 18 Aug 2023 08:16:41 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Fri, Aug 18, 2023 at 6:16 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Friday, August 18, 2023 11:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Aug 14, 2023 at 12:08 PM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > >\n> > > The main patch for adding the worker type enum has been pushed [1].\n> > >\n> > > Here is the remaining (rebased) patch for changing some previous\n> > > cascading if/else to switch on the LogicalRepWorkerType enum instead.\n> > >\n> >\n> > I see this as being useful if we plan to add more worker types. Does anyone else\n> > see this remaining patch as an improvement?\n>\n> +1\n>\n> I have one comment for the new error message.\n>\n> + case WORKERTYPE_UNKNOWN:\n> + ereport(ERROR, errmsg_internal(\"should_apply_changes_for_rel: Unknown worker type\"));\n>\n> I think reporting an ERROR in this case is fine. However, I would suggest\n> refraining from mentioning the function name in the error message, as\n> recommended in the error style document [1]. 
Also, it appears we could use\n> elog() here.\n>\n> [1] https://www.postgresql.org/docs/devel/error-style-guide.html\n\nOK. Modified as suggested. Anyway, getting these errors should not\neven be possible, so I added a comment to emphasise that.\n\nPSA v9\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 21 Aug 2023 10:04:05 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Aug 21, 2023 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Aug 18, 2023 at 6:16 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, August 18, 2023 11:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 14, 2023 at 12:08 PM Peter Smith <smithpb2250@gmail.com>\n> > > wrote:\n> > > >\n> > > > The main patch for adding the worker type enum has been pushed [1].\n> > > >\n> > > > Here is the remaining (rebased) patch for changing some previous\n> > > > cascading if/else to switch on the LogicalRepWorkerType enum instead.\n> > > >\n> > >\n> > > I see this as being useful if we plan to add more worker types. Does anyone else\n> > > see this remaining patch as an improvement?\n> >\n> > +1\n> >\n> > I have one comment for the new error message.\n> >\n> > + case WORKERTYPE_UNKNOWN:\n> > + ereport(ERROR, errmsg_internal(\"should_apply_changes_for_rel: Unknown worker type\"));\n> >\n> > I think reporting an ERROR in this case is fine. However, I would suggest\n> > refraining from mentioning the function name in the error message, as\n> > recommended in the error style document [1]. Also, it appears we could use\n> > elog() here.\n> >\n> > [1] https://www.postgresql.org/docs/devel/error-style-guide.html\n>\n> OK. Modified as suggested. Anyway, getting these errors should not\n> even be possible, so I added a comment to emphasise that.\n>\n> PSA v9\n>\n\nLGTM. 
I'll push this tomorrow unless there are any more comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 21 Aug 2023 15:48:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Mon, Aug 21, 2023 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 21, 2023 at 5:34 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA v9\n> >\n>\n> LGTM. I'll push this tomorrow unless there are any more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 22 Aug 2023 13:54:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a LogicalRepWorker type field" }, { "msg_contents": "On Tue, Aug 22, 2023 at 6:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Aug 21, 2023 at 3:48 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Mon, Aug 21, 2023 at 5:34 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > >\n> > > PSA v9\n> > >\n> >\n> > LGTM. I'll push this tomorrow unless there are any more comments.\n> >\n>\n> Pushed.\n>\n\nThanks for pushing this. The CF entry has been updated [1]\n\n------\n[1] . https://commitfest.postgresql.org/44/4472/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 23 Aug 2023 09:10:23 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a LogicalRepWorker type field" } ]
[ { "msg_contents": "Hi,\n\nWhile working on [1], I noticed $SUBJECT: commit e7cb7ee14 failed to\nupdate comments for the CustomPath struct in pathnodes.h, and commit\nf49842d1e failed to update docs about custom scan path callbacks in\ncustom-scan.sgml, IIUC. Attached are patches for updating these,\nwhich I created separately for ease of review (patch\nupdate-custom-scan-path-comments.patch for the former and patch\nupdate-custom-scan-path-docs.patch for the latter). In the second\npatch I used almost the same text as for the\nReparameterizeForeignPathByChild callback function in fdwhandler.sgml.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK16ah9JtyVPtdqu6d%3DQGkRX%3DRAzoYQfX7%3DLZ%2BKnqwBfftg%40mail.gmail.com", "msg_date": "Mon, 31 Jul 2023 19:05:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Missing comments/docs about custom scan path" }, { "msg_contents": "On Mon, Jul 31, 2023 at 7:05 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> While working on [1], I noticed $SUBJECT: commit e7cb7ee14 failed to\n> update comments for the CustomPath struct in pathnodes.h, and commit\n> f49842d1e failed to update docs about custom scan path callbacks in\n> custom-scan.sgml, IIUC. Attached are patches for updating these,\n> which I created separately for ease of review (patch\n> update-custom-scan-path-comments.patch for the former and patch\n> update-custom-scan-path-docs.patch for the latter). 
In the second\n> patch I used almost the same text as for the\n> ReparameterizeForeignPathByChild callback function in fdwhandler.sgml.\n\nThere seem to be no objections, so I pushed both.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 3 Aug 2023 18:01:33 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing comments/docs about custom scan path" }, { "msg_contents": "On Thu, Aug 3, 2023 at 6:01 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Jul 31, 2023 at 7:05 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > While working on [1], I noticed $SUBJECT:\n\nAnother thing I would like to propose is minor adjustments to the docs\nrelated to parallel query:\n\n A custom scan provider will typically add paths for a base relation by\n setting the following hook, which is called after the core code has\n generated all the access paths it can for the relation (except for\n Gather paths, which are made after this call so that they can use\n partial paths added by the hook):\n\nFor clarity, I think \"except for Gather paths\" should be \"except for\nGather and Gather Merge paths\".\n\n Although this hook function can be used to examine, modify, or remove\n paths generated by the core system, a custom scan provider will\n typically confine itself to generating CustomPath objects and adding\n them to rel using add_path.\n\nFor clarity, I think \"adding them to rel using add_path\" should be eg,\n\"adding them to rel using add_path, or using add_partial_path if they\nare partial paths\".\n\nAttached is a patch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 29 Aug 2023 18:08:04 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing comments/docs about custom scan path" }, { "msg_contents": "On Tue, Aug 29, 2023 at 5:08 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> Another thing I would like to propose is minor 
adjustments to the docs\n> related to parallel query:\n>\n>     A custom scan provider will typically add paths for a base relation by\n>     setting the following hook, which is called after the core code has\n>     generated all the access paths it can for the relation (except for\n>     Gather paths, which are made after this call so that they can use\n>     partial paths added by the hook):\n>\n> For clarity, I think \"except for Gather paths\" should be \"except for\n> Gather and Gather Merge paths\".\n>\n>     Although this hook function can be used to examine, modify, or remove\n>     paths generated by the core system, a custom scan provider will\n>     typically confine itself to generating CustomPath objects and adding\n>     them to rel using add_path.\n>\n> For clarity, I think \"adding them to rel using add_path\" should be eg,\n> \"adding them to rel using add_path, or using add_partial_path if they\n> are partial paths\".\n\n\n+1. I can see that this change makes the doc more consistent with the\ncomments in set_rel_pathlist.\n\nThanks\nRichard", "msg_date": "Wed, 30 Aug 2023 10:05:24 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing comments/docs about custom scan path" }, { "msg_contents": "Hi Richard,\n\nOn Wed, Aug 30, 2023 at 11:05 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Aug 29, 2023 at 5:08 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> Another thing I would like to propose is minor adjustments to the docs\n>> related to parallel query:\n>>\n>> A custom scan provider will typically add paths for a base relation by\n>> setting the following hook, which is called after the core code has\n>> generated all the access paths it can for the relation (except for\n>> Gather paths, which are made after this call so that they can use\n>> partial paths added by the hook):\n>>\n>> For clarity, I think \"except for Gather paths\" should be \"except for\n>> Gather and Gather Merge paths\".\n>>\n>> Although this hook function can be used to examine, modify, or remove\n>> paths generated by the core system, a custom scan provider will\n>> typically confine itself to generating CustomPath objects and adding\n>> them to rel using add_path.\n>>\n>> For clarity, I think \"adding them to rel using add_path\" should be eg,\n>> \"adding them to rel using add_path, or using add_partial_path if they\n>> are partial paths\".\n\n> +1. I can see that this change makes the doc more consistent with the\n> comments in set_rel_pathlist.\n\nAgreed.\n\nI have committed the patch. Thanks for taking a look!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 30 Aug 2023 18:07:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Missing comments/docs about custom scan path" } ]
[ { "msg_contents": "Hi all,\n\nWhile reading the code, I realized that the following code comments\nmight not be accurate:\n\n /*\n * Pick the largest transaction (or subtransaction) and evict it from\n * memory by streaming, if possible. Otherwise, spill to disk.\n */\n if (ReorderBufferCanStartStreaming(rb) &&\n (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)\n {\n /* we know there has to be one, because the size is not zero */\n Assert(txn && rbtxn_is_toptxn(txn));\n Assert(txn->total_size > 0);\n Assert(rb->size >= txn->total_size);\n\n ReorderBufferStreamTXN(rb, txn);\n }\n\nAFAICS since ReorderBufferLargestStreamableTopTXN() returns only\ntop-level transactions, the comment above the if statement is not\nright. It would not pick a subtransaction.\n\nAlso, I'm not sure that the second comment \"we know there has to be\none, because the size is not zero\" is right since there might not be\ntop-transactions that are streamable. I think this is why we say\n\"Otherwise, spill to disk\".\n\nI've attached a patch to fix these comments. Feedback is very welcome.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 1 Aug 2023 00:15:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Inaccurate comments in ReorderBufferCheckMemoryLimit()" }, { "msg_contents": "On Mon, Jul 31, 2023 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> While reading the code, I realized that the following code comments\n> might not be accurate:\n>\n> /*\n> * Pick the largest transaction (or subtransaction) and evict it from\n> * memory by streaming, if possible. 
Otherwise, spill to disk.\n> */\n> if (ReorderBufferCanStartStreaming(rb) &&\n> (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)\n> {\n> /* we know there has to be one, because the size is not zero */\n> Assert(txn && rbtxn_is_toptxn(txn));\n> Assert(txn->total_size > 0);\n> Assert(rb->size >= txn->total_size);\n>\n> ReorderBufferStreamTXN(rb, txn);\n> }\n>\n> AFAICS since ReorderBufferLargestStreamableTopTXN() returns only\n> top-level transactions, the comment above the if statement is not\n> right. It would not pick a subtransaction.\n>\n\nI think the subtransaction case is for the spill-to-disk case as both\ncases are explained in the same comment.\n\n> Also, I'm not sure that the second comment \"we know there has to be\n> one, because the size is not zero\" is right since there might not be\n> top-transactions that are streamable.\n>\n\nI think this comment is probably referring to asserts related to the\nsize similar to spill to disk case.\n\nHow about if we just remove (or subtransaction) from the following\ncomment: \"Pick the largest transaction (or subtransaction) and evict\nit from memory by streaming, if possible. 
Otherwise, spill to disk.\"?\nThen by referring to streaming/spill-to-disk cases, one can understand\nin which cases only top-level xacts are involved and in which cases\nboth are involved.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Aug 2023 08:03:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate comments in ReorderBufferCheckMemoryLimit()" }, { "msg_contents": "On Tue, Aug 1, 2023 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 31, 2023 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > While reading the code, I realized that the following code comments\n> > might not be accurate:\n> >\n> > /*\n> > * Pick the largest transaction (or subtransaction) and evict it from\n> > * memory by streaming, if possible. Otherwise, spill to disk.\n> > */\n> > if (ReorderBufferCanStartStreaming(rb) &&\n> > (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)\n> > {\n> > /* we know there has to be one, because the size is not zero */\n> > Assert(txn && rbtxn_is_toptxn(txn));\n> > Assert(txn->total_size > 0);\n> > Assert(rb->size >= txn->total_size);\n> >\n> > ReorderBufferStreamTXN(rb, txn);\n> > }\n> >\n> > AFAICS since ReorderBufferLargestStreamableTopTXN() returns only\n> > top-level transactions, the comment above the if statement is not\n> > right. 
It would not pick a subtransaction.\n> >\n>\n> I think the subtransaction case is for the spill-to-disk case as both\n> cases are explained in the same comment.\n>\n> > Also, I'm not sure that the second comment \"we know there has to be\n> > one, because the size is not zero\" is right since there might not be\n> > top-transactions that are streamable.\n> >\n>\n> I think this comment is probably referring to asserts related to the\n> size similar to spill to disk case.\n>\n> How about if we just remove (or subtransaction) from the following\n> comment: \"Pick the largest transaction (or subtransaction) and evict\n> it from memory by streaming, if possible. Otherwise, spill to disk.\"?\n> Then by referring to streaming/spill-to-disk cases, one can understand\n> in which cases only top-level xacts are involved and in which cases\n> both are involved.\n\nSounds good. I've updated the patch accordingly.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 1 Aug 2023 17:36:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inaccurate comments in ReorderBufferCheckMemoryLimit()" }, { "msg_contents": "On Tue, Aug 1, 2023 at 2:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 1, 2023 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 31, 2023 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > While reading the code, I realized that the following code comments\n> > > might not be accurate:\n> > >\n> > > /*\n> > > * Pick the largest transaction (or subtransaction) and evict it from\n> > > * memory by streaming, if possible. 
Otherwise, spill to disk.\n> > > */\n> > > if (ReorderBufferCanStartStreaming(rb) &&\n> > > (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)\n> > > {\n> > > /* we know there has to be one, because the size is not zero */\n> > > Assert(txn && rbtxn_is_toptxn(txn));\n> > > Assert(txn->total_size > 0);\n> > > Assert(rb->size >= txn->total_size);\n> > >\n> > > ReorderBufferStreamTXN(rb, txn);\n> > > }\n> > >\n> > > AFAICS since ReorderBufferLargestStreamableTopTXN() returns only\n> > > top-level transactions, the comment above the if statement is not\n> > > right. It would not pick a subtransaction.\n> > >\n> >\n> > I think the subtransaction case is for the spill-to-disk case as both\n> > cases are explained in the same comment.\n> >\n> > > Also, I'm not sure that the second comment \"we know there has to be\n> > > one, because the size is not zero\" is right since there might not be\n> > > top-transactions that are streamable.\n> > >\n> >\n> > I think this comment is probably referring to asserts related to the\n> > size similar to spill to disk case.\n> >\n> > How about if we just remove (or subtransaction) from the following\n> > comment: \"Pick the largest transaction (or subtransaction) and evict\n> > it from memory by streaming, if possible. Otherwise, spill to disk.\"?\n> > Then by referring to streaming/spill-to-disk cases, one can understand\n> > in which cases only top-level xacts are involved and in which cases\n> > both are involved.\n>\n> Sounds good. 
I've updated the patch accordingly.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Aug 2023 07:50:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inaccurate comments in ReorderBufferCheckMemoryLimit()" }, { "msg_contents": "On Wed, Aug 2, 2023 at 11:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 1, 2023 at 2:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 1, 2023 at 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 31, 2023 at 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > While reading the code, I realized that the following code comments\n> > > > might not be accurate:\n> > > >\n> > > > /*\n> > > > * Pick the largest transaction (or subtransaction) and evict it from\n> > > > * memory by streaming, if possible. Otherwise, spill to disk.\n> > > > */\n> > > > if (ReorderBufferCanStartStreaming(rb) &&\n> > > > (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)\n> > > > {\n> > > > /* we know there has to be one, because the size is not zero */\n> > > > Assert(txn && rbtxn_is_toptxn(txn));\n> > > > Assert(txn->total_size > 0);\n> > > > Assert(rb->size >= txn->total_size);\n> > > >\n> > > > ReorderBufferStreamTXN(rb, txn);\n> > > > }\n> > > >\n> > > > AFAICS since ReorderBufferLargestStreamableTopTXN() returns only\n> > > > top-level transactions, the comment above the if statement is not\n> > > > right. 
It would not pick a subtransaction.\n> > > >\n> > >\n> > > I think the subtransaction case is for the spill-to-disk case as both\n> > > cases are explained in the same comment.\n> > >\n> > > > Also, I'm not sure that the second comment \"we know there has to be\n> > > > one, because the size is not zero\" is right since there might not be\n> > > > top-transactions that are streamable.\n> > > >\n> > >\n> > > I think this comment is probably referring to asserts related to the\n> > > size similar to spill to disk case.\n> > >\n> > > How about if we just remove (or subtransaction) from the following\n> > > comment: \"Pick the largest transaction (or subtransaction) and evict\n> > > it from memory by streaming, if possible. Otherwise, spill to disk.\"?\n> > > Then by referring to streaming/spill-to-disk cases, one can understand\n> > > in which cases only top-level xacts are involved and in which cases\n> > > both are involved.\n> >\n> > Sounds good. I've updated the patch accordingly.\n> >\n>\n> LGTM.\n\nThank you for reviewing the patch! Pushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 2 Aug 2023 15:03:29 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inaccurate comments in ReorderBufferCheckMemoryLimit()" } ]
[ { "msg_contents": "Hi,\n\nI saw that CI image builds for freebsd were failing, because 13.1, used until\nnow, is EOL. Update to 13.2, no problem, I thought. Ran a CI run against 13.2\n- unfortunately that failed. In pltcl of all places.\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5275616266682368/testrun/build/testrun/pltcl/regress/regression.diffs\n\nNotably both 13.1 and 13.2 are using tcl 8.6.13.\n\nThe code for the relevant function is this:\n\ncreate function tcl_date_week(int4,int4,int4) returns text as $$\n return [clock format [clock scan \"$2/$3/$1\"] -format \"%U\"]\n$$ language pltcl immutable;\n\nselect tcl_date_week(2010,1,26);\n\n\nIt doesn't surprise me that there are problems with above clock scan, it uses\nthe MM/DD/YYYY format without making that explicit. But why that doesn't work\non freebsd 13.2, I can't explain. It looks like tcl specifies the MM/DD/YYYY\nbit for \"free format scans\":\nhttps://www.tcl.tk/man/tcl8.6/TclCmd/clock.html#M80\n\nIt sure looks like freebsd 13.2 tcl is just busted. 
Notably it can't even\nparse what it generates:\n\necho 'puts [clock scan [clock format [clock seconds] -format \"%Y/%m/%d\"] -format \"%Y/%m/%d\"]'|tclsh8.6\n\nWhich works on 13.1 (and other operating systems), without a problem.\n\nI used truss as a very basic way to see differences between 13.1 and 13.2 -\nthe big difference was .2 failing just after\naccess(\"/etc/localtime\",F_OK)\t\t\t ERR#2 'No such file or directory'\nopen(\"/etc/localtime\",O_RDONLY,077)\t\t ERR#2 'No such file or directory'\n\nwhereas 13.1 also saw that, but then continued to\n\nissetugid()\t\t\t\t\t = 0 (0x0)\nopen(\"/usr/share/zoneinfo/UTC\",O_RDONLY,00)\t = 3 (0x3)\nfstat(3,{ mode=-r--r--r-- ,inode=351417,size=118,blksize=32768 }) = 0 (0x0)\n...\n\nwhich made me test specifying the timezone explicitly:\necho 'puts [clock scan [clock format [clock seconds] -format \"%Y/%m/%d\" -timezone \"UTC\"] -format \"%Y/%m/%d\" -timezone \"UTC\"]'|tclsh8.6\n\nWhich, surprise, works.\n\nSo does specifying the timezone via the TZ='UTC' environment variable.\n\n\nI guess there could be a libc behaviour change or such around timezones? I do\nsee\nhttps://www.freebsd.org/releases/13.2R/relnotes/\n\"tzcode has been upgraded to version 2022g with improved timezone change detection and reliability fixes.\"\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 31 Jul 2023 12:15:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Hi,\n\nOn 2023-07-31 12:15:10 -0700, Andres Freund wrote:\n> It sure looks like freebsd 13.2 tcl is just busted. 
Notably it can't even\n> parse what it generates:\n> \n> echo 'puts [clock scan [clock format [clock seconds] -format \"%Y/%m/%d\"] -format \"%Y/%m/%d\"]'|tclsh8.6\n> \n> Which works on 13.1 (and other operating systems), without a problem.\n> \n> I used truss as a very basic way to see differences between 13.1 and 13.2 -\n> the big difference was .2 failing just after\n> access(\"/etc/localtime\",F_OK)\t\t\t ERR#2 'No such file or directory'\n> open(\"/etc/localtime\",O_RDONLY,077)\t\t ERR#2 'No such file or directory'\n> \n> whereas 13.1 also saw that, but then continued to\n> \n> issetugid()\t\t\t\t\t = 0 (0x0)\n> open(\"/usr/share/zoneinfo/UTC\",O_RDONLY,00)\t = 3 (0x3)\n> fstat(3,{ mode=-r--r--r-- ,inode=351417,size=118,blksize=32768 }) = 0 (0x0)\n> ...\n> \n> which made me test specifying the timezone explicitly:\n> echo 'puts [clock scan [clock format [clock seconds] -format \"%Y/%m/%d\" -timezone \"UTC\"] -format \"%Y/%m/%d\" -timezone \"UTC\"]'|tclsh8.6\n> \n> Which, surprise, works.\n> \n> So does specifying the timezone via the TZ='UTC' environment variable.\n> \n> \n> I guess there could be a libc behaviour change or such around timezones? I do\n> see\n> https://www.freebsd.org/releases/13.2R/relnotes/\n> \"tzcode has been upgraded to version 2022g with improved timezone change detection and reliability fixes.\"\n\nOne additional datapoint: If I configure the timezone with \"tzsetup\" (which\ncreates /etc/localtime), the problem vanishes as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 31 Jul 2023 12:36:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I saw that CI image builds for freebsd were failing, because 13.1, used until\n> now, is EOL. Update to 13.2, no problem, I thought. Ran a CI run against 13.2\n> - unfortunately that failed. 
In pltcl of all places.\n\nI tried to replicate this in a freshly-installed 13.2 VM, and could\nnot, which is unsurprising because the FreeBSD installation process\ndoes not let you skip selecting a timezone --- which creates\n/etc/localtime AFAICT. So I'm unconvinced that we ought to worry\nabout the case where that's not there. How is it that the CI image\nlacks that file? (And could it be that we had one in the predecessor\n13.1 image?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jul 2023 18:33:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Hi,\n\nOn 2023-07-31 18:33:37 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I saw that CI image builds for freebsd were failing, because 13.1, used until\n> > now, is EOL. Update to 13.2, no problem, I thought. Ran a CI run against 13.2\n> > - unfortunately that failed. In pltcl of all places.\n> \n> I tried to replicate this in a freshly-installed 13.2 VM, and could\n> not, which is unsurprising because the FreeBSD installation process\n> does not let you skip selecting a timezone --- which creates\n> /etc/localtime AFAICT. So I'm unconvinced that we ought to worry\n> about the case where that's not there. How is it that the CI image\n> lacks that file?\n\nI don't know why it lacks the file - the CI image is based on the google cloud\nimage freebsd maintains / publishes ([1][2]). Which doesn't have /etc/localtime.\n\nI've now added a \"tzsetup UTC\" to the image generation, which fixes the test\nfailure.\n\n\n> (And could it be that we had one in the predecessor 13.1 image?)\n\nNo, I checked, and it's not in there either... 
It looks like the difference is\nthat 13.1 reads the UTC zoneinfo in that case, whereas 13.2 doesn't.\n\nUpstream 13.1 image:\n\ntruss date 2>&1|grep -E 'open|stat'\n...\nopen(\"/etc/localtime\",O_RDONLY,0101401200)\t ERR#2 'No such file or directory'\nopen(\"/usr/share/zoneinfo/UTC\",O_RDONLY,00)\t = 3 (0x3)\nfstat(3,{ mode=-r--r--r-- ,inode=351417,size=118,blksize=32768 }) = 0 (0x0)\nopen(\"/usr/share/zoneinfo/posixrules\",O_RDONLY,014330460400) = 3 (0x3)\nfstat(3,{ mode=-r--r--r-- ,inode=356621,size=3535,blksize=32768 }) = 0 (0x0)\nfstat(1,{ mode=p--------- ,inode=696,size=0,blksize=4096 }) = 0 (0x0)\n\nUpstream 13.2 image:\nopen(\"/etc/localtime\",O_RDONLY,01745)\t\t ERR#2 'No such file or directory'\nfstat(1,{ mode=p--------- ,inode=658,size=0,blksize=4096 }) = 0 (0x0)\n\n\nWhy not reading the UTC zone leads to timestamps being out of range, I do not\nknow...\n\nGreetings,\n\nAndres Freund\n\n[1] https://wiki.freebsd.org/GoogleCloudPlatform\n[2] https://cloud.google.com/compute/docs/images#freebsd\n\n\n", "msg_date": "Mon, 31 Jul 2023 16:07:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-07-31 18:33:37 -0400, Tom Lane wrote:\n>> (And could it be that we had one in the predecessor 13.1 image?)\n\n> No, I checked, and it's not in there either... It looks like the difference is\n> that 13.1 reads the UTC zoneinfo in that case, whereas 13.2 doesn't.\n\nHuh. Maybe worth reporting as a FreeBSD bug? 
In any case I don't\nthink it's *our* bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jul 2023 19:11:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Hi,\n\nOn 2023-07-31 19:11:38 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-07-31 18:33:37 -0400, Tom Lane wrote:\n> >> (And could it be that we had one in the predecessor 13.1 image?)\n> \n> > No, I checked, and it's not in there either... It looks like the difference is\n> > that 13.1 reads the UTC zoneinfo in that case, whereas 13.2 doesn't.\n> \n> Huh. Maybe worth reporting as a FreeBSD bug?\n\nYea. Hoping our local freebsd developer has a suggestion as to which component\nto report it to, or even fix it :).\n\n\n> In any case I don't think it's *our* bug.\n\nAgreed. An explicit tzsetup UTC isn't much to carry going forward...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 31 Jul 2023 17:05:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-07-31 19:11:38 -0400, Tom Lane wrote:\n>> Huh. Maybe worth reporting as a FreeBSD bug?\n\n> Yea. Hoping our local freebsd developer has a suggestion as to which component\n> to report it to, or even fix it :).\n\nYou already have a reproducer using just tcl, so I'd suggest filing\nit against tcl and letting them drill down as needed. (It's possible\nthat it actually is tcl that's misbehaving, seeing that our core\nregression tests are passing in the same environment.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Jul 2023 20:10:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pltcl tests fail with FreeBSD 13.2" } ]
[ { "msg_contents": "Hi all,\n\n31de7e6 has silenced savepoint names in the query jumbling, and\nsomething similar can be done for 2PC transactions once the GID is\nignored in TransactionStmt. This leads to the following grouping in\npg_stat_statements, for instance, which is something that matters with\nworkloads that heavily rely on 2PC:\nCOMMIT PREPARED $1\nPREPARE TRANSACTION $1\nROLLBACK PREPARED $1\n\nThoughts or comments?\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 09:38:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 01, 2023 at 09:38:14AM +0900, Michael Paquier wrote:\n>\n> 31de7e6 has silenced savepoint names in the query jumbling, and\n> something similar can be done for 2PC transactions once the GID is\n> ignored in TransactionStmt. This leads to the following grouping in\n> pg_stat_statements, for instance, which is something that matters with\n> workloads that heavily rely on 2PC:\n> COMMIT PREPARED $1\n> PREPARE TRANSACTION $1\n> ROLLBACK PREPARED $1\n\nHaving an application relying on 2pc leads to pg_stat_statements being\nvirtually unusable on the whole instance, so +1 for the patch.\n\nFTR we had to entirely ignore all those statements in powa years ago to try to\nmake the tool usable in such case for some users who were using 2pc, it would\nbe nice to be able to track them back for pg16+.\n\nThe patch LGTM.\n\n\n", "msg_date": "Tue, 1 Aug 2023 09:28:08 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Tue, Aug 01, 2023 at 09:28:08AM +0800, Julien Rouhaud wrote:\n> Having an application relying on 2pc leads to pg_stat_statements being\n> virtually unusable on the whole instance, so +1 for the patch.\n\nCool, thanks for the feedback!\n\n> FTR we had to entirely ignore all those statements 
in powa years ago to try to\n> make the tool usable in such case for some users who were using 2pc, it would\n> be nice to be able to track them back for pg16+.\n\nThat would be nice for your users. Are there more query patterns you'd\nbe interested in grouping in the backend? I had a few folks aiming\nfor CallStmt and SetStmt, but both are a bit tricky without a custom\nroutine.\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 11:00:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Tue, Aug 01, 2023 at 11:00:32AM +0900, Michael Paquier wrote:\n> On Tue, Aug 01, 2023 at 09:28:08AM +0800, Julien Rouhaud wrote:\n>\n> > FTR we had to entirely ignore all those statements in powa years ago to try to\n> > make the tool usable in such case for some users who were using 2pc, it would\n> > be nice to be able to track them back for pg16+.\n>\n> That would be nice for your users.\n\nIndeed, although I will need to make that part a runtime variable based on the\nserver version rather than a hardcoded string since the extension framework\ndoesn't provide a way to do that cleanly across major pg versions.\n\n> Are there more query patterns you'd\n> be interested in grouping in the backend? 
I had a few folks aiming\n> for CallStmt and SetStmt, but both are a bit tricky without a custom\n> routine.\n\nLooking at the rest of the ignored patterns, the only remaining one would be\nDEALLOCATE, which AFAICS doesn't have a query_jumble_ignore tag for now.\n\n\n", "msg_date": "Tue, 1 Aug 2023 10:22:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Tue, Aug 01, 2023 at 10:22:09AM +0800, Julien Rouhaud wrote:\n> Looking at the rest of the ignored patterns, the only remaining one would be\n> DEALLOCATE, which AFAICS doesn't have a query_jumble_ignore tag for now.\n\nThis one seems to be simple as well with a location field, looking\nquickly at its Node.\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 11:37:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Tue, Aug 01, 2023 at 11:37:49AM +0900, Michael Paquier wrote:\n> On Tue, Aug 01, 2023 at 10:22:09AM +0800, Julien Rouhaud wrote:\n> > Looking at the rest of the ignored patterns, the only remaining one would be\n> > DEALLOCATE, which 
AFAICS doesn't have a query_jumble_ignore tag for now.\n>>\n>> This one seems to be simple as well with a location field, looking\n>> quickly at its Node.\n> \n> Agreed, it should be as trivial to implement as for the 2pc commands :)\n\nPerhaps not as much, actually, because I was just reminded that\nDEALLOCATE is something that pg_stat_statements ignores. So this\nmakes harder the introduction of tests. Anyway, I guess that your own\nextension modules have a need for a query ID compiled with these\nfields ignored? \n\nFor now, I have applied the 2PC bits independently, as of 638d42a.\n--\nMichael", "msg_date": "Sun, 13 Aug 2023 15:25:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Sun, Aug 13, 2023 at 03:25:33PM +0900, Michael Paquier wrote:\n> On Tue, Aug 01, 2023 at 10:46:58AM +0800, Julien Rouhaud wrote:\n> >\n> > Agreed, it should be as trivial to implement as for the 2pc commands :)\n>\n> Perhaps not as much, actually, because I was just reminded that\n> DEALLOCATE is something that pg_stat_statements ignores. So this\n> makes harder the introduction of tests.\n\nMaybe it's time to revisit that? 
According to [1] the reason why\npg_stat_statements currently ignores DEALLOCATE is because it indeed bloated\nthe entries but also because at that time the suggestion for jumbling only this\none was really hackish.\n\nNow that we do have a sensible approach to jumble utility statements, I think\nit would be beneficial to be able to track those, for instance to easily\ndiagnose a driver that doesn't rely on the extended protocol.\n\n> Anyway, I guess that your own\n> extension modules have a need for a query ID compiled with these\n> fields ignored?\n\nMy module has a need to track statements and still work efficiently after that.\nSo anything that can lead to virtually an infinite number of pg_stat_statements\nentries is a problem for me, and I guess to all the other modules / tools that\ntrack pg_stat_statements. I guess the reason why my module is still ignoring\nDEALLOCATE is because it existed before pg 9.4 (with a homemade queryid as it\nwasn't exposed before that), and it just stayed there since with the rest of\nstill problematic statements.\n\n> For now, I have applied the 2PC bits independently, as of 638d42a.\n\nThanks!\n\n[1] https://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1404011631560.2557%40sto\n\n\n", "msg_date": "Sun, 13 Aug 2023 14:48:22 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Sun, Aug 13, 2023 at 02:48:22PM +0800, Julien Rouhaud wrote:\n> On Sun, Aug 13, 2023 at 03:25:33PM +0900, Michael Paquier wrote:\n>> Perhaps not as much, actually, because I was just reminded that\n>> DEALLOCATE is something that pg_stat_statements ignores. So this\n>> makes harder the introduction of tests.\n> \n> Maybe it's time to revisit that? 
According to [1] the reason why\n> pg_stat_statements currently ignores DEALLOCATE is because it indeed bloated\n> the entries but also because at that time the suggestion for jumbling only this\n> one was really hackish.\n\nGood point. The argument of the other thread does not really hold\nmuch these days now that query jumbling can happen for all the utility\nnodes.\n\n> Now that we do have a sensible approach to jumble utility statements, I think\n> it would be beneficial to be able to track those, for instance to easily\n> diagnose a driver that doesn't rely on the extended protocol.\n\nFine by me. Would you like to write a patch? I've begun typing an\nembryo of a patch a few days ago, and did not look yet at the rest.\nPlease see the attached.\n\n>> Anyway, I guess that your own\n>> extension modules have a need for a query ID compiled with these\n>> fields ignored?\n> \n> My module has a need to track statements and still work efficiently after that.\n> So anything that can lead to virtually an infinite number of pg_stat_statements\n> entries is a problem for me, and I guess to all the other modules / tools that\n> track pg_stat_statements. I guess the reason why my module is still ignoring\n> DEALLOCATE is because it existed before pg 9.4 (with a homemade queryid as it\n> wasn't exposed before that), and it just stayed there since with the rest of\n> still problematic statements.\n\nYeah, probably.\n--\nMichael", "msg_date": "Sun, 13 Aug 2023 17:39:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Sun, Aug 13, 2023 at 02:48:22PM +0800, Julien Rouhaud wrote:\n>> On Sun, Aug 13, 2023 at 03:25:33PM +0900, Michael Paquier wrote:\n>>> Perhaps not as much, actually, because I was just reminded that\n>>> DEALLOCATE is something that pg_stat_statements ignores. 
So this\n>>> makes harder the introduction of tests.\n>> \n>> Maybe it's time to revisit that? According to [1] the reason why\n>> pg_stat_statements currently ignores DEALLOCATE is because it indeed bloated\n>> the entries but also because at that time the suggestion for jumbling only this\n>> one was really hackish.\n>\n> Good point. The argument of the other thread does not really hold\n> much these days now that query jumbling can happen for all the utility\n> nodes.\n>\n>> Now that we do have a sensible approach to jumble utility statements, I think\n>> it would be beneficial to be able to track those, for instance to easily\n>> diagnose a driver that doesn't rely on the extended protocol.\n>\n> Fine by me. Would you like to write a patch? I've begun typing an\n> embryo of a patch a few days ago, and did not look yet at the rest.\n> Please see the attached.\n\nAs far as I could tell the only thing missing was removing\nDeallocateStmt from the list of unhandled utility statement types (and\nupdating comments to match). Updated patch attached.\n\n- ilmari", "msg_date": "Mon, 14 Aug 2023 12:11:13 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Mon, Aug 14, 2023 at 12:11:13PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> As far as I could tell the only thing missing was removing\n> DeallocateStmt from the list of unhandled utility statement types (and\n> updating comments to match). Updated patch attached.\n\nHmm. One issue with the patch is that we finish by considering\nDEALLOCATE ALL and DEALLOCATE $1 as the same things, compiling the\nsame query IDs. The difference is made in the Nodes by assigning NULL\nto the name but we would now ignore it. 
Wouldn't it be better to add\nan extra field to DeallocateStmt to track separately the named\ndeallocate queries and ALL in monitoring?\n--\nMichael", "msg_date": "Tue, 15 Aug 2023 08:49:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Tue, Aug 15, 2023 at 08:49:37AM +0900, Michael Paquier wrote:\n> Hmm. One issue with the patch is that we finish by considering\n> DEALLOCATE ALL and DEALLOCATE $1 as the same things, compiling the\n> same query IDs. The difference is made in the Nodes by assigning NULL\n> to the name but we would now ignore it. Wouldn't it be better to add\n> an extra field to DeallocateStmt to track separately the named\n> deallocate queries and ALL in monitoring?\n\nIn short, I would propose something like that, with a new boolean\nfield in DeallocateStmt that's part of the jumbling.\n\nDagfinn, Julien, what do you think about the attached?\n--\nMichael", "msg_date": "Fri, 18 Aug 2023 09:18:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Tue, Aug 15, 2023 at 08:49:37AM +0900, Michael Paquier wrote:\n>> Hmm. One issue with the patch is that we finish by considering\n>> DEALLOCATE ALL and DEALLOCATE $1 as the same things, compiling the\n>> same query IDs. The difference is made in the Nodes by assigning NULL\n>> to the name but we would now ignore it. 
Wouldn't it be better to add\n>> an extra field to DeallocateStmt to track separately the named\n>> deallocate queries and ALL in monitoring?\n>\n> In short, I would propose something like that, with a new boolean\n> field in DeallocateStmt that's part of the jumbling.\n>\n> Dagfinn, Julien, what do you think about the attached?\n\nI don't have a particularly strong opinion on whether we should\ndistinguish DEALLOCATE ALL from DEALLOCATE <stmt> (call it +0.5), but\nthis seems like a reasonable way to do it unless we want to invent a\nquery_jumble_ignore_unless_null attribute (which seems like overkill for\nthis one case).\n\n- ilmari\n\n> Michael\n>\n> From ad91672367f408663824b36b6dd517513d7f0a2a Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Fri, 18 Aug 2023 09:12:58 +0900\n> Subject: [PATCH v2] Track DEALLOCATE statements in pg_stat_activity\n>\n> Treat the statement name as a constant when jumbling. A boolean field\n> is added to DeallocateStmt to make a distinction between the ALL and\n> named cases.\n> ---\n> src/include/nodes/parsenodes.h | 8 +++-\n> src/backend/parser/gram.y | 8 ++++\n> .../pg_stat_statements/expected/utility.out | 41 +++++++++++++++++++\n> .../pg_stat_statements/pg_stat_statements.c | 8 +---\n> contrib/pg_stat_statements/sql/utility.sql | 13 ++++++\n> 5 files changed, 70 insertions(+), 8 deletions(-)\n>\n> diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h\n> index 2565348303..2b356336eb 100644\n> --- a/src/include/nodes/parsenodes.h\n> +++ b/src/include/nodes/parsenodes.h\n> @@ -3925,8 +3925,12 @@ typedef struct ExecuteStmt\n> typedef struct DeallocateStmt\n> {\n> \tNodeTag\t\ttype;\n> -\tchar\t *name;\t\t\t/* The name of the plan to remove */\n> -\t/* NULL means DEALLOCATE ALL */\n> +\t/* The name of the plan to remove. NULL if DEALLOCATE ALL. 
*/\n> +\tchar\t *name pg_node_attr(query_jumble_ignore);\n> +\t/* true if DEALLOCATE ALL */\n> +\tbool\t\tisall;\n> +\t/* token location, or -1 if unknown */\n> +\tint\t\t\tlocation pg_node_attr(query_jumble_location);\n> } DeallocateStmt;\n> \n> /*\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> index b3bdf947b6..df34e96f89 100644\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n> @@ -11936,6 +11936,8 @@ DeallocateStmt: DEALLOCATE name\n> \t\t\t\t\t\tDeallocateStmt *n = makeNode(DeallocateStmt);\n> \n> \t\t\t\t\t\tn->name = $2;\n> +\t\t\t\t\t\tn->isall = false;\n> +\t\t\t\t\t\tn->location = @2;\n> \t\t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t\t}\n> \t\t\t\t| DEALLOCATE PREPARE name\n> @@ -11943,6 +11945,8 @@ DeallocateStmt: DEALLOCATE name\n> \t\t\t\t\t\tDeallocateStmt *n = makeNode(DeallocateStmt);\n> \n> \t\t\t\t\t\tn->name = $3;\n> +\t\t\t\t\t\tn->isall = false;\n> +\t\t\t\t\t\tn->location = @3;\n> \t\t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t\t}\n> \t\t\t\t| DEALLOCATE ALL\n> @@ -11950,6 +11954,8 @@ DeallocateStmt: DEALLOCATE name\n> \t\t\t\t\t\tDeallocateStmt *n = makeNode(DeallocateStmt);\n> \n> \t\t\t\t\t\tn->name = NULL;\n> +\t\t\t\t\t\tn->isall = true;\n> +\t\t\t\t\t\tn->location = -1;\n> \t\t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t\t}\n> \t\t\t\t| DEALLOCATE PREPARE ALL\n> @@ -11957,6 +11963,8 @@ DeallocateStmt: DEALLOCATE name\n> \t\t\t\t\t\tDeallocateStmt *n = makeNode(DeallocateStmt);\n> \n> \t\t\t\t\t\tn->name = NULL;\n> +\t\t\t\t\t\tn->isall = true;\n> +\t\t\t\t\t\tn->location = -1;\n> \t\t\t\t\t\t$$ = (Node *) n;\n> \t\t\t\t\t}\n> \t\t;\n> diff --git a/contrib/pg_stat_statements/expected/utility.out b/contrib/pg_stat_statements/expected/utility.out\n> index 93735d5d85..f331044f3e 100644\n> --- a/contrib/pg_stat_statements/expected/utility.out\n> +++ b/contrib/pg_stat_statements/expected/utility.out\n> @@ -472,6 +472,47 @@ SELECT pg_stat_statements_reset();\n> \n> (1 row)\n> \n> +-- Execution statements\n> +SELECT 1 
as a;\n> + a \n> +---\n> + 1\n> +(1 row)\n> +\n> +PREPARE stat_select AS SELECT $1 AS a;\n> +EXECUTE stat_select (1);\n> + a \n> +---\n> + 1\n> +(1 row)\n> +\n> +DEALLOCATE stat_select;\n> +PREPARE stat_select AS SELECT $1 AS a;\n> +EXECUTE stat_select (2);\n> + a \n> +---\n> + 2\n> +(1 row)\n> +\n> +DEALLOCATE PREPARE stat_select;\n> +DEALLOCATE ALL;\n> +DEALLOCATE PREPARE ALL;\n> +SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> + calls | rows | query \n> +-------+------+---------------------------------------\n> + 2 | 0 | DEALLOCATE $1\n> + 2 | 0 | DEALLOCATE ALL\n> + 2 | 2 | PREPARE stat_select AS SELECT $1 AS a\n> + 1 | 1 | SELECT $1 as a\n> + 1 | 1 | SELECT pg_stat_statements_reset()\n> +(5 rows)\n> +\n> +SELECT pg_stat_statements_reset();\n> + pg_stat_statements_reset \n> +--------------------------\n> + \n> +(1 row)\n> +\n> -- SET statements.\n> -- These use two different strings, still they count as one entry.\n> SET work_mem = '1MB';\n> diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c\n> index 55b957d251..06b65aeef5 100644\n> --- a/contrib/pg_stat_statements/pg_stat_statements.c\n> +++ b/contrib/pg_stat_statements/pg_stat_statements.c\n> @@ -104,8 +104,7 @@ static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100;\n> * ignores.\n> */\n> #define PGSS_HANDLED_UTILITY(n)\t\t(!IsA(n, ExecuteStmt) && \\\n> -\t\t\t\t\t\t\t\t\t!IsA(n, PrepareStmt) && \\\n> -\t\t\t\t\t\t\t\t\t!IsA(n, DeallocateStmt))\n> +\t\t\t\t\t\t\t\t\t!IsA(n, PrepareStmt))\n> \n> /*\n> * Extension version number, for supporting older extension versions' objects\n> @@ -830,8 +829,7 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query, JumbleState *jstate)\n> \n> \t/*\n> \t * Clear queryId for prepared statements related utility, as those will\n> -\t * inherit from the underlying statement's one (except DEALLOCATE which is\n> -\t * entirely untracked).\n> +\t * inherit from the 
underlying statement's one.\n> \t */\n> \tif (query->utilityStmt)\n> \t{\n> @@ -1116,8 +1114,6 @@ pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString,\n> \t * calculated from the query tree) would be used to accumulate costs of\n> \t * ensuing EXECUTEs. This would be confusing, and inconsistent with other\n> \t * cases where planning time is not included at all.\n> -\t *\n> -\t * Likewise, we don't track execution of DEALLOCATE.\n> \t */\n> \tif (pgss_track_utility && pgss_enabled(exec_nested_level) &&\n> \t\tPGSS_HANDLED_UTILITY(parsetree))\n> diff --git a/contrib/pg_stat_statements/sql/utility.sql b/contrib/pg_stat_statements/sql/utility.sql\n> index 87666d9135..5f7d4a467f 100644\n> --- a/contrib/pg_stat_statements/sql/utility.sql\n> +++ b/contrib/pg_stat_statements/sql/utility.sql\n> @@ -237,6 +237,19 @@ DROP DOMAIN domain_stats;\n> SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> SELECT pg_stat_statements_reset();\n> \n> +-- Execution statements\n> +SELECT 1 as a;\n> +PREPARE stat_select AS SELECT $1 AS a;\n> +EXECUTE stat_select (1);\n> +DEALLOCATE stat_select;\n> +PREPARE stat_select AS SELECT $1 AS a;\n> +EXECUTE stat_select (2);\n> +DEALLOCATE PREPARE stat_select;\n> +DEALLOCATE ALL;\n> +DEALLOCATE PREPARE ALL;\n> +SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE \"C\";\n> +SELECT pg_stat_statements_reset();\n> +\n> -- SET statements.\n> -- These use two different strings, still they count as one entry.\n> SET work_mem = '1MB';\n\n\n", "msg_date": "Fri, 18 Aug 2023 11:31:03 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Fri, Aug 18, 2023 at 11:31:03AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> I don't have a particularly strong opinion on whether we should\n> distinguish DEALLOCATE ALL from DEALLOCATE <stmt> (call it +0.5), but\n\nThe 
difference looks important to me, especially for monitoring.\nAnd pgbouncer may also use both of them, actually? (Somebody, please\ncorrect me here if necessary.)\n\n> this seems like a reasonable way to do it unless we want to invent a\n> query_jumble_ignore_unless_null attribute (which seems like overkill for\n> this one case).\n\nI don't really want to have a NULL-based property for that :)\n--\nMichael", "msg_date": "Sat, 19 Aug 2023 13:47:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" }, { "msg_contents": "On Sat, Aug 19, 2023 at 01:47:48PM +0900, Michael Paquier wrote:\n> On Fri, Aug 18, 2023 at 11:31:03AM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> I don't have a particularly strong opinion on whether we should\n>> distinguish DEALLOCATE ALL from DEALLOCATE <stmt> (call it +0.5), but\n> \n> The difference looks important to me, especially for monitoring.\n\nAfter thinking a few days about it, I'd rather still make the\ndifference between both, so applied this way.\n\n> And pgbouncer may also use both of them, actually? (Somebody, please\n> correct me here if necessary.)\n\nI cannot see any signs of that in pgbouncer, btw.\n--\nMichael", "msg_date": "Sun, 27 Aug 2023 17:32:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Ignore 2PC transaction GIDs in query jumbling" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe release date for PostgreSQL 16 Beta 3 is August 10, 2023, alongside \r\nthe regular update release[1]. Please be sure to commit any open \r\nitems[2] for the Beta 3 release before August 6, 2023 0:00 AoE[3] to \r\ngive them enough time to work through the buildfarm.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/developer/roadmap/\r\n[2] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\r\n[3] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Mon, 31 Jul 2023 21:18:37 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 16 Beta 3 release date" } ]
[ { "msg_contents": "Hi hackers,\n\nBACKGROUND\n\nThere are 3 different types of logical replication workers.\n1. apply leader workers\n2. parallel apply workers\n3. tablesync workers\n\nAnd there are a number of places where the current worker type is\nchecked using the am_<worker-type> inline functions.\n\nPROBLEM / SOLUTION\n\nDuring recent reviews, I noticed some of these conditions are a bit unusual.\n\n======\nworker.c\n\n1. apply_worker_exit\n\n/*\n* Reset the last-start time for this apply worker so that the launcher\n* will restart it without waiting for wal_retrieve_retry_interval if the\n* subscription is still active, and so that we won't leak that hash table\n* entry if it isn't.\n*/\nif (!am_tablesync_worker())\nApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\n\n~\n\nIn this case, it cannot be a parallel apply worker (there is a check\nprior), so IMO it is better to simplify the condition here to below.\nThis also makes the code consistent with all the subsequent\nsuggestions in this post.\n\nif (am_apply_leader_worker())\nApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\n\n~~~\n\n2. maybe_reread_subscription\n\n/* Ensure we remove no-longer-useful entry for worker's start time */\nif (!am_tablesync_worker() && !am_parallel_apply_worker())\nApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\nproc_exit(0);\n\n~\n\nShould simplify the above condition to say:\nif (!am_leader_apply_worker())\n\n~~~\n\n3. InitializeApplyWorker\n\n/* Ensure we remove no-longer-useful entry for worker's start time */\nif (!am_tablesync_worker() && !am_parallel_apply_worker())\nApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\nproc_exit(0);\n\n~\n\nDitto. Should simplify the above condition to say:\nif (!am_leader_apply_worker())\n\n~~~\n\n4. 
DisableSubscriptionAndExit\n\n/* Ensure we remove no-longer-useful entry for worker's start time */\nif (!am_tablesync_worker() && !am_parallel_apply_worker())\nApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\n\n~\n\nDitto. Should simplify the above condition to say:\nif (!am_leader_apply_worker())\n\n------\n\nPSA a small patch making those above-suggested changes. The 'make\ncheck' and TAP subscription tests are all passing OK.\n\n\n-------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 1 Aug 2023 11:35:35 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Simplify some logical replication worker type checking" }, { "msg_contents": "On Tuesday, August 1, 2023 9:36 AM Peter Smith <smithpb2250@gmail.com>\r\n> PROBLEM / SOLUTION\r\n> \r\n> During recent reviews, I noticed some of these conditions are a bit unusual.\r\n\r\nThanks for the patch.\r\n\r\n> \r\n> ======\r\n> worker.c\r\n> \r\n> 1. apply_worker_exit\r\n> \r\n> /*\r\n> * Reset the last-start time for this apply worker so that the launcher\r\n> * will restart it without waiting for wal_retrieve_retry_interval if the\r\n> * subscription is still active, and so that we won't leak that hash table\r\n> * entry if it isn't.\r\n> */\r\n> if (!am_tablesync_worker())\r\n> ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\r\n>\r\n> ~\r\n> \r\n> In this case, it cannot be a parallel apply worker (there is a check\r\n> prior), so IMO it is better to simplify the condition here to below.\r\n> This also makes the code consistent with all the subsequent\r\n> suggestions in this post.\r\n> \r\n> if (am_apply_leader_worker())\r\n> ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\r\n\r\nThis change looks OK to me.\r\n\r\n> ~~~\r\n> \r\n> 2. 
maybe_reread_subscription\r\n> \r\n> /* Ensure we remove no-longer-useful entry for worker's start time */\r\n> if (!am_tablesync_worker() && !am_parallel_apply_worker())\r\n> ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\r\n> proc_exit(0);\r\n> \r\n> ~\r\n> \r\n> Should simplify the above condition to say:\r\n> if (!am_leader_apply_worker())\r\n> \r\n> ~~~\r\n> \r\n> 3. InitializeApplyWorker\r\n> \r\n> /* Ensure we remove no-longer-useful entry for worker's start time */\r\n> if (!am_tablesync_worker() && !am_parallel_apply_worker())\r\n> ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\r\n> proc_exit(0);\r\n> \r\n> ~\r\n> \r\n> Ditto. Should simplify the above condition to say:\r\n> if (!am_leader_apply_worker())\r\n> \r\n> ~~~\r\n> \r\n> 4. DisableSubscriptionAndExit\r\n> \r\n> /* Ensure we remove no-longer-useful entry for worker's start time */\r\n> if (!am_tablesync_worker() && !am_parallel_apply_worker())\r\n> ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);\r\n> \r\n> ~\r\n> \r\n> Ditto. Should simplify the above condition to say:\r\n> if (!am_leader_apply_worker())\r\n> \r\n> ------\r\n> \r\n> PSA a small patch making those above-suggested changes. 
The 'make\r\n> check' and TAP subscription tests are all passing OK.\r\n\r\nAbout 2,3,4, it seems you should use \"if (am_leader_apply_worker())\" instead of\r\n\"if (!am_leader_apply_worker())\" because only leader apply worker should invoke\r\nthis function.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n", "msg_date": "Tue, 1 Aug 2023 02:59:25 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Simplify some logical replication worker type checking" }, { "msg_contents": "On Tue, Aug 1, 2023 at 12:59 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> About 2,3,4, it seems you should use \"if (am_leader_apply_worker())\" instead of\n> \"if (!am_leader_apply_worker())\" because only leader apply worker should invoke\n> this function.\n>\n\nHi Hou-san,\n\nThanks for your review!\n\nOops. Of course, you are right. My cut/paste typo was mostly confined\nto the post, but there was one instance also in the patch.\n\nFixed in v2.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 1 Aug 2023 15:32:10 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Simplify some logical replication worker type checking" }, { "msg_contents": "On 2023-Aug-01, Peter Smith wrote:\n\n> PSA a small patch making those above-suggested changes. The 'make\n> check' and TAP subscription tests are all passing OK.\n\nI think the code ends up more readable with this style of changes, so\n+1. 
I do wonder if these calls should appear in a proc_exit callback or\nsome such instead, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n", "msg_date": "Tue, 1 Aug 2023 08:40:46 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Simplify some logical replication worker type checking" }, { "msg_contents": "On Tue, Aug 1, 2023 at 12:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Aug-01, Peter Smith wrote:\n>\n> > PSA a small patch making those above-suggested changes. The 'make\n> > check' and TAP subscription tests are all passing OK.\n>\n> I think the code ends up more readable with this style of changes, so\n> +1. I do wonder if these calls should appear in a proc_exit callback or\n> some such instead, though.\n>\n\nBut the call to\nApplyLauncherForgetWorkerStartTime()->logicalrep_launcher_attach_dshmem()\nhas some dynamic shared memory allocation/attach calls which I am not\nsure is a good idea to do in proc_exit() callbacks. We may want to\nevaluate whether moving the suggested checks to proc_exit or any other\ncallback is a better idea. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Aug 2023 08:20:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplify some logical replication worker type checking" }, { "msg_contents": "On Wed, Aug 2, 2023 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 1, 2023 at 12:11 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Aug-01, Peter Smith wrote:\n> >\n> > > PSA a small patch making those above-suggested changes. The 'make\n> > > check' and TAP subscription tests are all passing OK.\n> >\n> > I think the code ends up more readable with this style of changes, so\n> > +1. 
I do wonder if these calls should appear in a proc_exit callback or\n> > some such instead, though.\n> >\n>\n> But the call to\n> ApplyLauncherForgetWorkerStartTime()->logicalrep_launcher_attach_dshmem()\n> has some dynamic shared memory allocation/attach calls which I am not\n> sure is a good idea to do in proc_exit() callbacks. We may want to\n> evaluate whether moving the suggested checks to proc_exit or any other\n> callback is a better idea. What do you think?\n>\n\nI have pushed the existing patch but feel free to pursue further improvements.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 4 Aug 2023 15:38:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplify some logical replication worker type checking" } ]
[ { "msg_contents": "Hi all,\n\nA colleague, Ethan Mertz (in CC), has discovered that we don't handle\ncorrectly WAL records that are failing because of an OOM when\nallocating their required space. In the case of Ethan, we have bumped\non the failure after an allocation failure on XLogReadRecordAlloc():\n\"out of memory while trying to decode a record of length\"\n\nAs far as I can see, PerformWalRecovery() uses LOG as elevel for its\nprivate callback in the xlogreader, when doing through ReadRecord(),\nwhich leads to a failure being reported, but recovery considers that\nthe failure is the end of WAL and decides to abruptly end recovery,\nleading to some data lost.\n\nIn crash recovery, any records after the OOM would not be replayed.\nAt quick glance, it seems to me that this can also impact standbys,\nwhere recovery could stop earlier than it should once a consistent\npoint has been reached.\n\nAttached is a patch that can be applied on HEAD to inject an error,\nthen just run the script xlogreader_oom.bash attached, or something\nsimilar, to see the failure in the logs:\nLOG: redo starts at 0/1913CD0\nLOG: out of memory while trying to decode a record of length 57\nLOG: redo done at 0/1917358 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n\nIt also looks like recovery_prefetch may mitigate a bit the issue if\nwe do a read in non-blocking mode, but that's not a strong guarantee\neither, especially if the host is under memory pressure.\n\nA patch is registered in the commit fest to improve the error\ndetection handling, but as far as I can see it fails to handle the OOM\ncase and replaces ReadRecord() to use a WARNING in the redo loop:\nhttps://www.postgresql.org/message-id/20200228.160100.2210969269596489579.horikyota.ntt%40gmail.com\n\nOn top of my mind, any solution I can think of needs to add more\ninformation to XLogReaderState, where we'd either track the type of\nerror that happened close to errormsg_buf which is where these errors\nare 
tracked, but any of that cannot be backpatched, unfortunately.\n\nComments?\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 12:43:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Tue, 1 Aug 2023 12:43:21 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> A colleague, Ethan Mertz (in CC), has discovered that we don't handle\n> correctly WAL records that are failing because of an OOM when\n> allocating their required space. In the case of Ethan, we have bumped\n> on the failure after an allocation failure on XLogReadRecordAlloc():\n> \"out of memory while trying to decode a record of length\"\n\nI believe a database server is not supposed to be executed under such\na memory-constrained environment.\n\n> In crash recovery, any records after the OOM would not be replayed.\n> At quick glance, it seems to me that this can also impact standbys,\n> where recovery could stop earlier than it should once a consistent\n> point has been reached.\n\nActually the code is assuming that OOM happens solely due to a broken\nrecord length field. 
I believe that we intentionally put that\nassumption.\n\n> A patch is registered in the commit fest to improve the error\n> detection handling, but as far as I can see it fails to handle the OOM\n> case and replaces ReadRecord() to use a WARNING in the redo loop:\n> https://www.postgresql.org/message-id/20200228.160100.2210969269596489579.horikyota.ntt%40gmail.com\n\nIt doesn't change behavior unrelated to the case where the last record\nis followed by zeroed trailing bytes.\n\n> On top of my mind, any solution I can think of needs to add more\n> information to XLogReaderState, where we'd either track the type of\n> error that happened close to errormsg_buf which is where these errors\n> are tracked, but any of that cannot be backpatched, unfortunately.\n\nOne issue on changing that behavior is that there's not a simple way\nto detect a broken record before loading it into memory. We might be\nable to implement a fallback mechanism for example that loads the\nrecord into an already-allocated buffer (which is smaller than the\nspecified length) just to verify if it's corrupted. However, I\nquestion whether it's worth the additional complexity. And I'm not\nsure what happens if the first allocation failed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Aug 2023 13:51:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, Aug 01, 2023 at 01:51:13PM +0900, Kyotaro Horiguchi wrote:\n> I believe a database server is not supposed to be executed under such\n> a memory-constrained environment.\n\nI don't really follow this argument. The backend and the frontends\nare reliable on OOM, where we generate ERRORs or even FATALs depending\non the code path involved. 
A memory-bounded environment is something\nthat can easily happen if one's not careful enough with the sizing of\nthe instance. For example, this error can be triggered on a standby\nwith read-only queries that put pressure on the host's memory.\n\n> One issue on changing that behavior is that there's not a simple way\n> to detect a broken record before loading it into memory. We might be\n> able to implement a fallback mechanism for example that loads the\n> record into an already-allocated buffer (which is smaller than the\n> specified length) just to verify if it's corrupted. However, I\n> question whether it's worth the additional complexity. And I'm not\n> sure what happens if the first allocation failed.\n\nPerhaps we could rely more on a fallback memory, especially if it is\npossible to use that for the header validation. That seems like a\nseparate thing, still.\n--\nMichael", "msg_date": "Tue, 1 Aug 2023 14:03:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Tue, 1 Aug 2023 14:03:36 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Aug 01, 2023 at 01:51:13PM +0900, Kyotaro Horiguchi wrote:\n> > I believe a database server is not supposed to be executed under such\n> > a memory-constrained environment.\n> \n> I don't really follow this argument. The backend and the frontends\n> are reliable on OOM, where we generate ERRORs or even FATALs depending\n> on the code path involved. A memory-bounded environment is something\n> that can easily happen if one's not careful enough with the sizing of\n\nI didn't mean that OOM should not happen. I mentioned an environment\nwhere allocation failure can happen during crash recovery. Anyway I\ndidn't mean that we shouldn't \"fix\" it.\n\n> the instance. 
For example, this error can be triggered on a standby\n> with read-only queries that put pressure on the host's memory.\n\nI thought that the failure on a standby results in continuing to retry\nreading the next record. However, I found that there's a case where\nthe startup process stops in response to OOM [1].\n\n> > One issue on changing that behavior is that there's not a simple way\n> > to detect a broken record before loading it into memory. We might be\n> > able to implement a fallback mechanism for example that loads the\n> > record into an already-allocated buffer (which is smaller than the\n> > specified length) just to verify if it's corrupted. However, I\n> > question whether it's worth the additional complexity. And I'm not\n> > sure what happens if the first allocation failed.\n> \n> Perhaps we could rely more on a fallback memory, especially if it is\n> possible to use that for the header validation. That seems like a\n> separate thing, still.\n\nOnce a record has been read, that size of memory is already\nallocated.\n\nWhile we may not agree, we could establish a default behavior where\nan OOM during recovery immediately triggers an ERROR. Then, we could\nintroduce a *GUC* that causes recovery to regard OOM as an\nend-of-recovery error.\n\nregards.\n\n[1] https://www.postgresql.org/message-id/17928-aa92416a70ff44a2%40postgresql.org\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Aug 2023 15:28:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Tue, 01 Aug 2023 15:28:54 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> While we may not agree, we could establish a default behavior where\n> an OOM during recovery immediately triggers an ERROR. 
Then, we could\n> introduce a *GUC* that causes recovery to regard OOM as an\n> end-of-recovery error.\n\nIf we do that, the reverse might be preferable. (OOMs are\nend-of-recovery by default. That can be changed to ERROR by GUC.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Aug 2023 15:59:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "Hi,\n\n> As far as I can see, PerformWalRecovery() uses LOG as elevel\n> [...]\n> On top of my mind, any solution I can think of needs to add more\n> information to XLogReaderState, where we'd either track the type of\n> error that happened close to errormsg_buf which is where these errors\n> are tracked, but any of that cannot be backpatched, unfortunately.\n\nProbably I'm missing something, but if memory allocation is required\nduring WAL replay and it fails, wouldn't it be a better solution to\nlog the error and terminate the DBMS immediately?\n\nClearly Postgres doesn't have control of the amount of memory\navailable. It's up to the DBA to resolve the problem and start the\nrecovery again. If this happens on a replica, it indicates a\nmisconfiguration of the system and/or lack of the corresponding\nconfiguration options.\n\nMaybe a certain amount of memory should be reserved for the WAL replay\nand perhaps other needs. 
In the recent case the system should account\nfor the overcommitment of the OS - cases when a successful malloc()\ndoesn't necessarily allocate the required amount of *physical* memory,\nas it's done on Linux.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 1 Aug 2023 16:14:54 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, 2023-08-01 at 16:14 +0300, Aleksander Alekseev wrote:\n> Probably I'm missing something, but if memory allocation is required\n> during WAL replay and it fails, wouldn't it be a better solution to\n> log the error and terminate the DBMS immediately?\n\nWe need to differentiate between:\n\n1. No valid record exists and it must be the end of WAL; LOG and start\nup.\n\n2. A valid record exists and we are unable to process it (e.g. due to\nOOM); PANIC.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 01 Aug 2023 16:39:54 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Tue, 01 Aug 2023 15:28:54 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I thoght that the failure on a stanby results in continuing to retry\n> reading the next record. However, I found that there's a case where\n> start process stops in response to OOM [1].\n\nI've examined the calls to\nMemoryContextAllocExtended(..,MCXT_ALLOC_NO_OOM). In server recovery\npath, XLogDecodeNextRecord is the only function that uses it.\n\nSo, there doesn't seem to be a problem here. 
I proceeded to test the\nidea of only varifying headers after an allocation failure, and I've\nattached a PoC.\n\n- allocate_recordbuf() ensures a minimum of SizeOfXLogRecord bytes\n when it reutnrs false, indicating an allocation failure.\n\n- If allocate_recordbuf() returns false, XLogDecodeNextRecord()\n continues to read pages and perform header checks until the\n total_len reached, but not copying data (except for the logical\n record header, when the first page didn't store the entire header).\n\n- If all relevant WAL pages are consistent, ReadRecord concludes with\n an 'out of memory' ERROR, which then escalates to FATAL.\n\nI believe this approach is sufficient to determine whether the error\nis OOM or not. If total_len is currupted and has an excessively large\nvalue, it's highly unlikely that all subsequent pages for that length\nwill be consistent.\n\nDo you have any thoughts on this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 02 Aug 2023 13:16:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, Aug 01, 2023 at 04:39:54PM -0700, Jeff Davis wrote:\n> On Tue, 2023-08-01 at 16:14 +0300, Aleksander Alekseev wrote:\n> > Probably I'm missing something, but if memory allocation is required\n> > during WAL replay and it fails, wouldn't it be a better solution to\n> > log the error and terminate the DBMS immediately?\n> \n> We need to differentiate between:\n> \n> 1. No valid record exists and it must be the end of WAL; LOG and start\n> up.\n> \n> 2. A valid record exists and we are unable to process it (e.g. due to\n> OOM); PANIC.\n\nYes, still there is a bit more to it. 
The origin of the introduction\nto palloc(MCXT_ALLOC_NO_OOM) partially comes from this thread, that\nhas reported a problem where we switched from malloc() to palloc()\nwhen xlogreader.c got introduced:\nhttps://www.postgresql.org/message-id/CAHGQGwE46cJC4rJGv+kVMV8g6BxHm9dBR_7_QdPjvJUqdt7m=Q@mail.gmail.com\n\nAnd the malloc() behavior when replaying WAL records is even older\nthan that.\n\nAt the end, we want to be able to give more options to anybody looking\nat WAL records, and let them take decisions based on the error reached\nand the state of the system. For example, it does not make much sense\nto fail hard on OOM if replaying records when in standby mode because\nwe can just loop again. The same can actually be said when in crash\nrecovery. On OOM, the startup process considers that we have an\ninvalid record now, which is incorrect. We could fail hard and FATAL\nto replay again (sounds like the natural option), or we could loop\nover the record that failed its allocation, repeating things. In any\ncase, we need to give more information back to the system so as it can\ntake better decisions on what it should do.\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 14:52:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Wed, Aug 02, 2023 at 01:16:02PM +0900, Kyotaro Horiguchi wrote:\n> I believe this approach is sufficient to determine whether the error\n> is OOM or not. 
If total_len is corrupted and has an excessively large\n> value, it's highly unlikely that all subsequent pages for that length\n> will be consistent.\n> \n> Do you have any thoughts on this?\n\nThis could be more flexible IMO, and actually in some cases\nerrormsg_fatal may be eaten if using the WAL prefetcher as the error\nmessage is reported with the caller of XLogPrefetcherReadRecord(), no?\n\nAnything that has been discussed on this thread now involves a change\nin XLogReaderState that induces an ABI breakage. For HEAD, we are\nlikely going in this direction, but if we are going to bite the bullet\nwe'd better be a bit more aggressive with the integration and report\nan error code side-by-side with the error message returned by\nXLogPrefetcherReadRecord(), XLogReadRecord() and XLogNextRecord() so\nas all of the callers can decide what they want to do on an invalid\nrecord or just an OOM.\n\nAttached is the idea of infrastructure I have in mind, as of 0001,\nwhere this adds an error code to report_invalid_record(). For now\nthis includes three error codes appended to the error messages\ngenerated that can be expanded if need be: no error, OOM and invalid\ndata. The invalid data part may need to be much more verbose, and\ncould be improved to make this stuff \"less scary\" as the other thread\nproposes, but what I have here would be enough to trigger a different \ndecision in the startup process if a record cannot be fetched on OOM\nor if there's a different reason behind that.\n\n0002 is an example of decision that can be taken in WAL replay if we\nsee an OOM, based on the error code received. One argument is that we\nmay want to improve emode_for_corrupt_record() so as it reacts better\non OOM, upgrading the emode wanted, but this needs more analysis\ndepending on the code path involved.\n\n0003 is my previous trick to inject an OOM failure at replay. 
Reusing\nthe previous script, this would be enough to prevent an early redo\ncreating a loss of data.\n\nNote that we have a few other things going in the tree. As one\nexample, pg_walinspect would consider an OOM as the end of WAL. Not\ncritical, still slightly incorrect as the end of WAL may not have been\nreached yet so it can report some incorrect information depending on\nwhat the WAL reader faces. This could be improved with the additions\nof 0001.\n\nThoughts or comments?\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 16:29:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Tue, 8 Aug 2023 16:29:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Aug 02, 2023 at 01:16:02PM +0900, Kyotaro Horiguchi wrote:\n> > I believe this approach is sufficient to determine whether the error\n> > is OOM or not. If total_len is corrupted and has an excessively large\n> > value, it's highly unlikely that all subsequent pages for that length\n> > will be consistent.\n> > \n> > Do you have any thoughts on this?\n> \n> This could be more flexible IMO, and actually in some cases\n> errormsg_fatal may be eaten if using the WAL prefetcher as the error\n> message is reported with the caller of XLogPrefetcherReadRecord(), no?\n\nRight. The goal of my PoC was to detect OOM accurately or at least\nsufficiently so. We need to separately pass the \"error code\" along\nwith the message to make it work with the prefetcher. We could\nenclose errormsg and errorcode in a struct.\n\n> Anything that has been discussed on this thread now involves a change\n> in XLogReaderState that induces an ABI breakage. 
For HEAD, we are\n> likely going in this direction, but if we are going to bite the bullet\n> we'd better be a bit more aggressive with the integration and report\n> an error code side-by-side with the error message returned by\n> XLogPrefetcherReadRecord(), XLogReadRecord() and XLogNextRecord() so\n> as all of the callers can decide what they want to do on an invalid\n> record or just an OOM.\n\nSounds reasonable.\n\n> Attached is the idea of infrastructure I have in mind, as of 0001,\n> where this adds an error code to report_invalid_record(). For now\n> this includes three error codes appended to the error messages\n> generated that can be expanded if need be: no error, OOM and invalid\n> data. The invalid data part may needs to be much more verbose, and\n> could be improved to make this stuff \"less scary\" as the other thread\n> proposes, but what I have here would be enough to trigger a different \n> decision in the startup process if a record cannot be fetched on OOM\n> or if there's a different reason behind that.\n\nAgreed. This clarifies the basis for decisions at the upper layer\n(ReadRecord) and adds flexibility.\n\n> 0002 is an example of decision that can be taken in WAL replay if we\n> see an OOM, based on the error code received. One argument is that we\n> may want to improve emode_for_corrupt_record() so as it reacts better\n> on OOM, upgrading the emode wanted, but this needs more analysis\n> depending on the code path involved.\n> \n> 0003 is my previous trick to inject an OOM failure at replay. Reusing\n> the previous script, this would be enough to prevent an early redo\n> creating a loss of data.\n> \n> Note that we have a few other things going in the tree. As one\n> example, pg_walinspect would consider an OOM as the end of WAL. Not\n> critical, still slightly incorrect as the end of WAL may not have been\n> reached yet so it can report some incorrect information depending on\n> what the WAL reader faces. 
This could be improved with the additions\n> of 0001.\n> \n> Thoughts or comments?\n\nI like the overall direction. Though, I'm considering enclosing the\nerrormsg and errorcode in a struct.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Aug 2023 17:44:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, Aug 08, 2023 at 05:44:03PM +0900, Kyotaro Horiguchi wrote:\n> I like the overall direction. Though, I'm considering enclosing the\n> errormsg and errorcode in a struct.\n\nYes, this suggestion makes sense as it simplifies all the WAL routines\nthat need to report back a complete error state, and there are four of\nthem now:\nXLogPrefetcherReadRecord()\nXLogReadRecord()\nXLogNextRecord()\nDecodeXLogRecord()\n\nI have spent more time on 0001, polishing it and fixing a few bugs\nthat I have found while reviewing the whole. Most of them were\nrelated to mistakes in resetting the error state when expected. I\nhave also expanded DecodeXLogRecord() to use an error structure\ninstead of only an errmsg, giving more consistency. The error state\nnow relies on two structures:\n+typedef enum XLogReaderErrorCode\n+{\n+ XLOG_READER_NONE = 0,\n+ XLOG_READER_OOM, /* out-of-memory */\n+ XLOG_READER_INVALID_DATA, /* record data */\n+} XLogReaderErrorCode;\n+typedef struct XLogReaderError\n+{\n+ /* Buffer to hold error message */\n+ char *message;\n+ bool message_deferred;\n+ /* Error code when filling *message */\n+ XLogReaderErrorCode code;\n+} XLogReaderError;\n\nI'm kind of happy with this layer, now.\n\nI have also spent some time on finding a more elegant solution for the\nWAL replay, relying on the new facility from 0001. 
And it happens\nthat it is easy enough to loop if facing an out-of-memory failure when\nreading a record when we are in crash recovery, as the state is\nactually close to what a standby does. The trick is that we should\nnot change the state and avoid tracking a continuation record. This\nis done in 0002, making replay more robust. With the addition of the\nerror injection tweak in 0003, I am able to finish recovery while the\nstartup process loops if under memory pressure. As mentioned\npreviously, there are more code paths to consider, but that's a start\nto fix the data loss problems.\n\nComments are welcome.\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 15:03:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Wed, 9 Aug 2023 15:03:21 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> I have spent more time on 0001, polishing it and fixing a few bugs\n> that I have found while reviewing the whole. Most of them were\n> related to mistakes in resetting the error state when expected. I\n> have also expanded DecodeXLogRecord() to use an error structure\n> instead of only an errmsg, giving more consistency. The error state\n> now relies on two structures:\n> +typedef enum XLogReaderErrorCode\n> +{\n> + XLOG_READER_NONE = 0,\n> + XLOG_READER_OOM, /* out-of-memory */\n> + XLOG_READER_INVALID_DATA, /* record data */\n> +} XLogReaderErrorCode;\n> +typedef struct XLogReaderError\n> +{\n> + /* Buffer to hold error message */\n> + char *message;\n> + bool message_deferred;\n> + /* Error code when filling *message */\n> + XLogReaderErrorCode code;\n> +} XLogReaderError;\n> \n> I'm kind of happy with this layer, now.\n\nI'm not certain if message_deferred is a property of the error\nstruct. Callers don't seem to need that information.\n\nThe name \"XLOG_RADER_NONE\" seems too generic. 
XLOG_READER_NOERROR will\nbe clearer.\n\n> I have also spent some time on finding a more elegant solution for the\n> WAL replay, relying on the new facility from 0001.  And it happens\n> that it is easy enough to loop if facing an out-of-memory failure when\n> reading a record when we are in crash recovery, as the state is\n> actually close to what a standby does.  The trick is that we should\n> not change the state and avoid tracking a continuation record.  This\n> is done in 0002, making replay more robust.  With the addition of the\n\n0002 shifts the behavior for the OOM case from ending recovery to\nretrying at the same record. If the last record is really corrupted,\nthe server won't be able to finish recovery. I doubt we are good with\nthis behavior change.\n\n> error injection tweak in 0003, I am able to finish recovery while the\n> startup process loops if under memory pressure.  As mentioned\n> previously, there are more code paths to consider, but that's a start\n> to fix the data loss problems.\n\n(The file name \"xlogreader_oom\" is a bit trickier to type than \"hoge\"\n or \"foo\"X( )\n\n> Comments are welcome.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Aug 2023 16:13:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Wed, Aug 09, 2023 at 04:13:53PM +0900, Kyotaro Horiguchi wrote:\n> I'm not certain if message_deferred is a property of the error\n> struct. Callers don't seem to need that information.\n\nTrue enough, will remove.\n\n> The name \"XLOG_READER_NONE\" seems too generic. XLOG_READER_NOERROR will\n> be clearer.\n\nOr perhaps just XLOG_READER_NO_ERROR?\n\n> 0002 shifts the behavior for the OOM case from ending recovery to\n> retrying at the same record. If the last record is really corrupted,\n> the server won't be able to finish recovery. 
I doubt we are good with\n> this behavior change.\n\nYou mean on an incorrect xl_tot_len?  Yes, that could be possible.\nAnother possibility would be a retry logic with a hardcoded number of\nattempts and a delay between each.  Once the infrastructure is in\nplace, this still deserves more discussion but we can be flexible.\nThe immediate FATAL is a choice.\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 16:35:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Wed, 9 Aug 2023 16:35:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Or perhaps just XLOG_READER_NO_ERROR?\n\nLooks fine.\n\n> > 0002 shifts the behavior for the OOM case from ending recovery to\n> > retrying at the same record. If the last record is really corrupted,\n> > the server won't be able to finish recovery. I doubt we are good with\n> > this behavior change.\n> \n> You mean on an incorrect xl_tot_len?  Yes, that could be possible.\n> Another possibility would be a retry logic with a hardcoded number of\n> attempts and a delay between each.  Once the infrastructure is in\n> place, this still deserves more discussion but we can be flexible.\n> The immediate FATAL is a choice.\n\nWhile it's a kind of bug in total, we encountered a case where an\nexcessively large xl_tot_len actually came from a corrupted\nrecord. [1]\n\nI'm glad to see this infrastructure coming in, and I'm on board with\nretrying due to an OOM. 
However, I think we really need official steps\nto wrap up recovery when there is a truly broken, oversized\nxl_tot_len.\n\n[1] https://www.postgresql.org/message-id/17928-aa92416a70ff44a2@postgresql.org\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Aug 2023 17:00:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Wed, Aug 09, 2023 at 05:00:49PM +0900, Kyotaro Horiguchi wrote:\n> Looks fine.\n\nOkay, I've updated the patch in consequence.  I'll look at 0001 again\nat the beginning of next week.\n\n> While it's a kind of bug in total, we encountered a case where an\n> excessively large xl_tot_len actually came from a corrupted\n> record. [1]\n\nRight, I remember this one.  I think that Thomas was pretty much right\nthat this could be caused because of a lack of zeroing in the WAL\npages.\n\n> I'm glad to see this infrastructure coming in, and I'm on board with\n> retrying due to an OOM. However, I think we really need official steps\n> to wrap up recovery when there is a truly broken, oversized\n> xl_tot_len.\n\nThere are a few options on the table, only doable once the WAL reader\nprovides the error state to the startup process:\n1) Retry a few times and FATAL.\n2) Just FATAL immediately and don't wait.\n3) Retry and hope for the best that the host calms down.\nI have not seen this issue being much of an issue in the field, so\nperhaps option 2 with the structure of 0002 and a FATAL when we catch\nXLOG_READER_OOM in the switch would be enough.  At least that's enough\nfor the cases we've seen.  I'll think a bit more about it, as well.\n\nYeah, agreed. 
That's orthogonal to the issue reported by Ethan,\nunfortunately, where he was able to trigger the issue of this thread\nby manipulating the sizing of a host after producing a record larger\nthan what the host could afford after the resizing :/\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 17:44:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Wed, 9 Aug 2023 17:44:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > While it's a kind of bug in total, we encountered a case where an\n> > excessively large xl_tot_len actually came from a corrupted\n> > record. [1]\n> \n> Right, I remember this one.  I think that Thomas was pretty much right\n> that this could be caused because of a lack of zeroing in the WAL\n> pages.\n\nWe have treated every kind of broken data as end-of-recovery, like\nincorrect rm_id or prev link including excessively large record length\ndue to corruption. This patch is going to change the behavior only for\nthe last one. If you think there can't be non-zero broken data, we\nshould inhibit proceeding recovery after all non-zero incorrect\ndata. This seems to be a quite big change in our recovery policy.\n\n> There are a few options on the table, only doable once the WAL reader\n> provides the error state to the startup process:\n> 1) Retry a few times and FATAL.\n> 2) Just FATAL immediately and don't wait.\n> 3) Retry and hope for the best that the host calms down.\n\n4) Wrap up recovery then continue to normal operation.\n\nThis is the traditional behavior for corrupt WAL data.\n\n> I have not seen this issue being much of an issue in the field, so\n> perhaps option 2 with the structure of 0002 and a FATAL when we catch\n> XLOG_READER_OOM in the switch would be enough.  At least that's enough\n> for the cases we've seen.  I'll think a bit more about it, as well.\n> \n> Yeah, agreed. 
That's orthogonal to the issue reported by Ethan,\n> unfortunately, where he was able to trigger the issue of this thread\n> by manipulating the sizing of a host after producing a record larger\n> than what the host could afford after the resizing :/\n\nI'm not entirely certain, but if you were to ask me which is more\nprobable during recovery - encountering a correct record that's too\nlengthy for the server to buffer or stumbling upon a corrupt byte\nsequence - I'd bet on the latter.\n\nI'm not sure how often users encounter corrupt WAL data, but I believe\nthey should have the option to terminate recovery and then switch to\nnormal operation.\n\nWhat if we introduced an option to increase the timeline whenever\nrecovery hits a data error? If that option is disabled, the server stops\nwhen recovery detects incorrect data, except in the case of an\nOOM. An OOM causes a record retry.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Aug 2023 10:00:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Thu, 10 Aug 2023 10:00:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 9 Aug 2023 17:44:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > > While it's a kind of bug in total, we encountered a case where an\n> > > excessively large xl_tot_len actually came from a corrupted\n> > > record. [1]\n> > \n> > Right, I remember this one.  I think that Thomas was pretty much right\n> > that this could be caused because of a lack of zeroing in the WAL\n> > pages.\n> \n> We have treated every kind of broken data as end-of-recovery, like\n> incorrect rm_id or prev link including excessively large record length\n> due to corruption. This patch is going to change the behavior only for\n> the last one. 
If you think there can't be non-zero broken data, we\n> should inhibit proceeding recovery after all non-zero incorrect\n> data. This seems to be a quite big change in our recovery policy.\n> \n> > There are a few options on the table, only doable once the WAL reader\n> > provides the error state to the startup process:\n> > 1) Retry a few times and FATAL.\n> > 2) Just FATAL immediately and don't wait.\n> > 3) Retry and hope for the best that the host calms down.\n> \n> 4) Wrap up recovery then continue to normal operation.\n> \n> This is the traditional behavior for corrupt WAL data.\n> \n> > I have not seen this issue being much of an issue in the field, so\n> > perhaps option 2 with the structure of 0002 and a FATAL when we catch\n> > XLOG_READER_OOM in the switch would be enough.  At least that's enough\n> > for the cases we've seen.  I'll think a bit more about it, as well.\n> > \n> > Yeah, agreed.  That's orthogonal to the issue reported by Ethan,\n> > unfortunately, where he was able to trigger the issue of this thread\n> > by manipulating the sizing of a host after producing a record larger\n> > than what the host could afford after the resizing :/\n> \n> I'm not entirely certain, but if you were to ask me which is more\n> probable during recovery - encountering a correct record that's too\n> lengthy for the server to buffer or stumbling upon a corrupt byte\n> sequence - I'd bet on the latter.\n\n... of course this refers to crash recovery. For replication, we\nshould keep retrying the current record until the operator commands\npromotion.\n\n> I'm not sure how often users encounter corrupt WAL data, but I believe\n> they should have the option to terminate recovery and then switch to\n> normal operation.\n> \n> What if we introduced an option to increase the timeline whenever\n> recovery hits a data error? If that option is disabled, the server stops\n> when recovery detects incorrect data, except in the case of an\n> OOM. 
An OOM causes a record retry.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Aug 2023 10:15:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Thu, Aug 10, 2023 at 10:15:40AM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 10 Aug 2023 10:00:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> At Wed, 9 Aug 2023 17:44:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>>>> While it's a kind of bug in total, we encountered a case where an\n>>>> excessively large xl_tot_len actually came from a corrupted\n>>>> record. [1]\n>>> \n>>> Right, I remember this one.  I think that Thomas was pretty much right\n>>> that this could be caused because of a lack of zeroing in the WAL\n>>> pages.\n>> \n>> We have treated every kind of broken data as end-of-recovery, like\n>> incorrect rm_id or prev link including excessively large record length\n>> due to corruption. This patch is going to change the behavior only for\n>> the last one. If you think there can't be non-zero broken data, we\n>> should inhibit proceeding recovery after all non-zero incorrect\n>> data. This seems to be a quite big change in our recovery policy.\n\nWell, as per the report that led to this thread.  We may lose data and\nfinish with corrupted pages. 
I was planning to reply to the other\nthread [1] and the patch of Noah anyway, because we have to fix the\ndetection of OOM vs corrupted records in the allocation path anyway.\nThe infra introduced by 0001 and something like 0002 that allows the\nstartup process to take a different path depending on the type of\nerror are still needed to avoid a too early end-of-recovery, though.\n\n>>> There are a few options on the table, only doable once the WAL reader\n>>> provides the error state to the startup process:\n>>> 1) Retry a few times and FATAL.\n>>> 2) Just FATAL immediately and don't wait.\n>>> 3) Retry and hope for the best that the host calms down.\n>> \n>> 4) Wrap up recovery then continue to normal operation.\n>> \n>> This is the traditional behavior for corrupt WAL data.\n\nYeah, we've been doing a pretty bad job in classifying the errors that\ncan happen during WAL replay in crash recovery, because we assume that\nall the WAL still in pg_wal/ is correct.  That's not an easy problem,\nbecause the record CRC works for all the contents of the record, and\nwe look at the record header before that.  Another idea may be to have\nan extra CRC only for the header itself, that can be used in isolation\nas one of the checks in XLogReaderValidatePageHeader().\n\n>>> I have not seen this issue being much of an issue in the field, so\n>>> perhaps option 2 with the structure of 0002 and a FATAL when we catch\n>>> XLOG_READER_OOM in the switch would be enough.  At least that's enough\n>>> for the cases we've seen.  I'll think a bit more about it, as well.\n>>> \n>>> Yeah, agreed. 
That's orthogonal to the issue reported by Ethan,\n>>> unfortunately, where he was able to trigger the issue of this thread\n>>> by manipulating the sizing of a host after producing a record larger\n>>> than what the host could afford after the resizing :/\n>> \n>> I'm not entirely certain, but if you were to ask me which is more\n>> probable during recovery - encountering a correct record that's too\n>> lengthy for the server to buffer or stumbling upon a corrupt byte\n>> sequence - I'd bet on the latter.\n\nI don't really believe in chance when it comes to computer science;\nfacts and a correct detection of such facts are better :)\n\n> ... of course this refers to crash recovery. For replication, we\n> should keep retrying the current record until the operator commands\n> promotion.\n\nAre you referring to a retry if there is a standby.signal?  I am a\nbit confused by this sentence, because we could do a crash recovery,\nthen switch to archive recovery.  So, I guess that you mean that on\nOOM we should retry to retrieve WAL from the local pg_wal/ even in the\ncase where we are in the crash recovery phase, *before* switching to\narchive recovery and a different source, right?  I think that this\nargument can go two ways, because it could be more helpful for some to\nsee a FATAL when we are still in crash recovery, even if there is a\nstandby.signal.  It does not seem to me that we have a clear\ndefinition about what to do in which case, either.  Now we just fail\nand hope for the best when doing crash recovery.\n\n>> I'm not sure how often users encounter corrupt WAL data, but I believe\n>> they should have the option to terminate recovery and then switch to\n>> normal operation.\n>> \n>> What if we introduced an option to increase the timeline whenever\n>> recovery hits a data error? If that option is disabled, the server stops\n>> when recovery detects incorrect data, except in the case of an\n>> OOM. 
An OOM causes a record retry.\n\nI guess that it depends on how much responsiveness one may want.\nForcing a failure on OOM is at least something that users would be\nimmediately able to act on when we don't run a standby but just\nrecover from a crash, while a standby would do what it is designed to\ndo, aka continue to replay what it can see.  One issue with the\nwait-and-continue is that a standby may loop continuously on OOM,\nwhich could also be bad if there's a replication slot retaining WAL on\nthe primary.  Perhaps that's just OK to keep doing that for a\nstandby.  At least this makes the discussion easier for the sake of\nthis thread: just consider the case of crash recovery when we don't\nhave a standby.\n--\nMichael", "msg_date": "Thu, 10 Aug 2023 13:33:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "At Thu, 10 Aug 2023 13:33:48 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Aug 10, 2023 at 10:15:40AM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 10 Aug 2023 10:00:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> >> We have treated every kind of broken data as end-of-recovery, like\n> >> incorrect rm_id or prev link including excessively large record length\n> >> due to corruption. This patch is going to change the behavior only for\n> >> the last one. If you think there can't be non-zero broken data, we\n> >> should inhibit proceeding recovery after all non-zero incorrect\n> >> data. This seems to be a quite big change in our recovery policy.\n> \n> Well, as per the report that led to this thread.  We may lose data and\n> finish with corrupted pages. 
I was planning to reply to the other\n> thread [1] and the patch of Noah anyway, because we have to fix the\n> detection of OOM vs corrupted records in the allocation path anyway.\n\nDoes this mean we will address the distinction between an OOM and a\ncorrupt total record length later on? If that's the case, should we\nmodify that behavior right now?\n\n> The infra introduced by 0001 and something like 0002 that allows the\n> startup process to take a different path depending on the type of\n> error are still needed to avoid a too early end-of-recovery, though.\n\nAgreed.\n\n> >> 4) Wrap up recovery then continue to normal operation.\n> >> \n> >> This is the traditional behavior for corrupt WAL data.\n> \n> Yeah, we've been doing a pretty bad job in classifying the errors that\n> can happen during WAL replay in crash recovery, because we assume that\n> all the WAL still in pg_wal/ is correct.  That's not an easy problem,\n\nI'm not quite sure what \"correct\" means here. I believe xlogreader\nruns various checks since the data may be incorrect. Given that can\nbreak for various reasons, during crash recovery, we continue as long\nas the incoming WAL record remains consistently valid. The problem raised\nhere is that we can't distinctly identify an OOM from a corrupted total\nrecord length field. One reason is we check the data after a part is\nloaded, but we can't load all the bytes from the record into memory in\nsuch cases.\n\n> because the record CRC works for all the contents of the record, and\n> we look at the record header before that.  Another idea may be to have\n> an extra CRC only for the header itself, that can be used in isolation\n> as one of the checks in XLogReaderValidatePageHeader().\n\nSounds reasonable. 
By using CRC to protect the header part and\nallocating a fixed-length buffer for it, I believe we're adopting a\nstandard approach and can identify OOM and other kind of header errors\nwith a good degree of certainty.\n\n> >> I'm not entirely certain, but if you were to ask me which is more\n> >> probable during recovery - encountering a correct record that's too\n> >> lengthy for the server to buffer or stumbling upon a corrupt byte\n> >> sequence - I'd bet on the latter.\n> \n> I don't really believe in chance when it comes to computer science,\n> facts and a correct detection of such facts are better :)\n\nEven now, it seems like we're balancing the risk of potential data\nloss against the potential inability to start the server. I meant\nthat.. if I had to do choose, I'd lean slightly towards prioritizing\nsaving the latter, in other words, keeping the current behavior.\n\n> > ... of course this refers to crash recovery. For replication, we\n> > should keep retrying the current record until the operator commands\n> > promotion.\n> \n> Are you referring about a retry if there is a standby.signal? I am a\n> bit confused by this sentence, because we could do a crash recovery,\n> then switch to archive recovery. So, I guess that you mean that on\n> OOM we should retry to retrieve WAL from the local pg_wal/ even in the\n> case where we are in the crash recovery phase, *before* switching to\n\nApologies for the confusion. What I was thinking is that an OOM is\nmore likely to occur in replication downstreams than during server\nstartup. I also felt for the latter case that such a challenging\nenvironment probably wouldn't let the server enter stable normal\noperation.\n\n> archive recovery and a different source, right? I think that this\n> argument can go two ways, because it could be more helpful for some to\n> see a FATAL when we are still in crash recovery, even if there is a\n> standby.signal. 
It does not seem to me that we have a clear\n> definition about what to do in which case, either.  Now we just fail\n> and hope for the best when doing crash recovery.\n\nAgreed.\n\n> >> I'm not sure how often users encounter corrupt WAL data, but I believe\n> >> they should have the option to terminate recovery and then switch to\n> >> normal operation.\n> >> \n> >> What if we introduced an option to increase the timeline whenever\n> >> recovery hits a data error? If that option is disabled, the server stops\n> >> when recovery detects incorrect data, except in the case of an\n> >> OOM. An OOM causes a record retry.\n> \n> I guess that it depends on how much responsiveness one may want.\n> Forcing a failure on OOM is at least something that users would be\n> immediately able to act on when we don't run a standby but just\n> recover from a crash, while a standby would do what it is designed to\n> do, aka continue to replay what it can see.  One issue with the\n> wait-and-continue is that a standby may loop continuously on OOM,\n> which could also be bad if there's a replication slot retaining WAL on\n> the primary.  Perhaps that's just OK to keep doing that for a\n> standby.  At least this makes the discussion easier for the sake of\n> this thread: just consider the case of crash recovery when we don't\n> have a standby.\n\nYeah, I'm with you on focusing on crash recovery cases; that's what I\nintended. 
Sorry for any confusion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:47:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Thu, Aug 10, 2023 at 02:47:51PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 10 Aug 2023 13:33:48 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> On Thu, Aug 10, 2023 at 10:15:40AM +0900, Kyotaro Horiguchi wrote:\n>>> At Thu, 10 Aug 2023 10:00:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>>>> We have treated every kind of broken data as end-of-recovery, like\n>>>> incorrect rm_id or prev link including excessively large record length\n>>>> due to corruption. This patch is going to change the behavior only for\n>>>> the last one. If you think there can't be non-zero broken data, we\n>>>> should inhibit proceeding recovery after all non-zero incorrect\n>>>> data. This seems to be a quite big change in our recovery policy.\n\nMy apologies if I sounded unclear here.  It seems to me that we should\nwrap the patch on [1] first, and get it backpatched.  At least that\nmakes for fewer conflicts when 0001 gets merged for HEAD when we are\nable to set a proper error code. 
(Was looking at it, actually.)\n\n>>>> 4) Wrap up recovery then continue to normal operation.\n>>>> \n>>>> This is the traditional behavior for corrupt WAL data.\n>> \n>> Yeah, we've been doing a pretty bad job in classifying the errors that\n>> can happen during WAL replay in crash recovery, because we assume that\n>> all the WAL still in pg_wal/ is correct.  That's not an easy problem,\n> \n> I'm not quite sure what \"correct\" means here. I believe xlogreader\n> runs various checks since the data may be incorrect. Given that can\n> break for various reasons, during crash recovery, we continue as long\n> as the incoming WAL record remains consistently valid. The problem raised\n> here is that we can't distinctly identify an OOM from a corrupted total\n> record length field. One reason is we check the data after a part is\n> loaded, but we can't load all the bytes from the record into memory in\n> such cases.\n\nYep.  Correct means that we end recovery in a consistent state, not\nearlier than we should.\n\n>> because the record CRC works for all the contents of the record, and\n>> we look at the record header before that.  Another idea may be to have\n>> an extra CRC only for the header itself, that can be used in isolation\n>> as one of the checks in XLogReaderValidatePageHeader().\n> \n> Sounds reasonable. By using CRC to protect the header part and\n> allocating a fixed-length buffer for it, I believe we're adopting a\n> standard approach and can identify OOM and other kinds of header errors\n> with a good degree of certainty.\n\nNot something that I'd like to cover in this patch set, though.. 
This\nis a problem on its own.\n\n>>>> I'm not entirely certain, but if you were to ask me which is more\n>>>> probable during recovery - encountering a correct record that's too\n>>>> lengthy for the server to buffer or stumbling upon a corrupt byte\n>>>> sequence - I'd bet on the latter.\n>> \n>> I don't really believe in chance when it comes to computer science;\n>> facts and a correct detection of such facts are better :)\n> \n> Even now, it seems like we're balancing the risk of potential data\n> loss against the potential inability to start the server. I meant\n> that, if I had to choose, I'd lean slightly towards prioritizing\n> saving the latter, in other words, keeping the current behavior.\n\nOn OOM, this means data loss and silent corruption.  A failure has the\nmerit of telling someone that something is wrong, at least, and that\nthey'd better look at it rather than hope for the best.\n\n>>> ... of course this refers to crash recovery. For replication, we\n>>> should keep retrying the current record until the operator commands\n>>> promotion.\n>> \n>> Are you referring to a retry if there is a standby.signal?  I am a\n>> bit confused by this sentence, because we could do a crash recovery,\n>> then switch to archive recovery.  So, I guess that you mean that on\n>> OOM we should retry to retrieve WAL from the local pg_wal/ even in the\n>> case where we are in the crash recovery phase, *before* switching to\n> \n> Apologies for the confusion. What I was thinking is that an OOM is\n> more likely to occur in replication downstreams than during server\n> startup. 
I also felt for the latter case that such a challenging\n> environment probably wouldn't let the server enter stable normal\n> operation.\n\nIt depends on what the user does with the host running the cluster.\nBoth could be impacted.\n\n>>>> I'm not sure how often users encounter corrupt WAL data, but I believe\n>>>> they should have the option to terminate recovery and then switch to\n>>>> normal operation.\n>>>> \n>>>> What if we introduced an option to increase the timeline whenever\n>>>> recovery hits a data error? If that option is disabled, the server stops\n>>>> when recovery detects incorrect data, except in the case of an\n>>>> OOM. An OOM causes a record retry.\n>> \n>> I guess that it depends on how much responsiveness one may want.\n>> Forcing a failure on OOM is at least something that users would be\n>> immediately able to act on when we don't run a standby but just\n>> recover from a crash, while a standby would do what it is designed to\n>> do, aka continue to replay what it can see.  One issue with the\n>> wait-and-continue is that a standby may loop continuously on OOM,\n>> which could also be bad if there's a replication slot retaining WAL on\n>> the primary.  Perhaps that's just OK to keep doing that for a\n>> standby.  At least this makes the discussion easier for the sake of\n>> this thread: just consider the case of crash recovery when we don't\n>> have a standby.\n> \n> Yeah, I'm with you on focusing on crash recovery cases; that's what I\n> intended. Sorry for any confusion.\n\nOkay, so we're on the same page here, keeping standbys as they are and\ndoing something for the crash recovery case.\n--\nMichael", "msg_date": "Thu, 10 Aug 2023 14:59:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Thu, Aug 10, 2023 at 02:59:07PM +0900, Michael Paquier wrote:\n> My apologies if I sounded unclear here. 
It seems to me that we should\n> wrap the patch on [1] first, and get it backpatched.  At least that\n> makes for fewer conflicts when 0001 gets merged for HEAD when we are\n> able to set a proper error code.  (Was looking at it, actually.)\n\nNow that Thomas Munro has addressed the original problem to be able to\ncorrectly trust xl_tot_len with bae868caf22, I am coming back to this\nthread.\n\nFirst, attached is a rebased set:\n- 0001 to introduce the new error infra for xlogreader.c with an error\ncode, so that callers can tell the difference between an OOM and an\ninvalid record.\n- 0002 to tweak the startup process.  Once again, I've taken the\napproach to make the startup process behave like a standby on crash\nrecovery: each time that an OOM is found, we loop and retry.\n- 0003 to emulate an OOM failure, that can be used with the script\nattached to see that we don't stop recovery too early.\n\n>>> I guess that it depends on how much responsiveness one may want.\n>>> Forcing a failure on OOM is at least something that users would be\n>>> immediately able to act on when we don't run a standby but just\n>>> recover from a crash, while a standby would do what it is designed to\n>>> do, aka continue to replay what it can see.  One issue with the\n>>> wait-and-continue is that a standby may loop continuously on OOM,\n>>> which could also be bad if there's a replication slot retaining WAL on\n>>> the primary.  Perhaps that's just OK to keep doing that for a\n>>> standby.  At least this makes the discussion easier for the sake of\n>>> this thread: just consider the case of crash recovery when we don't\n>>> have a standby.\n>> \n>> Yeah, I'm with you on focusing on crash recovery cases; that's what I\n>> intended. 
Sorry for any confusion.\n> \n> Okay, so we're on the same page here, keeping standbys as they are and\n> doing something for the crash recovery case.\n\nFor the crash recovery case, one argument that stood out in my mind is\nthat causing a hard failure has the disadvantage of forcing users to\nredo WAL replay from the last redo position, which may be far away\neven if the checkpointer now runs during crash recovery.  What I am\nproposing on this thread has the merit of avoiding that.  Anyway, let's\ndiscuss more before settling this point for the crash recovery case.\n\nBy the way, anything that I am proposing here cannot be backpatched\nbecause of the infrastructure changes required in xlogreader.c, so I am\ngoing to create a second thread with something that could be\nbackpatched (yeah, likely FATALs on OOM to stop recovery from doing\nsomething bad).\n--\nMichael", "msg_date": "Tue, 26 Sep 2023 15:48:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, Sep 26, 2023 at 03:48:07PM +0900, Michael Paquier wrote:\n> By the way, anything that I am proposing here cannot be backpatched\n> because of the infrastructure changes required in xlogreader.c, so I am\n> going to create a second thread with something that could be\n> backpatched (yeah, likely FATALs on OOM to stop recovery from doing\n> something bad).\n\nPatch set is rebased as a consequence of 6b18b3fe2c2f, which switched the\nOOMs to fail harder in xlogreader.c.  The patch set has nothing\nnew, except that 0001 is now a revert of 6b18b3fe2c2f to switch back\nxlogreader.c to use soft errors on OOMs.\n\nIf there's no interest in this patch set after the next CF, I'm OK to\ndrop it. 
The state of HEAD is at least correct in the OOM cases now.\n--\nMichael", "msg_date": "Tue, 3 Oct 2023 16:20:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" }, { "msg_contents": "On Tue, Oct 03, 2023 at 04:20:45PM +0900, Michael Paquier wrote:\n> If there's no interest in this patch set after the next CF, I'm OK to\n> drop it. The state of HEAD is at least correct in the OOM cases now.\n\nI have been thinking about this patch for the last few days, and in\nlight of 6b18b3fe2c2f I am going to withdraw it for now as this makes\nthe error layers of the xlogreader more complicated.\n--\nMichael", "msg_date": "Tue, 24 Oct 2023 13:56:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of OOM in WAL replay leading to data loss" } ]
[ { "msg_contents": "Hi:\n\nCurrently if we want to extract a numeric field in jsonb, we need to use\nthe following expression: cast (a->>'a' as numeric). It will turn a numeric\nto text first and then turn the text to numeric again. See\njsonb_object_field_text and JsonbValueAsText. However the binary format\nof numeric in JSONB is compatible with the numeric in SQL, so I think we\ncan have an operator to extract the numeric directly. If the value of a\ngiven\nfield is not a numeric data type, an error will be raised, this can be\ndocumented.\n\nIn this patch, I added a new operator for this purpose, here is the\nperformance gain because of this.\n\ncreate table tb (a jsonb);\ninsert into tb select '{\"a\": 1}'::jsonb from generate_series(1, 100000)i;\n\ncurrent method:\nselect count(*) from tb where cast (a->>'a' as numeric) = 2;\n167ms.\n\nnew method:\nselect count(*) from tb where a@->'a' = 2;\n65ms.\n\nIs this the right way to go? Testcase, document and catalog version are\nupdated.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 1 Aug 2023 12:38:57 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Tue, 1 Aug 2023 at 06:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi:\n>\n> Currently if we want to extract a numeric field in jsonb, we need to use\n> the following expression: cast (a->>'a' as numeric). 
It will turn a numeric\n> to text first and then turn the text to numeric again.\n\nWhy wouldn't you use cast(a->'a' as numeric), or ((a->'a')::numeric)?\nWe have a cast from jsonb to numeric (jsonb_numeric in jsonb.c) that\ndoes not require this additional (de)serialization through text.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 1 Aug 2023 13:03:29 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Tue, Aug 1, 2023 at 7:03 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Tue, 1 Aug 2023 at 06:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > Hi:\n> >\n> > Currently if we want to extract a numeric field in jsonb, we need to use\n> > the following expression: cast (a->>'a' as numeric). It will turn a\n> numeric\n> > to text first and then turn the text to numeric again.\n>\n> Why wouldn't you use cast(a->'a' as numeric), or ((a->'a')::numeric)?\n>\n\nThanks for this information! I didn't realize we have this function\nalready at [1].\n\nhttps://www.postgresql.org/docs/15/functions-json.html\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Wed, 2 Aug 2023 07:33:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Matthias:\n\nOn Wed, Aug 2, 2023 at 7:33 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Tue, Aug 1, 2023 at 7:03 PM Matthias van de Meent <\n> boekewurm+postgres@gmail.com> wrote:\n>\n>> On Tue, 1 Aug 2023 at 06:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >\n>> > Hi:\n>> >\n>> > Currently if we want to extract a numeric field in jsonb, we need to use\n>> > the following expression: cast (a->>'a' as numeric). It will turn a\n>> numeric\n>> > to text first and then turn the text to numeric again.\n>>\n>> Why wouldn't you use cast(a->'a' as numeric), or ((a->'a')::numeric)?\n>>\n>\n> Thanks for this information! I didn't realize we have this function\n> already at [1].\n>\n> https://www.postgresql.org/docs/15/functions-json.html\n>\n\nHi:\n\nI just found ((a->'a')::numeric) is not as effective as I expected.\n\nFirst in the above expression we used jsonb_object_field which\nreturns a jsonb (see JsonbValueToJsonb), and then we convert jsonb\nto jsonbValue in jsonb_numeric (see JsonbExtractScalar). This\nlooks like a wastage.\n\nSecondly, because of the same reason above, we use PG_GETARG_JSONB_P(0),\nwhich may detoast a value so we need to free it with PG_FREE_IF_COPY.\nthen this looks like another potential wastage.\n\nThirdly, I am not sure we need to do the NumericCopy automatically\nin jsonb_numeric. an option in my mind is maybe we can leave this\nto the caller? 
At least in the normal case (a->'a')::numeric, we don't\nneed this copy IIUC.\n\n/*\n * v.val.numeric points into jsonb body, so we need to make a copy to\n * return\n */\nretValue = DatumGetNumericCopy(NumericGetDatum(v.val.numeric));\n\nAt last this method needs 1 extra FuncExpr than my method, this would\ncost some expression execution effort. I'm not saying we need to avoid\nexpression execution generally, but extracting numeric fields from jsonb\nlooks a reasonable case. As a comparison, cast to other data types like\nint2/int4 may be not needed since they are not binary compatible.\n\n\nHere is the performance comparison (with -O3, my previous post is -O0).\n\nselect 1 from tb where (a->'a')::numeric = 2; 31ms.\nselect 1 from tb where (a@->'a') = 2; 15ms\n\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Wed, 2 Aug 2023 09:05:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Tue, Aug 1, 2023 at 12:39 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi:\n>\n> Currently if we want to extract a numeric field in jsonb, we need to use\n> the following expression: cast (a->>'a' as numeric). It will turn a numeric\n> to text first and then turn the text to numeric again. See\n> jsonb_object_field_text and JsonbValueAsText. However the binary format\n> of numeric in JSONB is compatible with the numeric in SQL, so I think we\n> can have an operator to extract the numeric directly. 
If the value of a given\n> field is not a numeric data type, an error will be raised, this can be\n> documented.\n>\n> In this patch, I added a new operator for this purpose, here is the\n> performance gain because of this.\n>\n> create table tb (a jsonb);\n> insert into tb select '{\"a\": 1}'::jsonb from generate_series(1, 100000)i;\n>\n> current method:\n> select count(*) from tb where cast (a->>'a' as numeric) = 2;\n> 167ms.\n>\n> new method:\n> select count(*) from tb where a@->'a' = 2;\n> 65ms.\n>\n> Is this the right way to go? Testcase, document and catalog version are\n> updated.\n>\n>\n> --\n> Best Regards\n> Andy Fan\n\n\nreturn PointerGetDatum(v->val.numeric);\nshould be something like\nPG_RETURN_NUMERIC(v->val.numeric);\n?\n\n\n", "msg_date": "Wed, 2 Aug 2023 14:01:15 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Jian:\n\n\n> return PointerGetDatum(v->val.numeric);\n> should be something like\n> PG_RETURN_NUMERIC(v->val.numeric);\n> ?\n>\n\nThanks for this reminder, a new patch is attached. and commitfest\nentry is added as well[1]. For recording purposes, I compared the\nnew operator with all the existing operators.\n\nselect 1 from tb where (a->'a')::numeric = 2; 30.56ms\nselect 1 from tb where (a->>'a')::numeric = 2; 29.43ms\nselect 1 from tb where (a@->'a') = 2; 14.80ms\n\n[1] https://commitfest.postgresql.org/44/4476/\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 3 Aug 2023 08:50:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi\n\nčt 3. 8. 
2023 v 2:51 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n> Hi Jian:\n>\n>\n>> return PointerGetDatum(v->val.numeric);\n>> should be something like\n>> PG_RETURN_NUMERIC(v->val.numeric);\n>> ?\n>>\n>\n> Thanks for this reminder, a new patch is attached. and commitfest\n> entry is added as well[1]. For recording purposes, I compared the\n> new operator with all the existing operators.\n>\n> select 1 from tb where (a->'a')::numeric = 2; 30.56ms\n> select 1 from tb where (a->>'a')::numeric = 2; 29.43ms\n> select 1 from tb where (a@->'a') = 2; 14.80ms\n>\n> [1] https://commitfest.postgresql.org/44/4476/\n>\n>\nI don't like this solution because it is bloating operators and it is not\nextra readable. For completeness you should implement cast for date, int,\nboolean too. Next, the same problem is with XML or hstore type (probably\nwith any types that are containers).\n\nIt is strange so only casting is 2x slower. I don't like the idea so using\na special operator is 2x faster than common syntax for casting. It is a\nsignal, so there is a space for optimization. Black magic with special\noperators is not user friendly for relatively common problems.\n\nMaybe we can introduce some *internal operator* \"extract to type\", and in\nrewrite stage we can the pattern (x->'field')::type transform to OP(x,\n'field', typid)\n\nRegards\n\nPavel\n\n\n-- \n> Best Regards\n> Andy Fan\n>
", "msg_date": "Thu, 3 Aug 2023 06:59:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Pavel:\n\nThanks for the feedback.\n\nI don't like this solution because it is bloating operators and it is not\n> extra readable.\n>\n\nIf we support it with cast, could we say we are bloating CAST? 
It is true\nthat it is not extra readable, if so how about a->>'a' returning text?\nActually\nI can't guess any meaning of the existing jsonb operations without\ndocumentation.\n\n> For completeness you should implement cast for date, int, boolean too.\n> Next, the same problem is with XML or hstore type (probably with any types\n> that are containers).\n>\n\nI am not sure completeness is a golden rule we should obey every time;\nwe have some functions like int24le to avoid the unnecessary\ncast, but we just avoid casting for special types for performance\nreasons, not for all. At the same time, `int2/int4/int8` doesn't\nhave a binary-compatible type in jsonb, and the serialization\n/deserialization for boolean is pretty cheap.\n\nI didn't realize the datetime types are binary compatible with SQL,\nso maybe we can have some similar optimization as well.\n(It is a pity that timestamp(tz) are not binary, or else we may\njust need one operator).\n\n\n>\n> I don't like the idea so using a special operator is 2x faster than common\n> syntax for casting. It is a signal, so there is a space for optimization.\n> Black magic with special operators is not user friendly for relatively\n> common problems.\n>\n\nI don't think \"Black magic\" is a proper term here, since it is not much\ndifferent from ->> returning a text. If you argue text can be cast to\nmost-of-types, that would be a reason, but I doubt this difference\nshould amount to \"black magic\".\n\n\n>\n> Maybe we can introduce some *internal operator* \"extract to type\", and in\n> rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n> 'field', typid)\n>\n\nNot sure what the OP should be? If it is a function, what is the\nreturn value? 
It looks to me like it is hard to do in c language?\n\nAfter all, if we really care about the number of operators, I'm OK\nwith just letting users use the function directly, like\n\njsonb_field_as_numeric(jsonb, 'filedname')\njsonb_field_as_timestamp(jsonb, 'filedname');\njsonb_field_as_timestamptz(jsonb, 'filedname');\njsonb_field_as_date(jsonb, 'filedname');\n\nit can save an operator and solves the readability issue.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Thu, 3 Aug 2023 15:53:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi\n\nOn Thu, Aug 3, 2023 at 9:53, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi Pavel:\n>\n> Thanks for the feedback.\n>\n> I don't like this solution because it is bloating operators and it is not\n>> extra readable.\n>>\n>\n> If we support it with cast, could we say we are bloating CAST? 
It is true\n> that it is not extra readable, if so how about a->>'a' return text?\n> Actually\n> I can't guess any meaning of the existing jsonb operations without\n> documentation.\n>\n\nYes, it can bloat CAST, but for usage we already have the syntax, and\nthese casts are already cooked:\n\n(2023-08-03 11:04:51) postgres=# select castfunc::regprocedure from pg_cast\nwhere castsource = 'jsonb'::regtype;\n┌──────────────────┐\n│     castfunc     │\n╞══════════════════╡\n│ -                │\n│ bool(jsonb)      │\n│ \"numeric\"(jsonb) │\n│ int2(jsonb)      │\n│ int4(jsonb)      │\n│ int8(jsonb)      │\n│ float4(jsonb)    │\n│ float8(jsonb)    │\n└──────────────────┘\n(8 rows)\n\nThe operator ->> was a special case: the text type is special in Postgres\nas the most convertible type, and when you want to visualise or\ndisplay a value, you convert it to text.\n\nI can live with that one cast, but your proposal opens the doors to\nimplementing tens of similar operators, which I think is bad.\nUsing ::target_type is common syntax and doesn't require reading\ndocumentation.\n\nMoreover, I believe a lot of people use the more common syntax, and that\nsyntax should have good performance - for jsonb, (val->'op')::numeric\nworks, and there should be no performance penalty, because this\nsyntax will be used in 99% of cases.\n\nUsage of a cast is self-documented.\n\n\n> For completeness you should implement cast for date, int, boolean too.\n>> Next, the same problem is with XML or hstore type (probably with any types\n>> that are containers).\n>>\n>\n> I am not sure completeness is a gold rule we should obey anytime,\n> like we have some function like int24le to avoid the unnecessary\n> cast, but we just avoid casting for special types for performance\n> reason, but not for all. At the same time, `int2/int4/int8` doesn't\n> have a binary compatibility type in jsonb. 
and the serialization\n> /deserialization for boolean is pretty cheap.\n>\n> I didn't realize timetime types are binary compatible with SQL,\n> so maybe we can have some similar optimization as well.\n> (It is a pity that timestamp(tz) are not binary, or else we may\n> just need one operator).\n>\n>\n>>\n>> I don't like the idea so using a special operator is 2x faster than\n>> common syntax for casting. It is a signal, so there is a space for\n>> optimization. Black magic with special operators is not user friendly for\n>> relatively common problems.\n>>\n>\n> I don't think \"Black magic\" is a proper word here, since it is not much\n> different from ->> return a text. If you argue text can be cast to\n> most-of-types, that would be a reason, but I doubt this difference\n> should generate a \"black magic\".\n>\n\nI used the term black magic because nobody can find this operator without\nreading the documentation. It is used just for this special case, and the\nfunctionality is the same as using a cast (only with different performance).\n\nThe operator ->> is more widely used, but if we have some possibility to\nwork without it, then usage will be simpler for a lot of users,\neven more so if the target types can be inferred from context.\n\nIt would be nice to use something like `EXTRACT(YEAR FROM val->'field')` instead of\n`EXTRACT(YEAR FROM (val->>'field')::date)`\n\n\n>\n>>\n>> Maybe we can introduce some *internal operator* \"extract to type\", and in\n>> rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n>> 'field', typid)\n>>\n>\n> Not sure what the OP should be? If it is a function, what is the\n> return value? 
It looks to me like it is hard to do in c language?\n>\n\nIt should be internal structure - it can be similar like COALESCE or IS\noperator\n\n\n>\n> After all, if we really care about the number of operators, I'm OK\n> with just let users use the function directly, like\n>\n> jsonb_field_as_numeric(jsonb, 'filedname')\n> jsonb_field_as_timestamp(jsonb, 'filedname');\n> jsonb_field_as_timestamptz(jsonb, 'filedname');\n> jsonb_field_as_date(jsonb, 'filedname');\n>\n> it can save an operator and sloves the readable issue.\n>\n\nI don't like it too much, but it is better than introduction new operator\n\nWe already have the jsonb_extract_path and jsonb_extract_path_text\nfunction.\n\nI can imagine to usage \"anyelement\" type too. some like\n`jsonb_extract_path_type(jsonb, anyelement, variadic text[] )`\n\n\n\n\n\n\n> --\n> Best Regards\n> Andy Fan\n>
", "msg_date": "Thu, 3 Aug 2023 11:47:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Wed, 2 Aug 2023 at 03:05, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi Matthias:\n>\n> On Wed, Aug 2, 2023 at 7:33 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>\n>>\n>> On Tue, Aug 1, 2023 at 7:03 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>>>\n>>> On Tue, 1 Aug 2023 at 06:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>> >\n>>> > Hi:\n>>> >\n>>> > Currently if we want to extract a numeric field in jsonb, we need to use\n>>> > the following expression: cast (a->>'a' as numeric). It will turn a numeric\n>>> > to text first and then turn the text to numeric again.\n>>>\n>>> Why wouldn't you use cast(a->'a' as numeric), or ((a->'a')::numeric)?\n>>\n>>\n>> Thanks for this information! I didn't realize we have this function\n>> already at [1].\n>>\n>> https://www.postgresql.org/docs/15/functions-json.html\n>\n>\n> Hi:\n>\n> I just found ((a->'a')::numeric) is not as effective as I expected.\n>\n> First in the above expression we used jsonb_object_field which\n> returns a jsonb (see JsonbValueToJsonb), and then we convert jsonb\n> to jsonbValue in jsonb_numeric (see JsonbExtractScalar). This\n> looks like a wastage.\n\nYes, it's not great, but that's just how this works. We can't\npre-specialize all possible operations that one might want to do in\nPostgreSQL - that'd be absurdly expensive for binary and initial\ndatabase sizes.\n\n> Secondly, because of the same reason above, we use PG_GETARG_JSONB_P(0),\n> which may detoast a value so we need to free it with PG_FREE_IF_COPY.\n> then this looks like another potential wastage.\n\nIs it? 
Detoasting only happens if the argument was toasted, and I have\nserious doubts that the result of (a->'a') will be toasted in our\ncurrent system. Sure, we do need to allocate an intermediate result,\nbut that's in a temporary memory context that should be trivially\ncheap to free.\n\n> /*\n> * v.val.numeric points into jsonb body, so we need to make a copy to\n> * return\n> */\n> retValue = DatumGetNumericCopy(NumericGetDatum(v.val.numeric));\n>\n> At last this method needs 1 extra FuncExpr than my method, this would\n> cost some expression execution effort. I'm not saying we need to avoid\n> expression execution generally, but extracting numeric fields from jsonb\n> looks a reasonable case.\n\nBut we don't have special cases for the other jsonb types - the one\nthat is available (text) is lossy and doesn't work reliably without\nmaking sure the field we're accessing is actually a string, and not\nany other type of value.\n\n> As a comparison, cast to other data types like\n> int2/int4 may be not needed since they are not binary compatible.\n\nYet there are casts from jsonb to and back from int2, int4 and int8. I\ndon't see a very good reason to add this, for the same reasons\nmentioned by Pavel.\n\n*If* we were to add this operator, I would want this patch to also\ninclude a #-variant for text[]-based deep access (c.q. 
#> / #>>), and\nequivalent operators for the json type to keep the current access\noperator parity.\n\n> Here is the performance comparison (with -O3, my previous post is -O0).\n>\n> select 1 from tb where (a->'a')::numeric = 2; 31ms.\n> select 1 from tb where (a@->'a') = 2; 15ms\n\nWhat's tb here?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 3 Aug 2023 12:04:18 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\n\n>\n> Yes, it's not great, but that's just how this works. We can't\n> pre-specialize all possible operations that one might want to do in\n> PostgreSQL - that'd be absurdly expensive for binary and initial\n> database sizes.\n>\n\nAre any people saying we would pre-specialize all possible operators?\nI would say anything if adding operators will be expensive for binary and\ninitial database sizes. If so, how many per operator and how many\noperators would be in your expectation?\n\n\n> > Secondly, because of the same reason above, we use PG_GETARG_JSONB_P(0),\n> > which may detoast a value so we need to free it with PG_FREE_IF_COPY.\n> > then this looks like another potential wastage.\n>\n> Is it? Detoasting only happens if the argument was toasted, and I have\n> serious doubts that the result of (a->'a') will be toasted in our\n> current system. Sure, we do need to allocate an intermediate result,\n> but that's in a temporary memory context that should be trivially\n> cheap to free.\n\n\nIf you take care about my context, I put this as a second factor for the\ncurrent strategy. and it is the side effects of factor 1. 
FWIW, that cost\nis paid for every jsonb object, not something during the initial database.\n\n\n> > As a comparison, cast to other data types like\n> > int2/int4 may be not needed since they are not binary compatible.\n>\n> Yet there are casts from jsonb to and back from int2, int4 and int8. I\n> don't see a very good reason to add this, for the same reasons\n> mentioned by Pavel.\n>\n\nWho is insisting on adding such an operator in your opinion?\n\n\n> *If* we were to add this operator, I would want this patch to also\n> include a #-variant for text[]-based deep access (c.q. #> / #>>), and\n> equivalent operators for the json type to keep the current access\n> operator parity.\n>\n> > Here is the performance comparison (with -O3, my previous post is -O0).\n> >\n> > select 1 from tb where (a->'a')::numeric = 2; 31ms.\n> > select 1 from tb where (a@->'a') = 2; 15ms\n>\n> What's tb here?\n>\n\nThis is my first post. Copy it here again.\n\ncreate table tb (a jsonb);\ninsert into tb select '{\"a\": 1}'::jsonb from generate_series(1, 100000)i;\n\n-- \nBest Regards\nAndy Fan\n\nHi:  \n\nYes, it's not great, but that's just how this works. We can't\npre-specialize all possible operations that one might want to do in\nPostgreSQL - that'd be absurdly expensive for binary and initial\ndatabase sizes.Are any people saying we would  pre-specialize all possible operators?I would say anything if adding operators will be expensive for binary andinitial database sizes.  If so,  how many per operator and how manyoperators would be in your expectation?   \n\n> Secondly, because of the same reason above, we use PG_GETARG_JSONB_P(0),\n> which may detoast a value so we need to free it with PG_FREE_IF_COPY.\n> then this looks like another potential wastage.\n\nIs it? Detoasting only happens if the argument was toasted, and I have\nserious doubts that the result of (a->'a') will be toasted in our\ncurrent system. 
Sure, we do need to allocate an intermediate result,\nbut that's in a temporary memory context that should be trivially\ncheap to free.If you take care about my context, I put this as a second factor for thecurrent strategy.  and it is the side effects of factor 1.  FWIW,  that costis paid for every jsonb object, not something during the initial database. \n\n> As a comparison, cast to other data types like\n> int2/int4 may be not needed since they are not binary compatible.\n\nYet there are casts from jsonb to and back from int2, int4 and int8. I\ndon't see a very good reason to add this, for the same reasons\nmentioned by Pavel. Who is insisting on adding such an operator in your opinion?  \n\n*If* we were to add this operator, I would want this patch to also\ninclude a #-variant for text[]-based deep access (c.q. #> / #>>), and\nequivalent operators for the json type to keep the current access\noperator parity.\n\n> Here is the performance comparison (with -O3, my previous post is -O0).\n>\n> select 1 from tb where (a->'a')::numeric = 2;  31ms.\n> select 1 from tb where (a@->'a') = 2;  15ms\n\nWhat's tb here?This is my first post.  Copy it here again. 
create table tb (a jsonb);insert into tb select '{\"a\": 1}'::jsonb from generate_series(1, 100000)i;-- Best RegardsAndy Fan", "msg_date": "Thu, 3 Aug 2023 20:31:34 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-03 03:53, Andy Fan wrote:\n> I didn't realize timetime types are binary compatible with SQL,\n> so maybe we can have some similar optimization as well.\n> (It is a pity that timestamp(tz) are not binary, or else we may\n> just need one operator).\n\nNot to veer from the thread, but something about that paragraph\nhas been hard for me to parse/follow.\n\n>> Maybe we can introduce some *internal operator* \"extract to type\", and \n>> in\n>> rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n>> 'field', typid)\n> \n> Not sure what the OP should be? If it is a function, what is the\n> return value? It looks to me like it is hard to do in c language?\n\nNow I am wondering about the 'planner support function' available\nin CREATE FUNCTION since PG 12. 
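For reference, the hook being wondered about here is the SUPPORT clause that CREATE FUNCTION grew in PostgreSQL 12. A hedged sketch of the registration side only — `jsonb_numeric_support` and `my_jsonb_numeric` are hypothetical names, and the C implementation (which would handle a SupportRequestSimplify node) is not shown:

```sql
-- A planner support function is declared as internal -> internal:
CREATE FUNCTION jsonb_numeric_support(internal) RETURNS internal
    AS 'MODULE_PATHNAME', 'jsonb_numeric_support'
    LANGUAGE C STRICT;

-- ...and attached to the function whose calls it may rewrite:
CREATE FUNCTION my_jsonb_numeric(jsonb) RETURNS numeric
    AS 'MODULE_PATHNAME', 'jsonb_numeric'
    LANGUAGE C IMMUTABLE STRICT
    SUPPORT jsonb_numeric_support;
```

At plan time the support function receives the FuncExpr for each call of the target function and may return a cheaper replacement expression, which is the "replace itself with something more efficient" behavior asked about.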
I've never played with that yet.\nWould that make it possible to have some, rather generic, extract\nfrom JSON operator that can look at the surrounding expression\nand replace itself sometimes with something more efficient?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 03 Aug 2023 08:34:44 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\n\n> More, I believe so lot of people uses more common syntax, and then this\n> syntax should to have good performance - for jsonb - (val->'op')::numeric\n> works, and then there should not be performance penalty, because this\n> syntax will be used in 99%.\n>\n\nThis looks like a valid opinion IMO, but to rescue it, we have to do\nsomething like \"internal structure\" and remove the existing cast.\nBut even we pay the effort, it still breaks some common knowledge,\nsince xx:numeric is not a cast. It is an \"internal structure\"!\n\n\nI don't think \"Black magic\" is a proper word here, since it is not much\n>> different from ->> return a text. If you argue text can be cast to\n>> most-of-types, that would be a reason, but I doubt this difference\n>> should generate a \"black magic\".\n>>\n>\n> I used the term black magic, because nobody without reading documentation\n> can find this operator.\n>\n\nI think this is what document is used for..\n\n\n> It is used just for this special case, and the functionality is the same\n> as using cast (only with different performance).\n>\n\nThis is not good, but I didn't see a better choice so far, see my first\ngraph.\n\n\n>\n> The operator ->> is more widely used. 
But if we have some possibility to\n> work without it, then the usage for a lot of users will be more simple.\n> More if the target types can be based on context\n>\n\nIt would be cool but still I didn't see a way to do that without making\nsomething else complex.\n\n\n>>> Maybe we can introduce some *internal operator* \"extract to type\", and\n>>> in rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n>>> 'field', typid)\n>>>\n>>\n>> Not sure what the OP should be? If it is a function, what is the\n>> return value? It looks to me like it is hard to do in c language?\n>>\n>\n> It should be internal structure - it can be similar like COALESCE or IS\n> operator\n>\n\nIt may work, but see my answer in the first graph.\n\n\n>\n>\n>>\n>> After all, if we really care about the number of operators, I'm OK\n>> with just let users use the function directly, like\n>>\n>> jsonb_field_as_numeric(jsonb, 'filedname')\n>> jsonb_field_as_timestamp(jsonb, 'filedname');\n>> jsonb_field_as_timestamptz(jsonb, 'filedname');\n>> jsonb_field_as_date(jsonb, 'filedname');\n>>\n>> it can save an operator and sloves the readable issue.\n>>\n>\n> I don't like it too much, but it is better than introduction new operator\n>\n\nGood to know it. Naming operators is a complex task if we add four.\n\n\n> We already have the jsonb_extract_path and jsonb_extract_path_text\n> function.\n>\n\nI can't follow this. jsonb_extract_path returns a jsonb, which is far\naway from\nour goal: return a numeric effectively?\n\nI can imagine to usage \"anyelement\" type too. 
some like\n> `jsonb_extract_path_type(jsonb, anyelement, variadic text[] )`\n>\n\nCan you elaborate this please?\n\n-- \nBest Regards\nAndy Fan\n\nHi: More, I believe so lot of people uses more common syntax, and then this syntax should to have good performance - for jsonb - (val->'op')::numeric works, and then there should not be performance penalty, because this syntax will be used in 99%.This looks like a valid opinion IMO,  but to rescue it, we have to dosomething like \"internal structure\" and remove the existing cast. But even we pay the effort, it still breaks some common knowledge,since xx:numeric is not a cast.  It is an \"internal structure\"!  I don't think \"Black magic\" is a proper word here, since it is not muchdifferent from ->> return a text.  If you argue text can be cast to most-of-types,  that would be a reason, but I doubt this differenceshould generate a \"black magic\". I used the term black magic, because nobody without reading documentation can find this operator. I think this is what document is used for..    It is used just for this special case, and the functionality is the same as using cast (only with different performance).This is not good, but I didn't see a better choice so far,  see my first graph. The operator ->> is more widely used. But if we have some possibility to work without it, then the usage for a lot of users will be more simple. More if the target types can be based on contextIt would be cool but still I didn't see a way to do that without makingsomething else complex. Maybe we can introduce some *internal operator* \"extract to type\", and in rewrite stage we can the pattern (x->'field')::type transform to OP(x, 'field', typid)Not sure what the OP should be?  If it is a function, what is thereturn value?  It looks to me like it is hard to do in c language?It should be internal structure - it can be similar like COALESCE or IS operatorIt may work, but see my answer in the first graph.   
After all,  if we really care about the number of operators, I'm OKwith just let users use the function directly, likejsonb_field_as_numeric(jsonb, 'filedname') jsonb_field_as_timestamp(jsonb, 'filedname'); jsonb_field_as_timestamptz(jsonb, 'filedname'); jsonb_field_as_date(jsonb, 'filedname'); it can save an operator and sloves the readable issue. I don't like it too much, but it is better than introduction new operator Good to know it.  Naming operators is a complex task  if we add four. We already have the jsonb_extract_path and jsonb_extract_path_text function.I can't follow this.  jsonb_extract_path returns a jsonb, which is  far away fromour goal: return a numeric effectively?I can imagine to usage \"anyelement\" type too. some like `jsonb_extract_path_type(jsonb, anyelement, variadic text[] )` Can you elaborate this please?   -- Best RegardsAndy Fan", "msg_date": "Thu, 3 Aug 2023 21:22:48 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\nOn Thu, Aug 3, 2023 at 8:34 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 2023-08-03 03:53, Andy Fan wrote:\n> > I didn't realize timetime types are binary compatible with SQL,\n> > so maybe we can have some similar optimization as well.\n> > (It is a pity that timestamp(tz) are not binary, or else we may\n> > just need one operator).\n>\n> Not to veer from the thread, but something about that paragraph\n> has been hard for me to parse/follow.\n>\n\nI don't think this is a key conflict so far. but I'd explain this in more\ndetail. If timestamp -> timestamptz or timestamptz -> timestamp is\nbinary compatible, we can only have 1 operator to return a timestamp.\nthen when we cast it to timestamptz, it will be a no-op during runtime.\nhowever cast between timestamp and timestamptz is not binary\ncompatible. 
whose castmethod is 'f';\n\n\n\n>\n> >> Maybe we can introduce some *internal operator* \"extract to type\", and\n> >> in\n> >> rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n> >> 'field', typid)\n> >\n> > Not sure what the OP should be? If it is a function, what is the\n> > return value? It looks to me like it is hard to do in c language?\n>\n> Now I am wondering about the 'planner support function' available\n> in CREATE FUNCTION since PG 12. I've never played with that yet.\n> Would that make it possible to have some, rather generic, extract\n> from JSON operator that can look at the surrounding expression\n> and replace itself sometimes with something efficient?\n>\n\nI didn't realize this before, 'planner support function' looks\namazing and SupportRequestSimplify looks promising, I will check it\nmore.\n\n-- \nBest Regards\nAndy Fan\n\nHi: On Thu, Aug 3, 2023 at 8:34 PM Chapman Flack <chap@anastigmatix.net> wrote:On 2023-08-03 03:53, Andy Fan wrote:\n> I didn't realize timetime types are binary compatible with SQL,\n> so maybe we can have some similar optimization as well.\n> (It is a pity that timestamp(tz) are not binary, or else we may\n> just need one operator).\n\nNot to veer from the thread, but something about that paragraph\nhas been hard for me to parse/follow.I don't think this is a key conflict so far. but I'd explain this in moredetail. If timestamp -> timestamptz or timestamptz -> timestamp isbinary compatible,  we can only have 1 operator to return a timestamp.then when we cast it to timestamptz, it will be a no-op during runtime.however cast between timestamp and timestamptz is not binarycompatible. whose castmethod is 'f'; \n \n\n>> Maybe we can introduce some *internal operator* \"extract to type\", and \n>> in\n>> rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n>> 'field', typid)\n> \n> Not sure what the OP should be?  If it is a function, what is the\n> return value?  
It looks to me like it is hard to do in c language?\n\nNow I am wondering about the 'planner support function' available\nin CREATE FUNCTION since PG 12. I've never played with that yet.\nWould that make it possible to have some, rather generic, extract\nfrom JSON operator that can look at the surrounding expression\nand replace itself sometimes with something  efficient?I didn't realize this before,  'planner support function' looksamazing and SupportRequestSimplify looks promising, I will check it more. -- Best RegardsAndy Fan", "msg_date": "Thu, 3 Aug 2023 21:50:15 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi\n\nčt 3. 8. 2023 v 15:23 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n> Hi:\n>\n>\n>> More, I believe so lot of people uses more common syntax, and then this\n>> syntax should to have good performance - for jsonb - (val->'op')::numeric\n>> works, and then there should not be performance penalty, because this\n>> syntax will be used in 99%.\n>>\n>\n> This looks like a valid opinion IMO, but to rescue it, we have to do\n> something like \"internal structure\" and remove the existing cast.\n> But even we pay the effort, it still breaks some common knowledge,\n> since xx:numeric is not a cast. It is an \"internal structure\"!\n>\n\nI didn't study jsonb function, but there is an xml function that extracts\nvalue and next casts it to some target type. It does what is expected - for\nknown types use hard coded casts, for other ask system catalog for cast\nfunction or does IO cast. This code is used for the XMLTABLE function. The\nJSON_TABLE function is not implemented yet, but there should be similar\ncode. 
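For illustration, the typed extraction described above: each XMLTABLE output column names a target type, and the extracted value is coerced to it per column. A sketch only, assuming a server built with libxml:

```sql
SELECT t.id, t.price
FROM XMLTABLE('/items/item'
              PASSING xml '<items><item><id>7</id><price>1.5</price></item></items>'
              COLUMNS id    int     PATH 'id',
                      price numeric PATH 'price') AS t;
```

Here the coercion from the extracted value to int/numeric is chosen per column definition, which is the behavior a JSON_TABLE-style implementation could mirror.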
If you use explicit cast, then the code should not be hard, in the\nrewrite stage all information should be known.\n\n\n\n>\n> I don't think \"Black magic\" is a proper word here, since it is not much\n>>> different from ->> return a text. If you argue text can be cast to\n>>> most-of-types, that would be a reason, but I doubt this difference\n>>> should generate a \"black magic\".\n>>>\n>>\n>> I used the term black magic, because nobody without reading documentation\n>> can find this operator.\n>>\n>\n> I think this is what document is used for..\n>\n>\n>> It is used just for this special case, and the functionality is the same\n>> as using cast (only with different performance).\n>>\n>\n> This is not good, but I didn't see a better choice so far, see my first\n> graph.\n>\n>\n>>\n>> The operator ->> is more widely used. But if we have some possibility to\n>> work without it, then the usage for a lot of users will be more simple.\n>> More if the target types can be based on context\n>>\n>\n> It would be cool but still I didn't see a way to do that without making\n> something else complex.\n>\n\nsure - it is significantly more work, but it should be usable for all types\nand just use common syntax. The custom @-> operator you can implement in\nyour own custom extension. Builtin solutions should be generic as it is\npossible.\n\nThe things should be as simple as possible - mainly for users, that missing\nknowledge, and any other possibility of how to do some task just increases\ntheir confusion. Can be nice if users find one solution on stack overflow\nand this solution should be great for performance too. It is worse if users\nfind more solutions, but it is not too bad, if these solutions have similar\nperformance. It is too bad if any solution has great performance and others\nnot too much. 
Users has not internal knowledge, and then don't understand\nwhy sometimes should to use special operator and not common syntax.\n\n\n>\n>\n>>>> Maybe we can introduce some *internal operator* \"extract to type\", and\n>>>> in rewrite stage we can the pattern (x->'field')::type transform to OP(x,\n>>>> 'field', typid)\n>>>>\n>>>\n>>> Not sure what the OP should be? If it is a function, what is the\n>>> return value? It looks to me like it is hard to do in c language?\n>>>\n>>\n>> It should be internal structure - it can be similar like COALESCE or IS\n>> operator\n>>\n>\n> It may work, but see my answer in the first graph.\n>\n\n>\n>>\n>>\n>>>\n>>> After all, if we really care about the number of operators, I'm OK\n>>> with just let users use the function directly, like\n>>>\n>>> jsonb_field_as_numeric(jsonb, 'filedname')\n>>> jsonb_field_as_timestamp(jsonb, 'filedname');\n>>> jsonb_field_as_timestamptz(jsonb, 'filedname');\n>>> jsonb_field_as_date(jsonb, 'filedname');\n>>>\n>>> it can save an operator and sloves the readable issue.\n>>>\n>>\n>> I don't like it too much, but it is better than introduction new operator\n>>\n>\n> Good to know it. Naming operators is a complex task if we add four.\n>\n>\n>> We already have the jsonb_extract_path and jsonb_extract_path_text\n>> function.\n>>\n>\n> I can't follow this. jsonb_extract_path returns a jsonb, which is far\n> away from\n> our goal: return a numeric effectively?\n>\n\nI proposed `jsonb_extract_path_type` that is of anyelement type.\n\nRegards\n\nPavel\n\n\n\n> I can imagine to usage \"anyelement\" type too. some like\n>> `jsonb_extract_path_type(jsonb, anyelement, variadic text[] )`\n>>\n>\n> Can you elaborate this please?\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\nHičt 3. 8. 
2023 v 15:23 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:Hi: More, I believe so lot of people uses more common syntax, and then this syntax should to have good performance - for jsonb - (val->'op')::numeric works, and then there should not be performance penalty, because this syntax will be used in 99%.This looks like a valid opinion IMO,  but to rescue it, we have to dosomething like \"internal structure\" and remove the existing cast. But even we pay the effort, it still breaks some common knowledge,since xx:numeric is not a cast.  It is an \"internal structure\"!  I didn't study jsonb function, but there is an xml function that extracts value and next casts it to some target type. It does what is expected - for known types use hard coded casts, for other ask system catalog for cast function or does IO cast. This code is used for the XMLTABLE function. The JSON_TABLE function is not implemented yet, but there should be similar code. If you use explicit cast, then the code should not be hard, in the rewrite stage all information should be known. I don't think \"Black magic\" is a proper word here, since it is not muchdifferent from ->> return a text.  If you argue text can be cast to most-of-types,  that would be a reason, but I doubt this differenceshould generate a \"black magic\". I used the term black magic, because nobody without reading documentation can find this operator. I think this is what document is used for..    It is used just for this special case, and the functionality is the same as using cast (only with different performance).This is not good, but I didn't see a better choice so far,  see my first graph. The operator ->> is more widely used. But if we have some possibility to work without it, then the usage for a lot of users will be more simple. More if the target types can be based on contextIt would be cool but still I didn't see a way to do that without makingsomething else complex. 
sure - it is significantly more work, but it should be usable for all \ntypes and just use common syntax. The custom @-> operator you can \nimplement in your own custom extension. Builtin solutions should be \ngeneric as it is possible.The things should be as simple as possible - mainly for users, that missing knowledge, and any other possibility of how to do some task just increases their confusion. Can be nice if users find one solution on stack overflow and this solution should be great for performance too. It is worse if users find more solutions, but it is not too bad, if these solutions have similar performance. It is too bad if any solution has great performance and others not too much. Users has not internal knowledge, and then don't understand why sometimes should to use special operator and not common syntax.  Maybe we can introduce some *internal operator* \"extract to type\", and in rewrite stage we can the pattern (x->'field')::type transform to OP(x, 'field', typid)Not sure what the OP should be?  If it is a function, what is thereturn value?  It looks to me like it is hard to do in c language?It should be internal structure - it can be similar like COALESCE or IS operatorIt may work, but see my answer in the first graph.   After all,  if we really care about the number of operators, I'm OKwith just let users use the function directly, likejsonb_field_as_numeric(jsonb, 'filedname') jsonb_field_as_timestamp(jsonb, 'filedname'); jsonb_field_as_timestamptz(jsonb, 'filedname'); jsonb_field_as_date(jsonb, 'filedname'); it can save an operator and sloves the readable issue. I don't like it too much, but it is better than introduction new operator Good to know it.  Naming operators is a complex task  if we add four. We already have the jsonb_extract_path and jsonb_extract_path_text function.I can't follow this.  
jsonb_extract_path returns a jsonb, which is  far away fromour goal: return a numeric effectively?I proposed `jsonb_extract_path_type` that is of anyelement type. RegardsPavel I can imagine to usage \"anyelement\" type too. some like `jsonb_extract_path_type(jsonb, anyelement, variadic text[] )` Can you elaborate this please?   -- Best RegardsAndy Fan", "msg_date": "Thu, 3 Aug 2023 15:52:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\n\n> If you use explicit cast, then the code should not be hard, in the\n> rewrite stage all information should be known.\n>\n\nCan you point to me where the code is for the XML stuff? I thought\nthis is a bad idea but I may accept it if some existing code does\nsuch a thing already. \"such thing\" is typeA:typeB is\nconverted something else but user can't find out an entry in\npg_cast for typeA to typeB.\n\n\n> It would be cool but still I didn't see a way to do that without making\n>> something else complex.\n>>\n>\n> The custom @-> operator you can implement in your own custom extension.\n> Builtin solutions should be generic as it is possible.\n>\n\nI agree, but actually I think there is no clean way to do it, at least I\ndislike the conversion of typeA to typeB in a cast syntax but there\nis no entry in pg_cast for it. Are you saying something like this\nor I misunderstood you?\n\n>\n\n-- \nBest Regards\nAndy Fan\n\nHi:  If you use explicit cast, then the code should not be hard, in the rewrite stage all information should be known.Can you point to me where the code is for the XML stuff?  I thoughtthis is a bad idea but I may accept it if some existing code doessuch a thing already.   \"such thing\"  is  typeA:typeB isconverted something else but user can't find out an entry inpg_cast for typeA to typeB.  
It would be cool but still I didn't see a way to do that without makingsomething else complex.  The custom @-> operator you can \nimplement in your own custom extension. Builtin solutions should be \ngeneric as it is possible.I agree, but actually I think there is no clean way to do it, at least Idislike the conversion of typeA to typeB in a cast syntax but thereis no entry in pg_cast for it.  Are you saying something like this or I misunderstood you? \n\n-- Best RegardsAndy Fan", "msg_date": "Thu, 3 Aug 2023 22:27:34 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n>> If you use explicit cast, then the code should not be hard, in the\n>> rewrite stage all information should be known.\n\n> Can you point to me where the code is for the XML stuff?\n\nI think Pavel means XMLTABLE, which IMO is an ugly monstrosity of\nsyntax --- but count on the SQL committee to do it that way :-(.\n\nAs far as the current discussion goes, I'm strongly against\nintroducing new functions or operators to do the same things that\nwe already have perfectly good syntax for. \"There's more than one\nway to do it\" isn't necessarily a virtue, and for sure it isn't a\nvirtue if people have to rewrite their existing queries to make use\nof your optimization.\n\nAlso, why stop at optimizing \"(jsonbval->'fld')::sometype\"? There are\nmany other extraction cases that people might wish were faster, such\nas json array indexing, nested fields, etc. 
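Concretely, a few of those extraction shapes, reusing the tb table quoted earlier in the thread (a sketch; the subscripting form needs PostgreSQL 14 or later):

```sql
select (a -> 'a')::numeric        from tb;  -- single field, then cast
select (a -> 'arr' -> 0)::numeric from tb;  -- array element
select (a #> '{x,y}')::numeric    from tb;  -- nested path via #>
select (a['a'])::numeric          from tb;  -- jsonb subscripting (PG 14+)
```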
It certainly won't make\nsense to introduce yet another set of functions for each pattern you\nwant to optimize --- or at least, we won't want to ask users to change\ntheir queries to invoke those functions explicitly.\n\nI do like the idea of attaching a Simplify planner support function\nto jsonb_numeric (and any other ones that seem worth optimizing)\nthat can convert a stack of jsonb transformations into a bundled\noperation that avoids unnecessary conversions. Then you get the\nspeedup without any need for people to change their existing queries.\nWe'd still have functions like jsonb_field_as_numeric() under the\nhood, but there's not an expectation that users call them explicitly.\n(Alternatively, the output of this Simplify could be a new kind of\nexpression node that bundles one or more jsonb extractions with a\ntype conversion. I don't have an opinion yet on which way is better.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Aug 2023 15:13:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Thu, Aug 3, 2023 at 6:04 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n>\n> Is it? Detoasting only happens if the argument was toasted, and I have\n> serious doubts that the result of (a->'a') will be toasted in our\n> current system. Sure, we do need to allocate an intermediate result,\n> but that's in a temporary memory context that should be trivially\n> cheap to free.\n>\n> > /*\n> > * v.val.numeric points into jsonb body, so we need to make a copy to\n> > * return\n> > */\n> > retValue = DatumGetNumericCopy(NumericGetDatum(v.val.numeric));\n> >\n> > At last this method needs 1 extra FuncExpr than my method, this would\n> > cost some expression execution effort. 
I'm not saying we need to avoid\n> > expression execution generally, but extracting numeric fields from jsonb\n> > looks a reasonable case.\n>\n> What's tb here?\n>\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n>\n\ncan confirm the patch's jsonb_object_field_numeric is faster than\npg_catalog.\"numeric\"(jsonb).\nalso it works accurately either jsonb is in the page or in toast schema chunks.\nI don't understand why we need to allocate an intermediate result\npart. since you cannot directly update a jsonb value field.\n\nThis function is not easy to find out...\nselect numeric('{\"a\":11}'->'a'); --fail.\nselect jsonb_numeric(jsonb'{\"a\":11}'->'a'); --fail\nselect \"numeric\"('{\"a\":11}'::jsonb->'a'); --ok\n\n\n", "msg_date": "Fri, 4 Aug 2023 11:32:04 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "čt 3. 8. 2023 v 16:27 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n> Hi:\n>\n>\n>> If you use explicit cast, then the code should not be hard, in the\n>> rewrite stage all information should be known.\n>>\n>\n> Can you point to me where the code is for the XML stuff? I thought\n> this is a bad idea but I may accept it if some existing code does\n> such a thing already. \"such thing\" is typeA:typeB is\n> converted something else but user can't find out an entry in\n> pg_cast for typeA to typeB.\n>\n\nin XML there is src/backend/utils/adt/xml.c, the XmlTableGetValue routine.\nIt is not an internal transformation - and from XML type to some else.\n\nyou can look at parser - parse_expr, parse_func. You can watch the\nlifecycle of :: operator. 
There are transformations of nodes to different\nnodes\n\nyou can look to patches related to SQL/JSON (not fully committed yet) and\njson_table\n\n\n\n\n>\n>\n>> It would be cool but still I didn't see a way to do that without making\n>>> something else complex.\n>>>\n>>\n>> The custom @-> operator you can implement in your own custom extension.\n>> Builtin solutions should be generic as it is possible.\n>>\n>\n> I agree, but actually I think there is no clean way to do it, at least I\n> dislike the conversion of typeA to typeB in a cast syntax but there\n> is no entry in pg_cast for it. Are you saying something like this\n> or I misunderstood you?\n>\n\nThere is not any possibility of user level space. The conversions should\nbe supported by cast from pg_cast, where it is possible. When it is\nimpossible, then you can raise an exception in some strict mode, or you can\ndo IO cast. But this is not hard part\n\nYou should to teach parser to push type info deeper to some nodes about\nexpected result\n\n(2023-08-04 05:28:36) postgres=# select ('{\"a\":2,\n\"b\":\"nazdar\"}'::jsonb)['a']::numeric;\n┌─────────┐\n│ numeric │\n╞═════════╡\n│ 2 │\n└─────────┘\n(1 row)\n\n(2023-08-04 05:28:36) postgres=# select ('{\"a\":2,\n\"b\":\"nazdar\"}'::jsonb)['a']::numeric;\n┌─────────┐\n│ numeric │\n╞═════════╡\n│ 2 │\n└─────────┘\n(1 row)\n\n(2023-08-04 05:28:41) postgres=# select ('{\"a\":2,\n\"b\":\"nazdar\"}'::jsonb)['a']::int;\n┌──────┐\n│ int4 │\n╞══════╡\n│ 2 │\n└──────┘\n(1 row)\n\nwhen the parser iterates over the expression, it crosses ::type node first,\nso you have information about the target type. Currently this information\nis used when the parser is going back and when the source type is the same\nas the target type, the cast can be ignored. Probably it needs to add some\nflag to the operator if they are able to use this. Maybe it can be a new\nthird argument with an expected type. 
So new kinds of op functions could look\nlike opfx(\"any\", \"any\", anyelement) returns anyelement. Maybe you will find\nanother possibility; it may be invisible to me (or to you) now.\n\nIt is much more work, but the benefits will be generic. I think this is an\nimportant part for container types, so a partial fix is not good, and it\nrequires a systematic solution. The performance is important, but without\ngeneric solutions, the complexity increases, and this is a much bigger\nproblem.\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Best Regards\n> Andy Fan\n>\n", "msg_date": "Fri, 4 Aug 2023 05:54:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\n>\n> can confirm the patch's jsonb_object_field_numeric is faster than\n> pg_catalog.\"numeric\"(jsonb).\n>\n\nThanks for the confirmation.\n\n\n>\n> This function is not easy to find out...\n>\n> select jsonb_numeric(jsonb'{\"a\":11}'->'a'); --fail\n>\n\njsonb_numeric is a prosrc rather than a proname; that's why you\ncan't call it directly.\n\nselect * from pg_proc where prosrc = 'jsonb_numeric';\nselect * from pg_proc where proname = 'jsonb_numeric';\n\nIt is bound to the \"numeric\"(jsonb) cast, so we can call it with\n(a->'a')::numeric.\n\n select numeric('{\"a\":11}'->'a'); --fail.\n\n> select \"numeric\"('{\"a\":11}'::jsonb->'a'); --ok\n>\n\nThe double quotes look weird to me, but it looks like a common situation:\n\nselect numeric('1'::int); -- failed.\nselect \"numeric\"('1'::int); -- ok.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 4 Aug 2023 11:55:07 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Tom:\n\nOn Fri, Aug 4, 2023 at 3:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> >> If you use explicit cast, then the code should not be hard, in the\n> >> rewrite stage all information should be known.\n>\n> > Can you point to me where the code is for the XML stuff?\n>\n> I think Pavel means XMLTABLE, which IMO is an ugly monstrosity of\n> syntax --- but count on the SQL committee to do it that way :-(.\n>\n\nThanks for this input!\n\n\n>\n> As far as the current discussion goes, I'm strongly against\n> introducing new functions or operators to do the same things that\n> we already have perfectly good syntax for. 
\"There's more than one\n> way to do it\" isn't necessarily a virtue, and for sure it isn't a\n> virtue if people have to rewrite their existing queries to make use\n> of your optimization.\n>\n\nI agree; this is always the best/only reason I'd like to accept.\n\n\n>\n>\n> I do like the idea of attaching a Simplify planner support function\n> to jsonb_numeric (and any other ones that seem worth optimizing)\n>\n\nI studied planner support functions today; that looks great, and\nI don't think we need much work to reach our goal. That's amazing.\n\nFor all the people who are interested in this topic, I will post a\nplanner support function soon, so you can check that then.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 4 Aug 2023 12:38:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-03 23:55, Andy Fan wrote:\n> The double quotes look weird to me. but it looks like a common\n> situation.\n> \n> select numeric('1'::int); -- failed.\n> select \"numeric\"('1'::int); -- ok.\n\nIt arises when you have an object (type, function, cast, whatever) whose\nname in the catalog is the same as some SQL-standard keyword that the\nparser knows. The same thing happens with the PG type named char, which has\nto be spelled \"char\" in a query because otherwise you get the SQL standard\nchar type, which is different.\n\nOn 2023-08-03 09:50, Andy Fan wrote:\n> I don't think this is a key conflict so far. but I'd explain this in more\n> detail. If timestamp -> timestamptz or timestamptz -> timestamp is\n> binary compatible, ... however cast between timestamp and timestamptz is\n> not binary compatible. whose castmethod is 'f';\n\nThis is one of those cases where the incompatibility is a semantic one.\nBoth types are the same number of bits and they both represent microseconds\nsince midnight of the \"Postgres epoch\", but only timestamptz is anchored\nto a time zone (UTC), so unless you happen to live in that time zone, they\nmean different things. 
To just copy the binary bits from one to the other\nwould be lying (unless you know that the person you are copying them for\nlives in UTC).\n\n> On Fri, Aug 4, 2023 at 3:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andy Fan <zhihui.fan1213@gmail.com> writes:\n>> > Can you point to me where the code is for the XML stuff?\n>>\n>> I think Pavel means XMLTABLE, which IMO is an ugly monstrosity of\n>> syntax --- but count on the SQL committee to do it that way :-(.\n\nAnother interesting thing about XMLTABLE is that ISO defines its behavior\nentirely as a big rewriting into another SQL query built out of XMLQUERY\nand XMLCAST functions, and a notional XMLITERATE function that isn't\nvisible to a user but is a factor of the rewriting. And the definition of\nXMLCAST itself is something that is sometimes rewritten to plain CAST, and\nsometimes rewritten away. Almost as if they had visions of planner support\nfunctions.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 04 Aug 2023 16:50:17 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\n\n> For all the people who are interested in this topic, I will post a\n> planner support function soon, you can check that then.\n>\n>\nThe updated patch doesn't need users to change their codes and can get\nbetter performance. 
Thanks for all the feedback which makes things better.\n\nTo verify there is no unexpected stuff happening, here is the performance\ncomparison between master and patched.\n\ncreate table tb(a jsonb);\ninsert into tb select '{\"a\": true, \"b\": 23.3333}' from generate_series(1,\n100000)i;\n\nMaster:\nselect 1 from tb where (a->'b')::numeric = 1;\nTime: 31.020 ms\n\nselect 1 from tb where not (a->'a')::boolean;\nTime: 25.888 ms\n\nselect 1 from tb where (a->'b')::int2 = 1;\nTime: 30.138 ms\n\nselect 1 from tb where (a->'b')::int4 = 1;\nTime: 32.384 ms\n\nselect 1 from tb where (a->'b')::int8 = 1;\\\nTime: 29.922 ms\n\nselect 1 from tb where (a->'b')::float4 = 1;\nTime: 54.139 ms\n\nselect 1 from tb where (a->'b')::float8 = 1;\nTime: 66.933 ms\n\nPatched:\n\nselect 1 from tb where (a->'b')::numeric = 1;\nTime: 15.203 ms\n\nselect 1 from tb where not (a->'a')::boolean;\nTime: 12.894 ms\n\nselect 1 from tb where (a->'b')::int2 = 1;\nTime: 16.847 ms\n\nselect 1 from tb where (a->'b')::int4 = 1;\nTime: 17.105 ms\n\nselect 1 from tb where (a->'b')::int8 = 1;\nTime: 16.720 ms\n\nselect 1 from tb where (a->'b')::float4 = 1;\nTime: 33.409 ms\n\nselect 1 from tb where (a->'b')::float8 = 1;\nTime: 34.660 ms\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 7 Aug 2023 11:04:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi\n\npo 7. 8. 2023 v 5:04 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n> Hi:\n>\n>\n>> For all the people who are interested in this topic, I will post a\n>> planner support function soon, you can check that then.\n>>\n>>\n> The updated patch doesn't need users to change their codes and can get\n> better performance. 
Thanks for all the feedback which makes things better.\n>\n> To verify there is no unexpected stuff happening, here is the performance\n> comparison between master and patched.\n>\n\nI am looking at your patch, and the message\n\n+\n+ default:\n+ elog(ERROR, \"cast jsonb field to %d is not supported.\", targetOid);\n\nis a little bit messy. This case should not be possible, because it is\nfiltered by jsonb_cast_is_optimized. So the message should be changed, or it\nneeds a comment.\n\nRegards\n\nPavel\n\n\n>\n> create table tb(a jsonb);\n> insert into tb select '{\"a\": true, \"b\": 23.3333}' from generate_series(1,\n> 100000)i;\n>\n> Master:\n> select 1 from tb where (a->'b')::numeric = 1;\n> Time: 31.020 ms\n>\n> select 1 from tb where not (a->'a')::boolean;\n> Time: 25.888 ms\n>\n> select 1 from tb where (a->'b')::int2 = 1;\n> Time: 30.138 ms\n>\n> select 1 from tb where (a->'b')::int4 = 1;\n> Time: 32.384 ms\n>\n> select 1 from tb where (a->'b')::int8 = 1;\n> Time: 29.922 ms\n>\n> select 1 from tb where (a->'b')::float4 = 1;\n> Time: 54.139 ms\n>\n> select 1 from tb where (a->'b')::float8 = 1;\n> Time: 66.933 ms\n>\n> Patched:\n>\n> select 1 from tb where (a->'b')::numeric = 1;\n> Time: 15.203 ms\n>\n> select 1 from tb where not (a->'a')::boolean;\n> Time: 12.894 ms\n>\n> select 1 from tb where (a->'b')::int2 = 1;\n> Time: 16.847 ms\n>\n> select 1 from tb where (a->'b')::int4 = 1;\n> Time: 17.105 ms\n>\n> select 1 from tb where (a->'b')::int8 = 1;\n> Time: 16.720 ms\n>\n> select 1 from tb where (a->'b')::float4 = 1;\n> Time: 33.409 ms\n>\n> select 1 from tb where (a->'b')::float8 = 1;\n> Time: 34.660 ms\n>\n> --\n> Best Regards\n> Andy Fan\n>\n", "msg_date": "Mon, 7 Aug 2023 08:19:48 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Pavel:\n\nThanks for the code level review!\n\n\n>\n> I am looking at your patch, and the message\n>\n> +\n> + default:\n> + elog(ERROR, \"cast jsonb field to %d is not supported.\", targetOid);\n>\n> is a little bit messy. This case should not be possible, because it is\n> filtered by jsonb_cast_is_optimized. 
So the message should be changed, or it\n> needs a comment.\n>\n\nYes, the double check is not necessary; it is removed in the attached v4\npatch.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 7 Aug 2023 15:32:25 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi.\n\n+Datum\n+jsonb_object_field_type(PG_FUNCTION_ARGS)\n+{\n+ Jsonb *jb = PG_GETARG_JSONB_P(0);\n+ text *key = PG_GETARG_TEXT_PP(1);\n+ Oid targetOid = PG_GETARG_OID(2);\n\nCompared with jsonb_numeric, I am wondering if you need to free *jb.\nelog(INFO,\"jb=%p arg pointer=%p \", jb, PG_GETARG_POINTER(0));\nsays these two are not the same.\n\n\n", "msg_date": "Mon, 7 Aug 2023 17:36:40 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Jian:\n\nThanks for the review!\n\ncompared with jsonb_numeric. I am wondering if you need a free *jb.\n> elog(INFO,\"jb=%p arg pointer=%p \", jb, PG_GETARG_POINTER(0));\n> says there two are not the same.\n>\n\nThanks for pointing this out; I am not sure what to do right now.\nBasically the question is whether we should free the memory which\nis allocated in a function call. The argument for doing it is obvious, but\nthe argument for NOT doing it is that usually the memory is allocated under\nan ExprContext memory context, which will be reset once the current\ntuple is processed, and MemoryContextReset will be more effective\nthan pfrees.\n\nI checked most of the functions that free their memory; besides the\nones you mentioned, the numeric_gt/ne/xxx functions also free it\ndirectly. But functions like jsonb_object_field_text,\njsonb_array_element, and jsonb_array_element_text don't.\n\nI'd like to hear more opinions from more experienced people;\nthis issue also confused me before. 
and I'm neutral on this now.\nAfter we get an agreement on this, I will update the patch\naccordingly.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 7 Aug 2023 19:51:18 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi:\n\nOn Mon, Aug 7, 2023 at 7:51 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi Jian:\n>\n> Thanks for the review!\n>\n> compared with jsonb_numeric. I am wondering if you need a free *jb.\n>> elog(INFO,\"jb=%p arg pointer=%p \", jb, PG_GETARG_POINTER(0));\n>> says there two are not the same.\n>>\n>\n> Thanks for pointing this out, I am not sure what to do right now.\n> Basically the question is that shall we free the memory which\n> is allocated in a function call. The proof to do it is obvious, but the\n> proof to NOT do it may be usually the memory is allocated under\n> ExprContext Memorycontext, it will be reset once the current\n> tuple is proceed, and MemoryContextReset will be more effective\n> than pfrees;\n>\n\nI just found Andres's opinion on this; it looks like he would suggest\nnot freeing it [1], and the reason is similar here [2], so I would like to\nkeep it as it is.\n\n[1]\nhttps://www.postgresql.org/message-id/20230216213554.vintskinrqqrxf6d%40awork3.anarazel.de\n\n[2]\nhttps://www.postgresql.org/message-id/20230217202626.ihd55rgxgkr2uqim%40awork3.anarazel.de\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 8 Aug 2023 10:28:18 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi,\n\nLooking at the most recent patch, so far I have a minor\nspelling point, and a question (which I have not personally\nexplored).\n\nThe minor spelling point: the word 'field' 
has been spelled\n'filed' throughout this comment (just as in the email subject):\n\n+\t\t/*\n+\t\t * Simplify cast(jsonb_object_filed(jsonb, filedName) as type)\n+\t\t * to jsonb_object_field_type(jsonb, filedName, targetTypeOid);\n+\t\t */\n\nThe question: the simplification is currently being applied\nwhen the underlying operation uses F_JSONB_OBJECT_FIELD.\nAre there opportunities for a similar benefit if applied\nover F_JSONB_ARRAY_ELEMENT and/or F_JSONB_EXTRACT_PATH?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 08 Aug 2023 16:30:10 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric [field] in JSONB more effectively" }, { "msg_contents": "On Wed, Aug 9, 2023 at 4:30 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> Hi,\n>\n> Looking at the most recent patch, so far I have a minor\n> spelling point, and a question (which I have not personally\n> explored).\n>\n> The minor spelling point, the word 'field' has been spelled\n> 'filed' throughout this comment (just as in the email subject):\n>\n> + /*\n> + * Simplify cast(jsonb_object_filed(jsonb, filedName) as type)\n> + * to jsonb_object_field_type(jsonb, filedName, targetTypeOid);\n> + */\n>\n> The question: the simplification is currently being applied\n> when the underlying operation uses F_JSONB_OBJECT_FIELD.\n> Are there opportunities for a similar benefit if applied\n> over F_JSONB_ARRAY_ELEMENT and/or F_JSONB_EXTRACT_PATH?\n>\n> Regards,\n> -Chap\n\n\nBased on the most recent patch, I made some changes in the\njsonb_object_field_type function (they need <unistd.h> to be included).\nI just created a C function but didn't rebuild; then I compared it with the\n\"numeric\"(jsonb) function. Overall it's fast.\n\nSome changes I made in jsonb_object_field_type:\n\nuint32 i;\nchar *endptr;\n\nif (JB_ROOT_IS_OBJECT(jb))\n    v = getKeyJsonValueFromContainer(&jb->root,\n                                     VARDATA_ANY(key),\n                                     VARSIZE_ANY_EXHDR(key),\n                                     &vbuf);\nelse if (JB_ROOT_IS_ARRAY(jb) && !JB_ROOT_IS_SCALAR(jb)) /* scalar element is pseudo-array */\n{\n    errno = 0;\n    char *src = text_to_cstring(key);\n\n    i = (uint32) strtoul(src, &endptr, 10);\n    if (endptr == src || *endptr != '\\0' || errno != 0)\n        elog(ERROR, \"invalid input syntax when converting to integer\");\n\n    /* index i is bounds-checked inside */\n    v = getIthJsonbValueFromContainer(&jb->root, i);\n}\nelse if (JB_ROOT_IS_SCALAR(jb))\n{\n    if (!JsonbExtractScalar(&jb->root, &vbuf) || vbuf.type != jbvNumeric)\n        cannotCastJsonbValue(vbuf.type, \"numeric\");\n    v = &vbuf;\n}\nelse\n    PG_RETURN_NULL();\n---------------------------------------\nThe following query will return zero rows, but jsonb_object_field_type will\nbe faster:\n\nselect jsonb_object_field_type('[1.1,2.2]'::jsonb,'1', 1700),\njsonb_object_field_type('{\"1\":10.2}'::jsonb,'1', 1700),\njsonb_object_field_type('10.2'::jsonb,'1', 1700)\nexcept\nselect \"numeric\"(('[1.1,2.2]'::jsonb)[1]),\n\"numeric\"('{\"1\":10.2}'::jsonb->'1'),\n\"numeric\"('10.2'::jsonb);\n\nHow to glue it in as a support function, or make it more generic, needs\nextra thought.\n", "msg_date": "Wed, 9 Aug 2023 15:46:03 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric [field] in JSONB more effectively" }, { "msg_contents": "Hi Chap:\n\n Thanks for the review.\n\n\n> The minor spelling point, the word 'field' has been spelled\n> 'filed' throughout this comment (just as in the email subject):\n>\n> + /*\n> + * Simplify cast(jsonb_object_filed(jsonb, filedName) as type)\n> + * to jsonb_object_field_type(jsonb, filedName, targetTypeOid);\n> + */\n>\n>\nThanks for catching this, fixed in 
-- Best RegardsAndy Fan", "msg_date": "Mon, 14 Aug 2023 15:06:30 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "po 14. 8. 2023 v 9:06 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>> We'd still have functions like jsonb_field_as_numeric() under the\n>> hood, but there's not an expectation that users call them explicitly.\n>>\n>\n> To avoid the lots of functions like jsonb_field_as_int2/int4, I defined\n> Datum jsonb_object_field_type(.., Oid target_oid) at last, so the\n> function must return \"internal\" or \"anyelement\". Then we can see:\n>\n> select jsonb_object_field_type(tb.a, 'a'::text, 1700) from tb;\n> ERROR: cannot display a value of type anyelement.\n>\n\nyou cannot to use type as parameter. There should be some typed value - like\n\njsonb_object_field, '{\"a\":10}', 'a', NULL::int)\n\nand return type should be anyelement.\n\nAnother solution should be more deeper change like implementation of\n\"coalesce\"\n\n\n\n>\n> The reason is clear to me, but I'm not sure how to fix that or deserves\n> a fix? Or shall I define jsonb_object_field_int2/int8 to avoid this?\n>\n> This is an unresolved issue at the latest patch.\n> --\n> Best Regards\n> Andy Fan\n>\n\npo 14. 8. 2023 v 9:06 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\nWe'd still have functions like jsonb_field_as_numeric() under the\nhood, but there's not an expectation that users call them explicitly.To avoid the lots of functions like jsonb_field_as_int2/int4, I definedDatum jsonb_object_field_type(.., Oid target_oid) at last,  so thefunction must return \"internal\" or \"anyelement\".  Then we can see:select jsonb_object_field_type(tb.a, 'a'::text, 1700) from tb;ERROR:  cannot display a value of type anyelement.you cannot to use type as parameter. 
There should be some typed value - likejsonb_object_field, '{\"a\":10}', 'a', NULL::int)and return type should be anyelement.Another solution should be more deeper change like implementation of \"coalesce\" The reason is clear to me, but  I'm not sure how to fix that or deservesa fix? Or shall I define jsonb_object_field_int2/int8 to avoid this? This is an unresolved issue at the latest patch. -- Best RegardsAndy Fan", "msg_date": "Mon, 14 Aug 2023 10:01:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n> you cannot to use type as parameter. There should be some typed value -\n> like\n>\n> jsonb_object_field, '{\"a\":10}', 'a', NULL::int)\n>\n> and return type should be anyelement.\n>\n>\nSo could we get the inputted type in the body of jsonb_object_field?\nI guess no. IIUC, our goal will still be missed in this way.\n\n-- \nBest Regards\nAndy Fan\n\nyou cannot to use type as parameter. There should be some typed value - likejsonb_object_field, '{\"a\":10}', 'a', NULL::int)and return type should be anyelement.So could we get the inputted type in the body of jsonb_object_field? I guess no.  IIUC, our goal will still be missed in this way. -- Best RegardsAndy Fan", "msg_date": "Mon, 14 Aug 2023 17:16:52 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "po 14. 8. 2023 v 11:17 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\nnapsal:\n\n>\n>> you cannot to use type as parameter. There should be some typed value -\n>> like\n>>\n>> jsonb_object_field, '{\"a\":10}', 'a', NULL::int)\n>>\n>> and return type should be anyelement.\n>>\n>>\n> So could we get the inputted type in the body of jsonb_object_field?\n> I guess no. IIUC, our goal will still be missed in this way.\n>\n\nwhy not? 
You can easily build null constant of any type.\n\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\npo 14. 8. 2023 v 11:17 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:you cannot to use type as parameter. There should be some typed value - likejsonb_object_field, '{\"a\":10}', 'a', NULL::int)and return type should be anyelement.So could we get the inputted type in the body of jsonb_object_field? I guess no.  IIUC, our goal will still be missed in this way. why not? You can easily build null constant of any type.  -- Best RegardsAndy Fan", "msg_date": "Mon, 14 Aug 2023 14:54:22 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-14 03:06, Andy Fan wrote:\n>> We'd still have functions like jsonb_field_as_numeric() under the\n>> hood, but there's not an expectation that users call them explicitly.\n> \n> To avoid the lots of functions like jsonb_field_as_int2/int4, I defined\n> Datum jsonb_object_field_type(.., Oid target_oid) at last, so the\n> function must return \"internal\" or \"anyelement\".\n> ...\n> I'm not sure how to fix that or deserves\n> a fix? Or shall I define jsonb_object_field_int2/int8 to avoid this?\n\nAs far as I'm concerned, if the intent is for this to be a function\nthat is swapped in by SupportRequestSimplify and not necessarily to\nbe called by users directly, I don't mind if users can't call it\ndirectly. 
As long as there is a nice familiar jsonb function the user\ncan call in a nice familiar way and knows it will be handled\nefficiently behind the curtain, that seems to be good enough for\nthe user--better, even, than having a new oddball function to\nremember.\n\nHowever, I believe the rule is that a function declared to return\ninternal must also declare at least one parameter as internal.\nThat way, a user won't be shown errors about displaying its\nreturned value, because the user won't be able to call it\nin the first place, having no values of type 'internal' lying\naround to pass to it. It could simply have that trailing oid\nparameter declared as internal, and there you have a strictly\ninternal-use function.\n\nProviding a function with return type declared internal but\nwith no parameter of that type is not good, because then a\nuser could, in principle, call it and obtain a value of\n'internal' type, and so get around the typing rules that\nprevent calling other internal functions.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 14 Aug 2023 10:04:03 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Providing a function with return type declared internal but\n> with no parameter of that type is not good,\n\nNot so much \"not good\" as \"absolutely, positively WILL NOT HAPPEN\".\n\n> because then a\n> user could, in principle, call it and obtain a value of\n> 'internal' type, and so get around the typing rules that\n> prevent calling other internal functions.\n\nRight --- it'd completely break the system's type-safety for\nother internal-using functions.\n\nYou could argue that we should never have abused \"internal\"\nto this extent in the first place, compared to inventing a\nplethora of internal-ish types to correspond to each of the\nthings \"internal\" is used for. 
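The rule described above is enforced when the function is defined, so the hole cannot arise in the first place; a sketch of the rejected shape:

```sql
-- Rejected at CREATE FUNCTION time: a function returning the
-- pseudo-type \"internal\" must have at least one \"internal\"
-- argument, so SQL users can never obtain an internal-typed value.
CREATE FUNCTION leaky()
RETURNS internal
LANGUAGE internal AS 'int4in';
-- fails: unsafe use of pseudo-type \"internal\"
```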
But here we are so we'd\nbetter be darn careful with it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Aug 2023 10:10:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Mon, Aug 14, 2023 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Chapman Flack <chap@anastigmatix.net> writes:\n> > Providing a function with return type declared internal but\n> > with no parameter of that type is not good,\n>\n> Not so much \"not good\" as \"absolutely, positively WILL NOT HAPPEN\".\n\n\nChap is pretty nice to others:).\n\n\n>\n>\n> because then a\n> > user could, in principle, call it and obtain a value of\n> > 'internal' type, and so get around the typing rules that\n> > prevent calling other internal functions.\n>\n> Right --- it'd completely break the system's type-safety for\n> other internal-using functions.\n>\n>\nI do see something bad in opr_sanity.sql. Pavel suggested\nget_fn_expr_argtype which can resolve this issue pretty well, so\nI have changed\n\njsonb_extract_xx_type(.., Oid taget_oid) -> anyelement.\nto\njsonb_extract_xx_type(.., anyelement) -> anyelement.\n\nThe only bad smell left is since I want to define jsonb_extract_xx_type\nas strict so I can't use jsonb_extract_xx_type(.., NULL::a-type)\nsince it will be evaluated to NULL directly. So I hacked it with\n\n/* mock the type. */\n Const *target = makeNullConst(fexpr->funcresulttype,\n -1,\n InvalidOid);\n\n/* hack the NULL attribute */\n /*\n\n\n\n * Since all the above functions are strict, we can't input\n\n\n\n * a NULL value.\n\n\n\n */\n target->constisnull = false;\n\n jsonb_extract_xx_type just cares about the argtype, but\n'explain select xx' will still access the const->constvalue.\nconst->constvalue is 0 which is set by makeNullConst currently,\nand it is ok for the current supported type. 
but I'm not sure\nabout the future or if we still have a better solution.\n\nv6 is attached. any feedback is welcome!\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 14 Aug 2023 23:42:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n> jsonb_extract_xx_type just cares about the argtype, but\n> 'explain select xx' will still access the const->constvalue.\n> const->constvalue is 0 which is set by makeNullConst currently,\n> and it is ok for the current supported type.\n>\n\nThe exception is numeric data type, the constvalue can't be 0.\nso hack it with the below line. maybe not good enough, but I\nhave no better solution now.\n\n+ Const *target =\n makeNullConst(fexpr->funcresulttype,\n+\n -1,\n+\n InvalidOid);\n+ /*\n+ * Since all the above functions are strict, we\ncan't input\n+ * a NULL value.\n+ */\n+ target->constisnull = false;\n+\n+ Assert(target->constbyval || target->consttype ==\nNUMERICOID);\n+\n+ /* Mock a valid datum for !constbyval type. */\n+ if (fexpr->funcresulttype == NUMERICOID)\n+ target->constvalue =\nDirectFunctionCall1(numeric_in, CStringGetDatum(\"0\"));\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 15 Aug 2023 11:24:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi\n\nút 15. 8. 2023 v 5:24 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>> jsonb_extract_xx_type just cares about the argtype, but\n>> 'explain select xx' will still access the const->constvalue.\n>> const->constvalue is 0 which is set by makeNullConst currently,\n>> and it is ok for the current supported type.\n>>\n>\n> The exception is numeric data type, the constvalue can't be 0.\n> so hack it with the below line. 
maybe not good enough, but I\n> have no better solution now.\n>\n> + Const *target =\n> makeNullConst(fexpr->funcresulttype,\n> +\n> -1,\n> +\n> InvalidOid);\n> + /*\n> + * Since all the above functions are strict, we\n> can't input\n> + * a NULL value.\n> + */\n> + target->constisnull = false;\n> +\n> + Assert(target->constbyval || target->consttype ==\n> NUMERICOID);\n> +\n> + /* Mock a valid datum for !constbyval type. */\n> + if (fexpr->funcresulttype == NUMERICOID)\n> + target->constvalue =\n> DirectFunctionCall1(numeric_in, CStringGetDatum(\"0\"));\n>\n>\nPersonally I think this workaround is too dirty, and better to use a strict\nfunction (I believe so the overhead for NULL values is acceptable), or\nintroduce a different mechanism.\n\nYour design is workable, and I think acceptable, but I don't think it is an\nideal or final solution. It is not really generic. It doesn't help with XML\nor Hstore. You need to touch cast functions, which I think is not best,\nbecause cast functions should not cooperate on optimization of execution of\nanother function.\n\nMy idea of an ideal solution is the introduction of the possibility to use\n\"any\" pseudotype as return type with possibility to set default return\ntype. Now, \"any\" is allowed only for arguments. The planner can set the\nexpected type when it knows it, or can use the default type.\n\nso for extraction of jsonb field we can use FUNCTION\njsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonb\n\nif we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb,\nif we use SELECT jsonb_extract_field('...', 'x')::date, then it returns date\n\nWith this possibility we don't need to touch to cast functions, and we can\nsimply implement similar functions for other non atomic types.\n\n\n\n-- \n> Best Regards\n> Andy Fan\n>\n\nHiút 15. 8. 
The planner can set the expected type when it knows it, or can use the default type.so for extraction of jsonb field we can use FUNCTION jsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonbif we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb, if we use SELECT jsonb_extract_field('...', 'x')::date, then it returns dateWith this possibility we don't need to touch to cast functions, and we can simply implement similar functions for other non atomic types.-- Best RegardsAndy Fan", "msg_date": "Tue, 15 Aug 2023 07:23:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "út 15. 8. 2023 v 7:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> út 15. 8. 2023 v 5:24 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n> napsal:\n>\n>>\n>>> jsonb_extract_xx_type just cares about the argtype, but\n>>> 'explain select xx' will still access the const->constvalue.\n>>> const->constvalue is 0 which is set by makeNullConst currently,\n>>> and it is ok for the current supported type.\n>>>\n>>\n>> The exception is numeric data type, the constvalue can't be 0.\n>> so hack it with the below line. maybe not good enough, but I\n>> have no better solution now.\n>>\n>> + Const *target =\n>> makeNullConst(fexpr->funcresulttype,\n>> +\n>> -1,\n>> +\n>> InvalidOid);\n>> + /*\n>> + * Since all the above functions are strict, we\n>> can't input\n>> + * a NULL value.\n>> + */\n>> + target->constisnull = false;\n>> +\n>> + Assert(target->constbyval || target->consttype ==\n>> NUMERICOID);\n>> +\n>> + /* Mock a valid datum for !constbyval type. 
*/\n>> + if (fexpr->funcresulttype == NUMERICOID)\n>> + target->constvalue =\n>> DirectFunctionCall1(numeric_in, CStringGetDatum(\"0\"));\n>>\n>>\n> Personally I think this workaround is too dirty, and better to use a\n> strict function (I believe so the overhead for NULL values is acceptable),\n> or introduce a different mechanism.\n>\n> Your design is workable, and I think acceptable, but I don't think it is\n> an ideal or final solution. It is not really generic. It doesn't help with\n> XML or Hstore. You need to touch cast functions, which I think is not best,\n> because cast functions should not cooperate on optimization of execution of\n> another function.\n>\n> My idea of an ideal solution is the introduction of the possibility to use\n> \"any\" pseudotype as return type with possibility to set default return\n> type. Now, \"any\" is allowed only for arguments. The planner can set the\n> expected type when it knows it, or can use the default type.\n>\n> so for extraction of jsonb field we can use FUNCTION\n> jsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonb\n>\n> if we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb,\n> if we use SELECT jsonb_extract_field('...', 'x')::date, then it returns date\n>\n> With this possibility we don't need to touch to cast functions, and we can\n> simply implement similar functions for other non atomic types.\n>\n\nthis syntax can be used instead NULL::type trick\n\nlike\n\nSELECT jsonb_populate_record('{...}')::pg_class;\n\ninstead\n\nSELECT jsonb_populate_record(NULL::pg_class, '{...}')\n\n\n\n>\n>\n>\n> --\n>> Best Regards\n>> Andy Fan\n>>\n>\n\nút 15. 8. 2023 v 7:23 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:Hiút 15. 8. 
", "msg_date": "Tue, 15 Aug 2023 07:33:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n> My idea of an ideal solution is the introduction of the possibility to use\n> \"any\" pseudotype as return type with possibility to set default return\n> type. Now, \"any\" is allowed only for arguments. The planner can set the\n> expected type when it knows it, or can use the default type.\n>\n> so for extraction of jsonb field we can use FUNCTION\n> jsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonb\n>\n\nIs this an existing framework or do you want to create something new?\n\n>\n> if we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb,\n> if we use SELECT jsonb_extract_field('...', 'x')::date, then it returns date\n>\n\nIf so, what is the difference from the current jsonb->'f' and\n(jsonb->'f' )::date?\n\n>\n> With this possibility we don't need to touch to cast functions, and we can\n> simply implement similar functions for other non atomic types.\n>\n\nWhat do you mean by \"atomic type\" here? 
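To make the comparison concrete, a sketch using the tb/a names from earlier in the thread (jsonb_object_field_type is the patch's internal function, not a shipped API, and its argument order varied across patch versions):

```sql
-- What users write today: extract a jsonb datum, then cast it,
-- which forces an intermediate jsonb value to be built.
SELECT (tb.a -> 'a')::numeric FROM tb;

-- What the support function would rewrite that into under the patch:
-- a single call that decodes the numeric directly out of the jsonb,
-- with the typed NULL carrying the target type.
-- SELECT jsonb_object_field_type(tb.a, 'a'::text, NULL::numeric) FROM tb;
```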
If you want to introduce some new\nframework, I think we need a very clear benefit.\n\n-- \nBest Regards\nAndy Fan\n\nMy idea of an ideal solution is the introduction of the possibility to use \"any\" pseudotype as return type with possibility to set default return type. Now, \"any\" is allowed only for arguments. The planner can set the expected type when it knows it, or can use the default type.so for extraction of jsonb field we can use FUNCTION jsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonbIs this an existing framework or do you want to create something new? if we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb, if we use SELECT jsonb_extract_field('...', 'x')::date, then it returns dateIf so, what is the difference from the current  jsonb->'f'   and (jsonb->'f' )::date?  With this possibility we don't need to touch to cast functions, and we can simply implement similar functions for other non atomic types. What do you mean by \"atomic type\" here?   If you want to introduce some new framework,  I think we need a very clear benefit.  -- Best RegardsAndy Fan", "msg_date": "Tue, 15 Aug 2023 14:04:19 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "út 15. 8. 2023 v 8:04 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>> My idea of an ideal solution is the introduction of the possibility to\n>> use \"any\" pseudotype as return type with possibility to set default return\n>> type. Now, \"any\" is allowed only for arguments. 
The planner can set the\n>> expected type when it knows it, or can use the default type.\n>>\n>> so for extraction of jsonb field we can use FUNCTION\n>> jsonb_extract_field(jsonb, text) RETURNS \"any\" DEFAULT jsonb\n>>\n>\n>\nIs this an existing framework or do you want to create something new?\n>\n\nThis should be created\n\n\n>\n>> if we call SELECT jsonb_extract_field(..., 'x') -> then it returns jsonb,\n>> if we use SELECT jsonb_extract_field('...', 'x')::date, then it returns date\n>>\n>\n> If so, what is the difference from the current jsonb->'f' and\n> (jsonb->'f' )::date?\n>\n\na) effectiveness. The ending performance should be similar like your\ncurrent patch, but without necessity to use planner support API.\n\nb) more generic usage. For example, the expressions in plpgsql are executed\na little bit differently than SQL queries. So there the optimization from\nyour patch probably should not work, because you can write only var :=\nj->'f', and plpgsql forces cast function execution, but not via planner.\n\nc) nothing else. It should not to require to modify cast function\ndefinitions\n\n\n\n>> With this possibility we don't need to touch to cast functions, and we\n>> can simply implement similar functions for other non atomic types.\n>>\n>\n> What do you mean by \"atomic type\" here? If you want to introduce some\n> new framework, I think we need a very clear benefit.\n>\n\nAtomic types (skalar types like int, varchar, date), nonatomic types -\narray, composite, xml, jsonb, hstore or arrays of composite types.\n\n\n\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\nút 15. 8. 2023 v 8:04 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:My idea of an ideal solution is the introduction of the possibility to use \"any\" pseudotype as return type with possibility to set default return type. Now, \"any\" is allowed only for arguments. 
", "msg_date": "Tue, 15 Aug 2023 08:50:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n>> a) effectiveness. 
The ending performance should be similar like your\n> current patch, but without necessity to use planner support API.\n>\n\nSo the cost is we need to create a new & different framework.\n\n>\n>\nb) because you can write only var := j->'f', and plpgsql forces cast\n> function execution, but not via planner.\n>\n\nvar a := 1 needs going with planner, IIUC, same with j->'f'.\n\nc) nothing else. It should not to require to modify cast function\n> definitions\n>\n\nIf you look at how the planner support function works, that is\npretty simple, just modify the prosupport attribute. I'm not sure\nthis should be called an issue or avoiding it can be described\nas a benefit.\n\nI don't think the current case is as bad as the other ones like\nusers needing to modify their queries or type-safety system\nbeing broken. So personally I'm not willing to creating some\nthing new & heavy. However I'm open to see what others say.\n\n-- \nBest Regards\nAndy Fan\n\na) effectiveness. The ending performance should be similar like your current patch, but without necessity to use planner support API. So the cost is we need to create a new & different framework.   b) because you can write only var := j->'f', and plpgsql forces cast function execution, but not via planner. var a := 1 needs going with planner,  IIUC,  same with j->'f'.  c) nothing else. It should not to require to modify cast function definitions  If you look at how the planner support function works,  that ispretty simple,  just modify the prosupport attribute. I'm not surethis should be called an issue or avoiding it can be describedas a benefit. I don't think the current case is as bad as the other ones likeusers needing to modify their queries or type-safety systembeing broken. So personally I'm not willing to creating something new & heavy. However I'm open to see what others say.   
-- Best RegardsAndy Fan", "msg_date": "Tue, 15 Aug 2023 15:05:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "út 15. 8. 2023 v 9:05 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:\n\n>\n>\n>> a) effectiveness. The ending performance should be similar like your\n>> current patch, but without necessity to use planner support API.\n>>\n>\n> So the cost is we need to create a new & different framework.\n>\n\nyes, it can be less work, code than for example introduction of\n\"anycompatible\".\n\n\n>\n>>\n> b) because you can write only var := j->'f', and plpgsql forces cast\n>> function execution, but not via planner.\n>>\n>\n> var a := 1 needs going with planner, IIUC, same with j->'f'.\n>\n\ni was wrong, the planner is full, but the executor is reduced.\n\n\n\n>\n> c) nothing else. It should not to require to modify cast function\n>> definitions\n>>\n>\n> If you look at how the planner support function works, that is\n> pretty simple, just modify the prosupport attribute. I'm not sure\n> this should be called an issue or avoiding it can be described\n> as a benefit.\n>\n> I don't think the current case is as bad as the other ones like\n> users needing to modify their queries or type-safety system\n> being broken. So personally I'm not willing to creating some\n> thing new & heavy. However I'm open to see what others say.\n>\n\nok\n\nregards\n\nPavel\n\n\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\nút 15. 8. 2023 v 9:05 odesílatel Andy Fan <zhihui.fan1213@gmail.com> napsal:a) effectiveness. The ending performance should be similar like your current patch, but without necessity to use planner support API. So the cost is we need to create a new & different framework.  yes, it can be less work, code than for example introduction of \"anycompatible\".   
b) because you can write only var := j->'f', and plpgsql forces cast function execution, but not via planner. var a := 1 needs going with planner,  IIUC,  same with j->'f'.  i was wrong, the planner is full, but the executor is reduced. c) nothing else. It should not to require to modify cast function definitions  If you look at how the planner support function works,  that ispretty simple,  just modify the prosupport attribute. I'm not surethis should be called an issue or avoiding it can be describedas a benefit. I don't think the current case is as bad as the other ones likeusers needing to modify their queries or type-safety systembeing broken. So personally I'm not willing to creating something new & heavy. However I'm open to see what others say.  okregardsPavel  -- Best RegardsAndy Fan", "msg_date": "Tue, 15 Aug 2023 09:45:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Tue, Aug 15, 2023 at 1:24 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> út 15. 8. 2023 v 5:24 odesílatel Andy Fan <zhihui.fan1213@gmail.com>\n> napsal:\n>\n>>\n>>> jsonb_extract_xx_type just cares about the argtype, but\n>>> 'explain select xx' will still access the const->constvalue.\n>>> const->constvalue is 0 which is set by makeNullConst currently,\n>>> and it is ok for the current supported type.\n>>>\n>>\n>> The exception is numeric data type, the constvalue can't be 0.\n>> so hack it with the below line. 
maybe not good enough, but I\n>> have no better solution now.\n>>\n>> + Const *target =\n>> makeNullConst(fexpr->funcresulttype,\n>> +\n>> -1,\n>> +\n>> InvalidOid);\n>> + /*\n>> + * Since all the above functions are strict, we\n>> can't input\n>> + * a NULL value.\n>> + */\n>> + target->constisnull = false;\n>> +\n>> + Assert(target->constbyval || target->consttype ==\n>> NUMERICOID);\n>> +\n>> + /* Mock a valid datum for !constbyval type. */\n>> + if (fexpr->funcresulttype == NUMERICOID)\n>> + target->constvalue =\n>> DirectFunctionCall1(numeric_in, CStringGetDatum(\"0\"));\n>>\n>>\n> Personally I think this workaround is too dirty, and better to use a\n> strict function (I believe so the overhead for NULL values is acceptable).\n>\n\nIn the patch v8, I created a new routine named makeDummyConst,\nwhich just sits by makeNullConst. It may be helpful to some extent.\na). The code is self-document for the user/reader. b). We have a\ncentral place to maintain this routine.\n\nBesides the framework, the troubles for the reviewer may be if the\ncode has some corner case issue or behavior changes. Especially\nI have some code refactor when working on jsonb_extract_path.\nso the attached test.sql is designed for this. 
I have compared the\nresult between master and patched version and I think reviewer\ncan do some extra testing with it.\n\nv8 is the finished version in my mind, so I think it is ready for review\nnow.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 16 Aug 2023 14:12:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "update with the correct patch..", "msg_date": "Wed, 16 Aug 2023 14:27:57 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Wed, Aug 16, 2023 at 2:28 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> update with the correct patch..\n\nregression=# select proname, pg_catalog.pg_get_function_arguments(oid)\nfrom pg_proc\nwhere proname = 'jsonb_extract_path_type';\n proname | pg_get_function_arguments\n-------------------------+--------------------------------------------------------------------\n jsonb_extract_path_type | from_json jsonb, VARIADIC path_elems\ntext[], target_oid anyelement\n(1 row)\n\nVARIADIC should be the last argument?\n\nselect jsonb_array_element_type(jsonb'[1231]',0, null::int);\nnow return null.\nShould it return 1231?\n\nregression=# select jsonb_array_element_type(jsonb'1231',0, 1::int);\n jsonb_array_element_type\n--------------------------\n 1231\n(1 row)\n\nnot sure if it's ok. if you think it's not ok then:\n+ if (!JB_ROOT_IS_ARRAY(jb))\n+PG_RETURN_NULL();\nchange to\n+if (JB_ROOT_IS_SCALAR(jb) || !JB_ROOT_IS_ARRAY(jb))\n+PG_RETURN_NULL();\n\nselect jsonb_array_element_type(jsonb'[1231]',0, '1'::jsonb);\nwill crash, because jsonb_array_element_type call\ncast_jsonbvalue_to_type then in switch case, it will go to\ndefault part. in default part you have Assert(false);\nalso in cast_jsonbvalue_to_type, PG_RETURN_POINTER(NULL) code won't be reached.\n\nin jsonb_cast_support function. 
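The VARIADIC observation above can be reproduced directly; a sketch:

```sql
-- A VARIADIC parameter must be the last input parameter, so a
-- signature like (from_json jsonb, VARIADIC path_elems text[],
-- target_oid anyelement) is rejected when defined through SQL.
CREATE FUNCTION bad_sig(VARIADIC path_elems text[], target int)
RETURNS int
LANGUAGE sql AS 'SELECT 0';
-- fails: VARIADIC parameter must be the last input parameter
```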
you already have\n!jsonb_cast_is_optimized(fexpr->funcresulttype)). then in the default\nbranch of cast_jsonbvalue_to_type, you can just elog(error, \"can only\ncast to xxx type\"). jsonb_array_element_type, jsonb_object_field_type,\n third argument is anyelement. so targetOid can be any datatype's oid.\n\n\n", "msg_date": "Thu, 17 Aug 2023 00:32:08 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi jian:\n\nOn Thu, Aug 17, 2023 at 12:32 AM jian he <jian.universality@gmail.com>\nwrote:\n\n> On Wed, Aug 16, 2023 at 2:28 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > update with the correct patch..\n>\n> regression=# select proname, pg_catalog.pg_get_function_arguments(oid)\n> from pg_proc\n> where proname = 'jsonb_extract_path_type';\n> proname | pg_get_function_arguments\n>\n> -------------------------+--------------------------------------------------------------------\n> jsonb_extract_path_type | from_json jsonb, VARIADIC path_elems\n> text[], target_oid anyelement\n> (1 row)\n>\n> VARIADIC should be the last argument?\n>\n\nCurrently, if users call this function directly (which I don't expect to be\ncommon), they will get wrong results. This issue is fixed in the\nv9 version. To keep consistency among all the functions,\nI moved the 'target_type anyelement' to the 1st argument.\nThanks for the report!\n\n\n> select jsonb_array_element_type(jsonb'[1231]',0, null::int);\n> now return null.\n> Should it return 1231?\n>\n\nNo, this is by design. The function is declared as strict, so\nany NULL inputs yield a NULL output. That's just what we\ntalked about above (the makeDummyConst section). 
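The strictness rule mentioned here can be sketched standalone (a toy model, not PostgreSQL's executor; the Arg/Result types are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a "strict" function call: if any argument is NULL,
 * the function body is never entered and the result is simply NULL. */
typedef struct { int value; bool isnull; } Arg;
typedef struct { int value; bool isnull; } Result;

static Result
call_strict(int (*fn)(const Arg *, int), const Arg *args, int nargs)
{
    Result r = { 0, false };

    for (int i = 0; i < nargs; i++)
    {
        if (args[i].isnull)
        {
            r.isnull = true;    /* short-circuit: fn is never called */
            return r;
        }
    }
    r.value = fn(args, nargs);
    return r;
}

/* stand-in for a strict SQL-callable function */
static int
add_all(const Arg *args, int nargs)
{
    int sum = 0;

    for (int i = 0; i < nargs; i++)
        sum += args[i].value;
    return sum;
}
```

So a strict function given a NULL argument never reaches its body at all, which is why a NULL dummy constant cannot be used to smuggle in a type.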
I don't\nthink this should be addressed.\n\n\n> select jsonb_array_element_type(jsonb'[1231]',0, '1'::jsonb);\n> will crash\n>\n\nOK, looks I didn't pay enough attention to the 'user directly call\njsonb_xx_type' function, so I changed the code in v9 based on\nyour suggestion.\n\nThanks for the review, v9 attached!\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 17 Aug 2023 17:07:31 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-17 05:07, Andy Fan wrote:\n> Thanks for the review, v9 attached!\n\n From the earliest iterations of this patch, I seem to recall\na couple of designs being considered:\n\nIn one, the type-specific cast function would only be internally\nusable, would take a type oid as an extra parameter (supplied in\nthe SupportRequestSimplify rewriting), and would have to be\ndeclared with some nonspecific return type; 'internal' was\nmentioned.\n\nThe idea of an 'internal' return type with no 'internal' parameter\nwas quickly and rightly shot down. But it would have seemed to me\nenough to address that objection by using 'internal' also in its\nparameter list. I could imagine a function declared with two\n'internal' parameters, one understood to be a JsonbValue and one\nunderstood to be a type oid, and an 'internal' result, treated in\nthe rewritten expression tree as binary-coercible to the desired\nresult.\n\nAdmittedly, I have not tried to implement that myself to see\nwhat unexpected roadblocks might exist on that path. Perhaps\nthere are parts of that rewriting that no existing node type\ncan represent? 
Someone more familiar with those corners of\nPostgreSQL may immediately see other difficulties I do not.\n\nBut I have the sense that that approach was abandoned early, in\nfavor of the current approach using user-visible polymorphic\ntypes, and supplying typed dummy constants for use in the\nresolution of those types, with a new function introduced to create\nsaid dummy constants, including allocation and input conversion\nin the case of numeric, just so said dummy constants can be\npassed into functions that have no use for them other than to\ncall get_fn_expr_argtype to recover the type oid, which was the\nonly thing needed in the first place.\n\nCompared to the initial direction I thought this was going,\nnone of that strikes me as better.\n\nNothing makes my opinion authoritative here, and there may\nindeed be reasons it is better, known to others more familiar\nwith that code than I am. But it bugs me.\n\nIf the obstacles to the earlier approach came down to needing\na new type of expression node, something like \"assertion of\n'internal'-to-foo binary coercibility, vouched by a prosupport\nfunction\", would that be a bad thing?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 17 Aug 2023 16:30:41 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Chapman,\n\nThanks for the review!\n\n> The idea of an 'internal' return type with no 'internal' parameter\n> was quickly and rightly shot down.\n\n\nYes, it mainly breaks the type-safety system. The parser needs to know\nthe result type, so PG defines the rule like this:\n\nanyelement fn(anyelement in);\n\nif the exprType(in) in the query tree is X, then PG would infer that fn\nreturns type X. That's why we have to have an anyelement in the\ninput.\n\n\n> But it would have seemed to me\n> enough to address that objection by using 'internal' also in its\n> parameter list. 
I could imagine a function declared with two\n> 'internal' parameters, one understood to be a JsonbValue and one\n> understood to be a type oid, and an 'internal' result, treated in\n> the rewritten expression tree as binary-coercible to the desired\n> result.\n>\n\nI have some trouble understanding this. Are you saying something\nlike:\n\ninternal fn(internal jsonValue, internal typeOid)?\n\nIf so, would it break the type-safety system? And I'm not quite\nsure about the 'binary-coercible' here. Is it the same as the 'binary-coercible'\nin \"timestamp is not binary coercible with timestamptz since...\"?\nI have a strong feeling that I misunderstood you here.\n\n\n> Perhaps there are parts of that rewriting that no existing node type\n> can represent?\n>\n\nI didn't understand this either :(:(\n\n> But I have the sense that that approach was abandoned early, in\n> favor of the current approach using user-visible polymorphic\n> types, and supplying typed dummy constants for use in the\n> resolution of those types, with a new function introduced to create\n> said dummy constants, including allocation and input conversion\n> in the case of numeric, just so said dummy constants can be\n> passed into functions that have no use for them other than to\n> call get_fn_expr_argtype to recover the type oid, which was the\n> only thing needed in the first place.\n\n\nYes, but if we follow the type-safety system, we can't simply input\nan Oid targetOid, and then there are some more considerations here:\na). we can't use makeNullConst because jsonb_xxx_type is\nstrict, so if we have a NULL constant input here, the PG system\nwill return NULL directly. b). The type oid is not the only thing\nwe are interested in; the const.constvalue is as well, since\n'explain select xxxx' accesses it to show it as a string.\nDatum(0) as the constvalue will crash in this sense. 
That's why\nmakeDummyConst was introduced.\n\n\n> something like \"assertion of\n> 'internal'-to-foo binary coercibility, vouched by a prosupport\n> function\", would that be a bad thing?\n>\n\nI can't follow this as well. Could you provide the function prototype\nhere?\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 18 Aug 2023 09:14:27 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-17 21:14, Andy Fan wrote:\n>> The idea of an 'internal' return type with no 'internal' parameter\n>> was quickly and rightly shot down.\n> \n> Yes, it mainly breaks the type-safety system. 
Parser need to know\n> the result type, so PG defines the rule like this:\n\nWell, the reason \"internal return type with no internal parameter type\"\nwas shot down was more specific: if there is such a function in the\ncatalog, an SQL user can call it, and then its return type is a value\ntyped 'internal', and with that the SQL user could call other\nfunctions with 'internal' parameters, and that's what breaks type\nsafety. The specific problem is not having at least one 'internal'\ninput parameter.\n\nThere are lots of functions in the catalog with internal return type\n(I count 111). They are not inherently bad; the rule is simply that\neach one also needs at least one IN parameter typed internal, to\nmake sure it can't be directly called from SQL.\n\n> anyelement fn(anyment in);\n> \n> if the exprType(in) in the query tree is X, then PG would think fn\n> return type X. that's why we have to have an anyelement in the\n> input.\n\nThat's a consequence of the choice to have anyelement as the return\ntype, though. A different choice wouldn't have that consequence.\n\n> I have some trouble understanding this. are you saying something\n> like:\n> \n> internal fn(internal jsonValue, internal typeOid)?\n> \n> If so, would it break the type-safety system?\n\nThat is what I'm saying, and it doesn't break type safety at the\nSQL level, because as long as it has parameters declared internal,\nno SQL can ever call it. So it can only appear in an expression\ntree because your SupportRequestSimplify put it there properly\ntyped, after the SQL query was parsed but before evaluation.\n\nThe thing about 'internal' is it doesn't represent any specific\ntype, it doesn't necessarily represent the same type every time it\nis mentioned, and it often means something that isn't a cataloged\ntype at all, such as a pointer to some kind of struct. There must be\ndocumentation explaining what it has to be. 
For example, your\njsonb_cast_support function has an 'internal' parameter and\n'internal' return type. From the specification for support\nfunctions, you know the 'internal' for the parameter type means\n\"one of the Node structs in supportnodes.h\", and the 'internal'\nfor the return type means \"an expression tree semantically\nequivalent to the FuncExpr\".\n\nSo, in addition to declaring\ninternal fn(internal jsonValue, internal typeOid), you would\nhave to write a clear spec that jsonValue has to be a JsonbValue,\ntypeOid has to be something you can call DatumGetObjectId on,\nand the return value should be a Datum in proper form\ncorresponding to typeOid. And, of course, generate the expression\ntree so all of that is true when it's evaluated.\n\n>> Perhaps there are parts of that rewriting that no existing node type\n>> can represent?\n\nThe description above was in broad strokes. Because I haven't\ntried to implement this, I don't know whether some roadblock would\nappear, such as, is it hard to make a Const node of type internal\nand containing an oid? Or, what sort of node must be inserted to\nclarify that the 'internal' return is actually a Datum of the\nexpected type? By construction, we know that it is, but how to\nmake that explicit in the expression tree?\n\n> a). we can't use the makeNullConst because jsonb_xxx_type is\n> strict, so if we have NULL constant input here, the PG system\n> will return NULL directly. b). Not only the type oid is the thing\n> We are interested in the const.constvalue is as well since\n> 'explain select xxxx' to access it to show it as a string.\n> Datum(0) as the constvalue will crash in this sense. 
That's why\n> makeDummyConst was introduced.\n\nAgain, all of that complication stems from the choice to use the\nanyelement return type and rely on polymorphic type resolution\nto figure the oid out, when we already have the oid to begin with\nand the oid is all we want.\n\n>> something like \"assertion of\n>> 'internal'-to-foo binary coercibility, vouched by a prosupport\n>> function\", would that be a bad thing?\n> \n> I can't follow this as well.\n\nThat was just another way of saying what I was getting at above\nabout what's needed in the expression tree to indicate that the\n'internal' produced by this function is, in fact, really a bool\n(or whatever). We know that it is, but perhaps the expression\ntree will be considered ill-formed without a node that says so.\nA node representing a no-op, binary conversion would suffice,\nbut is there already a node that's allowed to represent an\ninternal-to-bool no-op cast?\n\nIf there isn't, one might have to be invented. So it might be that\nif we go down the \"use polymorphic resolution\" road, we have to\ninvent dummy Consts, and down the \"internal\" road we also have to\ninvent something, like the \"no-op cast considered correct because\na SupportRequestSimplify function put it here\" node.\n\nIf it came down to having to invent one of those things or the\nother, I'd think the latter more directly captures what we really\nwant to do.\n\n(And I'm not even sure anything has to be invented. 
If there's an\nexisting node for no-op binary casts, I think I'd first try\nputting that there and see if anything complains.)\n\nIf this thread is being followed by others more familiar with\nthe relevant code or who see obvious problems I'm missing,\nplease chime in!\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 17 Aug 2023 22:55:56 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Fri, Aug 18, 2023 at 10:55 AM Chapman Flack <chap@anastigmatix.net> wrote:\n>\n>\n> Again, all of that complication stems from the choice to use the\n> anyelement return type and rely on polymorphic type resolution\n> to figure the oid out, when we already have the oid to begin with\n> and the oid is all we want.\n>\n\nyou want jsonb_object_field_type(internal, jsonb, text)? because on\nsql level, it's safe.\n\nThe return data type is determined when we are in jsonb_cast_support.\nwe just need to pass the {return data type} information to the next\nfunction: jsonb_object_field_type.\n\n\n", "msg_date": "Fri, 18 Aug 2023 13:02:55 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "> because as long as it has parameters declared internal,\n> no SQL can ever call it.\n\n\nI was confused about the difference between anyelement and\ninternal, and I want to know a way to create a function which\nis disallowed to be called by the user. 
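The rule just quoted (an 'internal' parameter makes a function unreachable from SQL) can be sketched standalone; this is a toy check, not the real parser, and the oid values are illustrative constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t Oid;

/* illustrative constants, mirroring pg_type oids for flavor only */
#define SKETCH_INTERNALOID 2281
#define SKETCH_JSONBOID    3802
#define SKETCH_TEXTOID     25

/* Toy version of the parser-side rule: a function is SQL-callable
 * only if none of its declared parameter types is 'internal', because
 * no SQL expression can ever produce a value of that pseudo-type. */
static bool
callable_from_sql(const Oid *argtypes, int nargs)
{
    for (int i = 0; i < nargs; i++)
    {
        if (argtypes[i] == SKETCH_INTERNALOID)
            return false;
    }
    return true;
}
```

A function such as jsonb_xxx_type declared with one internal parameter would fail this check, so only the support function's rewriting can ever place a call to it.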
Your above words\nresolved two questions of mine!\n\n> So it can only appear in an expression\n> tree because your SupportRequestSimplify put it there properly\n> typed, after the SQL query was parsed but before evaluation.\n>\n> The thing about 'internal' is it doesn't represent any specific\n> type, it doesn't necessarily represent the same type every time it\n> is mentioned, and it often means something that isn't a cataloged\n> type at all, such as a pointer to some kind of struct.\n\n\nI should have noticed this while studying planner support functions,\nbut highlighting it is very helpful.\n\n\n> If there isn't, one might have to be invented. So it might be that\n> if we go down the \"use polymorphic resolution\" road, we have to\n> invent dummy Consts, and down the \"internal\" road we also have to\n> invent something.\n\n\nI think you might already feel that putting an internal function\ninto an expression would cause something to go wrong. I just did\na quick hack on this, and a crash happens in the simplest case.\nIf something already exists to fix this, I am inclined\nto use 'internal', but I didn't find a way. I'm wondering if we\nshould clarify that \"internal\" should only be used internally and\nshould never appear in an expression, by design.\n\n\n> (And I'm not even sure anything has to be invented. If there's an\n> existing node for no-op binary casts, I think I'd first try\n> putting that there and see if anything complains.)\n>\n> If this thread is being followed by others more familiar with\n> the relevant code or who see obvious problems I'm missing,\n> please chime in!\n>\n\nThank you, wise & modest gentleman. I would really hope Tom can\nchime in at this time.\n\nIn general, the current decision we need to make is whether to use\n'internal' or 'anyelement' to represent the target OID. 
the internal way\nwould be more straight but have troubles to be in the expression tree.\nthe 'anyelement' way is compatible with expression, but it introduces\nthe makeDummyConst overhead and I'm not pretty sure it is a correct\nimplementation in makeDummyConst. see the XXX part.\n\n+/*\n+ * makeDummyConst\n+ * create a Const node with the specified type/typmod.\n+ *\n+ * This is a convenience routine to create a Const which only the\n+ * type is interested but make sure the value is accessible.\n+ */\n+Const *\n+makeDummyConst(Oid consttype, int32 consttypmod, Oid constcollid)\n+{\n+ int16 typLen;\n+ bool typByVal;\n+ Const *c;\n+ Datum val = 0;\n+\n+\n+ get_typlenbyval(consttype, &typLen, &typByVal);\n+\n+ if (consttype == NUMERICOID)\n+ val = DirectFunctionCall1(numeric_in, CStringGetDatum(\"0\"));\n+ else if (!typByVal)\n+ elog(ERROR, \"create dummy const for type %u is not\nsupported.\", consttype);\n+\n+ /* XXX: here I assume constvalue=0 is accessible for const by value\ntype.*/\n+ c = makeConst(consttype, consttypmod, 0, (int) typLen, val, false,\ntypByVal);\n+\n+ return c;\n+}\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 18 Aug 2023 15:41:13 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-18 03:41, Andy Fan wrote:\n> I just have\n> a quick hack on this, and crash happens at the simplest case.\n\nIf I build from this patch, this test:\n\nSELECT (test_json -> 0)::int4, test_json -> 0 FROM test_jsonb WHERE \njson_type = 'scalarint';\n\nfails like this:\n\nProgram received signal SIGSEGV, Segmentation fault.\nconvert_saop_to_hashed_saop_walker (node=0x17, context=0x0)\n at \n/var/tmp/nohome/pgbuildh/../postgresql/src/backend/optimizer/util/clauses.c:2215\n2215\t\tif (IsA(node, ScalarArrayOpExpr))\n\n(gdb) p node\n$1 = (Node *) 0x17\n\nSo the optimizer is looking at some node to see if it is a\nScalarArrayOpExpr, but the 
node has some rather weird address.\n\nOr maybe it's not that weird. 0x17 is 23, and so is:\n\nselect 'int4'::regtype::oid;\n oid\n-----\n 23\n\nSee what happened?\n\n+ int64 target_typ = fexpr->funcresulttype;\n...\n+ fexpr->args = list_insert_nth(fexpr->args, 0, (void *) target_typ);\n\nThis is inserting the desired result type oid directly as the first\nthing in the list of fexpr's args.\n\nBut at the time your support function is called, nothing is being\nevaluated yet. You are just manipulating a tree of expressions to\nbe evaluated later, and you want fexpr's first arg to be an\nexpression that will produce this type oid later, when it is\nevaluated.\n\nA constant would do nicely:\n\n+ Const\t*target = makeConst(\n\tINTERNALOID, -1, InvalidOid, SIZEOF_DATUM,\n\tObjectIdGetDatum(fexpr->funcresulttype), false, true);\n+ fexpr->args = list_insert_nth(fexpr->args, 0, target);\n\nWith that change, it doesn't segfault, but it does do this:\n\nERROR: cast jsonb to type 0 is not allowed\n\nand that's because of this:\n\n+ Oid\t\t\ttargetOid = DatumGetObjectId(0);\n\nThe DatumGetFoo(x) macros are for when you already have the Datum\n(it's x) and you know it's a Foo. So this is just setting targetOid\nto zero. When you want to get something from function argument 0 and\nyou know that's a Foo, you use a PG_GETARG_FOO(argno) macro (which\namounts to PG_GETARG_DATUM(argno) followed by DatumGetFoo.\n\nSo, with\n\n+ Oid\t\t\ttargetOid = PG_GETARG_OID(0);\n\nSELECT (test_json -> 0)::int4, test_json -> 0 FROM test_jsonb WHERE \njson_type = 'scalarint';\n int4 | ?column?\n------+----------\n 2 | 2\n\nHowever, EXPLAIN is sad:\n\nERROR: cannot display a value of type internal\n\nand that may be where this idea runs aground.\n\nNow, I was expecting something to complain about the result of\njsonb_array_element_type, and that didn't happen. 
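As an aside, the distinction can be sketched standalone (the typedefs and names only mimic PostgreSQL's; this is not PG code): a DatumGet-style conversion only reinterprets a Datum you already hold, while fetching argument 0 has to read the call-info slot first, which is what a PG_GETARG-style macro bundles together.

```c
#include <assert.h>
#include <stdint.h>

typedef uintptr_t Datum;
typedef uint32_t Oid;

static Oid
sketch_datum_get_object_id(Datum d)
{
    return (Oid) d;             /* reinterpret only; nothing is fetched */
}

/* toy stand-in for FunctionCallInfo */
typedef struct { Datum args[4]; } CallInfoSketch;

/* what a PG_GETARG_OID(argno)-style macro bundles together:
 * fetch the slot, then convert it */
static Oid
sketch_getarg_oid(const CallInfoSketch *fcinfo, int argno)
{
    return sketch_datum_get_object_id(fcinfo->args[argno]);
}
```

Here sketch_datum_get_object_id(0) is just 0 (the bug described above), while sketch_getarg_oid(fcinfo, 0) recovers the oid that was actually stored in the argument slot.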
We rewrote\na function that was supposed to be a cast to int4, and\nreplaced it with a function returning internal, and evaluation\nhappily just took that as the int4 that the next node expected.\n\nIf something had complained about that, it might have been\nnecessary to insert some new node above the internal-returning\nfunction to say the result was really int4. Notice there is a\nmakeRelabelType() for that. (I had figured there probably was,\nbut didn't know its exact name.)\n\nSo it doesn't seem strictly necessary to do that, but it might\nmake the EXPLAIN result look better (if EXPLAIN were made to work,\nof course).\n\nNow, my guess is EXPLAIN is complaining when it sees the Const\nof type internal, and doesn't know how to show that value.\nPerhaps makeRelabelType is the answer there, too: what if the\nConst has Oid type, so EXPLAIN can show it, and what's inserted\nas the function argument is a relabel node saying it's internal?\n\nHaven't tried that yet.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 18 Aug 2023 14:50:15 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-18 14:50, Chapman Flack wrote:\n> Now, my guess is EXPLAIN is complaining when it sees the Const\n> of type internal, and doesn't know how to show that value.\n> Perhaps makeRelabelType is the answer there, too: what if the\n> Const has Oid type, so EXPLAIN can show it, and what's inserted\n> as the function argument is a relabel node saying it's internal?\n\nSimply changing the Const to be of type Oid makes EXPLAIN happy,\nand nothing ever says \"hey, why are you passing this oid for an\narg that wants internal?\". 
This is without adding any relabel\nnodes anywhere.\n\n Seq Scan on pg_temp.test_jsonb\n Output: pg_catalog.jsonb_array_element_type('23'::oid, test_json, 0), \n(test_json -> 0)\n Filter: (test_jsonb.json_type = 'scalarint'::text)\n\nNothing in that EXPLAIN output to make you think anything weird\nwas going on, unless you went and looked up jsonb_array_element_type\nand saw that its arg0 isn't oid and its return type isn't int4.\n\nBut I don't know that adding relabel nodes wouldn't still be\nthe civilized thing to do.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 18 Aug 2023 15:08:57 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-18 15:08, Chapman Flack wrote:\n> But I don't know that adding relabel nodes wouldn't still be\n> the civilized thing to do.\n\nInterestingly, when I relabel both places, like this:\n\n Oid targetOid = fexpr->funcresulttype;\n Const *target = makeConst(\n OIDOID, -1, InvalidOid, sizeof(Oid),\n ObjectIdGetDatum(targetOid), false, true);\n RelabelType *rTarget = makeRelabelType((Expr *)target,\n INTERNALOID, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n fexpr->funcid = new_func_id;\n fexpr->args = opexpr->args;\n fexpr->args = list_insert_nth(fexpr->args, 0, rTarget);\n expr = (Expr *)makeRelabelType((Expr *)fexpr,\n targetOid, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n }\n PG_RETURN_POINTER(expr);\n\nEXPLAIN looks like this:\n\n Seq Scan on pg_temp.test_jsonb\n Output: jsonb_array_element_type(('23'::oid)::internal, test_json, \n0), (test_json -> 0)\n Filter: (test_jsonb.json_type = 'scalarint'::text)\n\nWith COERCE_IMPLICIT_CAST both places, the relabeling of the\nfunction result is invisible, but the relabeling of the argument\nis visible.\n\nWith the second one changed to COERCE_EXPLICIT_CAST:\n\n Seq Scan on pg_temp.test_jsonb\n Output: (jsonb_array_element_type(('23'::oid)::internal, test_json, 
\n0))::integer, (test_json -> 0)\n Filter: (test_jsonb.json_type = 'scalarint'::text)\n\nthen both relabelings are visible.\n\nI'm not sure whether one way is better than the other, or whether\nit is even important to add the relabel nodes at all, as nothing\nraises an error without them. As a matter of taste, it seems like\na good idea though.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 18 Aug 2023 17:02:52 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Sat, Aug 19, 2023 at 3:09 AM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 2023-08-18 14:50, Chapman Flack wrote:\n> > Now, my guess is EXPLAIN is complaining when it sees the Const\n> > of type internal, and doesn't know how to show that value.\n> > Perhaps makeRelabelType is the answer there, too: what if the\n> > Const has Oid type, so EXPLAIN can show it, and what's inserted\n> > as the function argument is a relabel node saying it's internal?\n\n\n>\n> Simply changing the Const to be of type Oid makes EXPLAIN happy,\n> and nothing ever says \"hey, why are you passing this oid for an\n> arg that wants internal?\". This is without adding any relabel\n> nodes anywhere.\n>\n\n\nHighlighting the use case of makeRelabelType is interesting! But using\nthe Oid directly looks more promising for this question IMO; it reads like\n\"you said we can put anything in this arg, so I put an OID const here\",\nso nothing seems wrong. Compared with the makeRelabelType method,\nI think the current method is more straightforward. Compared with\nanyelement, it avoids the creation of makeDummyConst, whose\nimplementation I'm not sure is always correct. 
So I am pretty inclined to this\nway!\n\nv10 attached.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 21 Aug 2023 09:31:56 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-20 21:31, Andy Fan wrote:\n> Highlighting the user case of makeRelableType is interesting! But using\n> the Oid directly looks more promising for this question IMO, it looks \n> like:\n> \"you said we can put anything in this arg, so I put an OID const \n> here\",\n> seems nothing is wrong.\n\nPerhaps one of the more senior developers will chime in, but to me,\nleaving out the relabel nodes looks more like \"all of PostgreSQL's\ntype checking happened before the SupportRequestSimplify, so nothing\nhas noticed that we rewrote the tree with mismatched types, and as\nlong as nothing crashes we sort of got away with it.\"\n\nSuppose somebody writes an extension to double-check that plan\ntrees are correctly typed. Or improves EXPLAIN to check a little more\ncarefully than it seems to. Omitting the relabel nodes could spell\ntrouble then.\n\nOr, someone more familiar with the code than I am might say \"oh,\nmismatches like that are common in rewritten trees, we live with it.\"\nBut unless somebody tells me that, I'm not believing it.\n\nBut I would say we have proved the concept of SupportRequestSimplify\nfor this task. :)\n\nNow, it would make me happy to further reduce some of the code\nduplication between the original and the _type versions of these\nfunctions. I see that you noticed the duplication in the case of\njsonb_extract_path, and you factored out jsonb_get_jsonbvalue so\nit could be reused. There is also some duplication with object_field\nand array_element. (Also, we may have overlooked jsonb_path_query\nand jsonb_path_query_first as candidates for the source of the\ncast. Two more candidates; five total.)\n\nHere is one way this could be structured. 
Observe that every one\nof those five source candidates operates in two stages:\n\nStart: All of the processing until a JsonbValue has been obtained.\nFinish: Converting the JsonbValue to some form for return.\n\nBefore this patch, there were two choices for Finish:\nJsonbValueToJsonb or JsonbValueAsText.\n\nWith this patch, there are four Finish choices: those two, plus\nPG_RETURN_BOOL(v->val.boolean), PG_RETURN_NUMERIC(v->val.numeric).\n\nClearly, with rewriting, we can avoid 5✕4 = 20 distinct\nfunctions. The five candidate functions only differ in Start.\nSuppose each of those had a _start version, like\njsonb_object_field_start, that only proceeds as far as\nobtaining the JsonbValue, and returns that directly (an\n'internal' return type). Naturally, each _start function would\nneed an 'internal' parameter also, even if it isn't used,\njust to make sure it is not SQL-callable.\n\nNow consider four Finish functions: jsonb_finish_jsonb,\njsonb_finish_text, jsonb_finish_boolean, jsonb_finish_numeric.\n\nEach would have one 'internal' parameter (a JsonbValue), and\nits return type declared normally. There is no need to pass\na type oid to any of these, and they need not contain any\nswitch to select a return type. The correct finisher to use\nis simply chosen once at the time of rewriting.\n\nSo cast(jsonb_array_element(jb, 0) as numeric) would just get\nrewritten as jsonb_finish_numeric(jsonb_array_element_start(jb,0)).\n\nThe other (int and float) types don't need new code; just have\nthe rewriter add a cast-from-numeric node on top. That's all\nthose other switch cases in cast_jsonbvalue_to_type are doing,\nanyway.\n\nNotice in this structure, less relabeling is needed. The\nfinal return does not need relabeling, because each finish\nfunction has the expected return type. 
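That factoring could be sketched like this (all names here are hypothetical: array_element_start and the finish functions are the proposal, not the patch's current code, and the toy value type stands in for JsonbValue):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy intermediate value standing in for JsonbValue: one "start" step
 * produces it, and one small "finish" step per result type consumes it,
 * so five starts and four finishes compose instead of needing twenty
 * combined functions. */
typedef enum { SK_BOOL, SK_NUMERIC } SketchKind;
typedef struct { SketchKind kind; bool b; double num; } SketchValue;

/* one of the five hypothetical *_start functions */
static SketchValue
array_element_start(const double *arr, int idx)
{
    SketchValue v = { SK_NUMERIC, false, arr[idx] };
    return v;
}

/* two of the four hypothetical finish functions */
static double
finish_numeric(SketchValue v)
{
    return v.num;
}

static bool
finish_boolean(SketchValue v)
{
    return v.b;
}
```

In this shape, cast(jsonb_array_element(jb, 0) as numeric) would rewrite to finish_numeric(array_element_start(jb, 0)), and the int and float cases need no new code: the rewriter just stacks a cast-from-numeric node on top of the numeric finisher.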
Each finish function's\nparameter is typed 'internal' (a JsonbValue), but that's just\nwhat each start function returns, so no relabeling needed\nthere either.\n\nThe rewriter will have to supply some 'internal' constant\nas a start-function parameter (because of the necessary\n'internal' parameter). It might still be civilized to relabel\nthat.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 20 Aug 2023 23:19:47 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n> Interestingly, when I relabel both places, like this:\n>\n> Oid targetOid = fexpr->funcresulttype;\n> Const *target = makeConst(\n> OIDOID, -1, InvalidOid, sizeof(Oid),\n> ObjectIdGetDatum(targetOid), false, true);\n> RelabelType *rTarget = makeRelabelType((Expr *)target,\n> INTERNALOID, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n> fexpr->funcid = new_func_id;\n> fexpr->args = opexpr->args;\n> fexpr->args = list_insert_nth(fexpr->args, 0, rTarget);\n> expr = (Expr *)makeRelabelType((Expr *)fexpr,\n> targetOid, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n> }\n> PG_RETURN_POINTER(expr);\n>\n> EXPLAIN looks like this:\n>\n> Seq Scan on pg_temp.test_jsonb\n> Output: jsonb_array_element_type(('23'::oid)::internal, test_json,\n> 0), (test_json -> 0)\n> Filter: (test_jsonb.json_type = 'scalarint'::text)\n>\n> With COERCE_IMPLICIT_CAST both places, the relabeling of the\n> function result is invisible, but the relabeling of the argument\n> is visible.\n>\n>\nI think this is because get_rule_expr's showimplicit is always\ntrue for args in this case, checking the implementation of\nget_rule_expr, I found PG behavior like this in many places.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Mon, 21 Aug 2023 18:58:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "(Just relalized this was sent to chap in private, resent it again).\n\nOn Mon, Aug 21, 2023 at 6:50 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Mon, Aug 21, 2023 at 11:19 AM Chapman Flack <chap@anastigmatix.net>\n> wrote:\n>\n>> On 2023-08-20 21:31, Andy Fan wrote:\n>> > Highlighting the user case of makeRelableType is interesting! 
But using\n>> > the Oid directly looks more promising for this question IMO, it looks\n>> > like:\n>> > \"you said we can put anything in this arg, so I put an OID const\n>> > here\",\n>> > seems nothing is wrong.\n>>\n>> Perhaps one of the more senior developers will chime in, but to me,\n>> leaving out the relabel nodes looks more like \"all of PostgreSQL's\n>> type checking happened before the SupportRequestSimplify, so nothing\n>> has noticed that we rewrote the tree with mismatched types, and as\n>> long as nothing crashes we sort of got away with it.\"\n>>\n>> Suppose somebody writes an extension to double-check that plan\n>> trees are correctly typed. Or improves EXPLAIN to check a little more\n>> carefully than it seems to. Omitting the relabel nodes could spell\n>> trouble then.\n>>\n>> Or, someone more familiar with the code than I am might say \"oh,\n>> mismatches like that are common in rewritten trees, we live with it.\"\n>> But unless somebody tells me that, I'm not believing it.\n>>\n>\n> Well, this sounds long-lived. I kind of prefer to label it now. Adding\n> the 3rd commit to relabel the arg and return value.\n>\n>\n>> But I would say we have proved the concept of SupportRequestSimplify\n>> for this task. :)\n>>\n>\n> Yes, this is great!\n>\n>\n>> Now, it would make me happy to further reduce some of the code\n>> duplication between the original and the _type versions of these\n>> functions. I see that you noticed the duplication in the case of\n>> jsonb_extract_path, and you factored out jsonb_get_jsonbvalue so\n>> it could be reused. There is also some duplication with object_field\n>> and array_element.\n>\n>\nYes, compared with jsonb_extract_path, object_field and array_element\njust have much less duplication, which are 2 lines and 6 lines separately.\n\n\n> (Also, we may have overlooked jsonb_path_query\n>> and jsonb_path_query_first as candidates for the source of the\n>> cast. 
Two more candidates; five total.)\n>>\n>\nI can try to add them at the same time when we talk about the\ninfrastruct, thanks for the hint!\n\n\n>> Here is one way this could be structured. Observe that every one\n>> of those five source candidates operates in two stages:\n>>\n>\n> I'm not very excited with this manner, reasons are: a). It will have\n> to emit more steps in ExprState->steps which will be harmful for\n> execution. The overhead is something I'm not willing to afford.\n> b). this manner requires more *internal*, which is kind of similar\n> to \"void *\" in C. Could you explain more about the benefits of this?\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 22 Aug 2023 11:14:48 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n>>> Perhaps one of the more senior developers will chime in, but to me,\n>>> leaving out the relabel nodes looks more like \"all of PostgreSQL's\n>>> type checking happened before the SupportRequestSimplify, so nothing\n>>> has noticed that we rewrote the tree with mismatched types, and as\n>>> long as nothing crashes we sort of got away with it.\"\n>>>\n>>> Suppose somebody writes an extension to double-check that plan\n>>> trees are correctly typed. Or improves EXPLAIN to check a little more\n>>> carefully than it seems to. Omitting the relabel nodes could spell\n>>> trouble then.\n>>>\n>>> Or, someone more familiar with the code than I am might say \"oh,\n>>> mismatches like that are common in rewritten trees, we live with it.\"\n>>> But unless somebody tells me that, I'm not believing it.\n>>>\n>>\n>> Well, this sounds long-lived. I kind of prefer to label it now. 
Adding\n>> the 3rd commit to relabel the arg and return value.\n>>\n>>\nAfter we label it, we will get error like this:\n\nselect (a->'a')::int4 from m;\nERROR: cannot display a value of type internal\n\nHowever the following statement can work well.\n\n select ('{\"a\": 12345}'::jsonb->'a')::numeric;\n numeric\n---------\n 12345\n\nThat's mainly because the later query doesn't go through the planner\nsupport function. I didn't realize this before so the test case doesn't\ncatch it. Will add the test case in the next version. The reason why\nwe get the error for the first query is because the query tree says\nwe should output an \"internal\" result at last and then pg doesn't\nknow how to output an internal data type. This is kind of in conflict\nwith our goal.\n\nSo currently the only choices are: PATCH 001 or PATCH 001 + 002.\n\nhttps://www.postgresql.org/message-id/CAKU4AWrs4Pzajm2_tgtUTf%3DCWfDJEx%3D3h45Lhqg7tNOVZw5YxA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Tue, 22 Aug 2023 13:54:27 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-22 01:54, Andy Fan wrote:\n> After we label it, we will get error like this:\n> \n> select (a->'a')::int4 from m;\n> ERROR: cannot display a value of type internal\n\nWithout looking in depth right now, I would double-check\nwhat relabel node is being applied at the result. 
The idea,\nof course, was to relabel the result as the expected result\ntype, not internal.\n\n(Or, as in the restructuring suggested earlier, to use a\nfinish function whose return type is already as expected,\nand needs no relabeling.)\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 22 Aug 2023 08:16:02 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-22 08:16, Chapman Flack wrote:\n> On 2023-08-22 01:54, Andy Fan wrote:\n>> After we label it, we will get error like this:\n>> \n>> select (a->'a')::int4 from m;\n>> ERROR: cannot display a value of type internal\n> \n> Without looking in depth right now, I would double-check\n> what relabel node is being applied at the result. The idea,\n> of course, was to relabel the result as the expected result\n\nas I suspected, looking at v10-0003, there's this:\n\n+ fexpr = (FuncExpr *)makeRelabelType((Expr *) fexpr, INTERNALOID,\n+ 0, InvalidOid, \nCOERCE_IMPLICIT_CAST);\n\ncompared to the example I had sent earlier:\n\nOn 2023-08-18 17:02, Chapman Flack wrote:\n> expr = (Expr *)makeRelabelType((Expr *)fexpr,\n> targetOid, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n\nThe key difference: this is the label going on the result type\nof the function we are swapping in. The function already has\nreturn type declared internal; we want to relabel it as\nreturning the type identified by targetOid. A relabel node\nto type internal is the reverse of what's needed (and also\nsuperfluous, as the function's return type is internal already).\n\nTwo more minor differences: (1) the node you get from\nmakeRelabelType is an Expr, but not really a FuncExpr. Casting\nit to FuncExpr is a bit bogus. Also, the third argument to\nmakeRelabelType is a typmod, and I believe the \"not-modified\"\ntypmod is -1, not 0.\n\nIn the example I had sent earlier, there were two relabel nodes,\nserving different purposes. 
In one, we have a function that wants\ninternal for an argument, but we've made a Const with type OIDOID,\nso we want to say \"this is internal, the thing the function wants.\"\n\nThe situation is reversed at the return type of the function: the\nfunction declares its return type internal, but the surrounding\nquery is expecting an int4 (or whatever targetOid identifies),\nso any relabel node there needs to say it's an int4 (or whatever).\n\nSo, that approach involves two relabelings. In the one idea for\nrestructuring that I suggested earlier, that's reduced: the\n..._int function produces a JsonbValue (typed internal) and\nthe selected ..._finish function expects that, so those types\nmatch and no relabeling is called for. And with the correct\n..._finish function selected at rewrite time, it already has\nthe same return type the surrounding query expects, so no\nrelabeling is called for there either.\n\nHowever, simply to ensure the ..._int function cannot be\ncasually called, it needs an internal parameter, even an\nunused one, and the rewriter must supply a value for that,\nwhich may call for one relabel node.\n\nOn 2023-08-21 06:50, Andy Fan wrote:\n> I'm not very excited with this manner, reasons are: a). It will have\n> to emit more steps in ExprState->steps which will be harmful for\n> execution. The overhead is something I'm not willing to afford.\n\nI would be open to a performance comparison, but offhand I am not\nsure whether the overhead of another step or two in an ExprState\nis appreciably more than some of the overhead in the present patch,\nsuch as the every-time-through fcinfo initialization buried in\nDirectFunctionCall1 where you don't necessarily see it. 
I bet\nthe fcinfo in an ExprState step gets set up once, and just has\nnew argument values slammed into it each time through.\n\n(Also, I know very little about how the JIT compiler is used in PG,\nbut I suspect that a step you bury inside your function is a step\nit may not get to see.)\n\n> b). this manner requires more *internal*, which is kind of similar\n> to \"void *\" in C.\n\nI'm not sure in what sense you mean \"more\". The present patch\nhas to deal with two places where some other type must be\nrelabeled internal or something internal relabeled to another\ntype. The approach I suggested does involve two families of\nfunction, one returning internal (a JsonbValue) and one\nexpecting internal (a JsonbValue), where the rewriter would\ncompose one over the other, no relabeling needed. There's\nalso an internal parameter needed for whatever returns\ninternal, and that's just the protocol for how such things\nare done. To me, that seems like a fairly principled use of\nthe type.\n\nI would not underestimate the benefit of reducing the code\nduplication and keeping the patch as clear as possible.\nThe key contributions of the patch are getting a numeric or\nboolean efficiently out of the JSON operation. Getting from\nnumeric to int or float are things the system already does\nwell. 
A patch that focuses on what it contributes, and avoids\nredoing things the system already can do--unless the duplication\ncan be shown to have a strong performance benefit--is easier to\nreview and probably to get integrated.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 26 Aug 2023 18:28:17 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "(Sorry for leaving this discussion for such a long time, how times fly!)\n\nOn Sun, Aug 27, 2023 at 6:28 AM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 2023-08-22 08:16, Chapman Flack wrote:\n> > On 2023-08-22 01:54, Andy Fan wrote:\n> >> After we label it, we will get error like this:\n> >>\n> >> select (a->'a')::int4 from m;\n> >> ERROR: cannot display a value of type internal\n> >\n> > Without looking in depth right now, I would double-check\n> > what relabel node is being applied at the result. The idea,\n> > of course, was to relabel the result as the expected result\n>\n> as I suspected, looking at v10-0003, there's this:\n>\n> + fexpr = (FuncExpr *)makeRelabelType((Expr *) fexpr, INTERNALOID,\n> + 0, InvalidOid,\n> COERCE_IMPLICIT_CAST);\n>\n> compared to the example I had sent earlier:\n>\n> On 2023-08-18 17:02, Chapman Flack wrote:\n> > expr = (Expr *)makeRelabelType((Expr *)fexpr,\n> > targetOid, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n>\n> The key difference: this is the label going on the result type\n> of the function we are swapping in.\n\n\nI'm feeling we have some understanding gap in this area, let's\nsee what it is. Suppose the original query is:\n\nnumeric(jsonb_object_field(v_jsonb, text)) -> numeric.\n\nwithout the patch 003, the rewritten query is:\njsonb_object_field_type(NUMERICOID, v_jsonb, text) -> NUMERIC.\n\nHowever the declared type of jsonb_object_field_type is:\n\njsonb_object_field_type(internal, jsonb, text) -> internal.\n\nSo the situation is: a). 
We input a CONST(type=OIDOID, ..) for an\ninternal argument. b). We return a NUMERIC type which matches\nthe original query c). result type NUMERIC doesn't match the declared\ntype 'internal' d). it doesn't match the run-time type of internal\nargument which is OID.\n\ncase a) is fixed by RelableType. case b) shouldn't be treat as an\nissue. I thought you wanted to address the case c), and patch\n003 tries to fix it, then the ERROR above. At last I realized case\nc) isn't the one you want to fix. case d) shouldn't be requirement\nat the first place IIUC.\n\nSo your new method is:\n1. jsonb_{op}_start() -> internal (internal actually JsonbValue).\n2. jsonb_finish_{type}(internal, ..) -> type. (internal is JsonbValue ).\n\nThis avoids the case a) at the very beginning. I'd like to provides\npatches for both solutions for comparison. Any other benefits of\nthis method I am missing?\n\n\n> Two more minor differences: (1) the node you get from\n> makeRelabelType is an Expr, but not really a FuncExpr. Casting\n> it to FuncExpr is a bit bogus. Also, the third argument to\n> makeRelabelType is a typmod, and I believe the \"not-modified\"\n> typmod is -1, not 0.\n>\n\nMy fault, you are right.\n\n\n>\n> On 2023-08-21 06:50, Andy Fan wrote:\n> > I'm not very excited with this manner, reasons are: a). It will have\n> > to emit more steps in ExprState->steps which will be harmful for\n> > execution. The overhead is something I'm not willing to afford.\n>\n> I would be open to a performance comparison, but offhand I am not\n> sure whether the overhead of another step or two in an ExprState\n> is appreciably more than some of the overhead in the present patch,\n> such as the every-time-through fcinfo initialization buried in\n> DirectFunctionCall1 where you don't necessarily see it. 
I bet\n\nthe fcinfo in an ExprState step gets set up once, and just has\n> new argument values slammed into it each time through.\n>\n\nfcinfo initialization in DirectFunctionCall1 is an interesting point!\nso I am persuaded the extra steps in ExprState may not be\nworse than the current way due to the \"every-time-through\nfcinfo initialization\" (in which case the memory is allocated\nonce in heap rather than every time in stack). I can do a\ncomparison at last to see if we can find some other interesting\nfindings.\n\n\n\n> I would not underestimate the benefit of reducing the code\n> duplication and keeping the patch as clear as possible.\n> The key contributions of the patch are getting a numeric or\n> boolean efficiently out of the JSON operation. Getting from\n> numeric to int or float are things the system already does\n> well.\n\n\nTrue, reusing the casting system should be better than hard-code\nthe casting function manually. I'd apply this on both methods.\n\n\n> A patch that focuses on what it contributes, and avoids\n> redoing things the system already can do--unless the duplication\n> can be shown to have a strong performance benefit--is easier to\n> review and probably to get integrated.\n>\n\nAgreed.\n\nAt last, thanks for the great insights and patience!\n\n-- \nBest Regards\nAndy Fan\n\n(Sorry for leaving this discussion for such a long time,  how times fly!) On Sun, Aug 27, 2023 at 6:28 AM Chapman Flack <chap@anastigmatix.net> wrote:On 2023-08-22 08:16, Chapman Flack wrote:\n> On 2023-08-22 01:54, Andy Fan wrote:\n>> After we label it, we will get error like this:\n>> \n>> select (a->'a')::int4 from m;\n>> ERROR:  cannot display a value of type internal\n> \n> Without looking in depth right now, I would double-check\n> what relabel node is being applied at the result. 
The idea,\n> of course, was to relabel the result as the expected result\n\nas I suspected, looking at v10-0003, there's this:\n\n+   fexpr = (FuncExpr *)makeRelabelType((Expr *) fexpr, INTERNALOID,\n+                                       0, InvalidOid, \nCOERCE_IMPLICIT_CAST);\n\ncompared to the example I had sent earlier:\n\nOn 2023-08-18 17:02, Chapman Flack wrote:\n>     expr = (Expr *)makeRelabelType((Expr *)fexpr,\n>       targetOid, -1, InvalidOid, COERCE_IMPLICIT_CAST);\n\nThe key difference: this is the label going on the result type\nof the function we are swapping in. I'm feeling we have some understanding gap in this area, let'ssee what it is.  Suppose the original query is:numeric(jsonb_object_field(v_jsonb, text)) -> numeric.without the patch 003,  the rewritten query is:jsonb_object_field_type(NUMERICOID,  v_jsonb, text) -> NUMERIC. However the declared type of jsonb_object_field_type is:jsonb_object_field_type(internal, jsonb, text) -> internal. So the situation is:  a).  We input a CONST(type=OIDOID, ..) for aninternal argument.  b).  We return a NUMERIC type which matchesthe original query c).  result type NUMERIC doesn't match the declaredtype  'internal'  d).  it doesn't match the  run-time type of internal argument which is OID. case a) is fixed by RelableType.  case b) shouldn't be treat as anissue.  I thought you wanted to address the case c), and patch 003 tries to fix it, then the ERROR above.  At last I realized case c) isn't the one you want to fix.  case d) shouldn't be requirementat the first place IIUC. So your new method is:1. jsonb_{op}_start() ->  internal  (internal actually JsonbValue). 2. jsonb_finish_{type}(internal, ..) -> type.   (internal is JsonbValue ). This avoids the case a) at the very beginning.  I'd like to providespatches for both solutions for comparison.  Any other benefits ofthis method I am missing?  \nTwo more minor differences: (1) the node you get from\nmakeRelabelType is an Expr, but not really a FuncExpr. 
Casting\nit to FuncExpr is a bit bogus. Also, the third argument to\nmakeRelabelType is a typmod, and I believe the \"not-modified\"\ntypmod is -1, not 0.My fault, you are right.  \nOn 2023-08-21 06:50, Andy Fan wrote:\n> I'm not very excited with this manner, reasons are: a).  It will have\n> to emit more steps in ExprState->steps which will be harmful for\n> execution. The overhead  is something I'm not willing to afford.\n\nI would be open to a performance comparison, but offhand I am not\nsure whether the overhead of another step or two in an ExprState\nis appreciably more than some of the overhead in the present patch,\nsuch as the every-time-through fcinfo initialization buried in\nDirectFunctionCall1 where you don't necessarily see it. I bet\nthe fcinfo in an ExprState step gets set up once, and just has\nnew argument values slammed into it each time through.fcinfo initialization in DirectFunctionCall1 is an interesting point!so  I am persuaded the extra steps in  ExprState may not be worse than the current way due to the \"every-time-through fcinfo initialization\" (in which case the memory is allocated once in heap rather than every time in stack).   I can do a comparison at last to see if we can find some other interestingfindings.   \nI would not underestimate the benefit of reducing the code\nduplication and keeping the patch as clear as possible.\nThe key contributions of the patch are getting a numeric or\nboolean efficiently out of the JSON operation. Getting from\nnumeric to int or float are things the system already does\nwell. True, reusing the casting system should be better than hard-codethe casting function manually.  I'd apply this on both methods.  A patch that focuses on what it contributes, and avoids\nredoing things the system already can do--unless the duplication\ncan be shown to have a strong performance benefit--is easier to\nreview and probably to get integrated.Agreed.  
At last, thanks for the great insights and patience!-- Best RegardsAndy Fan", "msg_date": "Wed, 30 Aug 2023 12:47:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-08-30 00:47, Andy Fan wrote:\n> see what it is. Suppose the original query is:\n> \n> numeric(jsonb_object_field(v_jsonb, text)) -> numeric.\n> ...\n> However the declared type of jsonb_object_field_type is:\n> \n> jsonb_object_field_type(internal, jsonb, text) -> internal.\n> \n> So the situation is: b). We return a NUMERIC type which matches\n> the original query ...\n> case b) shouldn't be treat as an issue.\n\n*We* may know we are returning a NUMERIC type which matches the\noriginal query, but nothing else knows that. Anything that\nexamined the complete tree after our rewriting would see some\nexpression that wants a numeric type, but supplied with a\nsubexpression that returns internal. Without a relabel node\nthere to promise that we know this internal is really numeric,\nany type checker would reject the tree.\n\nThe fact that it even works at all without a relabel node there\nseems to indicate that all of PostgreSQL's type checking was\ndone before calling the support function, and that there is not\nmuch sanity checking of what the support function returns,\nwhich I guess is efficient, if a little scary. Seems like\nwriting a support function is a bit like trapeze performing\nwithout a net.\n\n> So your new method is:\n> 1. jsonb_{op}_start() -> internal (internal actually JsonbValue).\n> 2. jsonb_finish_{type}(internal, ..) -> type. (internal is JsonbValue \n> ).\n> \n> This avoids the case a) at the very beginning. 
I'd like to provides\n> patches for both solutions for comparison.\n\nI think, unavoidably, there is still a case a) at the very beginning,\njust because of the rule that if json_{op}_start is going to have an\ninternal return type, it needs to have at least one internal parameter\nto prevent casual calls from SQL, even if that parameter is not used\nfor anything.\n\nIt would be ok to write in a Const for that parameter, just zero or\n42 or anything besides null (in case the function is strict), but\nagain if the Const has type internal then EXPLAIN will be sad, so\nit has to be some type that makes EXPLAIN cheerful, and relabeled\ninternal.\n\nBut with this approach there is no longer a type mismatch of the\nend result.\n\n> fcinfo initialization in DirectFunctionCall1 is an interesting point!\n> so I am persuaded the extra steps in ExprState may not be\n> worse than the current way due to the \"every-time-through\n> fcinfo initialization\" (in which case the memory is allocated\n> once in heap rather than every time in stack).\n\nStack allocation is super cheap, just by emitting the function\nentry to reserve n+m bytes instead of just m, so it there's any\nmeasurable cost to the DirectFunctionCall I would think it more\nlikely to be in the initialization after allocation ... but I\nhaven't looked at that code closely to see how much there is.\nI just wanted to make the point that another step or two in\nExprState might not be a priori worse. We might be talking\nabout negligible effects in either direction.\n\n> I can do a\n> comparison at last to see if we can find some other interesting\n> findings.\n\nThat would be the way to find out. I think I would still lean\ntoward the approach with less code duplication, unless there\nis a strong timing benefit the other way.\n\n> True, reusing the casting system should be better than hard-code\n> the casting function manually. 
I'd apply this on both methods.\n\nI noticed there is another patch registered in this CF: [1]\nIt adds new operations within jsonpath like .bigint .time\nand so on.\n\nI was wondering whether that work would be conflicting or\ncomplementary with this. It looks to be complementary. The\noperations being added there are within jsonpath evaluation.\nHere we are working on faster ways to get those results out.\n\nIt does not seem that [1] will add any new choices in\nJsonbValue. All of its (.bigint .integer .number) seem to\nverify the requested form and then put the result as a\nnumeric in ->val.numeric. So that doesn't add any new\ncases for this patch to handle. (Too bad, in a way: if that\nother patch added ->val.bigint, this patch could add a case\nto retrieve that value without going through the work of\nmaking a numeric. But that would complicate other things\ntouching JsonbValue, and be a matter for that other patch.)\n\nIt may be expanding the choices for what we might one day\nfind in ->val.datetime though.\n\nRegards,\n-Chap\n\n\n[1] https://commitfest.postgresql.org/44/4526/\n\n\n", "msg_date": "Wed, 30 Aug 2023 09:47:53 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Chap,\n\nThe v11 attached, mainly changes are:\n1. use the jsonb_xx_start and jsonb_finish_numeric style.\n2. improve the test case a bit.\n\nIt doesn't include:\n1. the jsonb_finish_text function, since we have a operator ->> for text\nalready and the performance for it is OK and there is no cast entry for\njsonb to text.\n2. 
the jsonb_finish_jsonb since I can't see a clear user case for now.\nRewriting jsonb_object_field with 2 DirectFunctionCall looks not pretty\nreasonable as we paid 2 DirectFunctionCall overhead to reduce ~10 lines\ncode duplication.\n\n\nAn incompatible issue at error message level is found during test:\ncreate table jb(a jsonb);\ninsert into jb select '{\"a\": \"a\"}'::jsonb;\nselect (a->'a')::int4 from jb;\n\nmaster: ERROR: cannot cast jsonb string to type *integer*\npatch: ERROR: cannot cast jsonb string to type *numeric*\n\nThat's mainly because we first extract the field to numeric and\nthen cast it to int4 and the error raised at the first step and it\ndoesn't know the final type. One way to fix it is adding a 2nd\nargument for jsonb_finish_numeric for the real type, but\nit looks weird and more suggestions on this would be good.\n\nPerformance comparison between v10 and v11.\n\ncreate table tb (a jsonb);\ninsert into tb select '{\"a\": 1}'::jsonb from generate_series(1, 100000)i;\nselect 1 from tb where (a->'a')::int2 = 2; (pgbench 5 times)\n\nv11: 16.273 ms\nv10: 15.986 ms\nmaster: 32.530ms\n\nSo I think the performance would not be an issue.\n\n\n> I noticed there is another patch registered in this CF: [1]\n> It adds new operations within jsonpath like .bigint .time\n> and so on.\n>\n> I was wondering whether that work would be conflicting or\n> complementary with this. It looks to be complementary. The\n> operations being added there are within jsonpath evaluation.\n> Here we are working on faster ways to get those results out.\n>\n> It does not seem that [1] will add any new choices in\n> JsonbValue. All of its (.bigint .integer .number) seem to\n> verify the requested form and then put the result as a\n> numeric in ->val.numeric. So that doesn't add any new\n> cases for this patch to handle. 
(Too bad, in a way: if that\n> other patch added ->val.bigint, this patch could add a case\n> to retrieve that value without going through the work of\n> making a numeric. But that would complicate other things\n> touching JsonbValue, and be a matter for that other patch.)\n>\n> It may be expanding the choices for what we might one day\n> find in ->val.datetime though.\n>\n> Thanks for this information. I tried the jsonb_xx_start and\njsonb_finish_numeric style, and it looks like a good experience\nand it may not make things too complicated even if the above\nthings happen IMO.\n\nAny feedback is welcome.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 31 Aug 2023 17:10:39 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "> An incompatible issue at error message level is found during test:\n> create table jb(a jsonb);\n> insert into jb select '{\"a\": \"a\"}'::jsonb;\n> select (a->'a')::int4 from jb;\n>\n> master: ERROR: cannot cast jsonb string to type *integer*\n> patch: ERROR: cannot cast jsonb string to type *numeric*\n>\n> That's mainly because we first extract the field to numeric and\n> then cast it to int4 and the error raised at the first step and it\n> doesn't know the final type. One way to fix it is adding a 2nd\n> argument for jsonb_finish_numeric for the real type, but\n> it looks weird and more suggestions on this would be good.\n>\n>\nv12 is attached to address the above issue, I added a new argument\nnamed target_oid for jsonb_finish_numeric so that it can raise a\ncorrect error message. I also fixed the issue reported by opr_sanity\nin this version.\n\nChap, do you still think we should refactor the code for the previous\nexisting functions like jsonb_object_field for less code duplication\npurpose? I think we should not do it because a). The code duplication\nis just ~10 rows. b). 
If we do the refactor, we have to implement\ntwo DirectFunctionCall1. Point b) is the key reason I am not willing\nto do it. Or do I miss other important reasons?\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 1 Sep 2023 11:09:22 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "I think the last patch failed. I am not 100% sure.\nhttps://cirrus-ci.com/task/5464366154252288\nsays \"Created 21 hours ago\", I assume the latest patch.\n\nthe diff in Artifacts section. you can go to\ntestrun/build/testrun/regress/regress/regression.diffs\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/jsonb.out\n/tmp/cirrus-ci-build/build/testrun/regress/regress/results/jsonb.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/jsonb.out\n2023-09-01 03:34:43.585036700 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress/regress/results/jsonb.out\n2023-09-01 03:39:05.800452844 +0000\n@@ -528,7 +528,7 @@\n (3 rows)\n\n SELECT (test_json -> 'field1')::int4 FROM test_jsonb WHERE json_type\n= 'object';\n-ERROR: cannot cast jsonb string to type integer\n+ERROR: unknown jsonb type: 1125096840\n SELECT (test_json -> 'field1')::bool FROM test_jsonb WHERE json_type\n= 'object';\n ERROR: cannot cast jsonb string to type boolean\n \\pset null ''\n\n\n", "msg_date": "Sat, 2 Sep 2023 09:25:37 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi Jian,\n\n SELECT (test_json -> 'field1')::int4 FROM test_jsonb WHERE json_type\n> = 'object';\n> -ERROR: cannot cast jsonb string to type integer\n> +ERROR: unknown jsonb type: 1125096840\n>\n\nThanks for the report! 
The reason is I return the address of a local\nvariable.\n\njsonb_object_field_start(PG_FUNCTION_ARGS)\n{\n\n JsonbValue *v;\n JsonbValue vbuf;\n v = getKeyJsonValueFromContainer(&jb->root,\n VARDATA_ANY(key),\\\n VARSIZE_ANY_EXHDR(key),\n &vbuf);\n PG_RETURN_POINTER(v);\n}\n\nHere the v points to vbuf which is a local variable in stack. I'm confused\nthat why it works on my local machine and also works in the most queries\nin cfbot, the fix is below\n\n v = getKeyJsonValueFromContainer(&jb->root,\n VARDATA_ANY(key),\\\n VARSIZE_ANY_EXHDR(key),\n NULL);\n\n\nI will send an updated version soon.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 4 Sep 2023 19:43:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi,\n\n v13 attached. Changes includes:\n\n1. fix the bug Jian provides.\n2. 
reduce more code duplication without DirectFunctionCall.\n3. add the overlooked jsonb_path_query and jsonb_path_query_first as\ncandidates\n\n\n--\nBest Regards\nAndy Fan", "msg_date": "Mon, 4 Sep 2023 22:35:16 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Mon, Sep 4, 2023 at 10:35 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> Hi,\n>\n> v13 attached. Changes includes:\n>\n> 1. fix the bug Jian provides.\n> 2. reduce more code duplication without DirectFunctionCall.\n> 3. add the overlooked jsonb_path_query and jsonb_path_query_first as candidates\n>\n>\n> --\n> Best Regards\n> Andy Fan\n\nbased on v13.\nIMHO, it might be a good idea to write some comments on\njsonb_object_field_internal. especially the second boolean argument.\nsomething like \"some case, we just want return JsonbValue rather than\nJsonb. to return JsonbValue, make as_jsonb be false\".\n\nI am not sure \"jsonb_object_field_start\" is a good name, so far I only\ncome up with \"jsonb_object_field_to_jsonbvalues\".\n\nlinitial(jsonb_start_func->args) =\nmakeRelabelType(linitial(jsonb_start_func->args),\n INTERNALOID, 0,\n InvalidOid,\n COERCE_IMPLICIT_CAST);\n\nif no need, output typmod (usually -1), so here should be -1 rather than 0?\n\nlist_make2(jsonb_start_func, makeConst(.....). you can just combine\ntwo different types then make a list, seems pretty cool...\n\n\n", "msg_date": "Tue, 5 Sep 2023 20:51:01 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": ">\n>\n> based on v13.\n> IMHO, it might be a good idea to write some comments on\n> jsonb_object_field_internal. especially the second boolean argument.\n> something like \"some case, we just want return JsonbValue rather than\n> Jsonb. 
to return JsonbValue, make as_jsonb be false\".\n>\n\nOK, I will proposal \"return a JsonbValue when as_jsonb is false\".\n\n\n> I am not sure \"jsonb_object_field_start\" is a good name, so far I only\n> come up with \"jsonb_object_field_to_jsonbvalues\".\n\n\nYes, I think it is a good idea. Puting the jsonbvalue in the name can\ncompensate for the imprecision of \"internal\" as a return type. I am\nthinking\nif we should rename jsonb_finish_numeric to jsonbvalue_to_numeric as\nwell.\n\n\n>\nlinitial(jsonb_start_func->args) =\n> makeRelabelType(linitial(jsonb_start_func->args),\n> INTERNALOID, 0,\n> InvalidOid,\n> COERCE_IMPLICIT_CAST);\n>\n> if no need, output typmod (usually -1), so here should be -1 rather than 0?\n\n\nI agree. -1 is better than 0.\n\nThanks for the code level review again! I want to wait for some longer time\nto gather more feedback. I'm willing to name it better, but hope I didn't\nrename it to A and rename it back shortly.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 6 Sep 2023 15:00:00 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 2023-09-04 10:35, Andy Fan wrote:\n> v13 attached. Changes includes:\n> \n> 1. fix the bug Jian provides.\n> 2. reduce more code duplication without DirectFunctionCall.\n> 3. add the overlooked jsonb_path_query and jsonb_path_query_first as\n> candidates\n\nApologies for the delay. I like the way this is shaping up.\n\nMy first comment will be one that may be too large for this patch\n(and too large to rest on my opinion alone); that's why I'm making\nit first.\n\nIt seems at first a minor point: to me it feels like a wart to have\nto pass jsonb_finish_numeric (and *only* jsonb_finish_numeric) a type\noid reflecting the target type of a cast that's going to be applied\n*after jsonb_finish_numeric has done its work*, and only for the\npurpose of generating a message if the jsonb type *isn't numeric*,\nbut saying \"cannot cast to\" (that later target type) instead.\n\nI understand this is done to exactly match the existing behavior,\nso what makes this a larger issue is I'm not convinced the existing\nbehavior is good. Therefore I'm not convinced that bending over\nbackward to preserve it is good.\n\nWhat's not good: the places a message from cannotCastJsonbValue\nare produced, there has been no attempt yet to cast anything.\nThe message purely tells you about whether you have the kind\nof jsonb node you think you have (and array, bool, null, numeric,\nobject, string are the only kinds of those). 
If you're wrong\nabout what kind of jsonb node it is, you get this message.\nIf you're right about the kind of node, you don't get this\nmessage, regardless of whether its value can be cast to the\nlater target type. For example, '32768'::jsonb::int2 produces\nERRCODE_NUMERIC_VALUE_OUT_OF_RANGE \"smallint out of range\"\nbut that message comes from the actual int2 cast.\n\nIMV, what the \"cannot cast jsonb foo to type %s\" message really\nmeans is \"jsonb foo where jsonb bar is required\" and that's what\nit should say, and that message depends on nothing about any\nfuture plans for what will be done to the jsonb bar, so it can\nbe produced without needing any extra information to be passed.\n\nI'm also not convinced ERRCODE_INVALID_PARAMETER_VALUE is a\ngood errcode for that message (whatever the wording). I do not\nsee much precedent elsewhere in the code for using\nINVALID_PARAMETER_VALUE to signal this kind of \"data value\nisn't what you think it is\" condition. Mostly it is used\nwhen checking the kinds of parameters passed to a function to\nindicate what it should do.\n\nThere seem to be several more likely choices for an errcode\nthere in the 2203x range.\n\nBut I understand that issue is not with this patch so much\nas with preexisting behavior, and because it's preexisting,\nthere can be sound arguments against changing it.\n\nBut if that preexisting message could be changed, it would\neliminate the need for an unpleasing wart here.\n\nOther notes are more minor:\n\n+\t\telse\n+\t\t\t/* not the desired pattern. */\n+\t\t\tPG_RETURN_POINTER(fexpr);\n...\n+\n+\t\tif (!OidIsValid(new_func_id))\n+\t\t\tPG_RETURN_POINTER(fexpr);\n...\n+\t\t\tdefault:\n+\t\t\t\tPG_RETURN_POINTER(fexpr);\n\nIf I am reading supportnodes.h right, returning NULL is\nsufficient to say no transformation is needed.\n\n+\t\tFuncExpr\t*fexpr = palloc0(sizeof(FuncExpr));\n...\n+\t\tmemcpy(fexpr, req->fcall, sizeof(FuncExpr));\n\nIs the shallow copy necessary? 
If so, a comment explaining why\nmight be apropos. Because the copy is shallow, if there is any\nconcern about the lifespan of req->fcall, would there not be a\nconcern about its children?\n\nIs there a reason not to transform the _tz flavors of\njsonb_path_query and jsonb_path-query_first?\n\n-\tJsonbValue *v;\n-\tJsonbValue\tvbuf;\n+\tJsonbValue\t*v;\n...\n+\tint i;\n \tJsonbValue *jbvp = NULL;\n-\tint\t\t\ti;\n\nSometimes it's worth looking over a patch for changes like these,\nand reverting such whitespace or position changes, if they aren't\nthings you want a reviewer to be squinting at. :)\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 13 Sep 2023 17:18:16 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Thu, Sep 14, 2023 at 5:18 AM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 2023-09-04 10:35, Andy Fan wrote:\n> > v13 attached. Changes includes:\n> >\n> > 1. fix the bug Jian provides.\n> > 2. reduce more code duplication without DirectFunctionCall.\n> > 3. add the overlooked jsonb_path_query and jsonb_path_query_first as\n> > candidates\n>\n> Apologies for the delay. 
I like the way this is shaping up.\n\n\nThis is a great signal:)\n\n\n>\n>\nMy first comment will be one that may be too large for this patch\n> (and too large to rest on my opinion alone); that's why I'm making\n> it first.\n>\n> It seems at first a minor point: to me it feels like a wart to have\n> to pass jsonb_finish_numeric (and *only* jsonb_finish_numeric) a type\n> oid reflecting the target type of a cast that's going to be applied\n> *after jsonb_finish_numeric has done its work*, and only for the\n> purpose of generating a message if the jsonb type *isn't numeric*,\n> but saying \"cannot cast to\" (that later target type) instead.\n>\n> I understand this is done to exactly match the existing behavior,\n> so what makes this a larger issue is I'm not convinced the existing\n> behavior is good. Therefore I'm not convinced that bending over\n> backward to preserve it is good.\n>\n\nI hesitated to do so, but I'm thinking if any postgresql user uses\nsome code like if (errMsg.equals('old-error-message')), and if we\nchange the error message, the application will be broken. I agree\nwith the place for the error message, IIUC, you intend to choose\nnot compatible with the old error message?\n\nWhat's not good: the places a message from cannotCastJsonbValue\n> are produced, there has been no attempt yet to cast anything.\n> The message purely tells you about whether you have the kind\n> of jsonb node you think you have (and array, bool, null, numeric,\n> object, string are the only kinds of those). If you're wrong\n> about what kind of jsonb node it is, you get this message.\n> If you're right about the kind of node, you don't get this\n> message, regardless of whether its value can be cast to the\n> later target type. 
For example, '32768'::jsonb::int2 produces\n> ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE \"smallint out of range\"\n> but that message comes from the actual int2 cast.\n>\n> IMV, what the \"cannot cast jsonb foo to type %s\" message really\n> means is \"jsonb foo where jsonb bar is required\" and that's what\n> it should say, and that message depends on nothing about any\n> future plans for what will be done to the jsonb bar, so it can\n> be produced without needing any extra information to be passed.\n>\n> I'm also not convinced ERRCODE_INVALID_PARAMETER_VALUE is a\n> good errcode for that message (whatever the wording). I do not\n> see much precedent elsewhere in the code for using\n> INVALID_PARAMETER_VALUE to signal this kind of \"data value\n> isn't what you think it is\" condition. Mostly it is used\n> when checking the kinds of parameters passed to a function to\n> indicate what it should do.\n>\n> There seem to be several more likely choices for an errcode\n> there in the 2203x range.\n>\n> But I understand that issue is not with this patch so much\n> as with preexisting behavior, and because it's preexisting,\n> there can be sound arguments against changing it.\n\n\n> But if that preexisting message could be changed, it would\n> eliminate the need for an unpleasing wart here.\n>\n> Other notes are more minor:\n>\n> + else\n> + /* not the desired pattern. */\n> + PG_RETURN_POINTER(fexpr);\n> ...\n> +\n> + if (!OidIsValid(new_func_id))\n> + PG_RETURN_POINTER(fexpr);\n> ...\n> + default:\n> + PG_RETURN_POINTER(fexpr);\n>\n> If I am reading supportnodes.h right, returning NULL is\n> sufficient to say no transformation is needed.\n>\n\nI double confirmed you are right here.\nChanged it to PG_RETURN_POINT(null); here in the next version.\n\n>\n> + FuncExpr *fexpr = palloc0(sizeof(FuncExpr));\n> ...\n> + memcpy(fexpr, req->fcall, sizeof(FuncExpr));\n>\n> Is the shallow copy necessary? If so, a comment explaining why\n> might be apropos. 
Because the copy is shallow, if there is any\n> concern about the lifespan of req->fcall, would there not be a\n> concern about its children?\n>\n\nAll the interesting parts in the input FuncExpr are heap based,\nbut the FuncExpr itself is stack based (I'm not sure why the fact\nworks like this), Also based on my previously understanding, I\nneed to return a FuncExpr original even if the function can't be\nsimplified, so I made a shallow copy. There will be no copy at\nall in the following version since I returned NULL in such a case.\n\n\n> Is there a reason not to transform the _tz flavors of\n> jsonb_path_query and jsonb_path-query_first?\n>\n\nI misunderstood the _tz flavors return timestamp, after some deep\nreading of these functions, they just work at the comparisons part.\nso I will add them in the following version.\n\n\n>\n> - JsonbValue *v;\n> - JsonbValue vbuf;\n> + JsonbValue *v;\n> ...\n> + int i;\n> JsonbValue *jbvp = NULL;\n> - int i;\n>\n> Sometimes it's worth looking over a patch for changes like these,\n> and reverting such whitespace or position changes, if they aren't\n> things you want a reviewer to be squinting at. :)\n>\n\nYes, I look over my patch carefully before sending it out usually,\nbut this is an oversight.\n\nLastly, do you have any idea about the function naming as Jian & I\ndiscussed at [1]?\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWqQ0hed%3DZmpT%2B7Vxnp4H9ZxQqFz30%3Dk%3DvvrmNe57X4dKQ%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 14 Sep 2023 14:41:36 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "> Is there a reason not to transform the _tz flavors of\n>> jsonb_path_query and jsonb_path-query_first?\n>>\n>\n> I misunderstood the _tz flavors return timestamp, after some deep\n> reading of these functions, they just work at the comparisons part.\n> so I will add them in the following version.\n>\n\n_tz favors did return timestamp.. the reason is stated in the commit\n messge of patch 2.\n\ntry to apply jsonb extraction optimization to _tz functions.\n\nboth jsonb_path_query_tz and jsonb_path_query_tz_first returns\nthe elements which are timestamp comparable, but such elements\nare impossible to be cast to numeric or boolean IIUC. Just provides\nthis commit for communication purpose only.\n\nso v14 is attached, changes include:\n1. Change the typmod for internal type from 0 to -1.\n2. return NULL for non-simplify expressions from the planner\nsupport function, hence shallow copy is removed as well.\n\nThings are not addressed yet:\n1. the error message handling.\n2. if we have chances to optimize _tz functions, I guess no.\n3. function naming issue. 
I think I can get it modified once after\nall the other issues are addressed.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 15 Sep 2023 09:53:20 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi,\n\nI am feeling this topic has been well discussed and the only pending\nissues are below, it would be great that any committer can have a\nlook at these, so I mark this entry as \"Ready for Committer\".\n\nThings are not addressed yet:\n> 1. the error message handling.\n>\n\nYou can check [1] for more background of this, I think blocking this\nfeature at an error message level is not pretty reasonable.\n\n\n> 2. if we have chances to optimize _tz functions, I guess no.\n>\n\npatch 002 is dedicated for this, I think it should not be committed,\nthe reason is described in the commit message.\n\n3. function naming issue. I think I can get it modified once after\n> all the other issues are addressed.\n>\n>\n[1]\nhttps://www.postgresql.org/message-id/d70280648894e56f9f0d12c75090c3d8%40anastigmatix.net\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 5 Oct 2023 14:53:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Adding this comment via the CF app so it isn't lost, while an improperly-interpreted-DKIM-headers issue is still preventing me from mailing directly to -hackers.\r\n\r\nIt was my view that the patch was getting close by the end of the last commitfest, but still contained a bit of a logic wart made necessary by a questionable choice of error message wording, such that in my view it would be better to determine whether a different error message would better conform to ISO SQL in the first place, and obviate the need for the logic wart.\r\n\r\nThere seemed to be some progress possible on that when petere had time to weigh in on the standard shortly after the last CF ended.\r\n\r\nSo, it would not have been my choice to assign RfC status before getting to a resolution on that.\r\n\r\nAlso, it is possible for a JsonbValue to hold a timestamp (as a result of a jsonpath evaluation, I don't think that can happen any other way), and if such a jsonpath evaluation were to be the source expression of a cast to SQL timestamp, that situation seems exactly analogous to the other situations being optimized here and would require only a few more lines in the exact pattern here introduced. While that could be called out of scope when this patch's title refers to \"numeric field\" specifically, it might be worth considering for completeness. 
The patch does, after all, handle boolean already, as well as numeric.", "msg_date": "Wed, 01 Nov 2023 02:17:24 +0000", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Wed, Nov 1, 2023 at 9:18 AM Chapman Flack <chap@anastigmatix.net> wrote:\n> So, it would not have been my choice to assign RfC status before getting to a resolution on that.\n\nIt's up to the reviewer (here Chapman), not the author, to decide\nwhether to set it to RfC. I've set the status to \"needs review\".\n\n\n", "msg_date": "Thu, 2 Nov 2023 21:21:02 +0700", "msg_from": "John Naylor <johncnaylorls@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n\n(This is Andy Fan and I just switched to my new email address).\n\nHi Chap,\n\nThanks for always keeping an eye on this!\n\n> Adding this comment via the CF app so it isn't lost, while an\n> improperly-interpreted-DKIM-headers issue is still preventing me from\n> mailing directly to -hackers.\n>\n> It was my view that the patch was getting close by the end of the last\n> commitfest, but still contained a bit of a logic wart made necessary by\n> a questionable choice of error message wording, such that in my view it\n> would be better to determine whether a different error message would\n> better conform to ISO SQL in the first place, and obviate the need for\n> the logic wart.\n>\n> There seemed to be some progress possible on that when petere had time\n> to weigh in on the standard shortly after the last CF ended.\n>\n> So, it would not have been my choice to assign RfC status before\n> getting to a resolution on that.\n\nI agree with this.\n\n>\n> Also, it is possible for a JsonbValue to hold a timestamp (as a result\n> of a jsonpath evaluation, I don't think that can happen any other\n> way),\n\nI believe this is 
where our disagreement lies.\n\nCREATE TABLE employees ( \n id serial PRIMARY KEY,\n data jsonb\n);\n\nINSERT INTO employees (data) VALUES (\n '{\n \"employees\":[\n {\n \"firstName\":\"John\",\n \"lastName\":\"Doe\",\n \"hireDate\":\"2022-01-01T09:00:00Z\",\n \"age\": 30\n },\n {\n \"firstName\":\"Jane\",\n \"lastName\":\"Smith\",\n \"hireDate\":\"2022-02-01T10:00:00Z\",\n \"age\": 25\n }\n ]\n }'\n);\n\nselect\njsonb_path_query_tz(data, '$.employees[*] ? (@.hireDate >=\n\"2022-02-01T00:00:00Z\" && @.hireDate < \"2022-03-01T00:00:00Z\")')\nfrom employees;\n\nselect jsonb_path_query_tz(data, '$.employees[*].hireDate ? (@ >=\n\"2022-02-01T00:00:00Z\" && @ < \"2022-03-01T00:00:00Z\")') from employees;\nselect pg_typeof(jsonb_path_query_tz(data, '$.employees[*].hireDate ? (@\n>= \"2022-02-01T00:00:00Z\" && @ < \"2022-03-01T00:00:00Z\")')) from\nemployees;\n\nselect jsonb_path_query_tz(data, '$.employees[*].hireDate ? (@\n>= \"2022-02-01T00:00:00Z\" && @ < \"2022-03-01T00:00:00Z\")')::timestamp\nfrom employees;\nselect jsonb_path_query_tz(data, '$.employees[*].hireDate ? (@\n>= \"2022-02-01T00:00:00Z\" && @ < \"2022-03-01T00:00:00Z\")')::timestamptz\nfrom employees;\n\nI tried all of the above queries and can't find a place where this\noptimization would apply. Am I missing something? \n\n\n> and if such a jsonpath evaluation were to be the source expression of a\n> cast to SQL timestamp, that situation seems exactly analogous to the\n> other situations being optimized here and would require only a few more\n> lines in the exact pattern here introduced.\n\nCould you provide an example of this? \n\n> While that could be called\n> out of scope when this patch's title refers to \"numeric field\"\n> specifically, it might be worth considering for completeness. The patch\n> does, after all, handle boolean already, as well as numeric.\n\nI'd never argue for this, at this point at least. \n\nv15 is provided without any fundamental changes. 
Just rebased to the\nlatest code and prepared a better commit message.\n\n\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 06 Nov 2023 11:26:28 +0800", "msg_from": "zhihuifan1213@163.com", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "hi.\nyou don't need to change src/include/catalog/catversion.h\nas mentioned in https://wiki.postgresql.org/wiki/Committing_checklist\nOtherwise, cfbot will fail many times.\n\n+typedef enum JsonbValueTarget\n+{\n+ JsonbValue_AsJsonbValue,\n+ JsonbValue_AsJsonb,\n+ JsonbValue_AsText\n+} JsonbValueTarget;\n\nchange to\n\n+typedef enum JsonbValueTarget\n+{\n+ JsonbValue_AsJsonbValue,\n+ JsonbValue_AsJsonb,\n+ JsonbValue_AsText,\n+} JsonbValueTarget;\n\ncurrently cannot do `git apply`.\n\n\n", "msg_date": "Tue, 2 Jan 2024 16:18:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "\nHi,\n\n> hi.\n> you don't need to change src/include/catalog/catversion.h\n> as mentioned in https://wiki.postgresql.org/wiki/Committing_checklist\n> Otherwise, cfbot will fail many times.\n\nThanks for the wiki.\n\nI checked the wiki and searched for \"catversion\"; the only message I got is:\n\n\"Consider the need for a catversion bump.\"\n\nHow could this be explained as \"no need to change ../catversion.h\"? \n\n>\n> +typedef enum JsonbValueTarget\n> +{\n> + JsonbValue_AsJsonbValue,\n> + JsonbValue_AsJsonb,\n> + JsonbValue_AsText\n> +} JsonbValueTarget;\n>\n> change to\n>\n> +typedef enum JsonbValueTarget\n> +{\n> + JsonbValue_AsJsonbValue,\n> + JsonbValue_AsJsonb,\n> + JsonbValue_AsText,\n> +} JsonbValueTarget;\n>\n> currently cannot do `git apply`.\n\nOK, I guess it's something about whitespaces, my git-commit hook has\nbeen configured to capture this during commit. 
After we reach an\nagreement about the 'catversion.h' stuff, the next version of the patch\nshould fix this issue. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 07 Jan 2024 15:17:16 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Sun, Jan 7, 2024 at 3:26 PM Andy Fan <zhihuifan1213@163.com> wrote:\n>\n>\n> Hi,\n>\n> > hi.\n> > you don't need to change src/include/catalog/catversion.h\n> > as mentioned in https://wiki.postgresql.org/wiki/Committing_checklist\n> > Otherwise, cfbot will fail many times.\n>\n> Thanks for the wiki.\n>\n> I checked the wiki and search \"catversion\", the only message I got is:\n>\n> \"Consider the need for a catversion bump.\"\n>\n> How could this be explained as \"no need to change ../catversion.h\"?\n\nthat means catversion.h changes are the committer's responsibility, IMHO.\n\nIMHO, the main reason is that every time catversion.h changes, cfbot\nhttp://cfbot.cputube.org will fail.\nOne patch can take a very long time to become committable.\nYou don't need to update your patch for every catversion.h change.\n\n> >\n> > +typedef enum JsonbValueTarget\n> > +{\n> > + JsonbValue_AsJsonbValue,\n> > + JsonbValue_AsJsonb,\n> > + JsonbValue_AsText\n> > +} JsonbValueTarget;\n> >\n> > change to\n> >\n> > +typedef enum JsonbValueTarget\n> > +{\n> > + JsonbValue_AsJsonbValue,\n> > + JsonbValue_AsJsonb,\n> > + JsonbValue_AsText,\n> > +} JsonbValueTarget;\n> >\n\nreason: https://git.postgresql.org/cgit/postgresql.git/commit/?id=611806cd726fc92989ac918eac48fd8d684869c7\n\n> > currently cannot do `git apply`.\n>\n> OK, I guess it's something about whitespaces, my git-commit hook has\n> been configured to capture this during commit. After we reach an\n> agreement about the 'catversion.h' stuff, the next version of patch\n> should fix this issue.\n\nAnyway, I made the following change:\nremove catversion.h changes.\nrefactored the tests. 
Some of the explain(costs off, verbose) output\nis very very long.\nit's unreadable on the web browser. so I cut them into small pieces.\nresolve duplicate OID issues.\nslight refactored jsonbvalue_covert function, for the switch\nstatement, add a default branch.\nsee file v16-0001-Improve-the-performance-of-Jsonb-extraction.patch\n\nyou made a lot of changes, that might not be easy to get committed, i think.\nMaybe we can split the patch into several pieces.\nThe first part is the original idea that: pattern: (jsonb(object) ->\n'key')::numerica_data_type can be optimized.\nThe second part: is other cases where cast jsonb to scalar data type\ncan also be optimized.\n\nSo, I refactor your patch. only have optimized casts for:\n(jsonb(object) -> 'key')::numerica_data_type.\nWe can optimize more cast cases, but IMHO,\nmake it as minimal as possible, easier to review, easier to understand.\nIf people think this performance gain is good, then later we can add\nmore on top of it.\n\nsummary: 2 files attached.\nv16-0001-Improve-the-performance-of-Jsonb-extraction.patch\nrefactored of your patch, that covers all the cast optimization cases,\nthis file will run the CI test.\n\nv1-0001-Improve-performance-of-Jsonb-extract-via-key-and-c.no-cfbot\nthis one also based on your patch. but as a minimum patch to optimize\n(jsonb(object) -> 'key')::numerica_data_type case only. (this one will\nnot run CI test).", "msg_date": "Mon, 8 Jan 2024 08:00:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Hi,\n\nHere is the update of this patch.\n\n1. 
What is it for?\n\ncommit f7b93acc24b4a152984048fefc6d71db606e3204 (HEAD -> jsonb_numeric)\nAuthor: yizhi.fzh <yizhi.fzh@alibaba-inc.com>\nDate: Fri Feb 9 16:54:06 2024 +0800\n\n Improve the performance of Jsonb numeric/bool extraction.\n \n JSONB object uses a binary compatible numeric format with the numeric\n data type in SQL. However in the past, extracting a numeric value from a\n JSONB field still needs to find the corresponding JsonbValue first,\n then convert the JsonbValue to Jsonb, and finally use the cast system to\n convert the Jsonb to a Numeric data type. This approach was very\n inefficient in terms of performance.\n \n In the current patch, It is handled that the JsonbValue is converted to\n numeric data type directly. This is done by the planner support\n function which detects the above case and simplify the expression.\n Because the boolean type and numeric type share certain similarities in\n their attributes, we have implemented the same optimization approach for\n both. In the ideal test case, the performance can be 2x than before.\n \n The optimized functions and operators includes:\n 1. jsonb_object_field / ->\n 2. jsonb_array_element / ->\n 3. jsonb_extract_path / #>\n 4. jsonb_path_query\n 5. 
jsonb_path_query_first\n\nexample:\ncreate table tb(a jsonb);\ninsert into tb select '{\"a\": 1, \"b\": \"a\"}'::jsonb;\n\n\nmaster:\nexplain (costs off, verbose) select * from tb where (a->'a')::numeric > 3::numeric;\n QUERY PLAN \n-----------------------------------------------------------\n Seq Scan on public.tb\n Output: a\n Filter: (((tb.a -> 'a'::text))::numeric > '3'::numeric)\n(3 rows)\n\npatched:\n\npostgres=# explain (costs off, verbose) select * from tb where (a->'a')::numeric > 3::numeric;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.tb\n Output: a\n Filter: (jsonb_finish_numeric(jsonb_object_field_start((tb.a)::internal, 'a'::text), '1700'::oid) > '3'::numeric)\n(3 rows)\n\nThe final expression generated by planner support function includes:\n\n1).\njsonb_object_field_start((tb.a)::internal, 'a'::text) first, this\nfunction returns the internal datum which is JsonbValue in fact.\n2).\njsonb_finish_numeric(internal (jsonbvalue), '1700::oid) convert the\njsonbvalue to numeric directly without the jsonb as a intermedia result.\n\nthe reason why \"1700::oid\" will be explained later, that's the key issue\nright now.\n\nThe reason why we need the 2 steps rather than 1 step is because the\ncode can be better abstracted, the idea comes from Chap, the detailed\nexplaination is at [1]. You can search \"Now, it would make me happy to\nfurther reduce some of the code duplication\" and read the following\ngraph. \n\n\n2. Where is the current feature blocked for the past few months?\n\nIt's error message compatible issue! Continue with above setup:\n\nmaster:\n\nselect * from tb where (a->'b')::numeric > 3::numeric;\nERROR: cannot cast jsonb string to type numeric\n\nselect * from tb where (a->'b')::int4 > 3::numeric;\nERROR: cannot cast jsonb string to type integer\n\nYou can see the error message is different (numeric vs integer). 
\n\n\nPatched:\n\nWe still can get the same error message as master BUT the code\nlooks odd.\n\nselect * from tb where (a->'b')::int4 > 3;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on public.tb\n Output: a\n Filter: ((jsonb_finish_numeric(jsonb_object_field_start((tb.a)::internal, 'b'::text), '23'::oid))::integer > 3)\n(3 rows)\n\nYou can see \"jsonb_finish_numeric(.., '23::oid)\" the '23::oid' is just\nfor the *\"integer\"* output in error message:\n\n\"cannot cast jsonb string to type *integer*\"\n\nNow the sistuation is either we use the odd argument (23::oid) in\njsonb_finish_numeric, or we use a incompatible error message with the\nprevious version. I'm not sure which way is better, but this is the\nplace the current feature is blocked.\n\n3. what do I want now?\n\nSince this feature uses the planner support function which needs some\ncatalog changes, so it is better that we can merge this feature in PG17,\nor else, we have to target it in PG18. So if some senior developers can\nchime in, for the current blocking issue at least, will be pretty\nhelpful.\n\n[1]\nhttps://www.postgresql.org/message-id/5138c6b5fd239e7ce4e1a4e63826ac27%40anastigmatix.net \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 09 Feb 2024 17:05:21 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On 09.02.24 10:05, Andy Fan wrote:\n> 2. Where is the current feature blocked for the past few months?\n> \n> It's error message compatible issue! 
Continue with above setup:\n> \n> master:\n> \n> select * from tb where (a->'b')::numeric > 3::numeric;\n> ERROR: cannot cast jsonb string to type numeric\n> \n> select * from tb where (a->'b')::int4 > 3::numeric;\n> ERROR: cannot cast jsonb string to type integer\n> \n> You can see the error message is different (numeric vs integer).\n> \n> \n> Patched:\n> \n> We still can get the same error message as master BUT the code\n> looks odd.\n> \n> select * from tb where (a->'b')::int4 > 3;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Seq Scan on public.tb\n> Output: a\n> Filter: ((jsonb_finish_numeric(jsonb_object_field_start((tb.a)::internal, 'b'::text), '23'::oid))::integer > 3)\n> (3 rows)\n> \n> You can see \"jsonb_finish_numeric(.., '23::oid)\" the '23::oid' is just\n> for the *\"integer\"* output in error message:\n> \n> \"cannot cast jsonb string to type*integer*\"\n> \n> Now the sistuation is either we use the odd argument (23::oid) in\n> jsonb_finish_numeric, or we use a incompatible error message with the\n> previous version. I'm not sure which way is better, but this is the\n> place the current feature is blocked.\n\nI'm not bothered by that. It also happens on occasion in the backend C \ncode that we pass around extra information to be able to construct \nbetter error messages. The functions here are not backend C code, but \nthey are internal functions, so similar considerations can apply.\n\n\nBut I have a different question about this patch set. This has some \noverlap with the JSON_VALUE function that is being discussed at [0][1]. 
\nFor example, if I apply the patch \nv39-0001-Add-SQL-JSON-query-functions.patch from that thread, I can run\n\nselect count(*) from tb where json_value(a, '$.a' returning numeric) = 2;\n\nand I get a noticeable performance boost over\n\nselect count(*) from tb where cast (a->'a' as numeric) = 2;\n\nSo some questions to think about:\n\n1. Compare performance of base case vs. this patch vs. json_value.\n\n2. Can json_value be optimized further?\n\n3. Is this patch still needed?\n\n3a. If yes, should the internal rewriting make use of json_value or \nshare code with it?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com\n[1]: https://commitfest.postgresql.org/47/4377/\n\n\n", "msg_date": "Mon, 4 Mar 2024 13:33:12 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "\nPeter Eisentraut <peter@eisentraut.org> writes:\n\n> On 09.02.24 10:05, Andy Fan wrote:\n>> 2. Where is the current feature blocked for the past few months?\n>> It's error message compatible issue! 
Continue with above setup:\n>> master:\n>> select * from tb where (a->'b')::numeric > 3::numeric;\n>> ERROR: cannot cast jsonb string to type numeric\n>> select * from tb where (a->'b')::int4 > 3::numeric;\n>> ERROR: cannot cast jsonb string to type integer\n>> You can see the error message is different (numeric vs integer).\n>> Patched:\n>> We still can get the same error message as master BUT the code\n>> looks odd.\n>> select * from tb where (a->'b')::int4 > 3;\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------\n>> Seq Scan on public.tb\n>> Output: a\n>> Filter: ((jsonb_finish_numeric(jsonb_object_field_start((tb.a)::internal, 'b'::text), '23'::oid))::integer > 3)\n>> (3 rows)\n>> You can see \"jsonb_finish_numeric(.., '23::oid)\" the '23::oid' is\n>> just\n>> for the *\"integer\"* output in error message:\n>> \"cannot cast jsonb string to type*integer*\"\n>> Now the sistuation is either we use the odd argument (23::oid) in\n>> jsonb_finish_numeric, or we use a incompatible error message with the\n>> previous version. I'm not sure which way is better, but this is the\n>> place the current feature is blocked.\n>\n> I'm not bothered by that. It also happens on occasion in the backend C\n> code that we pass around extra information to be able to construct\n> better error messages. The functions here are not backend C code, but\n> they are internal functions, so similar considerations can apply.\n\nThanks for speaking on this!\n\n>\n> But I have a different question about this patch set. This has some\n> overlap with the JSON_VALUE function that is being discussed at\n> [0][1]. 
For example, if I apply the patch\n> v39-0001-Add-SQL-JSON-query-functions.patch from that thread, I can run\n>\n> select count(*) from tb where json_value(a, '$.a' returning numeric) = 2;\n>\n> and I get a noticeable performance boost over\n>\n> select count(*) from tb where cast (a->'a' as numeric) = 2;\n\nHere is my test and profile about the above 2 queries.\n\ncreate table tb(a jsonb);\ninsert into tb\nselect jsonb_build_object('a', i) from generate_series(1, 10000)i;\n\ncat a.sql\nselect count(*) from tb\nwhere json_value(a, '$.a' returning numeric) = 2;\n\ncat b.sql\nselect count(*) from tb where cast (a->'a' as numeric) = 2;\n\npgbench -n -f xxx.sql postgres -T100 | grep lat\n\nThen here is the result:\n\n| query | master | patched (patch here and jsonb_value) |\n|-------+---------+-------------------------------------|\n| a.sql | / | 2.59 (ms) |\n| b.sql | 3.34 ms | 1.75 (ms) |\n\nAs we can see the patch here has the best performance (this result looks\nbe different from yours?).\n\nAfter I check the code, I am sure both patches *don't* have the problem\nin master where it get a jsonbvalue first and convert it to jsonb and\nthen cast to numeric.\n\nThen I perf the result, and find the below stuff:\n\nJSOB_VALUE\n------------\n- 32.02% 4.30% postgres postgres [.] JsonPathValue\n - 27.72% JsonPathValue\n - 22.63% executeJsonPath (inlined)\n - 19.97% executeItem (inlined)\n - executeItemOptUnwrapTarget\n - 17.79% executeNextItem\n - 15.49% executeItem (inlined)\n - executeItemOptUnwrapTarget\n + 8.50% getKeyJsonValueFromContainer (note here..)\n + 2.30% executeNextItem\n 0.79% findJsonbValueFromContainer\n + 0.68% check_stack_depth\n + 1.51% jspGetNext\n + 0.73% check_stack_depth\n 1.27% jspInitByBuffer\n 0.85% JsonbExtractScalar\n + 4.91% DatumGetJsonbP (inlined)\n\nPatch here for b.sql:\n---------------------\n\n- 19.98% 2.10% postgres postgres [.] 
jsonb_object_field_start\n - 17.88% jsonb_object_field_start\n - 17.70% jsonb_object_field_internal (inlined)\n + 11.03% getKeyJsonValueFromContainer\n - 6.26% DatumGetJsonbP (inlined)\n + 5.78% detoast_attr\n + 1.21% _start\n + 0.54% 0x55ddb44552a\n\nJSONB_VALUE has a much longer way to get getKeyJsonValueFromContainer,\nthen I think JSON_VALUE probably is designed for some more complex path \nwhich need to pay extra effort which bring the above performance\ndifference. \n\nI added Amit and Alvaro to this thread in case they can have more\ninsight on this.\n\n> So some questions to think about:\n>\n> 1. Compare performance of base case vs. this patch vs. json_value.\n\nDone, as above. \n>\n> 2. Can json_value be optimized further?\n\nhmm, I have some troubles to understand A's performance boost over B,\nthen who is better. But in my test above, looks the patch here is better\non the given case and the differece may comes from JSON_VALUE is\ndesigned to handle more generic purpose.\n\n> 3. Is this patch still needed?\n\nI think yes. One reason is the patch here have better performance, the\nother reason is the patch here prevent user from changing their existing\nqueries.\n\n>\n> 3a. If yes, should the internal rewriting make use of json_value or\n> share code with it?\n\nAs for now, looks json_value is designed for more generic case, not sure\nif we could share some code. My patch actually doesn't add much code on\nthe json function part. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 05 Mar 2024 00:14:43 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "\n>> But I have a different question about this patch set. This has some\n>> overlap with the JSON_VALUE function that is being discussed at\n>> [0][1]. 
For example, if I apply the patch\n>> v39-0001-Add-SQL-JSON-query-functions.patch from that thread, I can run\n>>\n>> select count(*) from tb where json_value(a, '$.a' returning numeric) = 2;\n>>\n>> and I get a noticeable performance boost over\n>>\n>> select count(*) from tb where cast (a->'a' as numeric) = 2;\n>\n> Here is my test and profile about the above 2 queries.\n>\n..\n> As we can see the patch here has the best performance (this result looks\n> be different from yours?).\n>\n> After I check the code, I am sure both patches *don't* have the problem\n> in master where it get a jsonbvalue first and convert it to jsonb and\n> then cast to numeric.\n>\n> Then I perf the result, and find the below stuff:\n>\n..\n\n> JSONB_VALUE has a much longer way to get getKeyJsonValueFromContainer,\n> then I think JSON_VALUE probably is designed for some more complex path \n> which need to pay extra effort which bring the above performance\n> difference. \n\n\nHello Peter,\n\nThanks for highlighting the JSON_VALUE patch! Here is the situation in my\nmind now. \n\nMy patch is designed to *not* introduce any new user-facing functions, \nbut to let some existing functions run faster. The JSON_VALUE patch is designed\nto follow the SQL standard more closely, so it introduced one new function which\nhas more flexibility in ERROR handling [1]. \n\nBoth patches are helpful on the subject here, but my patch has\nbetter performance per my testing. I don't think I did anything better\nhere; it is just that the JSON_VALUE function is designed for a more generic\npurpose and has to pay some extra effort. Even if we have some\nchance to improve JSON_VALUE, I don't think that should block the patch\nhere (I'd like to learn more about this; it may take some time!)\n\nSo I think my patch here can go ahead again, what do you think? 
\n\n[1]\nhttps://www.postgresql.org/message-id/CACJufxGtetrn34Hwnb9D2if5D_HOPAh235MtEZ1meVYx-BiNtg%40mail.gmail.com \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 10 Mar 2024 07:16:40 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Here is the latest version, nothing changed besides the rebase to the latest\nmaster. The most recent 3 questions should be addressed.\n\n- The error message compatible issue [1] and Peter's answer at [2].\n- Peter's new question at [2] and my answer at [3].\n\nAny effort to move this patch ahead is welcome, and thanks to all the people\nwho provided high quality feedback so far, especially Chapman!\n\n[1] https://www.postgresql.org/message-id/87r0hmvuvr.fsf@163.com\n[2]\nhttps://www.postgresql.org/message-id/8102ff5b-b156-409e-a48f-e53e63a39b36%40eisentraut.org\n[3] https://www.postgresql.org/message-id/8734t6c5rh.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 01 Apr 2024 09:42:12 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "Andy Fan <zhihuifan1213@163.com> writes:\n\n> Here is the latest version, nothing changed besides the rebase to the latest\n> master. 
The most recent 3 questions should be addressed.\n>\n> - The error message compatible issue [1] and the Peter's answer at [2].\n> - Peter's new question at [2] and my answer at [3].\n>\n> Any effrot to move this patch ahead is welcome and thanks all the people\n> who provided high quaility feedback so far, especially chapman!\n>\n> [1] https://www.postgresql.org/message-id/87r0hmvuvr.fsf@163.com\n> [2]\n> https://www.postgresql.org/message-id/8102ff5b-b156-409e-a48f-e53e63a39b36%40eisentraut.org\n> [3] https://www.postgresql.org/message-id/8734t6c5rh.fsf%40163.com\n\nrebase to the latest master again.\n\ncommit bc990b983136ef658cd3be03cdb07f2eadc4cd5c (HEAD -> jsonb_numeric)\nAuthor: yizhi.fzh <yizhi.fzh@alibaba-inc.com>\nDate: Mon Apr 1 09:36:08 2024 +0800\n\n Improve the performance of Jsonb numeric/bool extraction.\n \n JSONB object uses a binary compatible numeric format with the numeric\n data type in SQL. However in the past, extracting a numeric value from a\n JSONB field still needs to find the corresponding JsonbValue first,\n then convert the JsonbValue to Jsonb, and finally use the cast system to\n convert the Jsonb to a Numeric data type. This approach was very\n inefficient in terms of performance.\n \n In the current patch, It is handled that the JsonbValue is converted to\n numeric data type directly. This is done by the planner support\n function which detects the above case and simplify the expression.\n Because the boolean type and numeric type share certain similarities in\n their attributes, we have implemented the same optimization approach for\n both. In the ideal test case, the performance can be 2x than before.\n \n The optimized functions and operators includes:\n 1. jsonb_object_field / ->\n 2. jsonb_array_element / ->\n 3. jsonb_extract_path / #>\n 4. jsonb_path_query\n 5. 
jsonb_path_query_first\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 17 Apr 2024 13:13:34 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "On Wed, 17 Apr 2024 at 17:17, Andy Fan <zhihuifan1213@163.com> wrote:\n> rebase to the latest master again.\n\nThere's a lot of complexity in the v18 patch that I don't understand\nthe need for.\n\nI imagined you'd the patch should create a SupportRequestSimplify\nsupport function for jsonb_numeric() that checks if the input\nexpression is an OpExpr with funcid of jsonb_object_field(). All you\ndo then is ditch the cast and change the OpExpr to call a new function\nnamed jsonb_object_field_numeric() which returns the val.numeric\ndirectly. Likely the same support function could handle jsonb casts\nto other types too, in which case you'd just call some other function,\ne.g jsonb_object_field_timestamp() or jsonb_object_field_boolean().\n\nCan you explain why the additional complexity is needed over what's in\nthe attached patch?\n\nDavid", "msg_date": "Thu, 12 Sep 2024 09:00:41 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" }, { "msg_contents": "\nHello David,\n\n Thanks for checking this!\n \n> There's a lot of complexity in the v18 patch that I don't understand\n> the need for.\n>\n> I imagined you'd the patch should create a SupportRequestSimplify\n> support function for jsonb_numeric() that checks if the input\n> expression is an OpExpr with funcid of jsonb_object_field(). All you\n> do then is ditch the cast and change the OpExpr to call a new function\n> named jsonb_object_field_numeric() which returns the val.numeric\n> directly. 
Likely the same support function could handle jsonb casts\n> to other types too, in which case you'd just call some other function,\n> e.g jsonb_object_field_timestamp() or jsonb_object_field_boolean().\n\nBasically yes. The complexity comes in when there are many operators we\nwant to optimize AND, in my patch, I want to reduce the number of functions\ncreated. \n\nThe optimized functions and operators include:\n1. jsonb_object_field / ->\n2. jsonb_array_element / ->\n3. jsonb_extract_path / #>\n4. jsonb_path_query\n5. jsonb_path_query_first\n\n \n> ..., in which case you'd just call some other function,\n> e.g jsonb_object_field_timestamp() or jsonb_object_field_boolean().\n\nThis works, but we need to create 2 functions for each operator. In the\npatched case, we have 5 operators, so we need to create 10 functions.\n\nop[1,2,3,4,5]_bool\nop[1,2,3,4,5]_numeric.\n\nWith the start / finish approach, we only need to create *7* functions.\n\nop[1,2,3,4,5]_start : return the \"JsonbValue\".\n\njsonb_finish_numeric: convert jsonbvalue to numeric\njsonb_finish_bool : convert jsonbvalue to bool.\n\nI think the above is the major factor for the additional complexity. \n\nSome other factors contribute to complexity a bit.\n\n1. we also have jsonb_int{2,4,8}/float{4,8} in pg_proc for the '->'\n operator, not only numeric.\n2. a user may use an OpExpr, like (jb->'x')::numeric, or may also use a\n FuncExpr, like (jsonb_object_field(a, 'x'))::numeric. \n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 12 Sep 2024 03:03:18 +0000", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: Extract numeric filed in JSONB more effectively" } ]
[ { "msg_contents": "Dear hackers,\n\n# Background\n\nBased on [1], I did configure and build with options:\n(I used Meson build system, but it could be reproduced by Autoconf/Make)\n\n```\n$ meson setup -Dcassert=true -Ddebug=true -Dc_args=-Og ../builder\n$ cd ../builder\n$ ninja\n```\n\nMy gcc version is 4.8.5, and ninja is 1.10.2.\n\n# Problem\n\nI got warnings while compiling. I found that these were reported when -Og was specified.\nAll of raised said that the variable might be used without initialization.\n\n```\n[122/1910] Compiling C object src/backend/postgres_lib.a.p/libpq_be-fsstubs.c.o\n../postgres/src/backend/libpq/be-fsstubs.c: In function ‘be_lo_export’:\n../postgres/src/backend/libpq/be-fsstubs.c:537:24: warning: ‘fd’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n if (CloseTransientFile(fd) != 0)\n ^\n[390/1910] Compiling C object src/backend/postgres_lib.a.p/commands_trigger.c.o\nIn file included from ../postgres/src/backend/commands/trigger.c:14:0:\n../postgres/src/backend/commands/trigger.c: In function ‘ExecCallTriggerFunc’:\n../postgres/src/include/postgres.h:314:2: warning: ‘result’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n return (Pointer) X;\n ^\n../postgres/src/backend/commands/trigger.c:2316:9: note: ‘result’ was declared here\n Datum result;\n ^\n[1023/1910] Compiling C object src/backend/postgres_lib.a.p/utils_adt_xml.c.o\n../postgres/src/backend/utils/adt/xml.c: In function ‘xml_pstrdup_and_free’:\n../postgres/src/backend/utils/adt/xml.c:1359:2: warning: ‘result’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n return result;\n ^\n[1600/1910] Compiling C object contrib/pg_stat_statements/pg_stat_statements.so.p/pg_stat_statements.c.o\n../postgres/contrib/pg_stat_statements/pg_stat_statements.c: In function ‘pgss_planner’:\n../postgres/contrib/pg_stat_statements/pg_stat_statements.c:960:2: warning: ‘result’ may be used uninitialized in this function 
[-Wmaybe-uninitialized]\n return result;\n ^\n[1651/1910] Compiling C object contrib/postgres_fdw/postgres_fdw.so.p/postgres_fdw.c.o\n../postgres/contrib/postgres_fdw/postgres_fdw.c: In function ‘postgresAcquireSampleRowsFunc’:\n../postgres/contrib/postgres_fdw/postgres_fdw.c:5346:14: warning: ‘reltuples’ may be used uninitialized in this function [-Wmaybe-uninitialized]\n *totalrows = reltuples;\n ^\n[1910/1910] Linking target src/interfaces/ecpg/test/thread/alloc\n```\n\n# Solution\n\nPSA the patch to keep the compiler quiet. IIUC my compiler considered that\nsubstitutions in PG_TRY() might be skipped. I'm not sure it is real problem,\nbut the workarounds are harmless.\n\nOr, did I miss something for ignoring above?\n\n[1]: https://wiki.postgresql.org/wiki/Developer_FAQ#:~:text=or%20MSVC%20tracepoints.-,What%20debugging%20features%20are%20available%3F,-Compile%2Dtime\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Tue, 1 Aug 2023 04:51:55 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix compilation warnings when CFLAGS -Og is specified" }, { "msg_contents": "At Tue, 1 Aug 2023 04:51:55 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear hackers,\n> \n> # Background\n> \n> Based on [1], I did configure and build with options:\n> (I used Meson build system, but it could be reproduced by Autoconf/Make)\n> \n> ```\n> $ meson setup -Dcassert=true -Ddebug=true -Dc_args=-Og ../builder\n> $ cd ../builder\n> $ ninja\n> ```\n> \n> My gcc version is 4.8.5, and ninja is 1.10.2.\n\ngcc 4.8 looks very old? \n\nAFAIS all of those complaints are false positives and if I did this\ncorreclty, gcc 11.3 seems to have been fixed in this regard.\n\n> # Solution\n> \n> PSA the patch to keep the compiler quiet. IIUC my compiler considered that\n> substitutions in PG_TRY() might be skipped. 
I'm not sure it is real problem,\n> but the workarounds are harmless.\n> \n> Or, did I miss something for ignoring above?\n> \n> [1]: https://wiki.postgresql.org/wiki/Developer_FAQ#:~:text=or%20MSVC%20tracepoints.-,What%20debugging%20features%20are%20available%3F,-Compile%2Dtime\n\nI think we don't want \"fix\" those as far as modern compilers don't\nemit the false positives.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 01 Aug 2023 15:46:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix compilation warnings when CFLAGS -Og is specified" }, { "msg_contents": "Dear Horiguchi-san,\n\nThanks for replying!\n\n> > My gcc version is 4.8.5, and ninja is 1.10.2.\n> \n> gcc 4.8 looks very old?\n> \n> AFAIS all of those complaints are false positives and if I did this\n> correclty, gcc 11.3 seems to have been fixed in this regard.\n\nI switched to newer gcc (8.3, still old...) and tried. As you said it did not raise\nsuch warnings.\n\n> > # Solution\n> >\n> > PSA the patch to keep the compiler quiet. IIUC my compiler considered that\n> > substitutions in PG_TRY() might be skipped. I'm not sure it is real problem,\n> > but the workarounds are harmless.\n> >\n> > Or, did I miss something for ignoring above?\n> >\n> > [1]:\n> https://wiki.postgresql.org/wiki/Developer_FAQ#:~:text=or%20MSVC%20trace\n> points.-,What%20debugging%20features%20are%20available%3F,-Compile%2Dti\n> me\n> \n> I think we don't want \"fix\" those as far as modern compilers don't\n> emit the false positives.\n\nOK, I should use newer one for modern codes. Sorry for noise and thank you for confirmation.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 10:25:10 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix compilation warnings when CFLAGS -Og is specified" } ]
[ { "msg_contents": "Hi,\n\nMy colleague, Mitsuru Hinata (in CC), found the following issue.\n\nThe documentation of pg_stat_reset_single_table_counters() says\n> pg_stat_reset_single_table_counters ( oid ) → void\n> Resets statistics for a single table or index in the current database \n> or shared across all databases in the cluster to zero.\n> This function is restricted to superusers by default, but other users \n> can be granted EXECUTE to run the function.\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html\n\nBut, the stats will not be reset if the table shared across all\ndatabases is specified. IIUC, 5891c7a8e seemed to have mistakenly\nremoved the feature implemented in e04267844. What do you think?\n\n* reproduce procedure\n\nSELECT COUNT(*) FROM pg_stat_database;\nSELECT pg_stat_reset_single_table_counters('pg_database'::regclass);\nSELECT seq_scan FROM pg_stat_all_tables WHERE relid = \n'pg_database'::regclass;\n\n* unexpected result\n * Rename OverrideSearchPath to SearchPathMatcher (current HEAD: \nd3a38318a)\n * pgstat: store statistics in shared memory (5891c7a8e)\n\npsql=# SELECT seq_scan FROM pg_stat_all_tables WHERE relid = \n'pg_database'::regclass;\n seq_scan\n----------\n 11\n(1 row)\n\n* expected result\n * Enhance pg_stat_reset_single_table_counters function (e04267844)\n * pgstat: normalize function naming (be902e2651), which is previous \ncommit of 5891c7a8e.\n\npsql=# SELECT seq_scan FROM pg_stat_all_tables WHERE relid = \n'pg_database'::regclass;\n seq_scan\n----------\n 0\n(1 row)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 01 Aug 2023 15:23:54 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On 2023-08-01 15:23, Masahiro Ikeda wrote:\n> Hi,\n> \n> My colleague, Mitsuru Hinata (in CC), found the following issue.\n> \n> The documentation of 
pg_stat_reset_single_table_counters() says\n>> pg_stat_reset_single_table_counters ( oid ) → void\n>> Resets statistics for a single table or index in the current database \n>> or shared across all databases in the cluster to zero.\n>> This function is restricted to superusers by default, but other users \n>> can be granted EXECUTE to run the function.\n> https://www.postgresql.org/docs/devel/monitoring-stats.html\n> \n> But, the stats will not be reset if the table shared across all\n> databases is specified. IIUC, 5891c7a8e seemed to have mistakenly\n> removed the feature implemented in e04267844. What do you think?\n\nI fix an issue with the v1 patch reported by cfbot.\n\nThe test case didn't assume the number of databases will change in\nthe test of pg_upgrade.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 10 Aug 2023 14:09:57 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On Thu, Aug 10, 2023 at 2:10 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> On 2023-08-01 15:23, Masahiro Ikeda wrote:\n> > Hi,\n> >\n> > My colleague, Mitsuru Hinata (in CC), found the following issue.\n> >\n> > The documentation of pg_stat_reset_single_table_counters() says\n> >> pg_stat_reset_single_table_counters ( oid ) → void\n> >> Resets statistics for a single table or index in the current database\n> >> or shared across all databases in the cluster to zero.\n> >> This function is restricted to superusers by default, but other users\n> >> can be granted EXECUTE to run the function.\n> > https://www.postgresql.org/docs/devel/monitoring-stats.html\n> >\n> > But, the stats will not be reset if the table shared across all\n> > databases is specified. IIUC, 5891c7a8e seemed to have mistakenly\n> > removed the feature implemented in e04267844. What do you think?\n\nGood catch! 
I've confirmed that the issue has been fixed by your patch.\n\nHowever, I'm not sure the added regression tests are stable since\nautovacuum workers may scan the pg_database and increment the\nstatistics after resetting the stats. Also, the patch bumps the\nCATALOG_VERSION_NO but I don't think it's necessary.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 10 Aug 2023 17:48:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "Hi,\n\nOn 2023-08-10 17:48:10 +0900, Masahiko Sawada wrote:\n> Good catch! I've confirmed that the issue has been fixed by your patch.\n\nIndeed.\n\n\n> However, I'm not sure the added regression tests are stable since\n> autovacuum workers may scan the pg_database and increment the\n> statistics after resetting the stats.\n\nWhat about updating the table and checking the update count is reset? That'd\nnot be reset by autovacuum.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Aug 2023 12:12:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "Hi,\n\nOn 2023-08-13 04:12, Andres Freund wrote:\n> On 2023-08-10 17:48:10 +0900, Masahiko Sawada wrote:\n>> Good catch! I've confirmed that the issue has been fixed by your \n>> patch.\n> \n> Indeed.\n\nThanks for your responses!\n\n>> However, I'm not sure the added regression tests are stable since\n>> autovacuum workers may scan the pg_database and increment the\n>> statistics after resetting the stats.\n> \n> What about updating the table and checking the update count is reset? \n> That'd\n> not be reset by autovacuum.\n\nYes. 
I confirmed that the stats are incremented by autovacuum as you \nsaid.\n\nI updated the patch to v3.\n* remove the code to bump the CATALOG_VERSION_NO because I misunderstood\n* change the test logic to check the update count instead of scan count\n\nI changed the table to check the stats from pg_database to \npg_shdescription\nbecause the stats can update via the SQL interface COMMENT command.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 14 Aug 2023 17:12:47 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "Hi,\n\nOn Mon, Aug 14, 2023 at 5:12 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> On 2023-08-13 04:12, Andres Freund wrote:\n> > On 2023-08-10 17:48:10 +0900, Masahiko Sawada wrote:\n> >> Good catch! I've confirmed that the issue has been fixed by your\n> >> patch.\n> >\n> > Indeed.\n>\n> Thanks for your responses!\n>\n> >> However, I'm not sure the added regression tests are stable since\n> >> autovacuum workers may scan the pg_database and increment the\n> >> statistics after resetting the stats.\n> >\n> > What about updating the table and checking the update count is reset?\n> > That'd\n> > not be reset by autovacuum.\n>\n> Yes. 
I confirmed that the stats are incremented by autovacuum as you\n> said.\n>\n> I updated the patch to v3.\n> * remove the code to bump the CATALOG_VERSION_NO because I misunderstood\n> * change the test logic to check the update count instead of scan count\n\nThank you for updating the patch!\n\n>\n> I changed the table to check the stats from pg_database to\n> pg_shdescription\n> because the stats can update via the SQL interface COMMENT command.\n\nIt seems to work well.\n\n+COMMENT ON DATABASE :current_database IS 'This is a test comment';\n-- insert or update in 'pg_shdescription'\n\nI think the current_database should be quoted (see other examples\nwhere using current_database(), e.g. collate.linux.utf8.sql). Also it\nwould be better to reset the comment after the test.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Aug 2023 11:48:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On 2023-08-15 11:48, Masahiko Sawada wrote:\n> On Mon, Aug 14, 2023 at 5:12 PM Masahiro Ikeda \n> <ikedamsh@oss.nttdata.com> wrote:\n>> I changed the table to check the stats from pg_database to\n>> pg_shdescription\n>> because the stats can update via the SQL interface COMMENT command.\n> \n> It seems to work well.\n> \n> +COMMENT ON DATABASE :current_database IS 'This is a test comment';\n> -- insert or update in 'pg_shdescription'\n> \n> I think the current_database should be quoted (see other examples\n> where using current_database(), e.g. collate.linux.utf8.sql). Also it\n> would be better to reset the comment after the test.\n\nThanks! 
I fixed the issues in the v4 patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 15 Aug 2023 14:13:46 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "Apologies. In the previous mail, I mistakenly addressed it to the wrong \nrecipients. Reposted.\n\nOn 2023/08/15 14:13, Masahiro Ikeda wrote:\n> On 2023-08-15 11:48, Masahiko Sawada wrote:\n>> +COMMENT ON DATABASE :current_database IS 'This is a test comment';\n>> -- insert or update in 'pg_shdescription'\n>>\n>> I think the current_database should be quoted (see other examples\n>> where using current_database(), e.g. collate.linux.utf8.sql). Also it\n>> would be better to reset the comment after the test.\n>\n> Thanks! I fixed the issues in the v4 patch.\n\nI'm not sure about others, but I would avoid using the name \n\"current_database\" for the variable.\n\nI would just use \"database\" or \"db\" instead.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n", "msg_date": "Mon, 21 Aug 2023 11:33:28 +0900", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On 2023-08-21 11:33, Kyotaro Horiguchi wrote:\n> On 2023/08/15 14:13, Masahiro Ikeda wrote:\n>> On 2023-08-15 11:48, Masahiko Sawada wrote:\n>>> +COMMENT ON DATABASE :current_database IS 'This is a test comment';\n>>> -- insert or update in 'pg_shdescription'\n>>> \n>>> I think the current_database should be quoted (see other examples\n>>> where using current_database(), e.g. collate.linux.utf8.sql). Also it\n>>> would be better to reset the comment after the test.\n>> \n> \n> I'm not sure about others, but I would avoid using the name\n> \"current_database\" for the variable.\n> \n> I would just use \"database\" or \"db\" instead.\n\nThanks! 
I agree your comment.\nI fixed in v5 patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Mon, 21 Aug 2023 13:32:24 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On Mon, Aug 21, 2023 at 11:33:28AM +0900, Kyotaro Horiguchi wrote:\n> I'm not sure about others, but I would avoid using the name\n> \"current_database\" for the variable.\n\nAgreed. Looking at the surroundings, we have for example \"datname\" in\nthe collation tests so I have just used that, and applied the patch\ndown to 15.\n--\nMichael", "msg_date": "Mon, 21 Aug 2023 13:34:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" }, { "msg_contents": "On 2023-08-21 13:34, Michael Paquier wrote:\n> On Mon, Aug 21, 2023 at 11:33:28AM +0900, Kyotaro Horiguchi wrote:\n>> I'm not sure about others, but I would avoid using the name\n>> \"current_database\" for the variable.\n> \n> Agreed. Looking at the surroundings, we have for example \"datname\" in\n> the collation tests so I have just used that, and applied the patch\n> down to 15.\n\nThanks!\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 21 Aug 2023 13:38:45 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Fix pg_stat_reset_single_table_counters function" } ]
[ { "msg_contents": "Hi PG Users.\n\nWe are using Debezium to capture the CDC events into Kafka. \nWith decoderbufs and wal2json plugins the connector is able to capture the generated columns in the table but not with pgoutput plugin.\n\nWe tested with the following example:\n\nCREATE TABLE employees (\n id SERIAL PRIMARY KEY,\n first_name VARCHAR(50),\n last_name VARCHAR(50),\n full_name VARCHAR(100) GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED\n);\n\n// Inserted few records when the connector was running\n\nInsert into employees (first_name, last_name) VALUES ('ABC' , 'XYZ’);\n\n\nWith decoderbufs and wal2json the connector is able to capture the generated column `full_name` in above example. But with pgoutput the generated column was not captured. \nIs this a known limitation of pgoutput plugin? If yes, where can we request to add support for this feature?\n\nThanks.\nRajendra.\n\n", "msg_date": "Tue, 1 Aug 2023 12:17:33 +0530", "msg_from": "Rajendra Kumar Dangwal <dangwalrajendra888@gmail.com>", "msg_from_op": true, "msg_subject": "Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Aug 1, 2023, at 3:47 AM, Rajendra Kumar Dangwal wrote:\n> With decoderbufs and wal2json the connector is able to capture the generated column `full_name` in above example. But with pgoutput the generated column was not captured. \n\nwal2json materializes the generated columns before delivering the output. I\ndecided to materialized the generated columns in the output plugin because the\ntarget consumers expects a complete row.\n\n> Is this a known limitation of pgoutput plugin? If yes, where can we request to add support for this feature?\n\nI wouldn't say limitation but a design decision.\n\nThe logical replication design decides to compute the generated columns at\nsubscriber side. 
It was a wise decision aiming optimization (it doesn't\noverload the publisher that is *already* in charge of logical decoding).\n\nShould pgoutput provide a complete row? Probably. If it is an option that\ndefaults to false and doesn't impact performance.\n\nThe request for features should be done in this mailing list.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Aug 1, 2023, at 3:47 AM, Rajendra Kumar Dangwal wrote:With decoderbufs and wal2json the connector is able to capture the generated column `full_name` in above example. But with pgoutput the generated column was not captured. wal2json materializes the generated columns before delivering the output. Idecided to materialized the generated columns in the output plugin because thetarget consumers expects a complete row.Is this a known limitation of pgoutput plugin? If yes, where can we request to add support for this feature?I wouldn't say limitation but a design decision.The logical replication design decides to compute the generated columns atsubscriber side. It was a wise decision aiming optimization (it doesn'toverload the publisher that is *already* in charge of logical decoding).Should pgoutput provide a complete row? Probably. If it is an option thatdefaults to false and doesn't impact performance.The request for features should be done in this mailing list.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Tue, 01 Aug 2023 11:51:45 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Thanks Euler,\nGreatly appreciate your inputs.\n\n> Should pgoutput provide a complete row? Probably. 
If it is an option that defaults to false and doesn't impact performance.\n\nYes, it would be great if this feature can be implemented.\n\n> The logical replication design decides to compute the generated columns at subscriber side.\n\nIf I understand correctly, this approach involves establishing a function on the subscriber's side that emulates the operation executed to derive the generated column values.\nIf yes, I see one potential issue where disparities might surface between the values of generated columns on the subscriber's side and those computed within Postgres. This could happen if the generated column's value relies on the current_time function.\n\nPlease let me know how can we track the feature requests and the discussions around that.\n\nThanks,\nRajendra.\nThanks Euler,\nGreatly appreciate your inputs.\n\n> Should pgoutput provide a complete row? Probably. If it is an option that defaults to false and doesn't impact performance.\n\nYes, it would be great if this feature can be implemented.\n\n> The logical replication design decides to compute the generated columns at subscriber side.\n\nIf I understand correctly, this approach involves establishing a function on the subscriber's side that emulates the operation executed to derive the generated column values.\nIf yes, I see one potential issue where disparities might surface between the values of generated columns on the subscriber's side and those computed within Postgres. 
This could happen if the generated column's value relies on the current_time function.Please let me know how can we track the feature requests and the discussions around that.Thanks,Rajendra.", "msg_date": "Mon, 21 Aug 2023 13:15:55 +0530", "msg_from": "Rajendra Kumar Dangwal <dangwalrajendra888@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi PG Hackers.\n\nWe are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns. \nCould you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? Any insights or assistance you can provide on this matter would be greatly appreciated.\n\nMany thanks.\nRajendra.\n\n\n\n", "msg_date": "Mon, 11 Sep 2023 14:53:00 +0530", "msg_from": "Rajendra Kumar Dangwal <dangwalrajendra888@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, May 8, 2024 at 11:39 AM Rajendra Kumar Dangwal\n<dangwalrajendra888@gmail.com> wrote:\n>\n> Hi PG Hackers.\n>\n> We are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns.\n> Could you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? Any insights or assistance you can provide on this matter would be greatly appreciated.\n\nThe attached patch has the changes to support capturing generated\ncolumn data using ‘pgoutput’ and’ test_decoding’ plugin. 
Now if the\n‘include_generated_columns’ option is specified, the generated column\ninformation and generated column data also will be sent.\n\nUsage from pgoutput plugin:\nCREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n(a * 2) STORED);\nCREATE publication pub1 for all tables;\nSELECT 'init' FROM pg_create_logical_replication_slot('slot1', 'pgoutput');\nSELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n'proto_version', '1', 'publication_names', 'pub1',\n'include_generated_columns', 'true');\n\nUsage from test_decoding plugin:\nSELECT 'init' FROM pg_create_logical_replication_slot('slot2', 'test_decoding');\nCREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n(a * 2) STORED);\nINSERT INTO gencoltable (a) VALUES (1), (2), (3);\nSELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1',\n'include_generated_columns', '1');\n\nCurrently it is not supported as a subscription option because table\nsync for the generated column is not possible as copy command does not\nsupport getting data for the generated column. If this feature is\nrequired we can remove this limitation from the copy command and then\nadd it as a subscription option later.\nThoughts?\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Wed, 8 May 2024 12:43:45 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Dear Shubham,\r\n\r\nThanks for creating a patch! Here are high-level comments.\r\n\r\n1.\r\nPlease document the feature. If it is hard to describe, we should change the API.\r\n\r\n2.\r\nCurrently, the option is implemented as streaming option. Are there any reasons\r\nto choose the way? Another approach is to implement as slot option, like failover\r\nand temporary.\r\n\r\n3.\r\nYou said that subscription option is not supported for now. 
Not sure, is it mean\r\nthat logical replication feature cannot be used for generated columns? If so,\r\nthe restriction won't be acceptable. If the combination between this and initial\r\nsync is problematic, can't we exclude them in CreateSubscrition and AlterSubscription?\r\nE.g., create_slot option cannot be set if slot_name is NONE.\r\n\r\n4.\r\nRegarding the test_decoding plugin, it has already been able to decode the\r\ngenerated columns. So... as the first place, is the proposed option really needed\r\nfor the plugin? Why do you include it?\r\nIf you anyway want to add the option, the default value should be on - which keeps\r\ncurrent behavior.\r\n\r\n5.\r\nAssuming that the feature become usable used for logical replicaiton. Not sure,\r\nshould we change the protocol version at that time? Nodes prior than PG17 may\r\nnot want receive values for generated columns. Can we control only by the option?\r\n\r\n6. logicalrep_write_tuple()\r\n\r\n```\r\n- if (!column_in_column_list(att->attnum, columns))\r\n+ if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\r\n+ continue;\r\n```\r\n\r\nHmm, does above mean that generated columns are decoded even if they are not in\r\nthe column list? If so, why? I think such columns should not be sent.\r\n\r\n7.\r\n\r\nSome functions refer data->publish_generated_column many times. Can we store\r\nthe value to a variable?\r\n\r\nBelow comments are for test_decoding part, but they may be not needed.\r\n\r\n=====\r\n\r\na. pg_decode_startup()\r\n\r\n```\r\n+ else if (strcmp(elem->defname, \"include_generated_columns\") == 0)\r\n```\r\n\r\nOther options for test_decoding do not have underscore. It should be\r\n\"include-generated-columns\".\r\n\r\nb. pg_decode_change()\r\n\r\ndata->include_generated_columns is referred four times in the function.\r\nCan you store the value to a varibable?\r\n\r\n\r\nc. 
pg_decode_change()\r\n\r\n```\r\n- true);\r\n+ true, data->include_generated_columns );\r\n```\r\n\r\nPlease remove the blank.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Wed, 8 May 2024 12:07:09 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are some review comments for the patch v1-0001.\n\n======\nGENERAL\n\nG.1. Use consistent names\n\nIt seems to add unnecessary complications by having different names\nfor all the new options, fields and API parameters.\n\ne.g. sometimes 'include_generated_columns'\ne.g. sometimes 'publish_generated_columns'\n\nWon't it be better to just use identical names everywhere for\neverything? I don't mind which one you choose; I just felt you only\nneed one name, not two. This comment overrides everything else in this\npost so whatever name you choose, make adjustments for all my other\nreview comments as necessary.\n\n======\n\nG.2. Is it possible to just use the existing bms?\n\nA very large part of this patch is adding more API parameters to\ndelegate the 'publish_generated_columns' flag value down to when it is\nfinally checked and used. e.g.\n\nThe functions:\n- logicalrep_write_insert(), logicalrep_write_update(),\nlogicalrep_write_delete()\n... are delegating the new parameter 'publish_generated_column' down to:\n- logicalrep_write_tuple\n\nThe functions:\n- logicalrep_write_rel()\n... are delegating the new parameter 'publish_generated_column' down to:\n- logicalrep_write_attrs\n\nAFAICT in all these places the API is already passing a \"Bitmapset\n*columns\". 
I was wondering if it might be possible to modify the\n\"Bitmapset *columns\" BEFORE any of those functions get called so that\nthe \"columns\" BMS either does or doesn't include generated cols (as\nappropriate according to the option).\n\nWell, it might not be so simple because there are some NULL BMS\nconsiderations also, but I think it would be worth investigating at\nleast, because if indeed you can find some common place (somewhere\nlike pgoutput_change()?) where the columns BMS can be filtered to\nremove bits for generated cols then it could mean none of those other\npatch API changes are needed at all -- then the patch would only be\n1/2 the size.\n\n======\nCommit message\n\n1.\nNow if include_generated_columns option is specified, the generated\ncolumn information and generated column data also will be sent.\n\nUsage from pgoutput plugin:\nSELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n'proto_version', '1', 'publication_names', 'pub1',\n'include_generated_columns', 'true');\n\nUsage from test_decoding plugin:\nSELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1',\n'include_generated_columns', '1');\n\n~\n\nI think there needs to be more background information given here. This\ncommit message doesn't seem to describe anything about what is the\nproblem and how this patch fixes it. It just jumps straight into\ngiving usages of a 'include_generated_columns' option.\n\nIt also doesn't say that this is an option that was newly *introduced*\nby the patch -- it refers to it as though the reader should already\nknow about it.\n\nFurthermore, your hacker's post says \"Currently it is not supported as\na subscription option because table sync for the generated column is\nnot possible as copy command does not support getting data for the\ngenerated column. 
If this feature is required we can remove this\nlimitation from the copy command and then add it as a subscription\noption later.\" IMO that all seems like the kind of information that\nought to also be mentioned in this commit message.\n\n======\ncontrib/test_decoding/sql/ddl.sql\n\n2.\n+-- check include_generated_columns option with generated column\n+CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\nAS (a * 2) STORED);\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include_generated_columns', '1');\n+DROP TABLE gencoltable;\n+\n\n2a.\nPerhaps you should include both option values to demonstrate the\ndifference in behaviour:\n\n'include_generated_columns', '0'\n'include_generated_columns', '1'\n\n~\n\n2b.\nI think you maybe need to include more some test combinations where\nthere is and isn't a COLUMN LIST, because I am not 100% sure I\nunderstand the current logic/expectations for all combinations.\n\ne.g. When the generated column is in a column list but\n'publish_generated_columns' is false then what should happen? etc.\nAlso if there are any special rules then those should be mentioned in\nthe commit message.\n\n======\nsrc/backend/replication/logical/proto.c\n\n3.\nFor all the API changes the new parameter name should be plural.\n\n/publish_generated_column/publish_generated_columns/\n\n~~~\n\n4. logical_rep_write_tuple:\n\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\n- if (!column_in_column_list(att->attnum, columns))\n+ if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n+ continue;\n+\n+ if (att->attgenerated && !publish_generated_column)\n continue;\nThat code seems confusing. Shouldn't the logic be exactly as also in\nlogicalrep_write_attrs()?\n\ne.g. 
Shouldn't they both look like this:\n\nif (att->attisdropped)\n continue;\n\nif (att->attgenerated && !publish_generated_column)\n continue;\n\nif (!column_in_column_list(att->attnum, columns))\n continue;\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n5.\n static void send_relation_and_attrs(Relation relation, TransactionId xid,\n LogicalDecodingContext *ctx,\n- Bitmapset *columns);\n+ Bitmapset *columns,\n+ bool publish_generated_column);\n\nUse plural. /publish_generated_column/publish_generated_columns/\n\n~~~\n\n6. parse_output_parameters\n\n bool origin_option_given = false;\n+ bool generate_column_option_given = false;\n\n data->binary = false;\n data->streaming = LOGICALREP_STREAM_OFF;\n data->messages = false;\n data->two_phase = false;\n+ data->publish_generated_column = false;\n\nI think the 1st var should be 'include_generated_columns_option_given'\nfor consistency with the name of the actual option that was given.\n\n======\nsrc/include/replication/logicalproto.h\n\n7.\n(Same as a previous review comment)\n\nFor all the API changes the new parameter name should be plural.\n\n/publish_generated_column/publish_generated_columns/\n\n======\nsrc/include/replication/pgoutput.h\n\n8.\n bool publish_no_origin;\n+ bool publish_generated_column;\n } PGOutputData;\n\n/publish_generated_column/publish_generated_columns/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 May 2024 16:04:56 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Kuroda-san,\n\nThanks for reviewing the patch. I have fixed some of the comments\n> 2.\n> Currently, the option is implemented as streaming option. Are there any reasons\n> to choose the way? Another approach is to implement as slot option, like failover\n> and temporary.\nI think the current approach is appropriate. 
The options such as\nfailover and temporary seem like properties of a slot, and I think\ndecoding of generated columns should not be slot-specific. Also, adding\na new option for the slot may create overhead.\n\n> 3.\n> You said that subscription option is not supported for now. Not sure, is it mean\n> that logical replication feature cannot be used for generated columns? If so,\n> the restriction won't be acceptable. If the combination between this and initial\n> sync is problematic, can't we exclude them in CreateSubscrition and AlterSubscription?\n> E.g., create_slot option cannot be set if slot_name is NONE.\nAdded an option 'generated_column' for CREATE SUBSCRIPTION. Currently\nit allows setting the 'generated_column' option to true only if 'copy_data'\nis set to false.\nAlso, we don't allow the user to alter the 'generated_column' option.\n\n> 6. logicalrep_write_tuple()\n>\n> ```\n> - if (!column_in_column_list(att->attnum, columns))\n> + if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n> + continue;\n> ```\n>\n> Hmm, does above mean that generated columns are decoded even if they are not in\n> the column list? If so, why? I think such columns should not be sent.\nFixed.\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 20 May 2024 12:16:17 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nOn Wed, May 8, 2024 at 4:14 PM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Wed, May 8, 2024 at 11:39 AM Rajendra Kumar Dangwal\n> <dangwalrajendra888@gmail.com> wrote:\n> >\n> > Hi PG Hackers.\n> >\n> > We are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns.\n> > Could you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? 
Any insights or assistance you can provide on this matter would be greatly appreciated.\n>\n> The attached patch has the changes to support capturing generated\n> column data using ‘pgoutput’ and’ test_decoding’ plugin. Now if the\n> ‘include_generated_columns’ option is specified, the generated column\n> information and generated column data also will be sent.\n\nAs Euler mentioned earlier, I think it's a decision not to replicate\ngenerated columns because we don't know the target table on the\nsubscriber has the same expression and there could be locale issues\neven if it looks the same. I can see that a benefit of this proposal\nwould be to save cost to compute generated column values if the user\nwants the target table on the subscriber to have exactly the same data\nas the publisher's one. Are there other benefits or use cases?\n\n>\n> Usage from pgoutput plugin:\n> CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n> (a * 2) STORED);\n> CREATE publication pub1 for all tables;\n> SELECT 'init' FROM pg_create_logical_replication_slot('slot1', 'pgoutput');\n> SELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n> 'proto_version', '1', 'publication_names', 'pub1',\n> 'include_generated_columns', 'true');\n>\n> Usage from test_decoding plugin:\n> SELECT 'init' FROM pg_create_logical_replication_slot('slot2', 'test_decoding');\n> CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n> (a * 2) STORED);\n> INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> SELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include_generated_columns', '1');\n>\n> Currently it is not supported as a subscription option because table\n> sync for the generated column is not possible as copy command does not\n> support getting data for the generated column. 
If this feature is\n> required we can remove this limitation from the copy command and then\n> add it as a subscription option later.\n> Thoughts?\n\nI think that if we want to support an option to replicate generated\ncolumns, the initial tablesync should support it too. Otherwise, we\nend up filling the target columns data with NULL during the initial\ntablesync but with replicated data during the streaming changes.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 May 2024 17:18:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, 20 May 2024 at 13:49, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, May 8, 2024 at 4:14 PM Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Wed, May 8, 2024 at 11:39 AM Rajendra Kumar Dangwal\n> > <dangwalrajendra888@gmail.com> wrote:\n> > >\n> > > Hi PG Hackers.\n> > >\n> > > We are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns.\n> > > Could you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? Any insights or assistance you can provide on this matter would be greatly appreciated.\n> >\n> > The attached patch has the changes to support capturing generated\n> > column data using ‘pgoutput’ and’ test_decoding’ plugin. Now if the\n> > ‘include_generated_columns’ option is specified, the generated column\n> > information and generated column data also will be sent.\n>\n> As Euler mentioned earlier, I think it's a decision not to replicate\n> generated columns because we don't know the target table on the\n> subscriber has the same expression and there could be locale issues\n> even if it looks the same. 
I can see that a benefit of this proposal\n> would be to save cost to compute generated column values if the user\n> wants the target table on the subscriber to have exactly the same data\n> as the publisher's one. Are there other benefits or use cases?\n\nI think this will be useful mainly for the use cases where the\npublisher has generated columns and the subscriber does not have\ngenerated columns.\nIn the case where both the publisher and subscriber have generated\ncolumns, the current patch will overwrite the generated column values\nbased on the expression for the generated column in the subscriber.\n\n> >\n> > Usage from pgoutput plugin:\n> > CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n> > (a * 2) STORED);\n> > CREATE publication pub1 for all tables;\n> > SELECT 'init' FROM pg_create_logical_replication_slot('slot1', 'pgoutput');\n> > SELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n> > 'proto_version', '1', 'publication_names', 'pub1',\n> > 'include_generated_columns', 'true');\n> >\n> > Usage from test_decoding plugin:\n> > SELECT 'init' FROM pg_create_logical_replication_slot('slot2', 'test_decoding');\n> > CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS\n> > (a * 2) STORED);\n> > INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> > SELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n> > 'include-xids', '0', 'skip-empty-xacts', '1',\n> > 'include_generated_columns', '1');\n> >\n> > Currently it is not supported as a subscription option because table\n> > sync for the generated column is not possible as copy command does not\n> > support getting data for the generated column. If this feature is\n> > required we can remove this limitation from the copy command and then\n> > add it as a subscription option later.\n> > Thoughts?\n>\n> I think that if we want to support an option to replicate generated\n> columns, the initial tablesync should support it too. 
Otherwise, we\n> end up filling the target columns data with NULL during the initial\n> tablesync but with replicated data during the streaming changes.\n\n+1 for supporting initial sync.\nCurrently copy_data = true and generated_column = true are not\nsupported; this limitation will be removed in one of the upcoming\npatches.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 20 May 2024 14:28:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nAFAICT the differences in this v2-0001 patch from v1 are mostly about adding\nthe new CREATE SUBSCRIPTION option. Specifically, I don't think it is\naddressing any of my previous review comments for patch v1 [1]. So\nthese comments below are limited only to the new option code; all my\nprevious review comments probably still apply.\n\n======\nCommit message\n\n1. (General)\nThe commit message is seriously lacking background explanation to describe:\n- What is the current behaviour w.r.t. generated columns\n- What is the problem with the current behaviour?\n- What exactly is this patch doing to address that problem?\n\n~\n\n2.\nNew option generated_option is added in create subscription. 
Now if this\noption is specified as 'true' during create subscription, generated\ncolumns in the tables, present in publisher (to which this subscription is\nsubscribed) can also be replicated.\n\n-\n\n2A.\n\"generated_option\" is not the name of the new option.\n\n~\n\n2B.\n\"create subscription\" stmt should be UPPERCASE; will also be more\nreadable if the option name is quoted.\n\n~\n\n2C.\nNeeds more information like under what condition is this option ignored etc.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\n+ <varlistentry id=\"sql-createsubscription-params-with-generated-column\">\n+ <term><literal>generated-column</literal> (<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies whether the generated columns present in the tables\n+ associated with the subscription should be replicated. The default is\n+ <literal>false</literal>.\n+ </para>\n+\n+ <para>\n+ This parameter can only be set true if copy_data is set to false.\n+ This option works fine when a generated column (in\npublisher) is replicated to a\n+ non-generated column (in subscriber). Else if it is\nreplicated to a generated\n+ column, it will ignore the replicated data and fill the\ncolumn with computed or\n+ default data.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n3A.\nThere is a typo in the name \"generated-column\" because we should use\nunderscores (not hyphens) for the option names.\n\n~\n\n3B.\nThis is not a good option name because there is no verb so it\ndoesn't mean anything to set it true/false -- actually there IS a verb\n\"generate\" but we are not saying generate = true/false, so this name\nis also quite confusing.\n\nI think \"include_generated_columns\" would be much better, but if\nothers think that name is too long then maybe \"include_generated_cols\"\nor \"include_gen_cols\" or similar. 
Of course, whatever the final\ndecision is, it should be propagated the same thru all the code comments,\nparams, fields, etc.\n\n~\n\n3C.\ncopy_data and false should be marked up as <literal> fonts in the sgml\n\n~\n\n3D.\n\nSuggest re-word this part. Don't need to explain when it \"works fine\".\n\nBEFORE\nThis option works fine when a generated column (in publisher) is\nreplicated to a non-generated column (in subscriber). Else if it is\nreplicated to a generated column, it will ignore the replicated data\nand fill the column with computed or default data.\n\nSUGGESTION\nIf the subscriber-side column is also a generated column then this\noption has no effect; the replicated data will be ignored and the\nsubscriber column will be filled as normal with the subscriber-side\ncomputed or default data.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n4. AlterSubscription\n SUBOPT_STREAMING | SUBOPT_DISABLE_ON_ERR |\n SUBOPT_PASSWORD_REQUIRED |\n SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER |\n- SUBOPT_ORIGIN);\n+ SUBOPT_ORIGIN | SUBOPT_GENERATED_COLUMN);\n\nHmm. Is this correct? If ALTER is not allowed (later in this patch\nthere is a message \"toggling generated_column option is not allowed.\")\nthen why are we even saying that SUBOPT_GENERATED_COLUMN is a\nsupport_opt for ALTER?\n\n~~~\n\n5.\n+ if (IsSet(opts.specified_opts, SUBOPT_GENERATED_COLUMN))\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"toggling generated_column option is not allowed.\")));\n+ }\n\n5A.\nI suspect this is not even needed if the 'supported_opt' is fixed per\nthe previous comment.\n\n~\n\n5B.\nBut if this message is still needed then I think it should say \"ALTER\nis not allowed\" (not \"toggling is not allowed\") and also the option\nname should be quoted as per the new guidelines for error messages.\n\n======\nsrc/backend/replication/logical/proto.c\n\n\n6. 
logicalrep_write_tuple\n\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\n if (!column_in_column_list(att->attnum, columns))\n continue;\n\n+ if (att->attgenerated && !publish_generated_column)\n+\n\nCalling column_in_column_list() might be a more expensive operation\nthan checking just generated columns flag so maybe reverse the order\nand check the generated columns first for a tiny performance gain.\n\n~~\n\n7.\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\n if (!column_in_column_list(att->attnum, columns))\n continue;\n\n+ if (att->attgenerated && !publish_generated_column)\n+ continue;\n\nditto #6\n\n~~\n\n8. logicalrep_write_attrs\n\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\n if (!column_in_column_list(att->attnum, columns))\n continue;\n\n+ if (att->attgenerated && !publish_generated_column)\n+ continue;\n+\n\nditto #6\n\n~~\n\n9.\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\n if (!column_in_column_list(att->attnum, columns))\n continue;\n\n+ if (att->attgenerated && !publish_generated_column)\n+ continue;\n\nditto #6\n\n======\nsrc/include/catalog/pg_subscription.h\n\n\n10. 
CATALOG\n\n+ bool subgeneratedcolumn; /* True if generated colums must be published */\n\n/colums/columns/\n\n======\nsrc/test/regress/sql/publication.sql\n\n11.\n--- error: generated column \"d\" can't be in list\n+-- ok\n\n\nMaybe change \"ok\" to say like \"ok: generated cols can be in the list too\"\n\n======\n\n12.\nGENERAL - Missing CREATE SUBSCRIPTION test?\nGENERAL - Missing ALTER SUBSCRIPTION test?\n\nHow come this patch adds a new CREATE SUBSCRIPTION option but does not\nseem to include any test case for that option in either the CREATE\nSUBSCRIPTION or ALTER SUBSCRIPTION regression tests?\n\n======\n[1] My v1 review -\nhttps://www.postgresql.org/message-id/CAHut+PsuJfcaeg6zst=6PE5uyJv_UxVRHU3ck7W2aHb1uQYKng@mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 21 May 2024 16:52:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On 08.05.24 09:13, Shubham Khanna wrote:\n> The attached patch has the changes to support capturing generated\n> column data using ‘pgoutput’ and’ test_decoding’ plugin. Now if the\n> ‘include_generated_columns’ option is specified, the generated column\n> information and generated column data also will be sent.\n\nIt might be worth keeping half an eye on the development of virtual \ngenerated columns [0]. I think it won't be possible to include those \ninto the replication output stream.\n\nI think having an option for including stored generated columns is in \ngeneral ok.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org\n\n\n", "msg_date": "Wed, 22 May 2024 13:40:27 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "> Dear Shubham,\n>\n> Thanks for creating a patch! 
Here are high-level comments.\n\n> 1.\n> Please document the feature. If it is hard to describe, we should change the API.\n\nI have added the feature in the document.\n\n> 4.\n> Regarding the test_decoding plugin, it has already been able to decode the\n> generated columns. So... as the first place, is the proposed option really needed\n> for the plugin? Why do you include it?\n> If you anyway want to add the option, the default value should be on - which keeps\n> current behavior.\n\nI have made the generated column options as true for test_decoding\nplugin so by default we will send generated column data.\n\n> 5.\n> Assuming that the feature become usable used for logical replicaiton. Not sure,\n> should we change the protocol version at that time? Nodes prior than PG17 may\n> not want receive values for generated columns. Can we control only by the option?\n\nI verified the backward compatibility test by using the generated\ncolumn option and it worked fine. I think there is no need to make any\nfurther changes.\n\n> 7.\n>\n> Some functions refer data->publish_generated_column many times. Can we store\n> the value to a variable?\n>\n> Below comments are for test_decoding part, but they may be not needed.\n>\n> =====\n>\n> a. pg_decode_startup()\n>\n> ```\n> + else if (strcmp(elem->defname, \"include_generated_columns\") == 0)\n> ```\n>\n> Other options for test_decoding do not have underscore. It should be\n> \"include-generated-columns\".\n>\n> b. pg_decode_change()\n>\n> data->include_generated_columns is referred four times in the function.\n> Can you store the value to a varibable?\n>\n>\n> c. 
pg_decode_change()\n>\n> ```\n> - true);\n> + true, data->include_generated_columns );\n> ```\n>\n> Please remove the blank.\n\nFixed.\nThe attached v3 Patch has the changes for the same.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 23 May 2024 09:19:03 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Dear Shubham,\r\n\r\nThanks for updating the patch! I checked your patches briefly. Here are my comments.\r\n\r\n01. API\r\n\r\nSince the option for test_decoding is enabled by default, I think it should be renamed.\r\nE.g., \"skip-generated-columns\" or something.\r\n\r\n02. ddl.sql\r\n\r\n```\r\n+-- check include-generated-columns option with generated column\r\n+CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\r\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\r\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-generated-columns', '1');\r\n+ data \r\n+-------------------------------------------------------------\r\n+ BEGIN\r\n+ table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\r\n+ table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\r\n+ table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\r\n+ COMMIT\r\n+(5 rows)\r\n```\r\n\r\nWe should test non-default case, which the generated columns are not generated.\r\n\r\n03. ddl.sql\r\n\r\nNot sure new tests are in the correct place. Do we have to add new file and move tests to it?\r\nThought?\r\n\r\n04. protocol.sgml\r\n\r\nPlease keep the format of the sgml file.\r\n\r\n05. protocol.sgml\r\n\r\nThe option is implemented as the streaming option of pgoutput plugin, so they should be\r\nlocated under \"Logical Streaming Replication Parameters\" section.\r\n\r\n05. 
AlterSubscription\r\n\r\n```\r\n+\t\t\t\tif (IsSet(opts.specified_opts, SUBOPT_GENERATED_COLUMN))\r\n+\t\t\t\t{\r\n+\t\t\t\t\tereport(ERROR,\r\n+\t\t\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n+\t\t\t\t\t\t\t errmsg(\"toggling generated_column option is not allowed.\")));\r\n+\t\t\t\t}\r\n```\r\n\r\nIf you don't want to support the option, you can remove the SUBOPT_GENERATED_COLUMN\r\nmacro from the function. But can you clarify the reason why you do not want to?\r\n\r\n07. logicalrep_write_tuple\r\n\r\n```\r\n-\t\tif (!column_in_column_list(att->attnum, columns))\r\n+\t\tif (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\r\n+\t\t\tcontinue;\r\n+\r\n+\t\tif (att->attgenerated && !publish_generated_column)\r\n \t\t\tcontinue;\r\n```\r\n\r\nI think the changes from v2 were reverted or wrongly merged.\r\n\r\n08. test code\r\n\r\nCan you add tests that generated columns are replicated by logical replication?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Thu, 23 May 2024 05:26:38 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 23 May 2024 at 09:19, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> > Dear Shubham,\n> >\n> > Thanks for creating a patch! Here are high-level comments.\n>\n> > 1.\n> > Please document the feature. If it is hard to describe, we should change the API.\n>\n> I have added the feature in the document.\n>\n> > 4.\n> > Regarding the test_decoding plugin, it has already been able to decode the\n> > generated columns. So... as the first place, is the proposed option really needed\n> > for the plugin? 
Why do you include it?\n> > If you anyway want to add the option, the default value should be on - which keeps\n> > current behavior.\n>\n> I have made the generated column options as true for test_decoding\n> plugin so by default we will send generated column data.\n>\n> > 5.\n> > Assuming that the feature become usable used for logical replicaiton. Not sure,\n> > should we change the protocol version at that time? Nodes prior than PG17 may\n> > not want receive values for generated columns. Can we control only by the option?\n>\n> I verified the backward compatibility test by using the generated\n> column option and it worked fine. I think there is no need to make any\n> further changes.\n>\n> > 7.\n> >\n> > Some functions refer data->publish_generated_column many times. Can we store\n> > the value to a variable?\n> >\n> > Below comments are for test_decoding part, but they may be not needed.\n> >\n> > =====\n> >\n> > a. pg_decode_startup()\n> >\n> > ```\n> > + else if (strcmp(elem->defname, \"include_generated_columns\") == 0)\n> > ```\n> >\n> > Other options for test_decoding do not have underscore. It should be\n> > \"include-generated-columns\".\n> >\n> > b. pg_decode_change()\n> >\n> > data->include_generated_columns is referred four times in the function.\n> > Can you store the value to a varibable?\n> >\n> >\n> > c. 
pg_decode_change()\n> >\n> > ```\n> > - true);\n> > + true, data->include_generated_columns );\n> > ```\n> >\n> > Please remove the blank.\n>\n> Fixed.\n> The attached v3 Patch has the changes for the same.\n\nFew comments:\n1) Since this is removed, tupdesc variable is not required anymore:\n+++ b/src/backend/catalog/pg_publication.c\n@@ -534,12 +534,6 @@ publication_translate_columns(Relation targetrel,\nList *columns,\n errmsg(\"cannot use system\ncolumn \\\"%s\\\" in publication column list\",\n colname));\n\n- if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n- ereport(ERROR,\n-\nerrcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"cannot use generated\ncolumn \\\"%s\\\" in publication column list\",\n- colname));\n\n2) In test_decoding include_generated_columns option is used:\n+ else if (strcmp(elem->defname,\n\"include_generated_columns\") == 0)\n+ {\n+ if (elem->arg == NULL)\n+ continue;\n+ else if (!parse_bool(strVal(elem->arg),\n&data->include_generated_columns))\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"could not\nparse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n+\nstrVal(elem->arg), elem->defname)));\n+ }\n\nIn subscription we have used generated_column, we can try to use the\nsame option in both places:\n+ else if (IsSet(supported_opts, SUBOPT_GENERATED_COLUMN) &&\n+ strcmp(defel->defname,\n\"generated_column\") == 0)\n+ {\n+ if (IsSet(opts->specified_opts,\nSUBOPT_GENERATED_COLUMN))\n+ errorConflictingDefElem(defel, pstate);\n+\n+ opts->specified_opts |= SUBOPT_GENERATED_COLUMN;\n+ opts->generated_column = defGetBoolean(defel);\n+ }\n\n3) Tab completion can be added for create subscription to include\ngenerated_column option\n\n4) There are few whitespace issues while applying the patch, check for\ngit diff --check\n\n5) Add few tests for the new option added\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 23 May 2024 17:56:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, 
"msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nHere are some review comments for the patch v3-0001.\n\nI don't think v3 addressed any of my previous review comments for\npatches v1 and v2. [1][2]\n\nSo the comments below are limited only to the new code (i.e. the v3\nversus v2 differences). Meanwhile, all my previous review comments may\nstill apply.\n\n======\nGENERAL\n\nThe patch applied gives whitespace warnings:\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch\n../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:150:\ntrailing whitespace.\n\n../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:202:\ntrailing whitespace.\n\n../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:730:\ntrailing whitespace.\nwarning: 3 lines add whitespace errors.\n\n======\ncontrib/test_decoding/test_decoding.c\n\n1. pg_decode_change\n\n MemoryContext old;\n+ bool include_generated_columns;\n+\n\nI'm not really convinced this variable saves any code.\n\n======\ndoc/src/sgml/protocol.sgml\n\n2.\n+ <varlistentry>\n+ <term><replaceable\nclass=\"parameter\">include-generated-columns</replaceable></term>\n+ <listitem>\n+ <para>\n+ The include-generated-columns option controls whether\ngenerated columns should be included in the string representation of\ntuples during logical decoding in PostgreSQL. This allows users to\ncustomize the output format based on whether they want to include\nthese columns or not.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n2a.\nSomething is not correct when this name has hyphens and all the nearby\nparameter names do not. 
Shouldn't it be all uppercase like the other\nboolean parameter?\n\n~\n\n2b.\nText in the SGML file should be wrapped properly.\n\n~\n\n2c.\nIMO the comment can be more terse and it also needs to specify that it\nis a boolean type, and what is the default value if not passed.\n\nSUGGESTION\n\nINCLUDE_GENERATED_COLUMNS [ boolean ]\n\nIf true, then generated columns should be included in the string\nrepresentation of tuples during logical decoding in PostgreSQL. The\ndefault is false.\n\n======\nsrc/backend/replication/logical/proto.c\n\n3. logicalrep_write_tuple\n\n- if (!column_in_column_list(att->attnum, columns))\n+ if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n+ continue;\n+\n+ if (att->attgenerated && !publish_generated_column)\n continue;\n\n3a.\nThis code seems overcomplicated checking the same flag multiple times.\n\nSUGGESTION\nif (att->attgenerated)\n{\n if (!publish_generated_column)\n continue;\n}\nelse\n{\n if (!column_in_column_list(att->attnum, columns))\n continue;\n}\n\n~\n\n3b.\nThe same logic occurs several times in logicalrep_write_tuple\n\n~~~\n\n4. logicalrep_write_attrs\n\n if (!column_in_column_list(att->attnum, columns))\n continue;\n\n+ if (att->attgenerated && !publish_generated_column)\n+ continue;\n+\n\nShouldn't these code fragments (2x in this function) look the same as\nin logicalrep_write_tuple? See the above review comments.\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n5. maybe_send_schema\n\n TransactionId topxid = InvalidTransactionId;\n+ bool publish_generated_column = data->publish_generated_column;\n\nI'm not convinced this saves any code, and anyway, it is not\nconsistent with other fields in this function that are not extracted\nto another variable (e.g. data->streaming).\n\n~~~\n\n6. 
pgoutput_change\n-\n+ bool publish_generated_column = data->publish_generated_column;\n+\n\nI'm not convinced this saves any code, and anyway, it is not\nconsistent with other fields in this function that are not extracted\nto another variable (e.g. data->binary).\n\n======\n[1] My v1 review -\nhttps://www.postgresql.org/message-id/CAHut+PsuJfcaeg6zst=6PE5uyJv_UxVRHU3ck7W2aHb1uQYKng@mail.gmail.com\n[2] My v2 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPv4RpOsUgkEaXDX%3DW2rhHAsJLiMWdUrUGZOcoRHuWj5%2BQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 24 May 2024 12:55:34 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, May 16, 2024 at 11:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for the patch v1-0001.\n>\n> ======\n> GENERAL\n>\n> G.1. Use consistent names\n>\n> It seems to add unnecessary complications by having different names\n> for all the new options, fields and API parameters.\n>\n> e.g. sometimes 'include_generated_columns'\n> e.g. sometimes 'publish_generated_columns'\n>\n> Won't it be better to just use identical names everywhere for\n> everything? I don't mind which one you choose; I just felt you only\n> need one name, not two. This comment overrides everything else in this\n> post so whatever name you choose, make adjustments for all my other\n> review comments as necessary.\n\nI have updated the name to 'include_generated_columns' everywhere in the Patch.\n\n> ======\n>\n> G.2. Is it possible to just use the existing bms?\n>\n> A very large part of this patch is adding more API parameters to\n> delegate the 'publish_generated_columns' flag value down to when it is\n> finally checked and used. e.g.\n>\n> The functions:\n> - logicalrep_write_insert(), logicalrep_write_update(),\n> logicalrep_write_delete()\n> ... 
are delegating the new parameter 'publish_generated_column' down to:\n> - logicalrep_write_tuple\n>\n> The functions:\n> - logicalrep_write_rel()\n> ... are delegating the new parameter 'publish_generated_column' down to:\n> - logicalrep_write_attrs\n>\n> AFAICT in all these places the API is already passing a \"Bitmapset\n> *columns\". I was wondering if it might be possible to modify the\n> \"Bitmapset *columns\" BEFORE any of those functions get called so that\n> the \"columns\" BMS either does or doesn't include generated cols (as\n> appropriate according to the option).\n>\n> Well, it might not be so simple because there are some NULL BMS\n> considerations also, but I think it would be worth investigating at\n> least, because if indeed you can find some common place (somewhere\n> like pgoutput_change()?) where the columns BMS can be filtered to\n> remove bits for generated cols then it could mean none of those other\n> patch API changes are needed at all -- then the patch would only be\n> 1/2 the size.\n\nI will analyse and reply to this in the next version.\n\n> ======\n> Commit message\n>\n> 1.\n> Now if include_generated_columns option is specified, the generated\n> column information and generated column data also will be sent.\n>\n> Usage from pgoutput plugin:\n> SELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n> 'proto_version', '1', 'publication_names', 'pub1',\n> 'include_generated_columns', 'true');\n>\n> Usage from test_decoding plugin:\n> SELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include_generated_columns', '1');\n>\n> ~\n>\n> I think there needs to be more background information given here. This\n> commit message doesn't seem to describe anything about what is the\n> problem and how this patch fixes it. 
It just jumps straight into\n> giving usages of a 'include_generated_columns' option.\n>\n> It also doesn't say that this is an option that was newly *introduced*\n> by the patch -- it refers to it as though the reader should already\n> know about it.\n>\n> Furthermore, your hacker's post says \"Currently it is not supported as\n> a subscription option because table sync for the generated column is\n> not possible as copy command does not support getting data for the\n> generated column. If this feature is required we can remove this\n> limitation from the copy command and then add it as a subscription\n> option later.\" IMO that all seems like the kind of information that\n> ought to also be mentioned in this commit message.\n\nI have updated the Commit message mentioning the suggested changes.\n\n> ======\n> contrib/test_decoding/sql/ddl.sql\n>\n> 2.\n> +-- check include_generated_columns option with generated column\n> +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\n> AS (a * 2) STORED);\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include_generated_columns', '1');\n> +DROP TABLE gencoltable;\n> +\n>\n> 2a.\n> Perhaps you should include both option values to demonstrate the\n> difference in behaviour:\n>\n> 'include_generated_columns', '0'\n> 'include_generated_columns', '1'\n\nAdded the other option values to demonstrate the difference in behaviour:\n\n> 2b.\n> I think you maybe need to include more some test combinations where\n> there is and isn't a COLUMN LIST, because I am not 100% sure I\n> understand the current logic/expectations for all combinations.\n>\n> e.g. When the generated column is in a column list but\n> 'publish_generated_columns' is false then what should happen? 
etc.\n> Also if there are any special rules then those should be mentioned in\n> the commit message.\n\nTest case is added and the same is mentioned in the documentation.\n\n> ======\n> src/backend/replication/logical/proto.c\n>\n> 3.\n> For all the API changes the new parameter name should be plural.\n>\n> /publish_generated_column/publish_generated_columns/\n\nUpdated the name to 'include_generated_columns'\n\n> 4. logical_rep_write_tuple:\n>\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> - if (!column_in_column_list(att->attnum, columns))\n> + if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n> + continue;\n> +\n> + if (att->attgenerated && !publish_generated_column)\n> continue;\n> That code seems confusing. Shouldn't the logic be exactly as also in\n> logicalrep_write_attrs()?\n>\n> e.g. Shouldn't they both look like this:\n>\n> if (att->attisdropped)\n> continue;\n>\n> if (att->attgenerated && !publish_generated_column)\n> continue;\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n\nFixed.\n\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 5.\n> static void send_relation_and_attrs(Relation relation, TransactionId xid,\n> LogicalDecodingContext *ctx,\n> - Bitmapset *columns);\n> + Bitmapset *columns,\n> + bool publish_generated_column);\n>\n> Use plural. /publish_generated_column/publish_generated_columns/\n\nUpdated the name to 'include_generated_columns'\n\n> 6. 
parse_output_parameters\n>\n> bool origin_option_given = false;\n> + bool generate_column_option_given = false;\n>\n> data->binary = false;\n> data->streaming = LOGICALREP_STREAM_OFF;\n> data->messages = false;\n> data->two_phase = false;\n> + data->publish_generated_column = false;\n>\n> I think the 1st var should be 'include_generated_columns_option_given'\n> for consistency with the name of the actual option that was given.\n\nUpdated the name to 'include_generated_columns_option_given'\n\n> ======\n> src/include/replication/logicalproto.h\n>\n> 7.\n> (Same as a previous review comment)\n>\n> For all the API changes the new parameter name should be plural.\n>\n> /publish_generated_column/publish_generated_columns/\n\nUpdated the name to 'include_generated_columns'\n\n> ======\n> src/include/replication/pgoutput.h\n>\n> 8.\n> bool publish_no_origin;\n> + bool publish_generated_column;\n> } PGOutputData;\n>\n> /publish_generated_column/publish_generated_columns/\n\nUpdated the name to 'include_generated_columns'\n\nThe attached Patch contains the suggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Mon, 3 Jun 2024 13:03:00 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, May 21, 2024 at 12:23 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> AFAICT this v2-0001 patch differences from v1 is mostly about adding\n> the new CREATE SUBSCRIPTION option. Specifically, I don't think it is\n> addressing any of my previous review comments for patch v1. [1]. So\n> these comments below are limited only to the new option code; All my\n> previous review comments probably still apply.\n>\n> ======\n> Commit message\n>\n> 1. (General)\n> The commit message is seriously lacking background explanation to describe:\n> - What is the current behaviour w.r.t. 
generated columns\n> - What is the problem with the current behaviour?\n> - What exactly is this patch doing to address that problem?\n\nAdded the information related to this inside the Patch.\n\n> 2.\n> New option generated_option is added in create subscription. Now if this\n> option is specified as 'true' during create subscription, generated\n> columns in the tables, present in publisher (to which this subscription is\n> subscribed) can also be replicated.\n>\n> -\n>\n> 2A.\n> \"generated_option\" is not the name of the new option.\n>\n> ~\n>\n> 2B.\n> \"create subscription\" stmt should be UPPERCASE; will also be more\n> readable if the option name is quoted.\n>\n> ~\n>\n> 2C.\n> Needs more information like under what condition is this option ignored etc.\n\nFixed.\n\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 3.\n> + <varlistentry id=\"sql-createsubscription-params-with-generated-column\">\n> + <term><literal>generated-column</literal> (<type>boolean</type>)</term>\n> + <listitem>\n> + <para>\n> + Specifies whether the generated columns present in the tables\n> + associated with the subscription should be replicated. The default is\n> + <literal>false</literal>.\n> + </para>\n> +\n> + <para>\n> + This parameter can only be set true if copy_data is set to false.\n> + This option works fine when a generated column (in\n> publisher) is replicated to a\n> + non-generated column (in subscriber). 
Else if it is\n> replicated to a generated\n> + column, it will ignore the replicated data and fill the\n> column with computed or\n> + default data.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> 3A.\n> There is a typo in the name \"generated-column\" because we should use\n> underscores (not hyphens) for the option names.\n>\n> ~\n>\n> 3B.\n> This it is not a good option name because there is no verb so it\n> doesn't mean anything to set it true/false -- actually there IS a verb\n> \"generate\" but we are not saying generate = true/false, so this name\n> is also quite confusing.\n>\n> I think \"include_generated_columns\" would be much better, but if\n> others think that name is too long then maybe \"include_generated_cols\"\n> or \"include_gen_cols\" or similar. Of course, whatever if the final\n> decision should be propagated same thru all the code comments, params,\n> fields, etc.\n>\n> ~\n>\n> 3C.\n> copy_data and false should be marked up as <literal> fonts in the sgml\n>\n> ~\n>\n> 3D.\n>\n> Suggest re-word this part. Don't need to explain when it \"works fine\".\n>\n> BEFORE\n> This option works fine when a generated column (in publisher) is\n> replicated to a non-generated column (in subscriber). Else if it is\n> replicated to a generated column, it will ignore the replicated data\n> and fill the column with computed or default data.\n>\n> SUGGESTION\n> If the subscriber-side column is also a generated column then this\n> option has no effect; the replicated data will be ignored and the\n> subscriber column will be filled as normal with the subscriber-side\n> computed or default data.\n\nFixed.\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 4. AlterSubscription\n> SUBOPT_STREAMING | SUBOPT_DISABLE_ON_ERR |\n> SUBOPT_PASSWORD_REQUIRED |\n> SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER |\n> - SUBOPT_ORIGIN);\n> + SUBOPT_ORIGIN | SUBOPT_GENERATED_COLUMN);\n>\n> Hmm. Is this correct? 
If ALTER is not allowed (later in this patch\n> there is a message \"toggling generated_column option is not allowed.\"\n> then why are we even saying that SUBOPT_GENERATED_COLUMN is a\n> support_opt for ALTER?\n\nFixed.\n\n> 5.\n> + if (IsSet(opts.specified_opts, SUBOPT_GENERATED_COLUMN))\n> + {\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"toggling generated_column option is not allowed.\")));\n> + }\n>\n> 5A.\n> I suspect this is not even needed if the 'supported_opt' is fixed per\n> the previous comment.\n>\n> ~\n>\n> 5B.\n> But if this message is still needed then I think it should say \"ALTER\n> is not allowed\" (not \"toggling is not allowed\") and also the option\n> name should be quoted as per the new guidelines for error messages.\n>\n> ======\n> src/backend/replication/logical/proto.c\n\nFixed.\n\n> 6. logicalrep_write_tuple\n>\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> + if (att->attgenerated && !publish_generated_column)\n> +\n>\n> Calling column_in_column_list() might be a more expensive operation\n> than checking just generated columns flag so maybe reverse the order\n> and check the generated columns first for a tiny performance gain.\n\nFixed.\n\n> 7.\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> + if (att->attgenerated && !publish_generated_column)\n> + continue;\n>\n> ditto #6\n\nFixed.\n\n> 8. 
logicalrep_write_attrs\n>\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> + if (att->attgenerated && !publish_generated_column)\n> + continue;\n> +\n>\n> ditto #6\n\nFixed.\n\n> 9.\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> + if (att->attgenerated && !publish_generated_column)\n> + continue;\n>\n> ditto #6\n>\n> ======\n> src/include/catalog/pg_subscription.h\n\nFixed.\n\n> 10. CATALOG\n>\n> + bool subgeneratedcolumn; /* True if generated colums must be published */\n>\n> /colums/columns/\n>\n> ======\n> src/test/regress/sql/publication.sql\n\nFixed.\n\n> 11.\n> --- error: generated column \"d\" can't be in list\n> +-- ok\n>\n>\n> Maybe change \"ok\" to say like \"ok: generated cols can be in the list too\"\n\nFixed.\n\n> 12.\n> GENERAL - Missing CREATE SUBSCRIPTION test?\n> GENERAL - Missing ALTER SUBSCRIPTION test?\n>\n> How come this patch adds a new CREATE SUBSCRIPTION option but does not\n> seem to include any test case for that option in either the CREATE\n> SUBSCRIPTION or ALTER SUBSCRIPTION regression tests?\n\nAdded the test cases for the same.\n\nPatch v4-0001 contains all the changes required. See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJcOsk%3Dy%2BvJ3y%2BvXhzR9ZUzUEURvS_90hQW3MNfJ5di7A%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:00:38 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, May 23, 2024 at 10:56 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Shubham,\n>\n> Thanks for updating the patch! I checked your patches briefly. Here are my comments.\n>\n> 01. 
API\n>\n> Since the option for test_decoding is enabled by default, I think it should be renamed.\n> E.g., \"skip-generated-columns\" or something.\n\nLet's keep the same name 'include_generated_columns' for both the cases.\n\n> 02. ddl.sql\n>\n> ```\n> +-- check include-generated-columns option with generated column\n> +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'include-generated-columns', '1');\n> + data\n> +-------------------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n> + table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> + COMMIT\n> +(5 rows)\n> ```\n>\n> We should test non-default case, which the generated columns are not generated.\n\nAdded the non-default case, which the generated columns are not generated.\n\n> 03. ddl.sql\n>\n> Not sure new tests are in the correct place. Do we have to add new file and move tests to it?\n> Thought?\n\nAdded the new tests in the 'decoding_into_rel.out' file.\n\n> 04. protocol.sgml\n>\n> Please keep the format of the sgml file.\n\nFixed.\n\n> 05. protocol.sgml\n>\n> The option is implemented as the streaming option of pgoutput plugin, so they should be\n> located under \"Logical Streaming Replication Parameters\" section.\n\nFixed.\n\n> 05. AlterSubscription\n>\n> ```\n> + if (IsSet(opts.specified_opts, SUBOPT_GENERATED_COLUMN))\n> + {\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"toggling generated_column option is not allowed.\")));\n> + }\n> ```\n>\n> If you don't want to support the option, you can remove SUBOPT_GENERATED_COLUMN\n> macro from the function. 
But can you clarify the reason why you do not want?\n\nFixed.\n\n> 07. logicalrep_write_tuple\n>\n> ```\n> - if (!column_in_column_list(att->attnum, columns))\n> + if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n> + continue;\n> +\n> + if (att->attgenerated && !publish_generated_column)\n> continue;\n> ```\n>\n> I think changes in v2 was reverted or wrongly merged.\n\nFixed.\n\n> 08. test code\n>\n> Can you add tests that generated columns are replicated by the logical replication?\n\nAdded the test cases.\n\nPatch v4-0001 contains all the changes required. See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJcOsk%3Dy%2BvJ3y%2BvXhzR9ZUzUEURvS_90hQW3MNfJ5di7A%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:25:38 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, May 23, 2024 at 5:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 23 May 2024 at 09:19, Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > > Dear Shubham,\n> > >\n> > > Thanks for creating a patch! Here are high-level comments.\n> >\n> > > 1.\n> > > Please document the feature. If it is hard to describe, we should change the API.\n> >\n> > I have added the feature in the document.\n> >\n> > > 4.\n> > > Regarding the test_decoding plugin, it has already been able to decode the\n> > > generated columns. So... as the first place, is the proposed option really needed\n> > > for the plugin? Why do you include it?\n> > > If you anyway want to add the option, the default value should be on - which keeps\n> > > current behavior.\n> >\n> > I have made the generated column options as true for test_decoding\n> > plugin so by default we will send generated column data.\n> >\n> > > 5.\n> > > Assuming that the feature become usable used for logical replicaiton. 
Not sure,\n> > > should we change the protocol version at that time? Nodes prior than PG17 may\n> > > not want receive values for generated columns. Can we control only by the option?\n> >\n> > I verified the backward compatibility test by using the generated\n> > column option and it worked fine. I think there is no need to make any\n> > further changes.\n> >\n> > > 7.\n> > >\n> > > Some functions refer data->publish_generated_column many times. Can we store\n> > > the value to a variable?\n> > >\n> > > Below comments are for test_decoding part, but they may be not needed.\n> > >\n> > > =====\n> > >\n> > > a. pg_decode_startup()\n> > >\n> > > ```\n> > > + else if (strcmp(elem->defname, \"include_generated_columns\") == 0)\n> > > ```\n> > >\n> > > Other options for test_decoding do not have underscore. It should be\n> > > \"include-generated-columns\".\n> > >\n> > > b. pg_decode_change()\n> > >\n> > > data->include_generated_columns is referred four times in the function.\n> > > Can you store the value to a varibable?\n> > >\n> > >\n> > > c. 
pg_decode_change()\n> > >\n> > > ```\n> > > - true);\n> > > + true, data->include_generated_columns );\n> > > ```\n> > >\n> > > Please remove the blank.\n> >\n> > Fixed.\n> > The attached v3 Patch has the changes for the same.\n>\n> Few comments:\n> 1) Since this is removed, tupdesc variable is not required anymore:\n> +++ b/src/backend/catalog/pg_publication.c\n> @@ -534,12 +534,6 @@ publication_translate_columns(Relation targetrel,\n> List *columns,\n> errmsg(\"cannot use system\n> column \\\"%s\\\" in publication column list\",\n> colname));\n>\n> - if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> - ereport(ERROR,\n> -\n> errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> - errmsg(\"cannot use generated\n> column \\\"%s\\\" in publication column list\",\n> - colname));\n\nFixed.\n\n> 2) In test_decoding include_generated_columns option is used:\n> + else if (strcmp(elem->defname,\n> \"include_generated_columns\") == 0)\n> + {\n> + if (elem->arg == NULL)\n> + continue;\n> + else if (!parse_bool(strVal(elem->arg),\n> &data->include_generated_columns))\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"could not\n> parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n> +\n> strVal(elem->arg), elem->defname)));\n> + }\n>\n> In subscription we have used generated_column, we can try to use the\n> same option in both places:\n> + else if (IsSet(supported_opts, SUBOPT_GENERATED_COLUMN) &&\n> + strcmp(defel->defname,\n> \"generated_column\") == 0)\n> + {\n> + if (IsSet(opts->specified_opts,\n> SUBOPT_GENERATED_COLUMN))\n> + errorConflictingDefElem(defel, pstate);\n> +\n> + opts->specified_opts |= SUBOPT_GENERATED_COLUMN;\n> + opts->generated_column = defGetBoolean(defel);\n> + }\n\nWill update the name to 'include_generated_columns' in the next\nversion of the Patch.\n\n> 3) Tab completion can be added for create subscription to include\n> generated_column option\n\nFixed.\n\n> 4) There are few whitespace issues while applying the 
patch, check for\n> git diff --check\n\nFixed.\n\n> 5) Add few tests for the new option added\n\nAdded new test cases.\n\nPatch v4-0001 contains all the changes required. See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJcOsk%3Dy%2BvJ3y%2BvXhzR9ZUzUEURvS_90hQW3MNfJ5di7A%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:31:46 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, May 24, 2024 at 8:26 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> Here are some review comments for the patch v3-0001.\n>\n> I don't think v3 addressed any of my previous review comments for\n> patches v1 and v2. [1][2]\n>\n> So the comments below are limited only to the new code (i.e. the v3\n> versus v2 differences). Meanwhile, all my previous review comments may\n> still apply.\n\nPatch v4-0001 addresses the previous review comments for patches v1\nand v2. [1][2]\n\n> ======\n> GENERAL\n>\n> The patch applied gives whitespace warnings:\n>\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n> ../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch\n> ../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:150:\n> trailing whitespace.\n>\n> ../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:202:\n> trailing whitespace.\n>\n> ../patches_misc/v3-0001-Support-generated-column-capturing-generated-colu.patch:730:\n> trailing whitespace.\n> warning: 3 lines add whitespace errors.\n\nFixed.\n\n> ======\n> contrib/test_decoding/test_decoding.c\n>\n> 1. 
pg_decode_change\n>\n> MemoryContext old;\n> + bool include_generated_columns;\n> +\n>\n> I'm not really convinced this variable saves any code.\n\nFixed.\n\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 2.\n> + <varlistentry>\n> + <term><replaceable\n> class=\"parameter\">include-generated-columns</replaceable></term>\n> + <listitem>\n> + <para>\n> + The include-generated-columns option controls whether\n> generated columns should be included in the string representation of\n> tuples during logical decoding in PostgreSQL. This allows users to\n> customize the output format based on whether they want to include\n> these columns or not.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> 2a.\n> Something is not correct when this name has hyphens and all the nearby\n> parameter names do not. Shouldn't it be all uppercase like the other\n> boolean parameter?\n>\n> ~\n>\n> 2b.\n> Text in the SGML file should be wrapped properly.\n>\n> ~\n>\n> 2c.\n> IMO the comment can be more terse and it also needs to specify that it\n> is a boolean type, and what is the default value if not passed.\n>\n> SUGGESTION\n>\n> INCLUDE_GENERATED_COLUMNS [ boolean ]\n>\n> If true, then generated columns should be included in the string\n> representation of tuples during logical decoding in PostgreSQL. The\n> default is false.\n\nFixed.\n\n> ======\n> src/backend/replication/logical/proto.c\n>\n> 3. 
logicalrep_write_tuple\n>\n> - if (!column_in_column_list(att->attnum, columns))\n> + if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n> + continue;\n> +\n> + if (att->attgenerated && !publish_generated_column)\n> continue;\n>\n> 3a.\n> This code seems overcomplicated checking the same flag multiple times.\n>\n> SUGGESTION\n> if (att->attgenerated)\n> {\n> if (!publish_generated_column)\n> continue;\n> }\n> else\n> {\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n> }\n>\n> ~\n>\n> 3b.\n> The same logic occurs several times in logicalrep_write_tuple\n\nFixed.\n\n> 4. logicalrep_write_attrs\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> + if (att->attgenerated && !publish_generated_column)\n> + continue;\n> +\n>\n> Shouldn't these code fragments (2x in this function) look the same as\n> in logicalrep_write_tuple? See the above review comments.\n\nFixed.\n\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 5. maybe_send_schema\n>\n> TransactionId topxid = InvalidTransactionId;\n> + bool publish_generated_column = data->publish_generated_column;\n>\n> I'm not convinced this saves any code, and anyway, it is not\n> consistent with other fields in this function that are not extracted\n> to another variable (e.g. data->streaming).\n\nFixed.\n\n> 6. pgoutput_change\n> -\n> + bool publish_generated_column = data->publish_generated_column;\n> +\n>\n> I'm not convinced this saves any code, and anyway, it is not\n> consistent with other fields in this function that are not extracted\n> to another variable (e.g. data->binary).\n\nFixed.\n\n> ======\n> [1] My v1 review -\n> https://www.postgresql.org/message-id/CAHut+PsuJfcaeg6zst=6PE5uyJv_UxVRHU3ck7W2aHb1uQYKng@mail.gmail.com\n> [2] My v2 review -\n> https://www.postgresql.org/message-id/CAHut%2BPv4RpOsUgkEaXDX%3DW2rhHAsJLiMWdUrUGZOcoRHuWj5%2BQ%40mail.gmail.com\n\nPatch v4-0001 contains all the changes required. 
See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJcOsk%3Dy%2BvJ3y%2BvXhzR9ZUzUEURvS_90hQW3MNfJ5di7A%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Mon, 3 Jun 2024 15:54:21 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, 3 Jun 2024 at 13:03, Shubham Khanna <khannashubham1197@gmail.com> wrote:\n>\n> On Thu, May 16, 2024 at 11:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are some review comments for the patch v1-0001.\n> >\n> > ======\n> > GENERAL\n> >\n> > G.1. Use consistent names\n> >\n> > It seems to add unnecessary complications by having different names\n> > for all the new options, fields and API parameters.\n> >\n> > e.g. sometimes 'include_generated_columns'\n> > e.g. sometimes 'publish_generated_columns'\n> >\n> > Won't it be better to just use identical names everywhere for\n> > everything? I don't mind which one you choose; I just felt you only\n> > need one name, not two. This comment overrides everything else in this\n> > post so whatever name you choose, make adjustments for all my other\n> > review comments as necessary.\n>\n> I have updated the name to 'include_generated_columns' everywhere in the Patch.\n>\n> > ======\n> >\n> > G.2. Is it possible to just use the existing bms?\n> >\n> > A very large part of this patch is adding more API parameters to\n> > delegate the 'publish_generated_columns' flag value down to when it is\n> > finally checked and used. e.g.\n> >\n> > The functions:\n> > - logicalrep_write_insert(), logicalrep_write_update(),\n> > logicalrep_write_delete()\n> > ... are delegating the new parameter 'publish_generated_column' down to:\n> > - logicalrep_write_tuple\n> >\n> > The functions:\n> > - logicalrep_write_rel()\n> > ... 
are delegating the new parameter 'publish_generated_column' down to:\n> > - logicalrep_write_attrs\n> >\n> > AFAICT in all these places the API is already passing a \"Bitmapset\n> > *columns\". I was wondering if it might be possible to modify the\n> > \"Bitmapset *columns\" BEFORE any of those functions get called so that\n> > the \"columns\" BMS either does or doesn't include generated cols (as\n> > appropriate according to the option).\n> >\n> > Well, it might not be so simple because there are some NULL BMS\n> > considerations also, but I think it would be worth investigating at\n> > least, because if indeed you can find some common place (somewhere\n> > like pgoutput_change()?) where the columns BMS can be filtered to\n> > remove bits for generated cols then it could mean none of those other\n> > patch API changes are needed at all -- then the patch would only be\n> > 1/2 the size.\n>\n> I will analyse and reply to this in the next version.\n>\n> > ======\n> > Commit message\n> >\n> > 1.\n> > Now if include_generated_columns option is specified, the generated\n> > column information and generated column data also will be sent.\n> >\n> > Usage from pgoutput plugin:\n> > SELECT * FROM pg_logical_slot_peek_binary_changes('slot1', NULL, NULL,\n> > 'proto_version', '1', 'publication_names', 'pub1',\n> > 'include_generated_columns', 'true');\n> >\n> > Usage from test_decoding plugin:\n> > SELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n> > 'include-xids', '0', 'skip-empty-xacts', '1',\n> > 'include_generated_columns', '1');\n> >\n> > ~\n> >\n> > I think there needs to be more background information given here. This\n> > commit message doesn't seem to describe anything about what is the\n> > problem and how this patch fixes it. 
It just jumps straight into\n> > giving usages of a 'include_generated_columns' option.\n> >\n> > It also doesn't say that this is an option that was newly *introduced*\n> > by the patch -- it refers to it as though the reader should already\n> > know about it.\n> >\n> > Furthermore, your hacker's post says \"Currently it is not supported as\n> > a subscription option because table sync for the generated column is\n> > not possible as copy command does not support getting data for the\n> > generated column. If this feature is required we can remove this\n> > limitation from the copy command and then add it as a subscription\n> > option later.\" IMO that all seems like the kind of information that\n> > ought to also be mentioned in this commit message.\n>\n> I have updated the Commit message mentioning the suggested changes.\n>\n> > ======\n> > contrib/test_decoding/sql/ddl.sql\n> >\n> > 2.\n> > +-- check include_generated_columns option with generated column\n> > +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\n> > AS (a * 2) STORED);\n> > +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> > +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> > NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> > 'include_generated_columns', '1');\n> > +DROP TABLE gencoltable;\n> > +\n> >\n> > 2a.\n> > Perhaps you should include both option values to demonstrate the\n> > difference in behaviour:\n> >\n> > 'include_generated_columns', '0'\n> > 'include_generated_columns', '1'\n>\n> Added the other option values to demonstrate the difference in behaviour:\n>\n> > 2b.\n> > I think you maybe need to include more some test combinations where\n> > there is and isn't a COLUMN LIST, because I am not 100% sure I\n> > understand the current logic/expectations for all combinations.\n> >\n> > e.g. When the generated column is in a column list but\n> > 'publish_generated_columns' is false then what should happen? 
etc.\n> > Also if there are any special rules then those should be mentioned in\n> > the commit message.\n>\n> Test case is added and the same is mentioned in the documentation.\n>\n> > ======\n> > src/backend/replication/logical/proto.c\n> >\n> > 3.\n> > For all the API changes the new parameter name should be plural.\n> >\n> > /publish_generated_column/publish_generated_columns/\n>\n> Updated the name to 'include_generated_columns'\n>\n> > 4. logical_rep_write_tuple:\n> >\n> > - if (att->attisdropped || att->attgenerated)\n> > + if (att->attisdropped)\n> > continue;\n> >\n> > - if (!column_in_column_list(att->attnum, columns))\n> > + if (!column_in_column_list(att->attnum, columns) && !att->attgenerated)\n> > + continue;\n> > +\n> > + if (att->attgenerated && !publish_generated_column)\n> > continue;\n> > That code seems confusing. Shouldn't the logic be exactly as also in\n> > logicalrep_write_attrs()?\n> >\n> > e.g. Shouldn't they both look like this:\n> >\n> > if (att->attisdropped)\n> > continue;\n> >\n> > if (att->attgenerated && !publish_generated_column)\n> > continue;\n> >\n> > if (!column_in_column_list(att->attnum, columns))\n> > continue;\n>\n> Fixed.\n>\n> > ======\n> > src/backend/replication/pgoutput/pgoutput.c\n> >\n> > 5.\n> > static void send_relation_and_attrs(Relation relation, TransactionId xid,\n> > LogicalDecodingContext *ctx,\n> > - Bitmapset *columns);\n> > + Bitmapset *columns,\n> > + bool publish_generated_column);\n> >\n> > Use plural. /publish_generated_column/publish_generated_columns/\n>\n> Updated the name to 'include_generated_columns'\n>\n> > 6. 
parse_output_parameters\n> >\n> > bool origin_option_given = false;\n> > + bool generate_column_option_given = false;\n> >\n> > data->binary = false;\n> > data->streaming = LOGICALREP_STREAM_OFF;\n> > data->messages = false;\n> > data->two_phase = false;\n> > + data->publish_generated_column = false;\n> >\n> > I think the 1st var should be 'include_generated_columns_option_given'\n> > for consistency with the name of the actual option that was given.\n>\n> Updated the name to 'include_generated_columns_option_given'\n>\n> > ======\n> > src/include/replication/logicalproto.h\n> >\n> > 7.\n> > (Same as a previous review comment)\n> >\n> > For all the API changes the new parameter name should be plural.\n> >\n> > /publish_generated_column/publish_generated_columns/\n>\n> Updated the name to 'include_generated_columns'\n>\n> > ======\n> > src/include/replication/pgoutput.h\n> >\n> > 8.\n> > bool publish_no_origin;\n> > + bool publish_generated_column;\n> > } PGOutputData;\n> >\n> > /publish_generated_column/publish_generated_columns/\n>\n> Updated the name to 'include_generated_columns'\n>\n> The attached Patch contains the suggested changes.\n\nThanks for the updated patch, few comments:\n1) The option name seems wrong here:\nIn one place include_generated_column is specified and other place\ninclude_generated_columns is specified:\n\n+ else if (IsSet(supported_opts,\nSUBOPT_INCLUDE_GENERATED_COLUMN) &&\n+ strcmp(defel->defname,\n\"include_generated_column\") == 0)\n+ {\n+ if (IsSet(opts->specified_opts,\nSUBOPT_INCLUDE_GENERATED_COLUMN))\n+ errorConflictingDefElem(defel, pstate);\n+\n+ opts->specified_opts |= SUBOPT_INCLUDE_GENERATED_COLUMN;\n+ opts->include_generated_column = defGetBoolean(defel);\n+ }\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex d453e224d9..e8ff752fd9 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -3365,7 +3365,7 @@ psql_completion(const char *text, int start, int end)\n 
COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n \"disable_on_error\",\n\"enabled\", \"failover\", \"origin\",\n \"password_required\",\n\"run_as_owner\", \"slot_name\",\n- \"streaming\",\n\"synchronous_commit\", \"two_phase\");\n+ \"streaming\",\n\"synchronous_commit\", \"two_phase\",\"include_generated_columns\");\n\n2) This small data table need not have a primary key column as it will\ncreate an index and insertion will happen in the index too.\n+-- check include-generated-columns option with generated column\n+CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\nAS (a * 2) STORED);\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\n\n3) Please add a test case for this:\n+ set to <literal>false</literal>. If the subscriber-side\ncolumn is also a\n+ generated column then this option has no effect; the\nreplicated data will\n+ be ignored and the subscriber column will be filled as\nnormal with the\n+ subscriber-side computed or default data.\n\n4) You can use a new style of ereport to remove the brackets around errcode\n4.a)\n+ else if (!parse_bool(strVal(elem->arg),\n&data->include_generated_columns))\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"could not\nparse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n+\nstrVal(elem->arg), elem->defname)));\n\n4.b) similarly here too:\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form\n\"option = value\" */\n+ errmsg(\"%s and %s are mutually\nexclusive options\",\n+ \"copy_data = true\",\n\"include_generated_column = true\")));\n\n4.c) similarly here too:\n+ if (include_generated_columns_option_given)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"conflicting\nor redundant options\")));\n\n5) These variable names can be 
changed to keep it smaller, something\nlike gencol or generatedcol or gencolumn, etc\n+++ b/src/include/catalog/pg_subscription.h\n@@ -98,6 +98,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n * slots) in the upstream database are enabled\n * to be synchronized to the standbys. */\n\n+ bool subincludegeneratedcolumn; /* True if generated columns must be\npublished */\n+\n #ifdef CATALOG_VARLEN /* variable-length fields start here */\n /* Connection string to the publisher */\n text subconninfo BKI_FORCE_NOT_NULL;\n@@ -157,6 +159,7 @@ typedef struct Subscription\n List *publications; /* List of publication names to subscribe to */\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ bool includegeneratedcolumn; /* publish generated column data */\n } Subscription;\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 3 Jun 2024 16:13:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": ">\n> The attached Patch contains the suggested changes.\n>\n\nHi,\n\nCurrently, COPY command does not work for generated columns and\ntherefore, COPY of generated column is not supported during tablesync\nprocess. 
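For reference, the current behaviour can be reproduced with a small example (the table name is illustrative and the error text is approximate, quoted from memory):

```sql
CREATE TABLE gtest (a int, b int GENERATED ALWAYS AS (a * 2) STORED);

-- COPY TO without a column list skips generated columns, per the
-- documented rule ("all columns of the table except generated columns
-- will be copied").
COPY gtest TO STDOUT;

-- COPY FROM rejects a generated column named in the column list,
-- roughly: ERROR: column "b" is a generated column
COPY gtest (a, b) FROM STDIN;
```

Hence the tablesync restriction discussed here.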
So, in patch v4-0001 we added a check to allow replication of\nthe generated column only if 'copy_data = false'.\n\nI am attaching patches to resolve the above issues.\n\nv5-0001: not changed\nv5-0002: Support COPY of generated column\nv5-0003: Support COPY of generated column during tablesync process\n\nThought?\n\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 3 Jun 2024 17:22:35 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nHere are some review comments for patch v5-0001.\n\n======\nGENERAL G.1\n\nThe patch changes HEAD behaviour for PUBLICATION col-lists right? e.g.\nmaybe before they were always ignored, but now they are not?\n\nOTOH, when 'include_generated_columns' is false then the PUBLICATION\ncol-list will ignore any generated cols even when they are present in\na PUBLICATION col-list, right?\n\nThese kinds of points should be noted in the commit message and in the\n(col-list?) documentation.\n\n======\nCommit message\n\nGeneral 1a.\nIMO the commit message needs some background to say something like:\n\"Currently generated column values are not replicated because it is\nassumed that the corresponding subscriber-side table will generate its\nown values for those columns.\"\n\n~\n\nGeneral 1b.\nSomewhere in this commit message, you need to give all the other\nspecial rules --- e.g. the docs says \"If the subscriber-side column is\nalso a generated column then this option has no effect\"\n\n~~~\n\n2.\nThis commit enables support for the 'include_generated_columns' option\nin logical replication, allowing the transmission of generated column\ninformation and data alongside regular table changes. 
This option is\nparticularly useful for scenarios where applications require access to\ngenerated column values for downstream processing or synchronization.\n\n~\n\nI don't think the sentence \"This option is particularly useful...\" is\nhelpful. It seems like just saying \"This commit supports option XXX.\nThis is particularly useful if you want XXX\".\n\n~~~\n\n3.\nCREATE SUBSCRIPTION test1 connection 'dbname=postgres host=localhost port=9999\n'publication pub1;\n\n~\n\nWhat is this CREATE SUBSCRIPTION for? Shouldn't it have an example of\nthe new parameter being used in it?\n\n~~~\n\n4.\nCurrently copy_data option with include_generated_columns option is\nnot supported. A future patch will remove this limitation.\n\n~\n\nSuggest to single-quote those parameter names for better readability.\n\n~~~\n\n5.\nThis commit aims to enhance the flexibility and utility of logical\nreplication by allowing users to include generated column information\nin replication streams, paving the way for more robust data\nsynchronization and processing workflows.\n\n~\n\nIMO this paragraph can be omitted.\n\n======\n.../test_decoding/sql/decoding_into_rel.sql\n\n6.\n+-- check include-generated-columns option with generated column\n+CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\nAS (a * 2) STORED);\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\n+INSERT INTO gencoltable (a) VALUES (4), (5), (6);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\n+DROP TABLE gencoltable;\n+\n\n6a.\nI felt some additional explicit comments might help the readabilty of\nthe output file.\n\ne.g.1\n-- When 'include-generated=columns' = '1' the generated column 'b'\nvalues will be replicated\nSELECT data FROM 
pg_logical_slot_get_changes...\n\ne.g.2\n-- When 'include-generated=columns' = '0' the generated column 'b'\nvalues will not be replicated\nSELECT data FROM pg_logical_slot_get_changes...\n\n~~\n\n6b.\nSuggest adding one more test case (where 'include-generated=columns'\nis not set) to confirm/demonstrate the default behaviour for\nreplicated generated cols.\n\n======\ndoc/src/sgml/protocol.sgml\n\n7.\n+ <varlistentry>\n+ <term><replaceable\nclass=\"parameter\">include-generated-columns</replaceable></term>\n+ <listitem>\n+ <para>\n+ Boolean option to enable generated columns.\n+ The include-generated-columns option controls whether generated\n+ columns should be included in the string representation of tuples\n+ during logical decoding in PostgreSQL. This allows users to\n+ customize the output format based on whether they want to include\n+ these columns or not. The default is false.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\n7a.\nIt doesn't render properly. e.g. Should not be bold italic (probably\nthe class is wrong?), because none of the nearby parameters look this\nway.\n\n~\n\n7b.\nThe name here should NOT have hyphens. It needs underscores same as\nall other nearby protocol parameters.\n\n~\n\n7c.\nThe description seems overly verbose.\n\nSUGGESTION\nBoolean option to enable generated columns. This option controls\nwhether generated columns should be included in the string\nrepresentation of tuples during logical decoding in PostgreSQL. The\ndefault is false.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n8.\n+\n+ <varlistentry\nid=\"sql-createsubscription-params-with-include-generated-column\">\n+ <term><literal>include_generated_column</literal>\n(<type>boolean</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies whether the generated columns present in the tables\n+ associated with the subscription should be replicated. 
The default is\n+ <literal>false</literal>.\n+ </para>\n\nThe parameter name should be plural (include_generated_columns).\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n9.\n #define SUBOPT_ORIGIN 0x00008000\n+#define SUBOPT_INCLUDE_GENERATED_COLUMN 0x00010000\n\nShould be plural COLUMNS\n\n~~~\n\n10.\n+ else if (IsSet(supported_opts, SUBOPT_INCLUDE_GENERATED_COLUMN) &&\n+ strcmp(defel->defname, \"include_generated_column\") == 0)\n\nThe new subscription parameter should be plural (\"include_generated_columns\").\n\n~~~\n\n11.\n+\n+ /*\n+ * Do additional checking for disallowed combination when copy_data and\n+ * include_generated_column are true. COPY of generated columns is\nnot supported\n+ * yet.\n+ */\n+ if (opts->copy_data && opts->include_generated_column)\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"copy_data = true\", \"include_generated_column = true\")));\n+ }\n\n/combination/combinations/\n\nThe parameter name should be plural in the comment and also in the\nerror message.\n\n======\nsrc/bin/psql/tab-complete.c\n\n12.\n COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n \"disable_on_error\", \"enabled\", \"failover\", \"origin\",\n \"password_required\", \"run_as_owner\", \"slot_name\",\n- \"streaming\", \"synchronous_commit\", \"two_phase\");\n+ \"streaming\", \"synchronous_commit\", \"two_phase\",\"include_generated_columns\");\n\nThe new param should be added in alphabetical order same as all the others.\n\n======\nsrc/include/catalog/pg_subscription.h\n\n13.\n+ bool subincludegeneratedcolumn; /* True if generated columns must be\npublished */\n+\n\nThe field name should be plural.\n\n~~~\n\n14.\n+ bool includegeneratedcolumn; /* publish generated column data */\n } Subscription;\n\nThe field name should be plural.\n\n======\nsrc/include/replication/walreceiver.h\n\n15.\n * 
prepare time */\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ bool include_generated_column; /* publish generated columns */\n } logical;\n } proto;\n } WalRcvStreamOptions;\n\n~\n\nThis new field name should be plural.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n16.\n+my ($cmdret, $stdout, $stderr) = $node_subscriber->psql('postgres', qq(\n+ CREATE SUBSCRIPTION sub2 CONNECTION '$publisher_connstr' PUBLICATION\npub2 WITH (include_generated_column = true)\n+));\n+ok( $stderr =~\n+ qr/copy_data = true and include_generated_column = true are\nmutually exclusive options/,\n+ 'cannot use both include_generated_column and copy_data as true');\n\nIsn't this mutual exclusiveness of options something that could have\nbeen tested in the regress test suite instead of TAP tests? e.g. AFAIK\nyou won't require a connection to test this case.\n\n~~~\n\n17. Missing test?\n\nIIUC there is a missing test scenario. You can add another subscriber\ntable TAB3 which *already* has generated cols (e.g. generating\ndifferent values to the publisher) so then you can verify they are NOT\noverwritten, even when the 'include_generated_cols' is true.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 Jun 2024 12:41:34 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nHere are some review comments for patch v5-0002.\n\n======\nGENERAL\n\nG1.\nIIUC now you are unconditionally allowing all generated columns to be copied.\n\nI think this is assuming that the table sync code (in the next patch\n0003?) 
is going to explicitly name all the columns it wants to copy\n(so if it wants to get generated cols then it will name the generated\ncols, and if it doesn't want generated cols then it won't name them).\n\nMaybe that is OK for the logical replication tablesync case, but I am\nnot sure if it will be desirable to *always* copy generated columns in\nother user scenarios.\n\ne.g. I was wondering if there should be a new COPY command option\nintroduced here -- INCLUDE_GENERATED_COLUMNS (with default false) so\nthen the current HEAD behaviour is unaffected unless that option is\nenabled.\n\n~~~\n\nG2.\nThe current COPY command documentation [1] says \"If no column list is\nspecified, all columns of the table except generated columns will be\ncopied.\"\n\nBut this 0002 patch has changed that documented behaviour, and so the\ndocumentation needs to be changed as well, right?\n\n======\nCommit Message\n\n1.\nCurrently COPY command do not copy generated column. With this commit\nadded support for COPY for generated column.\n\n~\n\nThe grammar/cardinality is not good here. Try some tool (Grammarly or\nchatGPT, etc) to help correct it.\n\n======\nsrc/backend/commands/copy.c\n\n======\nsrc/test/regress/expected/generated.out\n\n======\nsrc/test/regress/sql/generated.sql\n\n2.\nI think these COPY test cases require some explicit comments to\ndescribe what they are doing, and what are the expected results.\n\nCurrently, I have doubts about some of this test input/output\n\ne.g.1. Why is the 'b' column sometimes specified as 1? It needs some\nexplanation. Are you expecting this generated col value to be\nignored/overwritten or what?\n\nCOPY gtest1 (a, b) FROM stdin DELIMITER ' ';\n5 1\n6 1\n\\.\n\ne.g.2. what is the reason for this new 'missing data for column \"b\"'\nerror? Or is it some introduced quirk because \"b\" now cannot be\ngenerated since there is no value for \"a\"? 
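For context on e.g.2: COPY FROM expects one input field per column in the column list, so once "b" is accepted in the list, a data line that carries only a value for "a" is short by one field. For example (illustrative, and independent of how the supplied generated value is ultimately handled, which is the open question here):

```sql
COPY gtest1 (a, b) FROM stdin DELIMITER ' ';
7
\.
-- ERROR:  missing data for column "b"
```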
I don't know if the\nexpected *.out here is OK or not, so some test comments may help to\nclarify it.\n\n======\n[1] https://www.postgresql.org/docs/devel/sql-copy.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 Jun 2024 14:51:30 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nHere are some review comments for patch v5-0003.\n\n======\n0. Whitespace warnings when the patch was applied.\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch\n../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:29:\ntrailing whitespace.\n has no effect; the replicated data will be ignored and the subscriber\n../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:30:\ntrailing whitespace.\n column will be filled as normal with the subscriber-side computed or\n../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:189:\ntrailing whitespace.\n(walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\nwarning: 3 lines add whitespace errors.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n1.\n- res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);\n+ column_count = (!include_generated_column && check_gen_col) ? 4 :\n(check_columnlist ? 3 : 2);\n+ res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);\n\nThe 'column_count' seems out of control. 
Won't it be far simpler to\nassign/increment the value dynamically only as required instead of the\ntricky calculation at the end which is unnecessarily difficult to\nunderstand?\n\n~~~\n\n2.\n+ /*\n+ * If include_generated_column option is false and all the column of\nthe table in the\n+ * publication are generated then we should throw an error.\n+ */\n+ if (!isnull && !include_generated_column && check_gen_col)\n+ {\n+ attlist = DatumGetArrayTypeP(attlistdatum);\n+ gen_col_count = DatumGetInt32(slot_getattr(slot, 4, &isnull));\n+ Assert(!isnull);\n+\n+ attcount = ArrayGetNItems(ARR_NDIM(attlist), ARR_DIMS(attlist));\n+\n+ if (attcount != 0 && attcount == gen_col_count)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot use only generated column for table \\\"%s.%s\\\" in\npublication when generated_column option is false\",\n+ nspname, relname));\n+ }\n+\n\nWhy do you think this new logic/error is necessary?\n\nIIUC the 'include_generated_columns' should be false to match the\nexisting HEAD behavior. So this scenario where your publisher-side\ntable *only* has generated columns is something that could already\nhappen, right? IOW, this introduced error could be a candidate for\nanother discussion/thread/patch, but is it really required for this\ncurrent patch?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.\n lrel->remoteid,\n- (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 ?\n- \"AND a.attgenerated = ''\" : \"\"),\n+ (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n+ (walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000 ||\n+ !MySubscription->includegeneratedcolumn) ? 
\"AND a.attgenerated = ''\" : \"\"),\n\nThis ternary within one big appendStringInfo seems quite complicated.\nWon't it be better to split the appendStringInfo into multiple parts\nso the generated-cols calculation can be done more simply?\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n4.\nI think there should be a variety of different tablesync scenarios\n(when 'include_generated_columns' is true) tested here instead of just\none, and all varieties with lots of comments to say what they are\ndoing, expectations etc.\n\na. publisher-side gen-col \"a\" replicating to subscriber-side NOT\ngen-col \"a\" (ok, value gets replicated)\nb. publisher-side gen-col \"a\" replicating to subscriber-side gen-col\n(ok, but ignored)\nc. publisher-side NOT gen-col \"b\" replicating to subscriber-side\ngen-col \"b\" (error?)\n\n~~\n\n5.\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT a, b FROM tab3\");\n+is( $result, qq(1|2\n+2|4\n+3|6), 'generated columns initial sync with include_generated_column = true');\n\nShould this say \"ORDER BY...\" so it will not fail if the row order\nhappens to be something unanticipated?\n\n======\n\n99.\nAlso, see the attached file with numerous other nitpicks:\n- plural param- and var-names\n- typos in comments\n- missing spaces\n- SQL keyword should be UPPERCASE\n- etc.\n\nPlease apply any/all of these if you agree with them.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 4 Jun 2024 19:30:49 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jun 3, 2024 at 9:52 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> >\n> > The attached Patch contains the suggested changes.\n> >\n>\n> Hi,\n>\n> Currently, COPY command does not work for generated columns and\n> therefore, COPY of generated column is not supported during tablesync\n> process. 
So, in patch v4-0001 we added a check to allow replication of\n> the generated column only if 'copy_data = false'.\n>\n> I am attaching patches to resolve the above issues.\n>\n> v5-0001: not changed\n> v5-0002: Support COPY of generated column\n> v5-0003: Support COPY of generated column during tablesync process\n>\n\nHi Shlok, I have a question about patch v5-0003.\n\nAccording to the patch 0001 docs \"If the subscriber-side column is\nalso a generated column then this option has no effect; the replicated\ndata will be ignored and the subscriber column will be filled as\nnormal with the subscriber-side computed or default data\".\n\nDoesn't this mean it will be a waste of effort/resources to COPY any\ncolumn value where the subscriber-side column is generated since we\nknow that any copied value will be ignored anyway?\n\nBut I don't recall seeing any comment or logic for this kind of copy\noptimisation in the patch 0003. Is this already accounted for\nsomewhere and I missed it, or is my understanding wrong?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 5 Jun 2024 10:18:53 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Dear Shlok and Shubham,\r\n\r\nThanks for updating the patch!\r\n\r\nI briefly checked the v5-0002. IIUC, your patch allows copying generated\r\ncolumns unconditionally. I think the behavior affects many people, so it is\r\nhard to get agreement.\r\n\r\nCan we add a new option like `GENERATED_COLUMNS [boolean]`? 
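Concretely, such an option might look like this (hypothetical syntax -- no such COPY option exists at this point; shown only to illustrate the proposal, reusing a made-up table name):

```sql
-- Default off: unchanged behaviour, generated columns are skipped.
COPY gtest TO STDOUT;

-- Opt in explicitly to also emit generated column values.
COPY gtest TO STDOUT (GENERATED_COLUMNS true);
```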
If the default is set\r\nto off, we can keep the current specification.\r\n\r\nThought?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Thu, 6 Jun 2024 02:59:19 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Pgoutput not capturing the generated columns" }, { "msg_contents": "> Thanks for the updated patch, few comments:\n> 1) The option name seems wrong here:\n> In one place include_generated_column is specified and other place\n> include_generated_columns is specified:\n>\n> + else if (IsSet(supported_opts,\n> SUBOPT_INCLUDE_GENERATED_COLUMN) &&\n> + strcmp(defel->defname,\n> \"include_generated_column\") == 0)\n> + {\n> + if (IsSet(opts->specified_opts,\n> SUBOPT_INCLUDE_GENERATED_COLUMN))\n> + errorConflictingDefElem(defel, pstate);\n> +\n> + opts->specified_opts |= SUBOPT_INCLUDE_GENERATED_COLUMN;\n> + opts->include_generated_column = defGetBoolean(defel);\n> + }\n\nFixed.\n\n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index d453e224d9..e8ff752fd9 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -3365,7 +3365,7 @@ psql_completion(const char *text, int start, int end)\n> COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n> \"disable_on_error\",\n> \"enabled\", \"failover\", \"origin\",\n> \"password_required\",\n> \"run_as_owner\", \"slot_name\",\n> - \"streaming\",\n> \"synchronous_commit\", \"two_phase\");\n> + \"streaming\",\n> \"synchronous_commit\", \"two_phase\",\"include_generated_columns\");\n>\n> 2) This small data table need not have a primary key column as it will\n> create an index and insertion will happen in the index too.\n> +-- check include-generated-columns option with generated column\n> +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\n> AS (a * 2) STORED);\n> +INSERT INTO gencoltable (a) VALUES (1), 
(2), (3);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n\nFixed.\n\n> 3) Please add a test case for this:\n> + set to <literal>false</literal>. If the subscriber-side\n> column is also a\n> + generated column then this option has no effect; the\n> replicated data will\n> + be ignored and the subscriber column will be filled as\n> normal with the\n> + subscriber-side computed or default data.\n\nAdded the required test case.\n\n> 4) You can use a new style of ereport to remove the brackets around errcode\n> 4.a)\n> + else if (!parse_bool(strVal(elem->arg),\n> &data->include_generated_columns))\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"could not\n> parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n> +\n> strVal(elem->arg), elem->defname)));\n>\n> 4.b) similarly here too:\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + /*- translator: both %s are strings of the form\n> \"option = value\" */\n> + errmsg(\"%s and %s are mutually\n> exclusive options\",\n> + \"copy_data = true\",\n> \"include_generated_column = true\")));\n>\n> 4.c) similarly here too:\n> + if (include_generated_columns_option_given)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"conflicting\n> or redundant options\")));\n\nFixed.\n\n> 5) These variable names can be changed to keep it smaller, something\n> like gencol or generatedcol or gencolumn, etc\n> +++ b/src/include/catalog/pg_subscription.h\n> @@ -98,6 +98,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> BKI_SHARED_RELATION BKI_ROW\n> * slots) in the upstream database are enabled\n> * to be synchronized to the standbys. 
*/\n>\n> + bool subincludegeneratedcolumn; /* True if generated columns must be\n> published */\n> +\n> #ifdef CATALOG_VARLEN /* variable-length fields start here */\n> /* Connection string to the publisher */\n> text subconninfo BKI_FORCE_NOT_NULL;\n> @@ -157,6 +159,7 @@ typedef struct Subscription\n> List *publications; /* List of publication names to subscribe to */\n> char *origin; /* Only publish data originating from the\n> * specified origin */\n> + bool includegeneratedcolumn; /* publish generated column data */\n> } Subscription;\n\nFixed.\n\nThe attached Patch contains the suggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 14 Jun 2024 15:52:03 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Jun 4, 2024 at 8:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> Here are some review comments for patch v5-0001.\n>\n> ======\n> GENERAL G.1\n>\n> The patch changes HEAD behaviour for PUBLICATION col-lists right? e.g.\n> maybe before they were always ignored, but now they are not?\n>\n> OTOH, when 'include_generated_columns' is false then the PUBLICATION\n> col-list will ignore any generated cols even when they are present in\n> a PUBLICATION col-list, right?\n>\n> These kinds of points should be noted in the commit message and in the\n> (col-list?) documentation.\n\nFixed.\n\n> ======\n> Commit message\n>\n> General 1a.\n> IMO the commit message needs some background to say something like:\n> \"Currently generated column values are not replicated because it is\n> assumed that the corresponding subscriber-side table will generate its\n> own values for those columns.\"\n>\n> ~\n>\n> General 1b.\n> Somewhere in this commit message, you need to give all the other\n> special rules --- e.g. 
the docs says \"If the subscriber-side column is\n> also a generated column then this option has no effect\"\n>\n> ~~~\n\nFixed.\n\n> 2.\n> This commit enables support for the 'include_generated_columns' option\n> in logical replication, allowing the transmission of generated column\n> information and data alongside regular table changes. This option is\n> particularly useful for scenarios where applications require access to\n> generated column values for downstream processing or synchronization.\n>\n> ~\n>\n> I don't think the sentence \"This option is particularly useful...\" is\n> helpful. It seems like just saying \"This commit supports option XXX.\n> This is particularly useful if you want XXX\".\n>\n\nFixed.\n\n>\n> 3.\n> CREATE SUBSCRIPTION test1 connection 'dbname=postgres host=localhost port=9999\n> 'publication pub1;\n>\n> ~\n>\n> What is this CREATE SUBSCRIPTION for? Shouldn't it have an example of\n> the new parameter being used in it?\n>\n\nAdded the description for this in the Patch.\n\n>\n> 4.\n> Currently copy_data option with include_generated_columns option is\n> not supported. 
A future patch will remove this limitation.\n>\n> ~\n>\n> Suggest to single-quote those parameter names for better readability.\n>\n\nFixed.\n\n>\n> 5.\n> This commit aims to enhance the flexibility and utility of logical\n> replication by allowing users to include generated column information\n> in replication streams, paving the way for more robust data\n> synchronization and processing workflows.\n>\n> ~\n>\n> IMO this paragraph can be omitted.\n\nFixed.\n\n> ======\n> .../test_decoding/sql/decoding_into_rel.sql\n>\n> 6.\n> +-- check include-generated-columns option with generated column\n> +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\n> AS (a * 2) STORED);\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n> +INSERT INTO gencoltable (a) VALUES (4), (5), (6);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n> +DROP TABLE gencoltable;\n> +\n>\n> 6a.\n> I felt some additional explicit comments might help the readabilty of\n> the output file.\n>\n> e.g.1\n> -- When 'include-generated=columns' = '1' the generated column 'b'\n> values will be replicated\n> SELECT data FROM pg_logical_slot_get_changes...\n>\n> e.g.2\n> -- When 'include-generated=columns' = '0' the generated column 'b'\n> values will not be replicated\n> SELECT data FROM pg_logical_slot_get_changes...\n\nAdded the required description for this.\n\n> 6b.\n> Suggest adding one more test case (where 'include-generated=columns'\n> is not set) to confirm/demonstrate the default behaviour for\n> replicated generated cols.\n\nAdded the required Test case.\n\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 7.\n> + <varlistentry>\n> + <term><replaceable\n> 
class=\"parameter\">include-generated-columns</replaceable></term>\n> + <listitem>\n> + <para>\n> + Boolean option to enable generated columns.\n> + The include-generated-columns option controls whether generated\n> + columns should be included in the string representation of tuples\n> + during logical decoding in PostgreSQL. This allows users to\n> + customize the output format based on whether they want to include\n> + these columns or not. The default is false.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> 7a.\n> It doesn't render properly. e.g. Should not be bold italic (probably\n> the class is wrong?), because none of the nearby parameters look this\n> way.\n>\n> ~\n>\n> 7b.\n> The name here should NOT have hyphens. It needs underscores same as\n> all other nearby protocol parameters.\n>\n> ~\n>\n> 7c.\n> The description seems overly verbose.\n>\n> SUGGESTION\n> Boolean option to enable generated columns. This option controls\n> whether generated columns should be included in the string\n> representation of tuples during logical decoding in PostgreSQL. The\n> default is false.\n\nFixed.\n\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 8.\n> +\n> + <varlistentry\n> id=\"sql-createsubscription-params-with-include-generated-column\">\n> + <term><literal>include_generated_column</literal>\n> (<type>boolean</type>)</term>\n> + <listitem>\n> + <para>\n> + Specifies whether the generated columns present in the tables\n> + associated with the subscription should be replicated. 
The default is\n> + <literal>false</literal>.\n> + </para>\n>\n> The parameter name should be plural (include_generated_columns).\n\nFixed.\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 9.\n> #define SUBOPT_ORIGIN 0x00008000\n> +#define SUBOPT_INCLUDE_GENERATED_COLUMN 0x00010000\n>\n> Should be plural COLUMNS\n>\nFixed.\n\n> 10.\n> + else if (IsSet(supported_opts, SUBOPT_INCLUDE_GENERATED_COLUMN) &&\n> + strcmp(defel->defname, \"include_generated_column\") == 0)\n>\n> The new subscription parameter should be plural (\"include_generated_columns\").\n\nFixed.\n\n> 11.\n> +\n> + /*\n> + * Do additional checking for disallowed combination when copy_data and\n> + * include_generated_column are true. COPY of generated columns is\n> not supported\n> + * yet.\n> + */\n> + if (opts->copy_data && opts->include_generated_column)\n> + {\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SYNTAX_ERROR),\n> + /*- translator: both %s are strings of the form \"option = value\" */\n> + errmsg(\"%s and %s are mutually exclusive options\",\n> + \"copy_data = true\", \"include_generated_column = true\")));\n> + }\n>\n> /combination/combinations/\n>\n> The parameter name should be plural in the comment and also in the\n> error message.\n\nFixed.\n\n> ======\n> src/bin/psql/tab-complete.c\n>\n> 12.\n> COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n> \"disable_on_error\", \"enabled\", \"failover\", \"origin\",\n> \"password_required\", \"run_as_owner\", \"slot_name\",\n> - \"streaming\", \"synchronous_commit\", \"two_phase\");\n> + \"streaming\", \"synchronous_commit\", \"two_phase\",\"include_generated_columns\");\n>\n> The new param should be added in alphabetical order same as all the others.\n\nFixed.\n\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> 13.\n> + bool subincludegeneratedcolumn; /* True if generated columns must be\n> published */\n> +\n>\n> The field name should be plural.\n\nFixed.\n\n>\n> 14.\n> + bool includegeneratedcolumn; /* 
publish generated column data */\n> } Subscription;\n>\n> The field name should be plural.\n\nFixed.\n\n> ======\n> src/include/replication/walreceiver.h\n>\n> 15.\n> * prepare time */\n> char *origin; /* Only publish data originating from the\n> * specified origin */\n> + bool include_generated_column; /* publish generated columns */\n> } logical;\n> } proto;\n> } WalRcvStreamOptions;\n>\n> ~\n>\n> This new field name should be plural.\n\nFixed.\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 16.\n> +my ($cmdret, $stdout, $stderr) = $node_subscriber->psql('postgres', qq(\n> + CREATE SUBSCRIPTION sub2 CONNECTION '$publisher_connstr' PUBLICATION\n> pub2 WITH (include_generated_column = true)\n> +));\n> +ok( $stderr =~\n> + qr/copy_data = true and include_generated_column = true are\n> mutually exclusive options/,\n> + 'cannot use both include_generated_column and copy_data as true');\n>\n> Isn't this mutual exclusiveness of options something that could have\n> been tested in the regress test suite instead of TAP tests? e.g. AFAIK\n> you won't require a connection to test this case.\n\n\n> 17. Missing test?\n>\n> IIUC there is a missing test scenario. You can add another subscriber\n> table TAB3 which *already* has generated cols (e.g. generating\n> different values to the publisher) so then you can verify they are NOT\n> overwritten, even when the 'include_generated_cols' is true.\n>\n> ======\n\nMoved this test case to the Regression test.\n\nPatch v6-0001 contains all the changes required. 
See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJn6EiyAitJbbvkvVV2d45fV3Wjr2VmWFugm3RsbaU%2BRg%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 14 Jun 2024 16:08:49 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 4 Jun 2024 at 10:21, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> Here are some review comments for patch v5-0002.\n>\n> ======\n> GENERAL\n>\n> G1.\n> IIUC now you are unconditionally allowing all generated columns to be copied.\n>\n> I think this is assuming that the table sync code (in the next patch\n> 0003?) is going to explicitly name all the columns it wants to copy\n> (so if it wants to get generated cols then it will name the generated\n> cols, and if it doesn't want generated cols then it won't name them).\n>\n> Maybe that is OK for the logical replication tablesync case, but I am\n> not sure if it will be desirable to *always* copy generated columns in\n> other user scenarios.\n>\n> e.g. I was wondering if there should be a new COPY command option\n> introduced here -- INCLUDE_GENERATED_COLUMNS (with default false) so\n> then the current HEAD behaviour is unaffected unless that option is\n> enabled.\n>\n> ~~~\n>\n> G2.\n> The current COPY command documentation [1] says \"If no column list is\n> specified, all columns of the table except generated columns will be\n> copied.\"\n>\n> But this 0002 patch has changed that documented behaviour, and so the\n> documentation needs to be changed as well, right?\n>\n> ======\n> Commit Message\n>\n> 1.\n> Currently COPY command do not copy generated column. With this commit\n> added support for COPY for generated column.\n>\n> ~\n>\n> The grammar/cardinality is not good here. 
Try some tool (Grammarly or\n> chatGPT, etc) to help correct it.\n>\n> ======\n> src/backend/commands/copy.c\n>\n> ======\n> src/test/regress/expected/generated.out\n>\n> ======\n> src/test/regress/sql/generated.sql\n>\n> 2.\n> I think these COPY test cases require some explicit comments to\n> describe what they are doing, and what are the expected results.\n>\n> Currently, I have doubts about some of this test input/output\n>\n> e.g.1. Why is the 'b' column sometimes specified as 1? It needs some\n> explanation. Are you expecting this generated col value to be\n> ignored/overwritten or what?\n>\n> COPY gtest1 (a, b) FROM stdin DELIMITER ' ';\n> 5 1\n> 6 1\n> \\.\n>\n> e.g.2. what is the reason for this new 'missing data for column \"b\"'\n> error? Or is it some introduced quirk because \"b\" now cannot be\n> generated since there is no value for \"a\"? I don't know if the\n> expected *.out here is OK or not, so some test comments may help to\n> clarify it.\n>\n> ======\n> [1] https://www.postgresql.org/docs/devel/sql-copy.html\n>\nHi Peter,\n\nI have removed the changes in the COPY command. I came up with an\napproach which requires changes only in tablesync code. We can COPY\ngenerated columns during tablesync using syntax 'COPY (SELECT\ncolumn_name from table) TO STDOUT.'\n\nI have attached the patch for the same.\nv7-0001 : Not Modified\nv7-0002: Support replication of generated columns during initial sync.\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Sat, 15 Jun 2024 16:11:23 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 4 Jun 2024 at 15:01, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> Here are some review comments for patch v5-0003.\n>\n> ======\n> 0. 
Whitespace warnings when the patch was applied.\n>\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n> ../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch\n> ../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:29:\n> trailing whitespace.\n> has no effect; the replicated data will be ignored and the subscriber\n> ../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:30:\n> trailing whitespace.\n> column will be filled as normal with the subscriber-side computed or\n> ../patches_misc/v5-0003-Support-copy-of-generated-columns-during-tablesyn.patch:189:\n> trailing whitespace.\n> (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n> warning: 3 lines add whitespace errors.\n>\nFixed\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 1.\n> - res = walrcv_exec(wrconn, cmd.data, check_columnlist ? 3 : 2, tableRow);\n> + column_count = (!include_generated_column && check_gen_col) ? 4 :\n> (check_columnlist ? 3 : 2);\n> + res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);\n>\n> The 'column_count' seems out of control. 
Won't it be far simpler to\n> assign/increment the value dynamically only as required instead of the\n> tricky calculation at the end which is unnecessarily difficult to\n> understand?\n>\nI have removed this piece of code.\n\n> ~~~\n>\n> 2.\n> + /*\n> + * If include_generated_column option is false and all the column of\n> the table in the\n> + * publication are generated then we should throw an error.\n> + */\n> + if (!isnull && !include_generated_column && check_gen_col)\n> + {\n> + attlist = DatumGetArrayTypeP(attlistdatum);\n> + gen_col_count = DatumGetInt32(slot_getattr(slot, 4, &isnull));\n> + Assert(!isnull);\n> +\n> + attcount = ArrayGetNItems(ARR_NDIM(attlist), ARR_DIMS(attlist));\n> +\n> + if (attcount != 0 && attcount == gen_col_count)\n> + ereport(ERROR,\n> + errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot use only generated column for table \\\"%s.%s\\\" in\n> publication when generated_column option is false\",\n> + nspname, relname));\n> + }\n> +\n>\n> Why do you think this new logic/error is necessary?\n>\n> IIUC the 'include_generated_columns' should be false to match the\n> existing HEAD behavior. So this scenario where your publisher-side\n> table *only* has generated columns is something that could already\n> happen, right? IOW, this introduced error could be a candidate for\n> another discussion/thread/patch, but is it really required for this\n> current patch?\n>\nYes, this scenario can also happen in HEAD. For this patch I have\nremoved this check.\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 3.\n> lrel->remoteid,\n> - (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 ?\n> - \"AND a.attgenerated = ''\" : \"\"),\n> + (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n> + (walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000 ||\n> + !MySubscription->includegeneratedcolumn) ? 
\"AND a.attgenerated = ''\" : \"\"),\n>\n> This ternary within one big appendStringInfo seems quite complicated.\n> Won't it be better to split the appendStringInfo into multiple parts\n> so the generated-cols calculation can be done more simply?\n>\nFixed\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 4.\n> I think there should be a variety of different tablesync scenarios\n> (when 'include_generated_columns' is true) tested here instead of just\n> one, and all varieties with lots of comments to say what they are\n> doing, expectations etc.\n>\n> a. publisher-side gen-col \"a\" replicating to subscriber-side NOT\n> gen-col \"a\" (ok, value gets replicated)\n> b. publisher-side gen-col \"a\" replicating to subscriber-side gen-col\n> (ok, but ignored)\n> c. publisher-side NOT gen-col \"b\" replicating to subscriber-side\n> gen-col \"b\" (error?)\n>\nAdded the tests\n\n> ~~\n>\n> 5.\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT a, b FROM tab3\");\n> +is( $result, qq(1|2\n> +2|4\n> +3|6), 'generated columns initial sync with include_generated_column = true');\n>\n> Should this say \"ORDER BY...\" so it will not fail if the row order\n> happens to be something unanticipated?\n>\nFixed\n\n> ======\n>\n> 99.\n> Also, see the attached file with numerous other nitpicks:\n> - plural param- and var-names\n> - typos in comments\n> - missing spaces\n> - SQL keyword should be UPPERCASE\n> - etc.\n>\n> Please apply any/all of these if you agree with them.\nFixed\n\nPatch 7-0002 contains all the changes. 
Please refer [1]\n[1]: https://www.postgresql.org/message-id/CANhcyEUz0FcyR3T76b%2BNhtmvWO7o96O_oEwsLZNZksEoPmVzXw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Sat, 15 Jun 2024 16:14:44 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, 5 Jun 2024 at 05:49, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jun 3, 2024 at 9:52 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> >\n> > >\n> > > The attached Patch contains the suggested changes.\n> > >\n> >\n> > Hi,\n> >\n> > Currently, COPY command does not work for generated columns and\n> > therefore, COPY of generated column is not supported during tablesync\n> > process. So, in patch v4-0001 we added a check to allow replication of\n> > the generated column only if 'copy_data = false'.\n> >\n> > I am attaching patches to resolve the above issues.\n> >\n> > v5-0001: not changed\n> > v5-0002: Support COPY of generated column\n> > v5-0003: Support COPY of generated column during tablesync process\n> >\n>\n> Hi Shlok, I have a question about patch v5-0003.\n>\n> According to the patch 0001 docs \"If the subscriber-side column is\n> also a generated column then this option has no effect; the replicated\n> data will be ignored and the subscriber column will be filled as\n> normal with the subscriber-side computed or default data\".\n>\n> Doesn't this mean it will be a waste of effort/resources to COPY any\n> column value where the subscriber-side column is generated since we\n> know that any copied value will be ignored anyway?\n>\n> But I don't recall seeing any comment or logic for this kind of copy\n> optimisation in the patch 0003. 
Is this already accounted for\n> somewhere and I missed it, or is my understanding wrong?\nYour understanding is correct.\nWith v7-0002, if a subscriber-side column is generated, then we do not\ninclude that column in the column list during COPY. This will address\nthe above issue.\n\nPatch 7-0002 contains all the changes. Please refer [1]\n[1]: https://www.postgresql.org/message-id/CANhcyEUz0FcyR3T76b%2BNhtmvWO7o96O_oEwsLZNZksEoPmVzXw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Sat, 15 Jun 2024 16:16:57 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 6 Jun 2024 at 08:29, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Shlok and Shubham,\n>\n> Thanks for updating the patch!\n>\n> I briefly checked the v5-0002. IIUC, your patch allows to copy generated\n> columns unconditionally. I think the behavior affects many people so that it is\n> hard to get agreement.\n>\n> Can we add a new option like `GENERATED_COLUMNS [boolean]`? 
If the default is set\n> to off, we can keep the current specification.\n>\n> Thought?\nHi Kuroda-san,\n\nI agree that we should not allow copying generated columns unconditionally.\nWith patch v7-0002, I have used a different approach which does not\nrequire any code changes in COPY.\n\nPlease refer [1] for patch v7-0002.\n[1]: https://www.postgresql.org/message-id/CANhcyEUz0FcyR3T76b%2BNhtmvWO7o96O_oEwsLZNZksEoPmVzXw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Sat, 15 Jun 2024 16:18:31 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 14 Jun 2024 at 15:52, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> The attached Patch contains the suggested changes.\n>\n\nHi Shubham, thanks for providing a patch.\nI have some comments for v6-0001.\n\n1. create_subscription.sgml\nThere is repetition of the same line.\n\n+ <para>\n+ Specifies whether the generated columns present in the tables\n+ associated with the subscription should be replicated. If the\n+ subscriber-side column is also a generated column then this option\n+ has no effect; the replicated data will be ignored and the subscriber\n+ column will be filled as normal with the subscriber-side computed or\n+ default data.\n+ <literal>false</literal>.\n+ </para>\n+\n+ <para>\n+ This parameter can only be set true if\n<literal>copy_data</literal> is\n+ set to <literal>false</literal>. If the subscriber-side\ncolumn is also a\n+ generated column then this option has no effect; the\nreplicated data will\n+ be ignored and the subscriber column will be filled as\nnormal with the\n+ subscriber-side computed or default data.\n+ </para>\n\n==============================\n2. subscriptioncmds.c\n\n2a. The macro name should be in uppercase. We can use a short name\nlike 'SUBOPT_INCLUDE_GEN_COL'. 
Thought?\n+#define SUBOPT_include_generated_columns 0x00010000\n\n2b.Update macro name accordingly\n+ if (IsSet(supported_opts, SUBOPT_include_generated_columns))\n+ opts->include_generated_columns = false;\n\n2c. Update macro name accordingly\n+ else if (IsSet(supported_opts, SUBOPT_include_generated_columns) &&\n+ strcmp(defel->defname, \"include_generated_columns\") == 0)\n+ {\n+ if (IsSet(opts->specified_opts, SUBOPT_include_generated_columns))\n+ errorConflictingDefElem(defel, pstate);\n+\n+ opts->specified_opts |= SUBOPT_include_generated_columns;\n+ opts->include_generated_columns = defGetBoolean(defel);\n+ }\n\n2d. Update macro name accordingly\n+ SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER | SUBOPT_ORIGIN |\n+ SUBOPT_include_generated_columns);\n\n\n==============================\n\n3. decoding_into_rel.out\n\n3a. In comment, I think it should be \"When 'include-generated-columns'\n= '1' the generated column 'b' values will be replicated\"\n+-- When 'include-generated-columns' = '1' the generated column 'b'\nvalues will not be replicated\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\n+ data\n+-------------------------------------------------------------\n+ BEGIN\n+ table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n+ table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n+ table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n+ COMMIT\n\n3b. 
In comment, I think it should be \"When 'include-generated-columns'\n= '0' the generated column 'b' values will not be replicated\"\n+-- When 'include-generated-columns' = '0' the generated column 'b'\nvalues will be replicated\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\n+ data\n+------------------------------------------------\n+ BEGIN\n+ table public.gencoltable: INSERT: a[integer]:4\n+ table public.gencoltable: INSERT: a[integer]:5\n+ table public.gencoltable: INSERT: a[integer]:6\n+ COMMIT\n+(5 rows)\n\n=========================\n\n4. Here names for both the tests are the same. I think we should use\ndifferent names.\n\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n+is( $result, qq(4|8\n+5|10), 'generated columns replicated to non-generated column on subscriber');\n+\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n+\n+$node_publisher->wait_for_catchup('sub3');\n+\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n+is( $result, qq(4|24\n+5|25), 'generated columns replicated to non-generated column on subscriber');\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:50:33 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 14 Jun 2024 at 15:52, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> > Thanks for the updated patch, few comments:\n> > 1) The option name seems wrong here:\n> > In one place include_generated_column is specified and other place\n> > include_generated_columns is specified:\n> >\n> > + else if (IsSet(supported_opts,\n> > SUBOPT_INCLUDE_GENERATED_COLUMN) &&\n> > + strcmp(defel->defname,\n> > \"include_generated_column\") == 0)\n> > + {\n> > + if (IsSet(opts->specified_opts,\n> > 
SUBOPT_INCLUDE_GENERATED_COLUMN))\n> > + errorConflictingDefElem(defel, pstate);\n> > +\n> > + opts->specified_opts |= SUBOPT_INCLUDE_GENERATED_COLUMN;\n> > + opts->include_generated_column = defGetBoolean(defel);\n> > + }\n>\n> Fixed.\n>\n> > diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> > index d453e224d9..e8ff752fd9 100644\n> > --- a/src/bin/psql/tab-complete.c\n> > +++ b/src/bin/psql/tab-complete.c\n> > @@ -3365,7 +3365,7 @@ psql_completion(const char *text, int start, int end)\n> > COMPLETE_WITH(\"binary\", \"connect\", \"copy_data\", \"create_slot\",\n> > \"disable_on_error\",\n> > \"enabled\", \"failover\", \"origin\",\n> > \"password_required\",\n> > \"run_as_owner\", \"slot_name\",\n> > - \"streaming\",\n> > \"synchronous_commit\", \"two_phase\");\n> > + \"streaming\",\n> > \"synchronous_commit\", \"two_phase\",\"include_generated_columns\");\n> >\n> > 2) This small data table need not have a primary key column as it will\n> > create an index and insertion will happen in the index too.\n> > +-- check include-generated-columns option with generated column\n> > +CREATE TABLE gencoltable (a int PRIMARY KEY, b int GENERATED ALWAYS\n> > AS (a * 2) STORED);\n> > +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> > +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> > NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> > 'include-generated-columns', '1');\n>\n> Fixed.\n>\n> > 3) Please add a test case for this:\n> > + set to <literal>false</literal>. 
If the subscriber-side\n> > column is also a\n> > + generated column then this option has no effect; the\n> > replicated data will\n> > + be ignored and the subscriber column will be filled as\n> > normal with the\n> > + subscriber-side computed or default data.\n>\n> Added the required test case.\n>\n> > 4) You can use a new style of ereport to remove the brackets around errcode\n> > 4.a)\n> > + else if (!parse_bool(strVal(elem->arg),\n> > &data->include_generated_columns))\n> > + ereport(ERROR,\n> > +\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"could not\n> > parse value \\\"%s\\\" for parameter \\\"%s\\\"\",\n> > +\n> > strVal(elem->arg), elem->defname)));\n> >\n> > 4.b) similarly here too:\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + /*- translator: both %s are strings of the form\n> > \"option = value\" */\n> > + errmsg(\"%s and %s are mutually\n> > exclusive options\",\n> > + \"copy_data = true\",\n> > \"include_generated_column = true\")));\n> >\n> > 4.c) similarly here too:\n> > + if (include_generated_columns_option_given)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"conflicting\n> > or redundant options\")));\n>\n> Fixed.\n>\n> > 5) These variable names can be changed to keep it smaller, something\n> > like gencol or generatedcol or gencolumn, etc\n> > +++ b/src/include/catalog/pg_subscription.h\n> > @@ -98,6 +98,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> > BKI_SHARED_RELATION BKI_ROW\n> > * slots) in the upstream database are enabled\n> > * to be synchronized to the standbys. 
*/\n> >\n> > + bool subincludegeneratedcolumn; /* True if generated columns must be\n> > published */\n> > +\n> > #ifdef CATALOG_VARLEN /* variable-length fields start here */\n> > /* Connection string to the publisher */\n> > text subconninfo BKI_FORCE_NOT_NULL;\n> > @@ -157,6 +159,7 @@ typedef struct Subscription\n> > List *publications; /* List of publication names to subscribe to */\n> > char *origin; /* Only publish data originating from the\n> > * specified origin */\n> > + bool includegeneratedcolumn; /* publish generated column data */\n> > } Subscription;\n>\n> Fixed.\n>\n> The attached Patch contains the suggested changes.\n\nFew comments:\n1) Here tab1 and tab2 are exactly the same tables, just check if the\ntable tab1 itself can be used for your tests.\n@@ -24,20 +24,50 @@ $node_publisher->safe_psql('postgres',\n \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS\nAS (a * 2) STORED)\"\n );\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab2 (a int PRIMARY KEY, b int GENERATED ALWAYS\nAS (a * 2) STORED)\"\n+);\n\n2) We can document that the include_generated_columns option cannot be altered.\n\n3) You can mention that include-generated-columns is true by default\nand generated column data will be selected\n+-- When 'include-generated-columns' is not set\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n+ data\n+-------------------------------------------------------------\n+ BEGIN\n+ table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n+ table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n+ table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n+ COMMIT\n+(5 rows)\n\n4) The comment seems to be wrong here, the comment says b will not be\nreplicated but b is being selected:\n-- When 'include-generated-columns' = '1' the generated column 'b'\nvalues will not be replicated\nINSERT INTO gencoltable (a) VALUES (1), (2), (3);\nSELECT data 
FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\n data\n-------------------------------------------------------------\n BEGIN\n table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n COMMIT\n(5 rows)\n\n5) Similarly here too the comment seems to be wrong, the comment says\nb will be replicated but b is not being selected:\nINSERT INTO gencoltable (a) VALUES (4), (5), (6);\n-- When 'include-generated-columns' = '0' the generated column 'b'\nvalues will be replicated\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\n data\n------------------------------------------------\n BEGIN\n table public.gencoltable: INSERT: a[integer]:4\n table public.gencoltable: INSERT: a[integer]:5\n table public.gencoltable: INSERT: a[integer]:6\n COMMIT\n(5 rows)\n\n6) SUBOPT_include_generated_columns change it to SUBOPT_GENERATED to\nkeep the name consistent:\n--- a/src/backend/commands/subscriptioncmds.c\n+++ b/src/backend/commands/subscriptioncmds.c\n@@ -72,6 +72,7 @@\n #define SUBOPT_FAILOVER 0x00002000\n #define SUBOPT_LSN 0x00004000\n #define SUBOPT_ORIGIN 0x00008000\n+#define SUBOPT_include_generated_columns 0x00010000\n\n7) The comment style seems to be inconsistent, both of them can start\nin lower case\n+-- check include-generated-columns option with generated column\n+CREATE TABLE gencoltable (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n+-- When 'include-generated-columns' is not set\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n+ data\n+-------------------------------------------------------------\n+ 
BEGIN\n+ table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n+ table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n+ table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n+ COMMIT\n+(5 rows)\n+\n+-- When 'include-generated-columns' = '1' the generated column 'b'\nvalues will not be replicated\n\n8) This could be changed to remove the insert statements by using\npg_logical_slot_peek_changes:\n-- When 'include-generated-columns' is not set\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n-- When 'include-generated-columns' = '1' the generated column 'b'\nvalues will not be replicated\nINSERT INTO gencoltable (a) VALUES (1), (2), (3);\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\nINSERT INTO gencoltable (a) VALUES (4), (5), (6);\n-- When 'include-generated-columns' = '0' the generated column 'b'\nvalues will be replicated\nSELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\nto:\n-- When 'include-generated-columns' is not set\nSELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n-- When 'include-generated-columns' = '1' the generated column 'b'\nvalues will not be replicated\nSELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '1');\n-- When 'include-generated-columns' = '0' the generated column 'b'\nvalues will be replicated\nSELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\n\n9) In commit message the option used is wrong\ninclude_generated_columns should actually 
be\ninclude-generated-columns:\nUsage from test_decoding plugin:\nSELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1',\n 'include_generated_columns','1');\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 17 Jun 2024 11:57:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are my review comments for patch v7-0001.\n\n======\n1. GENERAL - \\dRs+\n\nShouldn't the new SUBSCRIPTION parameter be exposed via \"describe\"\n(e.g. \\dRs+ mysub) the same as the other boolean parameters?\n\n======\nCommit message\n\n2.\nWhen 'include_generated_columns' is false then the PUBLICATION\ncol-list will ignore any generated cols even when they are present in\na PUBLICATION col-list\n\n~\n\nMaybe you don't need to mention \"PUBLICATION col-list\" twice.\n\nSUGGESTION\nWhen 'include_generated_columns' is false, generated columns are not\nreplicated, even when present in a PUBLICATION col-list.\n\n~~~\n\n2.\nCREATE SUBSCRIPTION test1 connection 'dbname=postgres host=localhost port=9999\n'publication pub1;\n\n~\n\n2a.\n(I've questioned this one in previous reviews)\n\nWhat exactly is the purpose of this statement in the commit message?\nWas this supposed to demonstrate the usage of the\n'include_generated_columns' parameter?\n\n~\n\n2b.\n/publication/ PUBLICATION/\n\n\n~~~\n\n3.\nIf the subscriber-side column is also a generated column then\nthisoption has no effect; the replicated data will be ignored and the\nsubscriber column will be filled as normal with the subscriber-side\ncomputed or default data.\n\n~\n\nMissing space: /thisoption/this option/\n\n======\n.../expected/decoding_into_rel.out\n\n4.\n+-- When 'include-generated-columns' is not set\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n+ 
data\n+-------------------------------------------------------------\n+ BEGIN\n+ table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n+ table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n+ table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n+ COMMIT\n+(5 rows)\n\nWhy is the default value here equivalent to\n'include-generated-columns' = '1' instead of '0'? The default for\nthe CREATE SUBSCRIPTION parameter 'include_generated_columns' is\nfalse, and IMO it seems confusing for these 2 defaults to be\ndifferent. Here I think it should default to '0' *regardless* of what\nthe previous functionality might have done -- e.g. this is a \"test\ndecoder\" so the parameter should behave sensibly.\n\n======\n.../test_decoding/sql/decoding_into_rel.sql\n\nNITPICK - wrong comments.\n\n======\ndoc/src/sgml/protocol.sgml\n\n5.\n+ <varlistentry>\n+ <term>include_generated_columns</term>\n+ <listitem>\n+ <para>\n+ Boolean option to enable generated columns. This option controls\n+ whether generated columns should be included in the string\n+ representation of tuples during logical decoding in PostgreSQL.\n+ The default is false.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n\nDoes the protocol version need to be bumped to support this new option\nand should that be mentioned on this page similar to how all other\nversion values are mentioned?\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\nNITPICK - some missing words/sentence.\nNITPICK - some missing <literal> tags.\nNITPICK - remove duplicated sentence.\nNITPICK - add another <para>.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n6.\n #define SUBOPT_ORIGIN 0x00008000\n+#define SUBOPT_include_generated_columns 0x00010000\n\nPlease use UPPERCASE for consistency with other macros.\n\n======\n.../libpqwalreceiver/libpqwalreceiver.c\n\n7.\n+ if (options->proto.logical.include_generated_columns &&\n+ PQserverVersion(conn->streamConn) >= 170000)\n+ appendStringInfoString(&cmd, \", 
include_generated_columns 'on'\");\n+\n\nIMO it makes more sense to say 'true' here instead of 'on'. It seems\nlike this was just cut/paste from the above code (where 'on' was\nsensible).\n\n======\nsrc/include/catalog/pg_subscription.h\n\n8.\n@@ -98,6 +98,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n * slots) in the upstream database are enabled\n * to be synchronized to the standbys. */\n\n+ bool subincludegencol; /* True if generated columns must be published */\n+\n\nNot fixed as claimed. This field name ought to be plural.\n\n/subincludegencol/subincludegencols/\n\n~~~\n\n9.\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ bool includegencol; /* publish generated column data */\n } Subscription;\n\nNot fixed as claimed. This field name ought to be plural.\n\n/includegencol/includegencols/\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\n10.\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab2 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n* 2) STORED)\"\n+);\n+\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab3 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n+ 10) STORED)\"\n+);\n+\n $node_subscriber->safe_psql('postgres',\n \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n* 22) STORED, c int)\"\n );\n\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE tab2 (a int PRIMARY KEY, b int)\"\n+);\n+\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE tab3 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n+ 20) STORED)\"\n+);\n\nIMO the test needs lots more comments to describe what it is doing:\n\nFor example, the setup deliberately has made:\n* publisher-side tab2 has generated col 'b' but subscriber-side tab2\nhas NON-generated col 'b'.\n* publisher-side tab3 has generated col 'b' but subscriber-side tab3\nhas DIFFERENT COMPUTATION generated col 'b'.\n\nSo it will be better to have comments to explain all this 
instead of\nhaving to figure it out.\n\n~~~\n\n11.\n # data for initial sync\n\n $node_publisher->safe_psql('postgres',\n \"INSERT INTO tab1 (a) VALUES (1), (2), (3)\");\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO tab2 (a) VALUES (1), (2), (3)\");\n\n $node_publisher->safe_psql('postgres',\n- \"CREATE PUBLICATION pub1 FOR ALL TABLES\");\n+ \"CREATE PUBLICATION pub1 FOR TABLE tab1\");\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION pub2 FOR TABLE tab2\");\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION pub3 FOR TABLE tab3\");\n+\n\n# Wait for initial sync of all subscriptions\n$node_subscriber->wait_for_subscription_sync;\n\nmy $result = $node_subscriber->safe_psql('postgres', \"SELECT a, b FROM tab1\");\nis( $result, qq(1|22\n2|44\n3|66), 'generated columns initial sync');\n\n~\n\nIMO (and for completeness) it would be better to INSERT data for all\nthe tables and also to validate that tables tab2 and tab3 have zero\nrows replicated. Yes, I know there is 'copy_data=false', but it is\njust easier to see all the tables instead of guessing why some are\nomitted, and anyway this test case will be needed after the next patch\nimplements the COPY support for gen-cols.\n\n~~~\n\n12.\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab2 VALUES (4), (5)\");\n+\n+$node_publisher->wait_for_catchup('sub2');\n+\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n+is( $result, qq(4|8\n+5|10), 'generated columns replicated to non-generated column on subscriber');\n+\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n+\n+$node_publisher->wait_for_catchup('sub3');\n+\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n+is( $result, qq(4|24\n+5|25), 'generated columns replicated to non-generated column on subscriber');\n+\n\nHere also I think there should be explicit comments about what these\ncases are testing, what results you are expecting, and 
why. The\ncomments will look something like the message parameter of those\nsafe_psql(...)\n\ne.g.\n# confirm generated columns ARE replicated when the subscriber-side\ncolumn is not generated\n\ne.g.\n# confirm generated columns are NOT replicated when the\nsubscriber-side column is also generated\n\n======\n\n99.\nPlease also see my nitpicks attachment patch for various other\ncosmetic and docs problems, and apply these if you agree:\n- documentation wording/rendering\n- wrong comments\n- spacing\n- etc.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 17 Jun 2024 18:27:30 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are my review comments for patch v7-0002\n\n======\nCommit Message\n\nNITPICKS\n- rearrange paragraphs\n- typo \"donot\"\n- don't start a sentence with \"And\"\n- etc.\n\nPlease see the attachment for my suggested commit message text updates\nand take from it whatever you agree with.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n1.\n+ If the subscriber-side column is also a generated column\nthen this option\n+ has no effect; the replicated data will be ignored and the subscriber\n+ column will be filled as normal with the subscriber-side computed or\n+ default data. And during table synchronization, the data\ncorresponding to\n+ the generated column on subscriber-side will not be sent from the\n+ publisher to the subscriber.\n\nThis text already mentions subscriber-side generated cols. IMO you\ndon't need to say anything at all about table synchronization --\nthat's just an internal code optimization, which is not something the\nuser needs to know about. IOW, the entire last sentence (\"And\nduring...\") should be removed.\n\n======\nsrc/backend/replication/logical/relation.c\n\n2. 
logicalrep_rel_open\n\n- if (attr->attisdropped)\n+ if (attr->attisdropped ||\n+ (!MySubscription->includegencol && attr->attgenerated))\n {\n entry->attrmap->attnums[i] = -1;\n continue;\n\n~\n\nMaybe I'm mistaken, but isn't this code for skipping checking for\n\"missing\" subscriber-side (aka local) columns? Can't it just\nunconditionally skip every attr->attgenerated -- i.e. why does it\nmatter if the MySubscription->includegencol was set or not?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3. make_copy_attnamelist\n\n- for (i = 0; i < rel->remoterel.natts; i++)\n+ desc = RelationGetDescr(rel->localrel);\n+\n+ for (i = 0; i < desc->natts; i++)\n {\n- attnamelist = lappend(attnamelist,\n- makeString(rel->remoterel.attnames[i]));\n+ int attnum;\n+ Form_pg_attribute attr = TupleDescAttr(desc, i);\n+\n+ if (!attr->attgenerated)\n+ continue;\n+\n+ attnum = logicalrep_rel_att_by_name(&rel->remoterel,\n+ NameStr(attr->attname));\n+\n+ /*\n+ * Check if subscription table have a generated column with same\n+ * column name as a non-generated column in the corresponding\n+ * publication table.\n+ */\n+ if (attnum >=0 && !attgenlist[attnum])\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"logical replication target relation \\\"%s.%s\\\" is missing\nreplicated column: \\\"%s\\\"\",\n+ rel->remoterel.nspname, rel->remoterel.relname, NameStr(attr->attname))));\n+\n+ if (attnum >= 0)\n+ gencollist = lappend_int(gencollist, attnum);\n }\n\n~\n\nNITPICK - Use C99-style for loop variables\nNITPICK - Typo in comment\nNITPICK - spaces\n\n~\n\n3a.\nI think above code should be refactored so there is only one check for\n\"if (attnum >= 0)\" -- e.g. 
other condition should be nested.\n\n~\n\n3b.\nThat ERROR message says \"missing replicated column\", but that doesn't\nseem much like what the code-comment was saying this code is about.\n\n~~~\n\n4.\n+ for (i = 0; i < rel->remoterel.natts; i++)\n+ {\n+\n+ if (gencollist != NIL && j < gencollist->length &&\n+ list_nth_int(gencollist, j) == i)\n+ j++;\n+ else\n+ attnamelist = lappend(attnamelist,\n+ makeString(rel->remoterel.attnames[i]));\n+ }\n\nNITPICK - Use C99-style for loop variables\nNITPICK - Unnecessary blank lines\n\n~\n\nIIUC the subscriber-side table and the publisher-side table do NOT\nhave to have all the columns in identical order for the logical\nreplication to work correctly. AFAIK it works fine so long as the\ncolumn names match for the replicated columns. Therefore, I am\nsuspicious that this new patch code seems to be imposing some new\nordering assumptions/restrictions (e.g. list_nth_int stuff) which are\nnot current requirements.\n\n~~~\n\ncopy_table:\n\nNITPICK - comment typo\nNITPICK - comment wording\n\n~\n\n5.\n+ int i = 0;\n+ ListCell *l;\n+\n appendStringInfoString(&cmd, \"COPY (SELECT \");\n- for (int i = 0; i < lrel.natts; i++)\n+ foreach(l, attnamelist)\n {\n- appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n- if (i < lrel.natts - 1)\n+ appendStringInfoString(&cmd, quote_identifier(strVal(lfirst(l))));\n+ if (i < attnamelist->length - 1)\n appendStringInfoString(&cmd, \", \");\n+ i++;\n }\n\nIIUC for new code like this, it is preferred to use the foreach*\nmacros instead of ListCell.\n\n======\nsrc/test/regress/sql/subscription.sql\n\n6.\n--- fail - copy_data and include_generated_columns are mutually\nexclusive options\n-CREATE SUBSCRIPTION sub2 CONNECTION 'dbname=regress_doesnotexist'\nPUBLICATION testpub WITH (include_generated_columns = true);\n-ERROR: copy_data = true and include_generated_columns = true are\nmutually exclusive options\n\nIt is OK to delete this test now but IMO there still need to be 
some\n\"include_generated_columns must be boolean\" test cases (e.g. same as\nthere was two_phase). Actually, this should probably be done by the\n0001 patch.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n7.\nAll the PRIMARY KEY stuff may be overkill. Are primary keys really\nneeded for these tests?\n\n~~~\n\n8.\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab4 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n* 2) STORED, c int GENERATED ALWAYS AS (a * 2) STORED)\"\n+);\n+\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tab5 (a int PRIMARY KEY, b int)\"\n+);\n+\n\nMaybe add comments on what is special about all these tables, so don't\nhave to read the tests later to deduce their purpose.\n\ntab4: publisher-side generated col 'b' and 'c' ==> subscriber-side\nnon-generated col 'b', and generated-col 'c'\ntab5: publisher-side non-generated col 'b' --> subscriber-side\nnon-generated col 'b'\n\n~~~\n\n9.\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION sub4 CONNECTION '$publisher_connstr'\nPUBLICATION pub4 WITH (include_generated_columns = true)\"\n+ );\n+\n\nAll the publications are created together, and all the subscriptions\nare created together except for 'sub5'. Consider including a comment\nto say why you deliberately created the 'sub5' subscription separate\nfrom all others.\n\n======\n\n99.\nPlease also see my code nitpicks attachment patch for various other\ncosmetic problems, and apply them if you agree.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 18 Jun 2024 15:26:45 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "> Hi Shubham, thanks for providing a patch.\n> I have some comments for v6-0001.\n>\n> 1. 
create_subscription.sgml\n> There is repetition of the same line.\n>\n> + <para>\n> + Specifies whether the generated columns present in the tables\n> + associated with the subscription should be replicated. If the\n> + subscriber-side column is also a generated column then this option\n> + has no effect; the replicated data will be ignored and the subscriber\n> + column will be filled as normal with the subscriber-side computed or\n> + default data.\n> + <literal>false</literal>.\n> + </para>\n> +\n> + <para>\n> + This parameter can only be set true if\n> <literal>copy_data</literal> is\n> + set to <literal>false</literal>. If the subscriber-side\n> column is also a\n> + generated column then this option has no effect; the\n> replicated data will\n> + be ignored and the subscriber column will be filled as\n> normal with the\n> + subscriber-side computed or default data.\n> + </para>\n>\n> ==============================\n> 2. subscriptioncmds.c\n>\n> 2a. The macro name should be in uppercase. We can use a short name\n> like 'SUBOPT_INCLUDE_GEN_COL'. Thought?\n> +#define SUBOPT_include_generated_columns 0x00010000\n>\n> 2b.Update macro name accordingly\n> + if (IsSet(supported_opts, SUBOPT_include_generated_columns))\n> + opts->include_generated_columns = false;\n>\n> 2c. Update macro name accordingly\n> + else if (IsSet(supported_opts, SUBOPT_include_generated_columns) &&\n> + strcmp(defel->defname, \"include_generated_columns\") == 0)\n> + {\n> + if (IsSet(opts->specified_opts, SUBOPT_include_generated_columns))\n> + errorConflictingDefElem(defel, pstate);\n> +\n> + opts->specified_opts |= SUBOPT_include_generated_columns;\n> + opts->include_generated_columns = defGetBoolean(defel);\n> + }\n>\n> 2d. Update macro name accordingly\n> + SUBOPT_RUN_AS_OWNER | SUBOPT_FAILOVER | SUBOPT_ORIGIN |\n> + SUBOPT_include_generated_columns);\n>\n>\n> ==============================\n>\n> 3. decoding_into_rel.out\n>\n> 3a. 
In comment, I think it should be \"When 'include-generated-columns'\n> = '1' the generated column 'b' values will be replicated\"\n> +-- When 'include-generated-columns' = '1' the generated column 'b'\n> values will not be replicated\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n> + data\n> +-------------------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n> + table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> + COMMIT\n>\n> 3b. In comment, I think it should be \"When 'include-generated-columns'\n> = '1' the generated column 'b' values will not be replicated\"\n> +-- When 'include-generated-columns' = '0' the generated column 'b'\n> values will be replicated\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n> + data\n> +------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:5\n> + table public.gencoltable: INSERT: a[integer]:6\n> + COMMIT\n> +(5 rows)\n>\n> =========================\n>\n> 4. Here names for both the tests are the same. 
I think we should use\n> different names.\n>\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n> +is( $result, qq(4|8\n> +5|10), 'generated columns replicated to non-generated column on subscriber');\n> +\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n> +\n> +$node_publisher->wait_for_catchup('sub3');\n> +\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n> +is( $result, qq(4|24\n> +5|25), 'generated columns replicated to non-generated column on subscriber');\n\nAll the comments are handled.\n\nThe attached Patch contains all the suggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Wed, 19 Jun 2024 16:52:20 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "> Few comments:\n> 1) Here tab1 and tab2 are exactly the same tables, just check if the\n> table tab1 itself can be used for your tests.\n> @@ -24,20 +24,50 @@ $node_publisher->safe_psql('postgres',\n> \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS\n> AS (a * 2) STORED)\"\n> );\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab2 (a int PRIMARY KEY, b int GENERATED ALWAYS\n> AS (a * 2) STORED)\"\n> +);\n\nOn the subscription side the tables have different descriptions, so we\nneed to have different tables on the publisher side.\n\n> 2) We can document that the include_generate_columns option cannot be altered.\n>\n> 3) You can mention that include-generated-columns is true by default\n> and generated column data will be selected\n> +-- When 'include-generated-columns' is not set\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> + data\n> +-------------------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:1 
b[integer]:2\n> + table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> + COMMIT\n> +(5 rows)\n>\n> 4) The comment seems to be wrong here, the comment says b will not be\n> replicated but b is being selected:\n> -- When 'include-generated-columns' = '1' the generated column 'b'\n> values will not be replicated\n> INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n> data\n> -------------------------------------------------------------\n> BEGIN\n> table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n> table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> COMMIT\n> (5 rows)\n>\n> 5) Similarly here too the comment seems to be wrong, the comment says\n> b will not replicated but b is not being selected:\n> INSERT INTO gencoltable (a) VALUES (4), (5), (6);\n> -- When 'include-generated-columns' = '0' the generated column 'b'\n> values will be replicated\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n> data\n> ------------------------------------------------\n> BEGIN\n> table public.gencoltable: INSERT: a[integer]:4\n> table public.gencoltable: INSERT: a[integer]:5\n> table public.gencoltable: INSERT: a[integer]:6\n> COMMIT\n> (5 rows)\n>\n> 6) SUBOPT_include_generated_columns change it to SUBOPT_GENERATED to\n> keep the name consistent:\n> --- a/src/backend/commands/subscriptioncmds.c\n> +++ b/src/backend/commands/subscriptioncmds.c\n> @@ -72,6 +72,7 @@\n> #define SUBOPT_FAILOVER 0x00002000\n> #define SUBOPT_LSN 0x00004000\n> #define SUBOPT_ORIGIN 0x00008000\n> +#define SUBOPT_include_generated_columns 0x00010000\n>\n> 7) The comment style 
seems to be inconsistent, both of them can start\n> in lower case\n> +-- check include-generated-columns option with generated column\n> +CREATE TABLE gencoltable (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> +-- When 'include-generated-columns' is not set\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> + data\n> +-------------------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n> + table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> + COMMIT\n> +(5 rows)\n> +\n> +-- When 'include-generated-columns' = '1' the generated column 'b'\n> values will not be replicated\n>\n> 8) This could be changed to remove the insert statements by using\n> pg_logical_slot_peek_changes:\n> -- When 'include-generated-columns' is not set\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> -- When 'include-generated-columns' = '1' the generated column 'b'\n> values will not be replicated\n> INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n> INSERT INTO gencoltable (a) VALUES (4), (5), (6);\n> -- When 'include-generated-columns' = '0' the generated column 'b'\n> values will be replicated\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n> to:\n> -- When 'include-generated-columns' is not set\n> SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> -- When 
'include-generated-columns' = '1' the generated column 'b'\n> values will not be replicated\n> SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '1');\n> -- When 'include-generated-columns' = '0' the generated column 'b'\n> values will be replicated\n> SELECT data FROM pg_logical_slot_peek_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n>\n> 9) In commit message the option used is wrong\n> include_generated_columns should actually be\n> include-generated-columns:\n> Usage from test_decoding plugin:\n> SELECT data FROM pg_logical_slot_get_changes('slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include_generated_columns','1');\n\nAll the comments are handled.\n\nPatch v8-0001 contains all the changes required. See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2BAi0CgtXiAga82bWpWB8fVcOWycNyJ_jqXm788v3R8rQ%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:06:31 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jun 17, 2024 at 1:57 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are my review comments for patch v7-0001.\n>\n> ======\n> 1. GENERAL - \\dRs+\n>\n> Shouldn't the new SUBSCRIPTION parameter be exposed via \"describe\"\n> (e.g. 
\\dRs+ mysub) the same as the other boolean parameters?\n>\n> ======\n> Commit message\n>\n> 2.\n> When 'include_generated_columns' is false then the PUBLICATION\n> col-list will ignore any generated cols even when they are present in\n> a PUBLICATION col-list\n>\n> ~\n>\n> Maybe you don't need to mention \"PUBLICATION col-list\" twice.\n>\n> SUGGESTION\n> When 'include_generated_columns' is false, generated columns are not\n> replicated, even when present in a PUBLICATION col-list.\n>\n> ~~~\n>\n> 2.\n> CREATE SUBSCRIPTION test1 connection 'dbname=postgres host=localhost port=9999\n> 'publication pub1;\n>\n> ~\n>\n> 2a.\n> (I've questioned this one in previous reviews)\n>\n> What exactly is the purpose of this statement in the commit message?\n> Was this supposed to demonstrate the usage of the\n> 'include_generated_columns' parameter?\n>\n> ~\n>\n> 2b.\n> /publication/ PUBLICATION/\n>\n>\n> ~~~\n>\n> 3.\n> If the subscriber-side column is also a generated column then\n> thisoption has no effect; the replicated data will be ignored and the\n> subscriber column will be filled as normal with the subscriber-side\n> computed or default data.\n>\n> ~\n>\n> Missing space: /thisoption/this option/\n>\n> ======\n> .../expected/decoding_into_rel.out\n>\n> 4.\n> +-- When 'include-generated-columns' is not set\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> + data\n> +-------------------------------------------------------------\n> + BEGIN\n> + table public.gencoltable: INSERT: a[integer]:1 b[integer]:2\n> + table public.gencoltable: INSERT: a[integer]:2 b[integer]:4\n> + table public.gencoltable: INSERT: a[integer]:3 b[integer]:6\n> + COMMIT\n> +(5 rows)\n>\n> Why is the default value here equivalent to\n> 'include-generated-columns' = '1' here instead of '0'? 
The default for\n> the CREATE SUBSCRIPTION parameter 'include_generated_columns' is\n> false, and IMO it seems confusing for these 2 defaults to be\n> different. Here I think it should default to '0' *regardless* of what\n> the previous functionality might have done -- e.g. this is a \"test\n> decoder\" so the parameter should behave sensibly.\n>\n> ======\n> .../test_decoding/sql/decoding_into_rel.sql\n>\n> NITPICK - wrong comments.\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 5.\n> + <varlistentry>\n> + <term>include_generated_columns</term>\n> + <listitem>\n> + <para>\n> + Boolean option to enable generated columns. This option controls\n> + whether generated columns should be included in the string\n> + representation of tuples during logical decoding in PostgreSQL.\n> + The default is false.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n>\n> Does the protocol version need to be bumped to support this new option\n> and should that be mentioned on this page similar to how all other\n> version values are mentioned?\n\nI already did the Backward Compatibility test earlier and decided that\nprotocol bump is not needed.\n\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> NITPICK - some missing words/sentence.\n> NITPICK - some missing <literal> tags.\n> NITPICK - remove duplicated sentence.\n> NITPICK - add another <para>.\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 6.\n> #define SUBOPT_ORIGIN 0x00008000\n> +#define SUBOPT_include_generated_columns 0x00010000\n>\n> Please use UPPERCASE for consistency with other macros.\n>\n> ======\n> .../libpqwalreceiver/libpqwalreceiver.c\n>\n> 7.\n> + if (options->proto.logical.include_generated_columns &&\n> + PQserverVersion(conn->streamConn) >= 170000)\n> + appendStringInfoString(&cmd, \", include_generated_columns 'on'\");\n> +\n>\n> IMO it makes more sense to say 'true' here instead of 'on'. 
It seems\n> like this was just cut/paste from the above code (where 'on' was\n> sensible).\n>\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> 8.\n> @@ -98,6 +98,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> BKI_SHARED_RELATION BKI_ROW\n> * slots) in the upstream database are enabled\n> * to be synchronized to the standbys. */\n>\n> + bool subincludegencol; /* True if generated columns must be published */\n> +\n>\n> Not fixed as claimed. This field name ought to be plural.\n>\n> /subincludegencol/subincludegencols/\n>\n> ~~~\n>\n> 9.\n> char *origin; /* Only publish data originating from the\n> * specified origin */\n> + bool includegencol; /* publish generated column data */\n> } Subscription;\n>\n> Not fixed as claimed. This field name ought to be plural.\n>\n> /includegencol/includegencols/\n>\n> ======\n> src/test/subscription/t/031_column_list.pl\n>\n> 10.\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab2 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> * 2) STORED)\"\n> +);\n> +\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab3 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> + 10) STORED)\"\n> +);\n> +\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> * 22) STORED, c int)\"\n> );\n>\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE tab2 (a int PRIMARY KEY, b int)\"\n> +);\n> +\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE tab3 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> + 20) STORED)\"\n> +);\n>\n> IMO the test needs lots more comments to describe what it is doing:\n>\n> For example, the setup deliberately has made:\n> * publisher-side tab2 has generated col 'b' but subscriber-side tab2\n> has NON-gnerated col 'b'.\n> * publisher-side tab3 has generated col 'b' but subscriber-side tab2\n> has DIFFERENT COMPUTATION generated col 'b'.\n>\n> So it will be better to have comments to explain 
all this instead of\n> having to figure it out.\n>\n> ~~~\n>\n> 11.\n> # data for initial sync\n>\n> $node_publisher->safe_psql('postgres',\n> \"INSERT INTO tab1 (a) VALUES (1), (2), (3)\");\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO tab2 (a) VALUES (1), (2), (3)\");\n>\n> $node_publisher->safe_psql('postgres',\n> - \"CREATE PUBLICATION pub1 FOR ALL TABLES\");\n> + \"CREATE PUBLICATION pub1 FOR TABLE tab1\");\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE PUBLICATION pub2 FOR TABLE tab2\");\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE PUBLICATION pub3 FOR TABLE tab3\");\n> +\n>\n> # Wait for initial sync of all subscriptions\n> $node_subscriber->wait_for_subscription_sync;\n>\n> my $result = $node_subscriber->safe_psql('postgres', \"SELECT a, b FROM tab1\");\n> is( $result, qq(1|22\n> 2|44\n> 3|66), 'generated columns initial sync');\n>\n> ~\n>\n> IMO (and for completeness) it would be better to INSERT data for all\n> the tables and alsot to validate that tables tab2 and tab3 has zero\n> rows replicated. 
Yes, I know there is 'copy_data=false', but it is\n> just easier to see all the tables instead of guessing why some are\n> omitted, and anyway this test case will be needed after the next patch\n> implements the COPY support for gen-cols.\n>\n> ~~~\n>\n> 12.\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab2 VALUES (4), (5)\");\n> +\n> +$node_publisher->wait_for_catchup('sub2');\n> +\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n> +is( $result, qq(4|8\n> +5|10), 'generated columns replicated to non-generated column on subscriber');\n> +\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n> +\n> +$node_publisher->wait_for_catchup('sub3');\n> +\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n> +is( $result, qq(4|24\n> +5|25), 'generated columns replicated to non-generated column on subscriber');\n> +\n>\n> Here also I think there should be explicit comments about what these\n> cases are testing, what results you are expecting, and why. The\n> comments will look something like the message parameter of those\n> safe_psql(...)\n>\n> e.g.\n> # confirm generated columns ARE replicated when the subscriber-side\n> column is not generated\n>\n> e.g.\n> # confirm generated columns are NOT replicated when the\n> subscriber-side column is also generated\n>\n> ======\n>\n> 99.\n> Please also see my nitpicks attachment patch for various other\n> cosmetic and docs problems, and apply theseif you agree:\n> - documentation wording/rendering\n> - wrong comments\n> - spacing\n> - etc.\n\nAll the comments are handled.\n\nPatch v8-0001 contains all the changes required. 
See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2BAi0CgtXiAga82bWpWB8fVcOWycNyJ_jqXm788v3R8rQ%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Wed, 19 Jun 2024 17:30:06 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On 19.06.24 13:22, Shubham Khanna wrote:\n> All the comments are handled.\n> \n> The attached Patch contains all the suggested changes.\n\nPlease also take a look at the proposed patch for virtual generated \ncolumns [0] and consider how that would affect your patch. I think your \nfeature can only replicate *stored* generated columns. So perhaps the \ndocumentation and terminology in your patch should reflect that.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/a368248e-69e4-40be-9c07-6c3b5880b0a6@eisentraut.org\n\n\n\n", "msg_date": "Wed, 19 Jun 2024 18:13:14 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are my review comments for v8-0001.\n\n======\nCommit message.\n\n1.\nIt seems like the patch name was accidentally omitted, so it became a\nmess when it defaulted to the 1st paragraph of the commit message.\n\n======\ncontrib/test_decoding/test_decoding.c\n\n2.\n+ data->include_generated_columns = true;\n\nI previously posted a comment [1, #4] that this should default to\nfalse; IMO it is unintuitive for the test_decoding to have an\n*opposite* default behaviour compared to CREATE SUBSCRIPTION.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\nNITPICK - remove the inconsistent blank line in SGML\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3.\n+#define SUBOPT_include_generated_columns 0x00010000\n\nI previously posted a comment [1, #6] that this should be UPPERCASE,\nbut it is not yet 
fixed.\n\n======\nsrc/bin/psql/describe.c\n\nNITPICK - move and reword the bogus comment\n\n~\n\n4.\n+ if (pset.sversion >= 170000)\n+ appendPQExpBuffer(&buf,\n+ \", subincludegencols AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"include_generated_columns\"));\n\n4a.\nFor consistency with every other parameter, that column title should\nbe written in words \"Include generated columns\" (not\n\"include_generated_columns\").\n\n~\n\n4b.\nIMO this new column belongs with the other subscription parameter\ncolumns (e.g. put it ahead of the \"Conninfo\" column).\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nNITPICK - fixed a comment\n\n5.\nIMO, it would be better for readability if all the matching CREATE\nTABLE for publisher and subscriber are kept together, instead of the\ncurrent code which is creating all publisher tables and then creating\nall subscriber tables.\n\n~~~\n\n6.\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n+is( $result, qq(4|8\n+5|10), 'confirm generated columns ARE replicated when the\nsubscriber-side column is not generated');\n+\n...\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n+is( $result, qq(4|24\n+5|25), 'confirm generated columns are NOT replicated when the\nsubscriber-side column is also generated');\n+\n\n6a.\nThese SELECT all need ORDER BY to protect against the SELECT *\nreturning rows in some unexpected order.\n\n~\n\n6b.\nIMO there should be more comments here to explain how you can tell the\ncolumn was NOT replicated. E.g. 
it is because the result value of 'b'\nis the subscriber-side computed value (which you made deliberately\ndifferent to the publisher-side computed value).\n\n======\n\n99.\nPlease also refer to the attached nitpicks top-up patch for minor\ncosmetic stuff.\n\n======\n[1] https://www.postgresql.org/message-id/CAHv8RjLeZtTeXpFdoY6xCPO41HtuOPMSSZgshVdb%2BV%3Dp2YHL8Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 20 Jun 2024 13:33:14 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, 19 Jun 2024 at 21:43, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 19.06.24 13:22, Shubham Khanna wrote:\n> > All the comments are handled.\n> >\n> > The attached Patch contains all the suggested changes.\n>\n> Please also take a look at the proposed patch for virtual generated\n> columns [0] and consider how that would affect your patch. I think your\n> feature can only replicate *stored* generated columns. So perhaps the\n> documentation and terminology in your patch should reflect that.\n\nThis patch is unable to manage virtual generated columns because it\nstores NULL values for them. Along with documentation the initial sync\ncommand being generated also should be changed to sync data\nexclusively for stored generated columns, omitting virtual ones. 
I\nsuggest treating these changes as a separate patch(0003) for future\nmerging or a separate commit, depending on the order of patch\nacceptance.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 20 Jun 2024 12:51:57 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 18 Jun 2024 at 10:57, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are my review comments for patch v7-0002\n>\n> ======\n> Commit Message\n>\n> NITPICKS\n> - rearrange paragraphs\n> - typo \"donot\"\n> - don't start a sentence with \"And\"\n> - etc.\n>\n> Please see the attachment for my suggested commit message text updates\n> and take from it whatever you agree with.\n>\nFixed\n\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 1.\n> + If the subscriber-side column is also a generated column\n> then this option\n> + has no effect; the replicated data will be ignored and the subscriber\n> + column will be filled as normal with the subscriber-side computed or\n> + default data. And during table synchronization, the data\n> corresponding to\n> + the generated column on subscriber-side will not be sent from the\n> + publisher to the subscriber.\n>\n> This text already mentions subscriber-side generated cols. IMO you\n> don't need to say anything at all about table synchronization --\n> that's just an internal code optimization, which is not something the\n> user needs to know about. IOW, the entire last sentence (\"And\n> during...\") should be removed.\n>\nFixed\n\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 2. logicalrep_rel_open\n>\n> - if (attr->attisdropped)\n> + if (attr->attisdropped ||\n> + (!MySubscription->includegencol && attr->attgenerated))\n> {\n> entry->attrmap->attnums[i] = -1;\n> continue;\n>\n> ~\n>\n> Maybe I'm mistaken, but isn't this code for skipping checking for\n> \"missing\" subscriber-side (aka local) columns? 
Can't it just\n> unconditionally skip every attr->attgenerated -- i.e. why does it\n> matter if the MySubscription->includegencol was set or not?\n>\nIn case 'include_generated_columns' is 'true'. column list in\nremoterel will have an entry for generated columns.\nSo, in this case if we skip every attr->attgenerated, we will get a\nmissing column error.\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 3. make_copy_attnamelist\n>\n> - for (i = 0; i < rel->remoterel.natts; i++)\n> + desc = RelationGetDescr(rel->localrel);\n> +\n> + for (i = 0; i < desc->natts; i++)\n> {\n> - attnamelist = lappend(attnamelist,\n> - makeString(rel->remoterel.attnames[i]));\n> + int attnum;\n> + Form_pg_attribute attr = TupleDescAttr(desc, i);\n> +\n> + if (!attr->attgenerated)\n> + continue;\n> +\n> + attnum = logicalrep_rel_att_by_name(&rel->remoterel,\n> + NameStr(attr->attname));\n> +\n> + /*\n> + * Check if subscription table have a generated column with same\n> + * column name as a non-generated column in the corresponding\n> + * publication table.\n> + */\n> + if (attnum >=0 && !attgenlist[attnum])\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"logical replication target relation \\\"%s.%s\\\" is missing\n> replicated column: \\\"%s\\\"\",\n> + rel->remoterel.nspname, rel->remoterel.relname, NameStr(attr->attname))));\n> +\n> + if (attnum >= 0)\n> + gencollist = lappend_int(gencollist, attnum);\n> }\n>\n> ~\n>\n> NITPICK - Use C99-style for loop variables\n> NITPICK - Typo in comment\n> NITPICK - spaces\n>\n> ~\n>\n> 3a.\n> I think above code should be refactored so there is only one check for\n> \"if (attnum >= 0)\" -- e.g. 
other condition should be nested.\n>\n> ~\n>\n> 3b.\n> That ERROR message says \"missing replicated column\", but that doesn't\n> seem much like what the code-comment was saying this code is about.\n>\nFixed\n\n> ~~~\n>\n> 4.\n> + for (i = 0; i < rel->remoterel.natts; i++)\n> + {\n> +\n> + if (gencollist != NIL && j < gencollist->length &&\n> + list_nth_int(gencollist, j) == i)\n> + j++;\n> + else\n> + attnamelist = lappend(attnamelist,\n> + makeString(rel->remoterel.attnames[i]));\n> + }\n>\n> NITPICK - Use C99-style for loop variables\n> NITPICK - Unnecessary blank lines\n>\n> ~\n>\n> IIUC the subscriber-side table and the publisher-side table do NOT\n> have to have all the columns in identical order for the logical\n> replication to work correcly. AFAIK it works fine so long as the\n> column names match for the replicated columns. Therefore, I am\n> suspicious that this new patch code seems to be imposing some new\n> ordering assumptions/restrictions (e.g. list_nth_int stuff) which are\n> not current requirements.\n>\n> ~~~\n>\n> copy_table:\n>\n> NITPICK - comment typo\n> NITPICK - comment wording\n>\nFixed\n\n> ~\n>\n> 5.\n> + int i = 0;\n> + ListCell *l;\n> +\n> appendStringInfoString(&cmd, \"COPY (SELECT \");\n> - for (int i = 0; i < lrel.natts; i++)\n> + foreach(l, attnamelist)\n> {\n> - appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n> - if (i < lrel.natts - 1)\n> + appendStringInfoString(&cmd, quote_identifier(strVal(lfirst(l))));\n> + if (i < attnamelist->length - 1)\n> appendStringInfoString(&cmd, \", \");\n> + i++;\n> }\n> IIUC for new code like this, it is preferred to use the foreach*\n> macros instead of ListCell.\n>\nFixed\n\n> ======\n> src/test/regress/sql/subscription.sql\n>\n> 6.\n> --- fail - copy_data and include_generated_columns are mutually\n> exclusive options\n> -CREATE SUBSCRIPTION sub2 CONNECTION 'dbname=regress_doesnotexist'\n> PUBLICATION testpub WITH (include_generated_columns = true);\n> -ERROR: copy_data = true 
and include_generated_columns = true are\n> mutually exclusive options\n>\n> It is OK to delete this test now but IMO still needs to be some\n> \"include_generated_columns must be boolean\" test cases (e.g. same as\n> there was two_phase). Actually, this should probably be done by the\n> 0001 patch.\n>\nFixed\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 7.\n> All the PRIMARY KEY stuff may be overkill. Are primary keys really\n> needed for these tests?\n>\nFixed\n\n> ~~~\n>\n> 8.\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab4 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> * 2) STORED, c int GENERATED ALWAYS AS (a * 2) STORED)\"\n> +);\n> +\n> +$node_publisher->safe_psql('postgres',\n> + \"CREATE TABLE tab5 (a int PRIMARY KEY, b int)\"\n> +);\n> +\n>\n> Maybe add comments on what is special about all these tables, so don't\n> have to read the tests later to deduce their purpose.\n>\n> tab4: publisher-side generated col 'b' and 'c' ==> subscriber-side\n> non-generated col 'b', and generated-col 'c'\n> tab5: publisher-side non-generated col 'b' --> subscriber-side\n> non-generated col 'b'\n>\nFixed\n\n> ~~~\n>\n> 9.\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE SUBSCRIPTION sub4 CONNECTION '$publisher_connstr'\n> PUBLICATION pub4 WITH (include_generated_columns = true)\"\n> + );\n> +\n>\n> All the publications are created together, and all the subscriptions\n> are created together except for 'sub5'. Consider including a comment\n> to say why you deliberately created the 'sub5' subscription separate\n> from all others.\n>\nFixed\n\n> ======\n>\n> 99.\n> Please also see my code nitpicks attachment patch for various other\n> cosmetic problems, and apply them if you agree.\n>\nApplied the changes\n\nI have fixed the comments and attached the patches. I have also\nattached the v9-0003 patch. It will resolve the issue suggested by\nVignesh in [1]. 
I have also updated the documentation for the same.\nv9-0001 - Not Modified\nv9-0002 - Support replication of generated columns during initial sync.\nv9-0003 - Fix behaviour of tablesync for Virtual Generated Columns.\n\n[1]: https://www.postgresql.org/message-id/CALDaNm3Ufg872XqgPvBVzXHvUVenu-8%2BGz2dyEuKq3CN0UxfKw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Thu, 20 Jun 2024 18:30:19 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 20 Jun 2024 at 12:52, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 19 Jun 2024 at 21:43, Peter Eisentraut <peter@eisentraut.org> wrote:\n> >\n> > On 19.06.24 13:22, Shubham Khanna wrote:\n> > > All the comments are handled.\n> > >\n> > > The attached Patch contains all the suggested changes.\n> >\n> > Please also take a look at the proposed patch for virtual generated\n> > columns [0] and consider how that would affect your patch. I think your\n> > feature can only replicate *stored* generated columns. So perhaps the\n> > documentation and terminology in your patch should reflect that.\n>\n> This patch is unable to manage virtual generated columns because it\n> stores NULL values for them. Along with documentation the initial sync\n> command being generated also should be changed to sync data\n> exclusively for stored generated columns, omitting virtual ones. 
I\n> suggest treating these changes as a separate patch(0003) for future\n> merging or a separate commit, depending on the order of patch\n> acceptance.\n>\n\nI have addressed the issue and updated the documentation accordingly.\nAnd created a new 0003 patch.\nPlease refer to v9-0003 patch for the same in [1].\n\n[1]: https://www.postgresql.org/message-id/CANhcyEXmjLEPNgOSAtjS4YGb9JvS8w-SO9S%2BjRzzzXo2RavNWw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Thu, 20 Jun 2024 18:33:15 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham, here are some more patch v8-0001 comments that I missed yesterday.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n1.\nAre the PRIMARY KEY qualifiers needed for the new tab2, tab3 tables? I\ndon't think so.\n\n~~~\n\n2.\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n+is( $result, qq(4|8\n+5|10), 'confirm generated columns ARE replicated when the\nsubscriber-side column is not generated');\n+\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n+\n+$node_publisher->wait_for_catchup('sub3');\n+\n+$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n+is( $result, qq(4|24\n+5|25), 'confirm generated columns are NOT replicated when the\nsubscriber-side column is also generated');\n+\n\nIt would be prudent to do explicit \"SELECT a,b\" instead of \"SELECT *\",\nfor readability and to avoid any surprises.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 21 Jun 2024 12:52:57 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some review comments for patch v9-0002.\n\n======\nsrc/backend/replication/logical/relation.c\n\n1. 
logicalrep_rel_open\n\n- if (attr->attisdropped)\n+ if (attr->attisdropped ||\n+ (!MySubscription->includegencols && attr->attgenerated))\n\nYou replied to my question from the previous review [1, #2] as follows:\nIn case 'include_generated_columns' is 'true'. column list in\nremoterel will have an entry for generated columns. So, in this case\nif we skip every attr->attgenerated, we will get a missing column\nerror.\n\n~\n\nTBH, the reason seems very subtle to me. Perhaps that\n\"(!MySubscription->includegencols && attr->attgenerated))\" condition\nshould be coded as a separate \"if\", so then you can include a comment\nsimilar to your answer, to explain it.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nmake_copy_attnamelist:\n\nNITPICK - punctuation in function comment\nNITPICK - add/reword some more comments\nNITPICK - rearrange comments to be closer to the code they are commenting\n\n~\n\n2. make_copy_attnamelist.\n\n+ /*\n+ * Construct column list for COPY.\n+ */\n+ for (int i = 0; i < rel->remoterel.natts; i++)\n+ {\n+ if(!gencollist[i])\n+ attnamelist = lappend(attnamelist,\n+ makeString(rel->remoterel.attnames[i]));\n+ }\n\nIIUC isn't this assuming that the attribute number (aka column order)\nis the same on the subscriber side (e.g. gencollist idx) and on the\npublisher side (e.g. remoterel.attnames[i])? AFAIK logical\nreplication does not require the ordering to match like that, so I am\nsuspicious that this new logic is accidentally imposing new\nunwanted assumptions/restrictions. I had asked the same question\nbefore [1-#4] about this code, but there was no reply.\n\nIdeally, there would be more test cases for when the columns\n(including the generated ones) are all in different orders on the\npub/sub tables.\n\n~~~\n\n3. 
General - varnames.\n\nIt would help with understanding if the 'attgenlist' variables in all\nthese functions are re-named to make it very clear that this is\nreferring to the *remote* (publisher-side) table genlist, not the\nsubscriber table genlist.\n\n~~~\n\n4.\n+ int i = 0;\n+\n appendStringInfoString(&cmd, \"COPY (SELECT \");\n- for (int i = 0; i < lrel.natts; i++)\n+ foreach_ptr(ListCell, l, attnamelist)\n {\n- appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n- if (i < lrel.natts - 1)\n+ appendStringInfoString(&cmd, quote_identifier(strVal(l)));\n+ if (i < attnamelist->length - 1)\n appendStringInfoString(&cmd, \", \");\n+ i++;\n }\n\n4a.\nI think the purpose of the new macros is to avoid using ListCell, and\nalso 'l' is an unhelpful variable name. Shouldn't this code be more\nlike:\nforeach_node(String, att_name, attnamelist)\n\n~\n\n4b.\nThe code can be far simpler if you just put the comma (\", \") always\nup-front except the *first* iteration, so you can avoid checking the\nlist length every time. For example:\n\nif (i++)\n appendStringInfoString(&cmd, \", \");\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n5. General.\n\nHmm. This patch 0002 included many formatting changes to tables tab2\nand tab3 and subscriptions sub2 and sub3 but they do not belong here.\nThe proper formatting for those needs to be done back in patch 0001\nwhere they were introduced. Patch 0002 should just concentrate only on\nthe new stuff for patch 0002.\n\n~\n\n6. CREATE TABLES would be better in pairs\n\nIMO it will be better if the matching CREATE TABLE for pub and sub are\nkept together, instead of separating them by doing all pub then all\nsub. I previously made the same comment for patch 0001, so maybe it\nwill be addressed next time...\n\n~\n\n7. 
SELECT *\n\n+$result =\n+ $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab4 ORDER BY a\");\n\nIt will be prudent to do explicit \"SELECT a,b,c\" instead of \"SELECT\n*\", for readability and so there are no surprises.\n\n======\n\n99.\nPlease also refer to my attached nitpicks diff for numerous cosmetic\nchanges, and apply if you agree.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPtAsEc3PEB1KUk1kFF5tcCrDCCTcbboougO29vP1B4E2Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 21 Jun 2024 13:33:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham/Shlok.\n\nFYI, my patch describing the current PG17 behaviour of logical\nreplication of generated columns was recently pushed [1].\n\nNote that this will have some impact on your patch set. e.g., you are\nchanging the current replication behaviour, so the \"Generated Columns\"\nsection note will now need to be modified by your patches.\n\n======\n[1] https://github.com/postgres/postgres/commit/7a089f6e6a14ca3a5dc8822c393c6620279968b9\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 21 Jun 2024 15:17:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some review comments for patch v9-0003\n\n======\nCommit Message\n\n/fix/fixes/\n\n======\n1.\nGeneral. Is tablesync enough?\n\nI don't understand why the patch is only concerned about tablesync.\nDoes it make sense to prevent VIRTUAL column replication during\ntablesync, if you aren't also going to prevent VIRTUAL columns from\nnormal logical replication (e.g. when copy_data = false)? Or is this\nalready handled somewhere?\n\n~~~\n\n2.\nGeneral. 
Missing test.\n\nAdd some test cases to verify behaviour is different for STORED versus\nVIRTUAL generated columns\n\n======\nsrc/sgml/ref/create_subscription.sgml\n\nNITPICK - consider rearranging as shown in my nitpicks diff\nNITPICK - use <literal> sgml markup for STORED\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3.\n- if ((walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n- walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000) ||\n- !MySubscription->includegencols)\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) < 170000)\n+ {\n+ if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000)\n appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n+ }\n+ else if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 170000)\n+ {\n+ if(MySubscription->includegencols)\n+ appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");\n+ else\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n+ }\n\nIMO this logic is too tricky to remain uncommented -- please add some comments.\nAlso, it seems somewhat complex. I think you can achieve the same more simply:\n\nSUGGESTION\n\nif (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000)\n{\n bool gencols_allowed = walrcv_server_version(LogRepWorkerWalRcvConn) >= 170000\n && MySubscription->includegencols;\n if (gencols_allowed)\n {\n /* Replication of generated cols is supported, but not VIRTUAL cols. */\n appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");\n }\n else\n {\n /* Replication of generated cols is not supported. 
*/\n appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n }\n}\n\n======\n\n99.\nPlease refer also to my attached nitpick diffs and apply those if you agree.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 21 Jun 2024 17:20:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham/Shlok.\n\nFYI, there is some other documentation page that mentions generated\ncolumn replication messages.\n\nThis page [1] says:\n\"Next, the following message part appears for each column included in\nthe publication (except generated columns):\"\n\nBut, with the introduction of your new feature, I think that the\n\"except generated columns\" wording is not strictly correct anymore.\nIOW that docs page needs updating but AFAICT your patches have not\naddressed this yet.\n\n======\n[1] https://www.postgresql.org/docs/17/protocol-logicalrep-message-formats.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 21 Jun 2024 17:38:22 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Jun 20, 2024 at 9:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are my review comments for v8-0001.\n>\n> ======\n> Commit message.\n>\n> 1.\n> It seems like the patch name was accidentally omitted, so it became a\n> mess when it defaulted to the 1st paragraph of the commit message.\n>\n> ======\n> contrib/test_decoding/test_decoding.c\n>\n> 2.\n> + data->include_generated_columns = true;\n>\n> I previously posted a comment [1, #4] that this should default to\n> false; IMO it is unintuitive for the test_decoding to have an\n> *opposite* default behaviour compared to CREATE SUBSCRIPTION.\n>\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> NITPICK - remove the inconsistent blank 
line in SGML\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 3.\n> +#define SUBOPT_include_generated_columns 0x00010000\n>\n> I previously posted a comment [1, #6] that this should be UPPERCASE,\n> but it is not yet fixed.\n>\n> ======\n> src/bin/psql/describe.c\n>\n> NITPICK - move and reword the bogus comment\n>\n> ~\n>\n> 4.\n> + if (pset.sversion >= 170000)\n> + appendPQExpBuffer(&buf,\n> + \", subincludegencols AS \\\"%s\\\"\\n\",\n> + gettext_noop(\"include_generated_columns\"));\n>\n> 4a.\n> For consistency with every other parameter, that column title should\n> be written in words \"Include generated columns\" (not\n> \"include_generated_columns\").\n>\n> ~\n>\n> 4b.\n> IMO this new column belongs with the other subscription parameter\n> columns (e.g. put it ahead of the \"Conninfo\" column).\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> NITPICK - fixed a comment\n>\n> 5.\n> IMO, it would be better for readability if all the matching CREATE\n> TABLE for publisher and subscriber are kept together, instead of the\n> current code which is creating all publisher tables and then creating\n> all subscriber tables.\n>\n> ~~~\n>\n> 6.\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n> +is( $result, qq(4|8\n> +5|10), 'confirm generated columns ARE replicated when the\n> subscriber-side column is not generated');\n> +\n> ...\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n> +is( $result, qq(4|24\n> +5|25), 'confirm generated columns are NOT replicated when the\n> subscriber-side column is also generated');\n> +\n>\n> 6a.\n> These SELECT all need ORDER BY to protect against the SELECT *\n> returning rows in some unexpected order.\n>\n> ~\n>\n> 6b.\n> IMO there should be more comments here to explain how you can tell the\n> column was NOT replicated. E.g. 
it is because the result value of 'b'\n> is the subscriber-side computed value (which you made deliberately\n> different to the publisher-side computed value).\n>\n> ======\n>\n> 99.\n> Please also refer to the attached nitpicks top-up patch for minor\n> cosmetic stuff.\n\nAll the comments are handled.\n\nThe attached Patch contains all the suggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Sat, 22 Jun 2024 23:36:00 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Jun 21, 2024 at 8:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham, here are some more patch v8-0001 comments that I missed yesterday.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 1.\n> Are the PRIMARY KEY qualifiers needed for the new tab2, tab3 tables? I\n> don't think so.\n>\n> ~~~\n>\n> 2.\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab2\");\n> +is( $result, qq(4|8\n> +5|10), 'confirm generated columns ARE replicated when the\n> subscriber-side column is not generated');\n> +\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab3 VALUES (4), (5)\");\n> +\n> +$node_publisher->wait_for_catchup('sub3');\n> +\n> +$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab3\");\n> +is( $result, qq(4|24\n> +5|25), 'confirm generated columns are NOT replicated when the\n> subscriber-side column is also generated');\n> +\n>\n> It would be prudent to do explicit \"SELECT a,b\" instead of \"SELECT *\",\n> for readability and to avoid any surprises.\n\nBoth the comments are handled.\n\nPatch v9-0001 contains all the changes required. 
See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2B6kwOGmn5MsRaTmciJDxtvNsyszMoPXV62OGPGzkxrCg%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Sat, 22 Jun 2024 23:38:57 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\r\n\r\nThanks for sharing the new patch! You shared it as v9, but it should be v10, right?\r\nAlso, since there was no commitfest entry, I registered one [1]. You can rename the\r\ntitle as needed. Currently, the CFbot says OK.\r\n\r\nAnyway, below are my comments.\r\n\r\n01. General\r\nYour patch contains unnecessary changes. Please remove all of them. E.g., \r\n\r\n```\r\n \t\t\t\t\t\t \" s.subpublications,\\n\");\r\n-\r\n```\r\nAnd\r\n```\r\n \t\tappendPQExpBufferStr(query, \" o.remote_lsn AS suboriginremotelsn,\\n\"\r\n-\t\t\t\t\t\t\t \" s.subenabled,\\n\");\r\n+\t\t\t\t\t\t\t\" s.subenabled,\\n\");\r\n```\r\n\r\n02. General\r\nAgain, please run pgindent/pgperltidy.\r\n\r\n03. test_decoding\r\nPreviously I suggested that the default value of include_generated_columns\r\nshould be true, and you modified it accordingly. However, Peter suggested the\r\nopposite [3] and you just revised accordingly. I think either way might be okay, but\r\nat least you should clarify why you preferred to set the default to false.\r\n\r\n04. decoding_into_rel.sql\r\nAccording to the comment atop this file, this test should insert its result into a table,\r\nbut the added case does not. We should put it in another place, i.e., create another\r\nfile.\r\n\r\n05. decoding_into_rel.sql\r\n```\r\n+-- when 'include-generated-columns' is not set\r\n```\r\nCan you clarify the expected behavior as a comment?\r\n\r\n06. 
getSubscriptions\r\n```\r\n+\telse\r\n+\t\tappendPQExpBufferStr(query,\r\n+\t\t\t\t\t\t\" false AS subincludegencols,\\n\");\r\n```\r\nI think the comma is not needed.\r\nAlso, this error means that you did not test using pg_dump against instances prior to PG16.\r\nPlease verify whether we can dump subscriptions and restore them accordingly.\r\n\r\n[1]: https://commitfest.postgresql.org/48/5068/\r\n[2]: https://www.postgresql.org/message-id/OSBPR01MB25529997E012DEABA8E15A02F5E52%40OSBPR01MB2552.jpnprd01.prod.outlook.com\r\n[3]: https://www.postgresql.org/message-id/CAHut%2BPujrRQ63ju8P41tBkdjkQb4X9uEdLK_Wkauxum1MVUdfA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Sun, 23 Jun 2024 04:58:15 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some patch v9-0001 comments.\n\nI saw Kuroda-san has already posted comments for this patch, so there\nmay be some duplication here.\n\n======\nGENERAL\n\n1.\nThe later patches 0002 etc. are checking to support only STORED\ngencols. But doesn't that restriction belong in this patch 0001 so\nVIRTUAL columns are not decoded etc. in the first place... 
(??)\n\n~~~\n\n2.\nThe \"Generated Columns\" docs mentioned in my previous review comment\n[2] should be modified by this 0001 patch.\n\n~~~\n\n3.\nI think the \"Message Format\" page mentioned in my previous review\ncomment [3] should be modified by this 0001 patch.\n\n======\nCommit message\n\n4.\nThe patch name is still broken, as previously mentioned [1, #1].\n\n======\ndoc/src/sgml/protocol.sgml\n\n5.\nShould these docs refer to STORED generated columns, instead of\njust generated columns?\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n6.\nShould these docs refer to STORED generated columns, instead of\njust generated columns?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\ngetSubscriptions:\nNITPICK - tabs\nNITPICK - patch removed a blank line it should not be touching\nNITPICK - patch altered indents it should not be touching\nNITPICK - a missing blank line that was previously present\n\n7.\n+ else\n+ appendPQExpBufferStr(query,\n+ \" false AS subincludegencols,\\n\");\n\nThere is an unwanted comma here.\n\n~\n\ndumpSubscription\nNITPICK - patch altered indents it should not be touching\n\n======\nsrc/bin/pg_dump/pg_dump.h\n\nNITPICK - unnecessary blank line\n\n======\nsrc/bin/psql/describe.c\n\ndescribeSubscriptions\nNITPICK - bad indentation\n\n8.\nIn my previous review [1, #4b] I suggested this new column should be\nin a different order (e.g. adjacent to the other ones ahead of\n'Conninfo'), but this is not yet addressed.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nNITPICK - missing space in comment\nNITPICK - misleading \"because\" wording in the comment\n\n======\n\n99.\nSee also my attached nitpicks diff for cosmetic issues. 
Please apply\nwhatever you agree with.\n\n======\n[1] My v8-0001 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPujrRQ63ju8P41tBkdjkQb4X9uEdLK_Wkauxum1MVUdfA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPvsRWq9t2tEErt5ZWZCVpNFVZjfZ_owqfdjOhh4yXb_3Q%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAHut%2BPsHsT3V1wQ5uoH9ynbmWn4ZQqOe34X%2Bg37LSi7sgE_i2g%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Mon, 24 Jun 2024 12:51:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 21 Jun 2024 at 09:03, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some review comments for patch v9-0002.\n>\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 1. logicalrep_rel_open\n>\n> - if (attr->attisdropped)\n> + if (attr->attisdropped ||\n> + (!MySubscription->includegencols && attr->attgenerated))\n>\n> You replied to my question from the previous review [1, #2] as follows:\n> In case 'include_generated_columns' is 'true'. column list in\n> remoterel will have an entry for generated columns. So, in this case\n> if we skip every attr->attgenerated, we will get a missing column\n> error.\n>\n> ~\n>\n> TBH, the reason seems very subtle to me. Perhaps that\n> \"(!MySubscription->includegencols && attr->attgenerated))\" condition\n> should be coded as a separate \"if\", so then you can include a comment\n> similar to your answer, to explain it.\nFixed\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> make_copy_attnamelist:\n>\n> NITPICK - punctuation in function comment\n> NITPICK - add/reword some more comments\n> NITPICK - rearrange comments to be closer to the code they are commenting\nApplied the changes\n\n> ~\n>\n> 2. 
make_copy_attnamelist.\n>\n> + /*\n> + * Construct column list for COPY.\n> + */\n> + for (int i = 0; i < rel->remoterel.natts; i++)\n> + {\n> + if(!gencollist[i])\n> + attnamelist = lappend(attnamelist,\n> + makeString(rel->remoterel.attnames[i]));\n> + }\n>\n> IIUC isn't this assuming that the attribute number (aka column order)\n> is the same on the subscriber side (e.g. gencollist idx) and on the\n> publisher side (e.g. remoterel.attnames[i]). AFAIK logical\n> replication does not require this ordering must be match like that,\n> therefore I am suspicious this new logic is accidentally imposing new\n> unwanted assumptions/restrictions. I had asked the same question\n> before [1-#4] about this code, but there was no reply.\n>\n> Ideally, there would be more test cases for when the columns\n> (including the generated ones) are all in different orders on the\n> pub/sub tables.\n'gencollist' is set according to the remoterel\n+ gencollist[attnum] = true;\nwhere attnum is the attribute number of the corresponding column on remote rel.\n\nI have also added the tests to confirm the behaviour\n\n> ~~~\n>\n> 3. General - varnames.\n>\n> It would help with understanding if the 'attgenlist' variables in all\n> these functions are re-named to make it very clear that this is\n> referring to the *remote* (publisher-side) table genlist, not the\n> subscriber table genlist.\nFixed\n\n> ~~~\n>\n> 4.\n> + int i = 0;\n> +\n> appendStringInfoString(&cmd, \"COPY (SELECT \");\n> - for (int i = 0; i < lrel.natts; i++)\n> + foreach_ptr(ListCell, l, attnamelist)\n> {\n> - appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n> - if (i < lrel.natts - 1)\n> + appendStringInfoString(&cmd, quote_identifier(strVal(l)));\n> + if (i < attnamelist->length - 1)\n> appendStringInfoString(&cmd, \", \");\n> + i++;\n> }\n>\n> 4a.\n> I think the purpose of the new macros is to avoid using ListCell, and\n> also 'l' is an unhelpful variable name. 
Shouldn't this code be more\n> like:\n> foreach_node(String, att_name, attnamelist)\n>\n> ~\n>\n> 4b.\n> The code can be far simpler if you just put the comma (\", \") always\n> up-front except the *first* iteration, so you can avoid checking the\n> list length every time. For example:\n>\n> if (i++)\n> appendStringInfoString(&cmd, \", \");\nFixed\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 5. General.\n>\n> Hmm. This patch 0002 included many formatting changes to tables tab2\n> and tab3 and subscriptions sub2 and sub3 but they do not belong here.\n> The proper formatting for those needs to be done back in patch 0001\n> where they were introduced. Patch 0002 should just concentrate only on\n> the new stuff for patch 0002.\nFixed\n\n> ~\n>\n> 6. CREATE TABLES would be better in pairs\n>\n> IMO it will be better if the matching CREATE TABLE for pub and sub are\n> kept together, instead of separating them by doing all pub then all\n> sub. I previously made the same comment for patch 0001, so maybe it\n> will be addressed next time...\nFixed\n\n> ~\n>\n> 7. 
SELECT *\n>\n> +$result =\n> + $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab4 ORDER BY a\");\n>\n> It will be prudent to do explicit \"SELECT a,b,c\" instead of \"SELECT\n> *\", for readability and so there are no surprises.\nFixed\n\n> ======\n>\n> 99.\n> Please also refer to my attached nitpicks diff for numerous cosmetic\n> changes, and apply if you agree.\nApplied the changes.\n\n> ======\n> [1] https://www.postgresql.org/message-id/CAHut%2BPtAsEc3PEB1KUk1kFF5tcCrDCCTcbboougO29vP1B4E2Q%40mail.gmail.com\n\nI have attached a v10 patch to address the comments:\nv10-0001 - Not Modified\nv10-0002 - Support replication of generated columns during initial sync.\nv10-0003 - Fix behaviour for Virtual Generated Columns.\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 24 Jun 2024 18:26:03 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 21 Jun 2024 at 12:51, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, Here are some review comments for patch v9-0003\n>\n> ======\n> Commit Message\n>\n> /fix/fixes/\nFixed\n\n> ======\n> 1.\n> General. Is tablesync enough?\n>\n> I don't understand why is the patch only concerned about tablesync?\n> Does it make sense to prevent VIRTUAL column replication during\n> tablesync, if you aren't also going to prevent VIRTUAL columns from\n> normal logical replication (e.g. when copy_data = false)? Or is this\n> already handled somewhere?\nI checked the behaviour during incremental changes. I saw during\ndecoding 'null' values are present for Virtual Generated Columns. I\nmade the relevant changes to not support replication of Virtual\ngenerated columns.\n\n> ~~~\n>\n> 2.\n> General. 
Missing test.
>
> Add some test cases to verify behaviour is different for STORED versus
> VIRTUAL generated columns
I have not added the tests as it would give an error in cfbot.
I have added a TODO note for the same. This can be done once the
VIRTUAL generated columns patch is committed.

> ======
> src/sgml/ref/create_subscription.sgml
>
> NITPICK - consider rearranging as shown in my nitpicks diff
> NITPICK - use <literal> sgml markup for STORED
Fixed

> ======
> src/backend/replication/logical/tablesync.c
>
> 3.
> - if ((walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&
> - walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000) ||
> - !MySubscription->includegencols)
> + if (walrcv_server_version(LogRepWorkerWalRcvConn) < 170000)
> + {
> + if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000)
> appendStringInfo(&cmd, \" AND a.attgenerated = ''\");
> + }
> + else if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 170000)
> + {
> + if(MySubscription->includegencols)
> + appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");
> + else
> + appendStringInfo(&cmd, \" AND a.attgenerated = ''\");
> + }
>
> IMO this logic is too tricky to remain uncommented -- please add some comments.
> Also, it seems somewhat complex. I think you can achieve the same more simply:
>
> SUGGESTION
>
> if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000)
> {
> bool gencols_allowed = walrcv_server_version(LogRepWorkerWalRcvConn) >= 170000
> && MySubscription->includegencols;
> if (gencols_allowed)
> {
> /* Replication of generated cols is supported, but not VIRTUAL cols. */
> appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");
> }
> else
> {
> /* Replication of generated cols is not supported. 
*/
> appendStringInfo(&cmd, \" AND a.attgenerated = ''\");
> }
Fixed

> ======
>
> 99.
> Please refer also to my attached nitpick diffs and apply those if you agree.
Applied the changes.

I have attached the updated patch v10 here in [1].
[1]: https://www.postgresql.org/message-id/CANhcyEUMCk6cCbw0vVZWo8FRd6ae9CmKG%3DgKP-9Q67jLn8HqtQ%40mail.gmail.com

Thanks and Regards,
Shlok Kyal", "msg_date": "Mon, 24 Jun 2024 18:27:33 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are some review comments for the patch v10-0002.

======
Commit Message

1.
Note that we don't copy columns when the subscriber-side column is also
generated. Those will be filled as normal with the subscriber-side computed or
default data.

~

Now this patch also introduced some errors etc, so I think that patch
comment should be written differently to explicitly spell out
behaviour of every combination, something like the below:

Summary

when (include_generated_columns = true)

* publisher not-generated column => subscriber not-generated column:
This is just normal logical replication (not changed by this patch).

* publisher not-generated column => subscriber generated column: This
will give ERROR.

* publisher generated column => subscriber not-generated column: The
publisher generated column value is copied.

* publisher generated column => subscriber generated column: The
publisher generated column value is not copied. 
The subscriber
generated column will be filled with the subscriber-side computed or
default data.

when (include_generated_columns = false)

* publisher not-generated column => subscriber not-generated column:
This is just normal logical replication (not changed by this patch).

* publisher not-generated column => subscriber generated column: This
will give ERROR.

* publisher generated column => subscriber not-generated column: This
will replicate nothing. The publisher generated column is not replicated.
The subscriber column will be filled with the subscriber-side default
data.

* publisher generated column => subscriber generated column: This
will replicate nothing. The publisher generated column is not replicated.
The subscriber generated column will be filled with the
subscriber-side computed or default data.

======
src/backend/replication/logical/relation.c

2.
logicalrep_rel_open:

I tested some of the \"missing column\" logic, and got the following results:

Scenario A:
PUB
test_pub=# create table t2(a int, b int);
test_pub=# create publication pub2 for table t2;
SUB
test_sub=# create table t2(a int, b int generated always as (a*2) stored);
test_sub=# create subscription sub2 connection 'dbname=test_pub'
publication pub2 with (include_generated_columns = false);
Result:
ERROR: logical replication target relation \"public.t2\" is missing
replicated column: \"b\"

~

Scenario B:
PUB/SUB identical to above, but subscription sub2 created \"with
(include_generated_columns = true);\"
Result:
ERROR: logical replication target relation \"public.t2\" has a
generated column \"b\" but corresponding column on source relation is
not a generated column

~~~

2a. Question

Why should we get 2 different error messages for what is essentially
the same problem according to whether the 'include_generated_columns'
is false or true? 
Isn't the 2nd error message the more correct and\nuseful one for scenarios like this involving generated columns?\n\nThoughts?\n\n~\n\n2b. Missing tests?\n\nI also noticed there seems no TAP test for the current \"missing\nreplicated column\" message. IMO there should be a new test introduced\nfor this because the loop involved too much bms logic to go\nuntested...\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nmake_copy_attnamelist:\nNITPICK - minor comment tweak\nNITPICK - add some spaces after \"if\" code\n\n3.\nShould you pfree the gencollist at the bottom of this function when\nyou no longer need it, for tidiness?\n\n~~~\n\n4.\n static void\n-fetch_remote_table_info(char *nspname, char *relname,\n+fetch_remote_table_info(char *nspname, char *relname, bool **remotegenlist,\n LogicalRepRelation *lrel, List **qual)\n {\n WalRcvExecResult *res;\n StringInfoData cmd;\n TupleTableSlot *slot;\n Oid tableRow[] = {OIDOID, CHAROID, CHAROID};\n- Oid attrRow[] = {INT2OID, TEXTOID, OIDOID, BOOLOID};\n+ Oid attrRow[] = {INT2OID, TEXTOID, OIDOID, BOOLOID, BOOLOID};\n Oid qualRow[] = {TEXTOID};\n bool isnull;\n+ bool *remotegenlist_res;\n\nIMO the names 'remotegenlist' and 'remotegenlist_res' should be\nswapped the other way around, because it is the function parameter\nthat is the \"result\", whereas the 'remotegenlist_res' is just the\nlocal working var for it.\n\n~~~\n\n5. 
fetch_remote_table_info\n\nNow walrcv_server_version(LogRepWorkerWalRcvConn) is used in multiple\nplaces, I think it will be better to assign this to a 'server_version'\nvariable to be used everywhere instead of having multiple function\ncalls.\n\n~~~\n\n6.\n \"SELECT a.attnum,\"\n \" a.attname,\"\n \" a.atttypid,\"\n- \" a.attnum = ANY(i.indkey)\"\n+ \" a.attnum = ANY(i.indkey),\"\n+ \" a.attgenerated != ''\"\n \" FROM pg_catalog.pg_attribute a\"\n \" LEFT JOIN pg_catalog.pg_index i\"\n \" ON (i.indexrelid = pg_get_replica_identity_index(%u))\"\n \" WHERE a.attnum > 0::pg_catalog.int2\"\n- \" AND NOT a.attisdropped %s\"\n+ \" AND NOT a.attisdropped\", lrel->remoteid);\n+\n+ if ((walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n+ walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000) ||\n+ !MySubscription->includegencols)\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n+\n\nIf the server version is < PG12 then AFAIK there was no such thing as\n\"a.attgenerated\", so shouldn't that SELECT \" a.attgenerated != ''\"\npart also be guarded by some version checking condition like in the\nWHERE? Otherwise won't it cause an ERROR for old servers?\n\n~~~\n\n7.\n /*\n- * For non-tables and tables with row filters, we need to do COPY\n- * (SELECT ...), but we can't just do SELECT * because we need to not\n- * copy generated columns. For tables with any row filters, build a\n- * SELECT query with OR'ed row filters for COPY.\n+ * For non-tables and tables with row filters and when\n+ * 'include_generated_columns' is specified as 'true', we need to do\n+ * COPY (SELECT ...), as normal COPY of generated column is not\n+ * supported. For tables with any row filters, build a SELECT query\n+ * with OR'ed row filters for COPY.\n */\n\nNITPICK. I felt this was not quite right. AFAIK the reasons for using\nthis COPY (SELECT ...) syntax is different for row-filters and\ngenerated-columns. Anyway, I updated the comment slightly in my\nnitpicks attachment. 
Please have a look at it to see if you agree with
the suggestions. Maybe I am wrong.

~~~

8.
- for (int i = 0; i < lrel.natts; i++)
+ foreach_ptr(String, att_name, attnamelist)

I'm not 100% sure, but isn't foreach_node the macro to use here,
rather than foreach_ptr?
======
src/test/subscription/t/011_generated.pl

9.
Please discuss with Shubham how to make all the tab1, tab2, tab3,
tab4, tab5, tab6 comments use the same kind of style/wording.
Currently, the patches 0001 and 0002 test comments are a bit
inconsistent.

~~~

10.
Related to above -- now that patch 0002 supports copy_data=true I
don't see why we need to test generated columns *both* for
copy_data=false and also for copy_data=true. IOW, is it really
necessary to have so many tables/tests? For example, I am thinking
some of those tests from patch 0001 can be re-used or just removed now
that copy_data=true works.

~~~

NITPICK - minor comment tweak

~~~

11.
For tab4 and tab6 I saw the initial sync and normal replication data
tests are all merged together, but I had expected to see the initial
sync and normal replication data tests separated so it would be
consistent with the earlier tab1, tab2, tab3 tests.

======

99.
Also, I have attached a nitpicks diff for some of the cosmetic review
comments mentioned above. Please apply whatever of these that you
agree with.

======
Kind Regards,
Peter Smith.
Fujitsu Australia", "msg_date": "Tue, 25 Jun 2024 16:25:50 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Dear Shlok,

Thanks for updating the patches! Below are my comments, maybe only for 0002.

01. General

IIUC, we have not discussed why ALTER SUBSCRIPTION ... SET include_generated_columns
is prohibited. Previously, it seemed okay because there were exclusive options. But now,
such restrictions are gone. 
Do you have a reason in your mind? Or is it just not considered
yet?

02. General

According to the docs, we allow altering a column to a non-generated one, by the ALTER
TABLE ... ALTER COLUMN ... DROP EXPRESSION command. I am not sure what should happen
when the command is executed on the subscriber while copying the data. Should
we continue the copy or restart? What do you think?

03. Test code

IIUC, REFRESH PUBLICATION can also lead to table synchronization. Can you add
a test for that?

04. Test code (maybe for 0001)

Please test the combination with the ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION command.

05. logicalrep_rel_open

```
+ /*
+ * In case 'include_generated_columns' is 'false', we should skip the
+ * check of missing attrs for generated columns.
+ * In case 'include_generated_columns' is 'true', we should check if
+ * corresponding column for the generated column in publication column
+ * list is present in the subscription table.
+ */
+ if (!MySubscription->includegencols && attr->attgenerated)
+ {
+ entry->attrmap->attnums[i] = -1;
+ continue;
+ }
```

This comment is not very clear to me, because here we do not skip anything.
Can you clarify the reason why attnums[i] is set to -1 and how it will be used?

06. make_copy_attnamelist

```
+ gencollist = palloc0(MaxTupleAttributeNumber * sizeof(bool));
```

I think this array is too large. Can we reduce the size to (desc->natts * sizeof(bool))?
Also, the freeing should be done.

07. make_copy_attnamelist

```
+ /* Loop to handle subscription table generated columns. */
+ for (int i = 0; i < desc->natts; i++)
```

IIUC, the loop is needed to find generated columns on the subscriber side, right?
Can you clarify that in a comment?

08. 
copy_table

```
+ /*
+ * Regular table with no row filter and 'include_generated_columns'
+ * specified as 'false' during creation of subscription.
+ */
```

I think this comment is not correct. After patching, all tablesync commands become
like COPY (SELECT ...) if include_generated_columns is set to true. Is that right?
Can we restrict this to only when the table has generated columns?

Best Regards,
Hayato Kuroda
FUJITSU LIMITED
https://www.fujitsu.com/ 

", "msg_date": "Tue, 25 Jun 2024 13:19:21 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shlok. Here are my review comments for patch v10-0003

======
General.

1.
The patch has lots of conditions like:
if (att->attgenerated && (att->attgenerated !=
ATTRIBUTE_GENERATED_STORED || !include_generated_columns))
 continue;

IMO these are hard to read. Although more verbose, please consider if
all those (for the sake of readability) would be better re-written
like below:

if (att->attgenerated)
{
 if (!include_generated_columns)
 continue;

 if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)
 continue;
}

======
contrib/test_decoding/test_decoding.c

tuple_to_stringinfo:

NITPICK - refactored the code and comments a bit here to make it easier to follow
NITPICK - there is no need to mention \"virtual\". Instead, say we only
support STORED

======
src/backend/catalog/pg_publication.c

publication_translate_columns:

NITPICK - introduced variable 'att' to simplify this code

~

2.
+ ereport(ERROR,
+ errcode(ERRCODE_INVALID_COLUMN_REFERENCE),
+ errmsg(\"cannot use virtual generated column \\\"%s\\\" in publication
column list\",
+ colname));

Is it better to avoid referring to \"virtual\" at all? 
Instead, consider\nrearranging the wording to say something like \"generated column \\\"%s\\\"\nis not STORED so cannot be used in a publication column list\"\n\n~~~\n\npg_get_publication_tables:\n\nNITPICK - split the condition code for readability\n\n======\nsrc/backend/replication/logical/relation.c\n\n3. logicalrep_rel_open\n\n+ if (attr->attgenerated && attr->attgenerated != ATTRIBUTE_GENERATED_STORED)\n+ continue;\n+\n\nIsn't this missing some code to say \"entry->attrmap->attnums[i] =\n-1;\", same as all the other nearby code is doing?\n\n~~~\n\n4.\nI felt all the \"generated column\" logic should be kept together, so\nthis new condition should be combined with the other generated column\ncondition, like:\n\nif (attr->attgenerated)\n{\n /* comment... */\n if (!MySubscription->includegencols)\n {\n entry->attrmap->attnums[i] = -1;\n continue;\n }\n\n /* comment... */\n if (attr->attgenerated != ATTRIBUTE_GENERATED_STORED)\n {\n entry->attrmap->attnums[i] = -1;\n continue;\n }\n}\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n5.\n+ if (gencols_allowed)\n+ {\n+ /* Replication of generated cols is supported, but not VIRTUAL cols. */\n+ appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");\n+ }\n\nIs it better here to use the ATTRIBUTE_GENERATED_VIRTUAL macro instead\nof the hardwired 'v'? (Maybe add another TODO comment to revisit\nthis).\n\nAlternatively, consider if it is more future-proof to rearrange so it\njust says what *is* supported instead of what *isn't* supported:\ne.g. \"AND a.attgenerated IN ('', 's')\"\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nNITPICK - some comments are missing the word \"stored\"\nNITPICK - sometimes \"col\" should be plural \"cols\"\nNITPICK = typo \"GNERATED\"\n\n======\n\n6.\nIn a previous review [1, comment #3] I mentioned that there should be\nsome docs updates on the \"Logical Replication Message Formats\" section\n53.9. 
So, I expected patch 0001 would make some changes and then patch\n0003 would have to update it again to say something about \"STORED\".\nBut all that is missing from the v10* patches.\n\n======\n\n99.\nSee also my nitpicks diff which is a top-up patch addressing all the\nnitpick comments mentioned above. Please apply all of these that you\nagree with.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPvQ8CLq-JysTTeRj4u5SC9vTVcx3AgwTHcPUEOh-UnKcQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nOn Mon, Jun 24, 2024 at 10:56 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> On Fri, 21 Jun 2024 at 09:03, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi, here are some review comments for patch v9-0002.\n> >\n> > ======\n> > src/backend/replication/logical/relation.c\n> >\n> > 1. logicalrep_rel_open\n> >\n> > - if (attr->attisdropped)\n> > + if (attr->attisdropped ||\n> > + (!MySubscription->includegencols && attr->attgenerated))\n> >\n> > You replied to my question from the previous review [1, #2] as follows:\n> > In case 'include_generated_columns' is 'true'. column list in\n> > remoterel will have an entry for generated columns. So, in this case\n> > if we skip every attr->attgenerated, we will get a missing column\n> > error.\n> >\n> > ~\n> >\n> > TBH, the reason seems very subtle to me. Perhaps that\n> > \"(!MySubscription->includegencols && attr->attgenerated))\" condition\n> > should be coded as a separate \"if\", so then you can include a comment\n> > similar to your answer, to explain it.\n> Fixed\n>\n> > ======\n> > src/backend/replication/logical/tablesync.c\n> >\n> > make_copy_attnamelist:\n> >\n> > NITPICK - punctuation in function comment\n> > NITPICK - add/reword some more comments\n> > NITPICK - rearrange comments to be closer to the code they are commenting\n> Applied the changes\n>\n> > ~\n> >\n> > 2. 
make_copy_attnamelist.\n> >\n> > + /*\n> > + * Construct column list for COPY.\n> > + */\n> > + for (int i = 0; i < rel->remoterel.natts; i++)\n> > + {\n> > + if(!gencollist[i])\n> > + attnamelist = lappend(attnamelist,\n> > + makeString(rel->remoterel.attnames[i]));\n> > + }\n> >\n> > IIUC isn't this assuming that the attribute number (aka column order)\n> > is the same on the subscriber side (e.g. gencollist idx) and on the\n> > publisher side (e.g. remoterel.attnames[i]). AFAIK logical\n> > replication does not require this ordering must be match like that,\n> > therefore I am suspicious this new logic is accidentally imposing new\n> > unwanted assumptions/restrictions. I had asked the same question\n> > before [1-#4] about this code, but there was no reply.\n> >\n> > Ideally, there would be more test cases for when the columns\n> > (including the generated ones) are all in different orders on the\n> > pub/sub tables.\n> 'gencollist' is set according to the remoterel\n> + gencollist[attnum] = true;\n> where attnum is the attribute number of the corresponding column on remote rel.\n>\n> I have also added the tests to confirm the behaviour\n>\n> > ~~~\n> >\n> > 3. 
General - varnames.\n> >\n> > It would help with understanding if the 'attgenlist' variables in all\n> > these functions are re-named to make it very clear that this is\n> > referring to the *remote* (publisher-side) table genlist, not the\n> > subscriber table genlist.\n> Fixed\n>\n> > ~~~\n> >\n> > 4.\n> > + int i = 0;\n> > +\n> > appendStringInfoString(&cmd, \"COPY (SELECT \");\n> > - for (int i = 0; i < lrel.natts; i++)\n> > + foreach_ptr(ListCell, l, attnamelist)\n> > {\n> > - appendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n> > - if (i < lrel.natts - 1)\n> > + appendStringInfoString(&cmd, quote_identifier(strVal(l)));\n> > + if (i < attnamelist->length - 1)\n> > appendStringInfoString(&cmd, \", \");\n> > + i++;\n> > }\n> >\n> > 4a.\n> > I think the purpose of the new macros is to avoid using ListCell, and\n> > also 'l' is an unhelpful variable name. Shouldn't this code be more\n> > like:\n> > foreach_node(String, att_name, attnamelist)\n> >\n> > ~\n> >\n> > 4b.\n> > The code can be far simpler if you just put the comma (\", \") always\n> > up-front except the *first* iteration, so you can avoid checking the\n> > list length every time. For example:\n> >\n> > if (i++)\n> > appendStringInfoString(&cmd, \", \");\n> Fixed\n>\n> > ======\n> > src/test/subscription/t/011_generated.pl\n> >\n> > 5. General.\n> >\n> > Hmm. This patch 0002 included many formatting changes to tables tab2\n> > and tab3 and subscriptions sub2 and sub3 but they do not belong here.\n> > The proper formatting for those needs to be done back in patch 0001\n> > where they were introduced. Patch 0002 should just concentrate only on\n> > the new stuff for patch 0002.\n> Fixed\n>\n> > ~\n> >\n> > 6. CREATE TABLES would be better in pairs\n> >\n> > IMO it will be better if the matching CREATE TABLE for pub and sub are\n> > kept together, instead of separating them by doing all pub then all\n> > sub. 
I previously made the same comment for patch 0001, so maybe it\n> > will be addressed next time...\n> Fixed\n>\n> > ~\n> >\n> > 7. SELECT *\n> >\n> > +$result =\n> > + $node_subscriber->safe_psql('postgres', \"SELECT * FROM tab4 ORDER BY a\");\n> >\n> > It will be prudent to do explicit \"SELECT a,b,c\" instead of \"SELECT\n> > *\", for readability and so there are no surprises.\n> Fixed\n>\n> > ======\n> >\n> > 99.\n> > Please also refer to my attached nitpicks diff for numerous cosmetic\n> > changes, and apply if you agree.\n> Applied the changes.\n>\n> > ======\n> > [1] https://www.postgresql.org/message-id/CAHut%2BPtAsEc3PEB1KUk1kFF5tcCrDCCTcbboougO29vP1B4E2Q%40mail.gmail.com\n>\n> I have attached a v10 patch to address the comments:\n> v10-0001 - Not Modified\n> v10-0002 - Support replication of generated columns during initial sync.\n> v10-0003 - Fix behaviour for Virtual Generated Columns.\n>\n> Thanks and Regards,\n> Shlok Kyal", "msg_date": "Wed, 26 Jun 2024 12:36:07 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Sun, Jun 23, 2024 at 10:28 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi Shubham,\n>\n> Thanks for sharing new patch! You shared as v9, but it should be v10, right?\n> Also, since there are no commitfest entry, I registered [1]. You can rename the\n> title based on the needs. Currently CFbot said OK.\n>\n> Anyway, below are my comments.\n>\n> 01. General\n> Your patch contains unnecessary changes. Please remove all of them. E.g.,\n>\n> ```\n> \" s.subpublications,\\n\");\n> -\n> ```\n> And\n> ```\n> appendPQExpBufferStr(query, \" o.remote_lsn AS suboriginremotelsn,\\n\"\n> - \" s.subenabled,\\n\");\n> + \" s.subenabled,\\n\");\n> ```\n>\n> 02. General\n> Again, please run the pgindent/pgperltidy.\n>\n> 03. 
test_decoding
> Previously I suggested that the default value of include_generated_columns
> should be true, so you modified it like that. However, Peter suggested the opposite
> opinion [3] and you just revised accordingly. I think either way might be okay, but
> at least you must clarify the reason why you preferred to set the default to false and
> changed accordingly.

I have set the default value to true in the case of test_decoding. The
reason for this is that even before the new feature implementation,
generated columns were getting selected.

> 04. decoding_into_rel.sql
> According to the comment atop this file, this test should insert the result into a table.
> But the added case does not - we should put it in another place. I.e., create another
> file.
>
> 05. decoding_into_rel.sql
> ```
> +-- when 'include-generated-columns' is not set
> ```
> Can you clarify the expected behavior as a comment?
>
> 06. getSubscriptions
> ```
> + else
> + appendPQExpBufferStr(query,
> + \" false AS subincludegencols,\\n\");
> ```
> I think the comma is not needed.
> Also, this error meant that you did not test using pg_dump for instances prior to PG16.
> Please verify whether we can dump subscriptions and restore them accordingly.
>
> [1]: https://commitfest.postgresql.org/48/5068/
> [2]: https://www.postgresql.org/message-id/OSBPR01MB25529997E012DEABA8E15A02F5E52%40OSBPR01MB2552.jpnprd01.prod.outlook.com
> [3]: https://www.postgresql.org/message-id/CAHut%2BPujrRQ63ju8P41tBkdjkQb4X9uEdLK_Wkauxum1MVUdfA%40mail.gmail.com

All the comments are handled.

The attached patches contain all the suggested changes.

Thanks and Regards,
Shubham Khanna.", "msg_date": "Wed, 26 Jun 2024 17:42:24 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jun 24, 2024 at 8:21 AM Peter Smith <smithpb2250@gmail.com> wrote:
>
> Hi, here are some 
patch v9-0001 comments.\n>\n> I saw Kuroda-san has already posted comments for this patch so there\n> may be some duplication here.\n>\n> ======\n> GENERAL\n>\n> 1.\n> The later patches 0002 etc are checking to support only STORED\n> gencols. But, doesn't that restriction belong in this patch 0001 so\n> VIRTUAL columns are not decoded etc in the first place... (??)\n>\n> ~~~\n>\n> 2.\n> The \"Generated Columns\" docs mentioned in my previous review comment\n> [2] should be modified by this 0001 patch.\n>\n> ~~~\n>\n> 3.\n> I think the \"Message Format\" page mentioned in my previous review\n> comment [3] should be modified by this 0001 patch.\n>\n> ======\n> Commit message\n>\n> 4.\n> The patch name is still broken as previously mentioned [1, #1]\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 5.\n> Should this docs be referring to STORED generated columns, instead of\n> just generated columns?\n>\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n>\n> 6.\n> Should this be docs referring to STORED generated columns, instead of\n> just generated columns?\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> getSubscriptions:\n> NITPICK - tabs\n> NITPICK - patch removed a blank line it should not be touching\n> NITPICK = patch altered indents it should not be touching\n> NITPICK - a missing blank line that was previously present\n>\n> 7.\n> + else\n> + appendPQExpBufferStr(query,\n> + \" false AS subincludegencols,\\n\");\n>\n> There is an unwanted comma here.\n>\n> ~\n>\n> dumpSubscription\n> NITPICK - patch altered indents it should not be touching\n>\n> ======\n> src/bin/pg_dump/pg_dump.h\n>\n> NITPICK - unnecessary blank line\n>\n> ======\n> src/bin/psql/describe.c\n>\n> describeSubscriptions\n> NITPICK - bad indentation\n>\n> 8.\n> In my previous review [1, #4b] I suggested this new column should be\n> in a different order (e.g. 
adjacent to the other ones ahead of\n> 'Conninfo'), but this is not yet addressed.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> NITPICK - missing space in comment\n> NITPICK - misleading \"because\" wording in the comment\n>\n> ======\n>\n> 99.\n> See also my attached nitpicks diff, for cosmetic issues. Please apply\n> whatever you agree with.\n>\n> ======\n> [1] My v8-0001 review -\n> https://www.postgresql.org/message-id/CAHut%2BPujrRQ63ju8P41tBkdjkQb4X9uEdLK_Wkauxum1MVUdfA%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAHut%2BPvsRWq9t2tEErt5ZWZCVpNFVZjfZ_owqfdjOhh4yXb_3Q%40mail.gmail.com\n> [3] https://www.postgresql.org/message-id/CAHut%2BPsHsT3V1wQ5uoH9ynbmWn4ZQqOe34X%2Bg37LSi7sgE_i2g%40mail.gmail.com\n\nAll the comments are handled.\n\nI have attached the updated patch v11 here in [1]. See [1] for the\nchanges added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJpS_XDkR6OrsmMZtCBZNxeYoCdENhC0%3Dbe0rLmNvhiQw%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Wed, 26 Jun 2024 17:46:33 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": ">All the comments are handled.\n>\n> The attached Patches contain all the suggested changes.\n\nv11-0003 patch was not getting applied, so here are the updated\npatches for the same.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 27 Jun 2024 10:59:01 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some patch v11-0001 comments.\n\n(BTW, I had difficulty reviewing this because something seemed strange\nwith the changes this patch made to the test_decoding tests).\n\n======\nGeneral\n\n1. Patch name\n\nPatch name does not need to quote 'logical replication'\n\n~\n\n2. 
test_decoding tests\n\nMultiple test_decoding tests were failing for me. There is something\nvery suspicious about the unexplained changes the patch made to the\nexpected \"binary.out\" and \"decoding_into_rel.out\" etc. I REVERTED all\nthose changes in my nitpicks top-up to get everything working. Please\nre-confirm that all the test_decoding tests are OK!\n\n======\nCommit Message\n\n3.\nSince you are including the example usage for test_decoding, I think\nit's better to include the example usage of CREATE SUBSCRIPTION also.\n\n======\ncontrib/test_decoding/expected/binary.out\n\n4.\n SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot',\n'test_decoding');\n- ?column?\n-----------\n- init\n-(1 row)\n-\n+ERROR: replication slot \"regression_slot\" already exists\n\nHuh? Why is this unrelated expected output changed by this patch?\n\nThe test_decoding test fails for me unless I REVERT this change!! See\nmy nitpicks diff.\n\n======\n.../expected/decoding_into_rel.out\n\n5.\n-SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n- ?column?\n-----------\n- stop\n-(1 row)\n-\n\nHuh? Why is this unrelated expected output changed by this patch?\n\nThe test_decoding test fails for me unless I REVERT this change!! See\nmy nitpicks diff.\n\n======\n.../test_decoding/sql/decoding_into_rel.sql\n\n6.\n-SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n+SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n\nHuh, Why does this patch change this code at all? I REVERTED this\nchange. See my nitpicks diff.\n\n======\n.../test_decoding/sql/generated_columns.sql\n\n(see my nitpicks replacement file for this test)\n\n7.\n+-- test that we can insert the result of a 'include_generated_columns'\n+-- into the tables created. That's really not a good idea in practical terms,\n+-- but provides a nice test.\n\nNITPICK - I didn't understand the point of this comment. 
I updated\nthe comment according to my understanding.\n\n~\n\nNITPICK - The comment \"when 'include-generated-columns' is not set\nthen columns will not be replicated\" is the opposite of what the\nresult is. I changed this comment.\n\nNITPICK - modified and unified wording of some of the other comments\n\nNITPICK - changed some blank lines\n\n======\ncontrib/test_decoding/test_decoding.c\n\n8.\n+ else if (strcmp(elem->defname, \"include-generated-columns\") == 0)\n+ {\n+ if (elem->arg == NULL)\n+ data->include_generated_columns = true;\n\nIs there any way to test that \"elem->arg == NULL\" in the\ngenerated.sql? OTOH, if it is not possible to get here then is the\ncode even needed?\n\n======\ndoc/src/sgml/ddl.sgml\n\n9.\n <para>\n- Generated columns are skipped for logical replication and cannot be\n- specified in a <command>CREATE PUBLICATION</command> column list.\n+ 'include_generated_columns' option controls whether generated columns\n+ should be included in the string representation of tuples during\n+ logical decoding in PostgreSQL. The default is <literal>true</literal>.\n </para>\n\nNITPICK - Use proper markdown instead of single quotes for the parameter.\n\nNITPICK - I think this can be reworded slightly to provide a\ncross-reference to the CREATE SUBSCRIPTION parameter for more details\n(which means then we can avoid repeating details like the default\nvalue here). PSA my nitpicks diff for an example of how I thought docs\nshould look.\n\n======\ndoc/src/sgml/protocol.sgml\n\n10.\n+ The default is true.\n\nNo, it isn't. AFAIK you made the default behaviour true only for\n'test_decoding', but the default for CREATE SUBSCRIPTION remains\n*false* because that is the existing PG17 behaviour. And the default\nfor the 'include_generated_columns' in the protocol is *also* false to\nmatch the CREATE SUBSCRIPTION default.\n\ne.g. libpqwalreceiver.c only sets \", include_generated_columns 'true'\"\nwhen options->proto.logical.include_generated_columns\ne.g. 
worker.c says: options->proto.logical.include_generated_columns =\nMySubscription->includegencols;\ne.g. subscriptioncmds.c sets default: opts->include_generated_columns = false;\n\n(This confirmed my previous review expectation that using different\ndefault behaviours for test_decoding and pgoutput would surely lead to\nconfusion)\n\n~~~\n\n11.\n- <para>\n- Next, the following message part appears for each column included in\n- the publication (except generated columns):\n- </para>\n-\n\nAFAIK you cannot just remove this entire paragraph because I thought\nit was still relevant to talking about \"... the following message\npart\". But, if you don't want to explain and cross-reference about\n'include_generated_columns' then maybe it is OK just to remove the\n\"(except generated columns)\" part?\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nNITPICK - comment typo /tab2/tab3/\nNITPICK - remove some blank lines\n\n~~~\n\n12.\n# the column was NOT replicated (the result value of 'b' is the\nsubscriber-side computed value)\n\nNITPICK - I think this comment is wrong for the tab2 test because here\ncol 'b' IS replicated. I have added much more substantial test case\ncomments in the attached nitpicks diff. PSA.\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\n13.\nNITPICK - IMO there is a missing word \"list\" in the comment. 
This bug\nexisted already on HEAD but since this patch is modifying this comment\nI think we can also fix this in passing.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Thu, 27 Jun 2024 19:10:59 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Jun 27, 2024 at 2:41 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some patch v11-0001 comments.\n>\n> (BTW, I had difficulty reviewing this because something seemed strange\n> with the changes this patch made to the test_decoding tests).\n>\n> ======\n> General\n>\n> 1. Patch name\n>\n> Patch name does not need to quote 'logical replication'\n>\n> ~\n>\n> 2. test_decoding tests\n>\n> Multiple test_decoding tests were failing for me. There is something\n> very suspicious about the unexplained changes the patch made to the\n> expected \"binary.out\" and \"decoding_into_rel.out\" etc. I REVERTED all\n> those changes in my nitpicks top-up to get everything working. Please\n> re-confirm that all the test_decoding tests are OK!\n>\n> ======\n> Commit Message\n>\n> 3.\n> Since you are including the example usage for test_decoding, I think\n> it's better to include the example usage of CREATE SUBSCRIPTION also.\n>\n> ======\n> contrib/test_decoding/expected/binary.out\n>\n> 4.\n> SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot',\n> 'test_decoding');\n> - ?column?\n> -----------\n> - init\n> -(1 row)\n> -\n> +ERROR: replication slot \"regression_slot\" already exists\n>\n> Huh? Why is this unrelated expected output changed by this patch?\n>\n> The test_decoding test fails for me unless I REVERT this change!! See\n> my nitpicks diff.\n>\n> ======\n> .../expected/decoding_into_rel.out\n>\n> 5.\n> -SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n> - ?column?\n> -----------\n> - stop\n> -(1 row)\n> -\n>\n> Huh? 
Why is this unrelated expected output changed by this patch?\n>\n> The test_decoding test fails for me unless I REVERT this change!! See\n> my nitpicks diff.\n>\n> ======\n> .../test_decoding/sql/decoding_into_rel.sql\n>\n> 6.\n> -SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n> +SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n>\n> Huh, Why does this patch change this code at all? I REVERTED this\n> change. See my nitpicks diff.\n>\n> ======\n> .../test_decoding/sql/generated_columns.sql\n>\n> (see my nitpicks replacement file for this test)\n>\n> 7.\n> +-- test that we can insert the result of a 'include_generated_columns'\n> +-- into the tables created. That's really not a good idea in practical terms,\n> +-- but provides a nice test.\n>\n> NITPICK - I didn't understand the point of this comment. I updated\n> the comment according to my understanding.\n>\n> ~\n>\n> NITPICK - The comment \"when 'include-generated-columns' is not set\n> then columns will not be replicated\" is the opposite of what the\n> result is. I changed this comment.\n>\n> NITPICK - modified and unified wording of some of the other comments\n>\n> NITPICK - changed some blank lines\n>\n> ======\n> contrib/test_decoding/test_decoding.c\n>\n> 8.\n> + else if (strcmp(elem->defname, \"include-generated-columns\") == 0)\n> + {\n> + if (elem->arg == NULL)\n> + data->include_generated_columns = true;\n>\n> Is there any way to test that \"elem->arg == NULL\" in the\n> generated.sql? OTOH, if it is not possible to get here then is the\n> code even needed?\n>\n\nCurrently I could not find a case where the\n'include_generated_columns' option is not specifying any value, but I\nwas hesitant to remove this from here as the other options mentioned\nfollow the same rules. 
Thoughts?\n\n> ======\n> doc/src/sgml/ddl.sgml\n>\n> 9.\n> <para>\n> - Generated columns are skipped for logical replication and cannot be\n> - specified in a <command>CREATE PUBLICATION</command> column list.\n> + 'include_generated_columns' option controls whether generated columns\n> + should be included in the string representation of tuples during\n> + logical decoding in PostgreSQL. The default is <literal>true</literal>.\n> </para>\n>\n> NITPICK - Use proper markdown instead of single quotes for the parameter.\n>\n> NITPICK - I think this can be reworded slightly to provide a\n> cross-reference to the CREATE SUBSCRIPTION parameter for more details\n> (which means then we can avoid repeating details like the default\n> value here). PSA my nitpicks diff for an example of how I thought docs\n> should look.\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 10.\n> + The default is true.\n>\n> No, it isn't. AFAIK you made the default behaviour true only for\n> 'test_decoding', but the default for CREATE SUBSCRIPTION remains\n> *false* because that is the existing PG17 behaviour. And the default\n> for the 'include_generated_columns' in the protocol is *also* false to\n> match the CREATE SUBSCRIPTION default.\n>\n> e.g. libpqwalreceiver.c only sets \", include_generated_columns 'true'\"\n> when options->proto.logical.include_generated_columns\n> e.g. worker.c says: options->proto.logical.include_generated_columns =\n> MySubscription->includegencols;\n> e.g. 
subscriptioncmds.c sets default: opts->include_generated_columns = false;\n>\n> (This confirmed my previous review expectation that using different\n> default behaviours for test_decoding and pgoutput would surely lead to\n> confusion)\n>\n> ~~~\n>\n> 11.\n> - <para>\n> - Next, the following message part appears for each column included in\n> - the publication (except generated columns):\n> - </para>\n> -\n>\n> AFAIK you cannot just remove this entire paragraph because I thought\n> it was still relevant to talking about \"... the following message\n> part\". But, if you don't want to explain and cross-reference about\n> 'include_generated_columns' then maybe it is OK just to remove the\n> \"(except generated columns)\" part?\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> NITPICK - comment typo /tab2/tab3/\n> NITPICK - remove some blank lines\n>\n> ~~~\n>\n> 12.\n> # the column was NOT replicated (the result value of 'b' is the\n> subscriber-side computed value)\n>\n> NITPICK - I think this comment is wrong for the tab2 test because here\n> col 'b' IS replicated. I have added much more substantial test case\n> comments in the attached nitpicks diff. PSA.\n>\n> ======\n> src/test/subscription/t/031_column_list.pl\n>\n> 13.\n> NITPICK - IMO there is a missing word \"list\" in the comment. 
This bug\n> existed already on HEAD but since this patch is modifying this comment\n> I think we can also fix this in passing.\n\nAll the comments are handled.\n\nThe attached Patches contain all the suggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Mon, 1 Jul 2024 16:08:03 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jul 1, 2024 at 8:38 PM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n>...\n> > 8.\n> > + else if (strcmp(elem->defname, \"include-generated-columns\") == 0)\n> > + {\n> > + if (elem->arg == NULL)\n> > + data->include_generated_columns = true;\n> >\n> > Is there any way to test that \"elem->arg == NULL\" in the\n> > generated.sql? OTOH, if it is not possible to get here then is the\n> > code even needed?\n> >\n>\n> Currently I could not find a case where the\n> 'include_generated_columns' option is not specifying any value, but I\n> was hesitant to remove this from here as the other options mentioned\n> follow the same rules. Thoughts?\n>\n\nIf you do manage to find a scenario for this then I think a test for\nit would be good. But, I agree that the code seems OK because now I\nsee it is the same pattern as similar nearby code.\n\n~~~\n\nThanks for the updated patch. 
Here are some review comments for patch v13-0001.\n\n======\n.../expected/generated_columns.out\n\nnitpicks (see generated_columns.sql)\n\n======\n.../test_decoding/sql/generated_columns.sql\n\nnitpick - use plural /column/columns/\nnitpick - use consistent wording in the comments\nnitpick - IMO it is better to INSERT different values for each of the tests\n\n======\ndoc/src/sgml/protocol.sgml\n\nnitpick - I noticed that none of the other boolean parameters on this\npage mention about a default, so maybe here we should do the same and\nomit that information.\n\n~~~\n\n1.\n- <para>\n- Next, the following message part appears for each column included in\n- the publication (except generated columns):\n- </para>\n-\n\nIn a previous review [1 comment #11] I wrote that you can't just\nremove this paragraph because AFAIK it is still meaningful. A minimal\nchange might be to just remove the \"(except generated columns)\" part.\nAlternatively, you could give a more detailed explanation mentioning\nthe include_generated_columns protocol parameter.\n\nI provided some updated text for this paragraph in my NITPICKS top-up\npatch, Please have a look at that for ideas.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\nIt looks like pg_indent needs to be run on this file.\n\n======\nsrc/include/catalog/pg_subscription.h\n\nnitpick - comment /publish/Publish/ for consistency\n\n======\nsrc/include/replication/walreceiver.h\n\nnitpick - comment /publish/Publish/ for consistency\n\n======\nsrc/test/regress/expected/subscription.out\n\nnitpicks - (see subscription.sql)\n\n======\nsrc/test/regress/sql/subscription.sql\n\nnitpick - combine the invalid option combinations test with all the\nothers (no special comment needed)\nnitpick - rename subscription as 'regress_testsub2' same as all its peers.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nnitpick - add/remove blank lines\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\nnitpick - rewording for a comment. 
This issue was not strictly caused\nby this patch, but since you are modifying the same comment we can fix\nthis in passing.\n\n======\n99.\nPlease also see the attached top-up patch for all those nitpicks\nidentified above.\n\n======\n[1] v11-0001 review\nhttps://www.postgresql.org/message-id/CAHut%2BPv45gB4cV%2BSSs6730Kb8urQyqjdZ9PBVgmpwqCycr1Ybg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 2 Jul 2024 14:23:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\n\nAs you can see, most of my recent review comments for patch 0001 are\nonly cosmetic nitpicks. But, there is still one long-unanswered design\nquestion from a month ago [1, #G.2]\n\nA lot of the patch code of pgoutput.c and proto.c and logicalproto.h\nis related to the introduction and passing everywhere of new\n'include_generated_columns' function parameters. 
These same functions\nare also always passing \"BitMapSet *columns\" representing the\npublication column list.\n\nMy question was about whether we can't make use of the existing BMS\nparameter instead of introducing all the new API parameters.\n\nThe idea might go something like this:\n\n* If 'include_generated_columns' option is specified true and if no\ncolumn list was already specified then perhaps the relentry->columns\ncan be used for a \"dummy\" column list that has everything including\nall the generated columns.\n\n* By doing this:\n -- you may be able to avoid passing the extra\n'include_generated_columns' everywhere\n -- you may be able to avoid checking for generated columns deeper in\nthe code (since it is already checked up-front when building the\ncolumn list BMS)\n\n~~\n\nI'm not saying this design idea is guaranteed to work, but it might be\nworth considering, because if it does work then there is potential to\nmake the current 0001 patch significantly shorter.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPsuJfcaeg6zst%3D6PE5uyJv_UxVRHU3ck7W2aHb1uQYKng%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 2 Jul 2024 15:29:20 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 25 Jun 2024 at 11:56, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for the patch v10-0002.\n>\n> ======\n> Commit Message\n>\n> 1.\n> Note that we don't copy columns when the subscriber-side column is also\n> generated. 
Those will be filled as normal with the subscriber-side computed or\n> default data.\n>\n> ~\n>\n> Now this patch also introduced some errors etc, so I think that patch\n> comment should be written differently to explicitly spell out\n> behaviour of every combination, something like the below:\n>\n> Summary\n>\n> when (include_generated_column = true)\n>\n> * publisher not-generated column => subscriber not-generated column:\n> This is just normal logical replication (not changed by this patch).\n>\n> * publisher not-generated column => subscriber generated column: This\n> will give ERROR.\n>\n> * publisher generated column => subscriber not-generated column: The\n> publisher generated column value is copied.\n>\n> * publisher generated column => subscriber generated column: The\n> publisher generated column value is not copied. The subscriber\n> generated column will be filled with the subscriber-side computed or\n> default data.\n>\n> when (include_generated_columns = false)\n>\n> * publisher not-generated column => subscriber not-generated column:\n> This is just normal logical replication (not changed by this patch).\n>\n> * publisher not-generated column => subscriber generated column: This\n> will give ERROR.\n>\n> * publisher generated column => subscriber not-generated column: This\n> will replicate nothing. Publisher generate-column is not replicated.\n> The subscriber column will be filled with the subscriber-side default\n> data.\n>\n> * publisher generated column => subscriber generated column: This\n> will replicate nothing. 
Publisher generated column is not replicated.\n> The subscriber column will be filled with the subscriber-side default\n> data.\n>\n> * publisher generated column => subscriber generated column: This\n> will replicate nothing. Publisher generated column is not replicated.\n> The subscriber generated column will be filled with the\n> subscriber-side computed or default data.\nModified\n\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 2.\n> logicalrep_rel_open:\n>\n> I tested some of the \"missing column\" logic, and got the following results:\n>\n> Scenario A:\n> PUB\n> test_pub=# create table t2(a int, b int);\n> test_pub=# create publication pub2 for table t2;\n> SUB\n> test_sub=# create table t2(a int, b int generated always as (a*2) stored);\n> test_sub=# create subscription sub2 connection 'dbname=test_pub'\n> publication pub2 with (include_generated_columns = false);\n> Result:\n> ERROR: logical replication target relation \"public.t2\" is missing\n> replicated column: \"b\"\n>\n> ~\n>\n> Scenario B:\n> PUB/SUB identical to above, but subscription sub2 created \"with\n> (include_generated_columns = true);\"\n> Result:\n> ERROR: logical replication target relation \"public.t2\" has a\n> generated column \"b\" but corresponding column on source relation is\n> not a generated column\n>\n> ~~~\n>\n> 2a. Question\n>\n> Why should we get 2 different error messages for what is essentially\n> the same problem according to whether the 'include_generated_columns'\n> is false or true? Isn't the 2nd error message the more correct and\n> useful one for scenarios like this involving generated columns?\n>\n> Thoughts?\nModified the patch to give the same error in both cases\n\n> ~\n>\n> 2b. Missing tests?\n>\n> I also noticed there seems no TAP test for the current \"missing\n> replicated column\" message. 
IMO there should be a new test introduced\n> for this because the loop involved too much bms logic to go\n> untested...\nAdded the tests 004_sync.pl\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> make_copy_attnamelist:\n> NITPICK - minor comment tweak\n> NITPICK - add some spaces after \"if\" code\nApplied the changes\n\n> 3.\n> Should you pfree the gencollist at the bottom of this function when\n> you no longer need it, for tidiness?\nFixed\n\n> ~~~\n>\n> 4.\n> static void\n> -fetch_remote_table_info(char *nspname, char *relname,\n> +fetch_remote_table_info(char *nspname, char *relname, bool **remotegenlist,\n> LogicalRepRelation *lrel, List **qual)\n> {\n> WalRcvExecResult *res;\n> StringInfoData cmd;\n> TupleTableSlot *slot;\n> Oid tableRow[] = {OIDOID, CHAROID, CHAROID};\n> - Oid attrRow[] = {INT2OID, TEXTOID, OIDOID, BOOLOID};\n> + Oid attrRow[] = {INT2OID, TEXTOID, OIDOID, BOOLOID, BOOLOID};\n> Oid qualRow[] = {TEXTOID};\n> bool isnull;\n> + bool *remotegenlist_res;\n>\n> IMO the names 'remotegenlist' and 'remotegenlist_res' should be\n> swapped the other way around, because it is the function parameter\n> that is the \"result\", whereas the 'remotegenlist_res' is just the\n> local working var for it.\nFixed\n\n> ~~~\n>\n> 5. 
fetch_remote_table_info\n>\n> Now walrcv_server_version(LogRepWorkerWalRcvConn) is used in multiple\n> places, I think it will be better to assign this to a 'server_version'\n> variable to be used everywhere instead of having multiple function\n> calls.\nFixed\n\n> ~~~\n>\n> 6.\n> \"SELECT a.attnum,\"\n> \" a.attname,\"\n> \" a.atttypid,\"\n> - \" a.attnum = ANY(i.indkey)\"\n> + \" a.attnum = ANY(i.indkey),\"\n> + \" a.attgenerated != ''\"\n> \" FROM pg_catalog.pg_attribute a\"\n> \" LEFT JOIN pg_catalog.pg_index i\"\n> \" ON (i.indexrelid = pg_get_replica_identity_index(%u))\"\n> \" WHERE a.attnum > 0::pg_catalog.int2\"\n> - \" AND NOT a.attisdropped %s\"\n> + \" AND NOT a.attisdropped\", lrel->remoteid);\n> +\n> + if ((walrcv_server_version(LogRepWorkerWalRcvConn) >= 120000 &&\n> + walrcv_server_version(LogRepWorkerWalRcvConn) <= 160000) ||\n> + !MySubscription->includegencols)\n> + appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n> +\n>\n> If the server version is < PG12 then AFAIK there was no such thing as\n> \"a.attgenerated\", so shouldn't that SELECT \" a.attgenerated != ''\"\n> part also be guarded by some version checking condition like in the\n> WHERE? Otherwise won't it cause an ERROR for old servers?\nFixed\n\n> ~~~\n>\n> 7.\n> /*\n> - * For non-tables and tables with row filters, we need to do COPY\n> - * (SELECT ...), but we can't just do SELECT * because we need to not\n> - * copy generated columns. For tables with any row filters, build a\n> - * SELECT query with OR'ed row filters for COPY.\n> + * For non-tables and tables with row filters and when\n> + * 'include_generated_columns' is specified as 'true', we need to do\n> + * COPY (SELECT ...), as normal COPY of generated column is not\n> + * supported. For tables with any row filters, build a SELECT query\n> + * with OR'ed row filters for COPY.\n> */\n>\n> NITPICK. I felt this was not quite right. AFAIK the reasons for using\n> this COPY (SELECT ...) 
syntax is different for row-filters and\n> generated-columns. Anyway, I updated the comment slightly in my\n> nitpicks attachment. Please have a look at it to see if you agree with\n> the suggestions. Maybe I am wrong.\nFixed\n\n> ~~~\n>\n> 8.\n> - for (int i = 0; i < lrel.natts; i++)\n> + foreach_ptr(String, att_name, attnamelist)\n>\n> I'm not 100% sure, but isn't foreach_node the macro to use here,\n> rather than foreach_ptr?\nFixed\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 9.\n> Please discuss with Shubham how to make all the tab1, tab2, tab3,\n> tab4, tab5, tab6 comments use the same kind of style/wording.\n> Currently, the patches 0001 and 0002 test comments are a bit\n> inconsistent.\nFixed\n\n> ~~~\n>\n> 10.\n> Related to above -- now that patch 0002 supports copy_data=true I\n> don't see why we need to test generated columns *both* for\n> copy_data=false and also for copy_data=true. IOW, is it really\n> necessary to have so many tables/tests? For example, I am thinking\n> some of those tests from patch 0001 can be re-used or just removed now\n> that copy_data=true works.\nFixed\n\n> ~~~\n>\n> NITPICK - minor comment tweak\nFixed\n\n> ~~~\n>\n> 11.\n> For tab4 and tab6 I saw the initial sync and normal replication data\n> tests are all merged together, but I had expected to see the initial\n> sync and normal replication data tests separated so it would be\n> consistent with the earlier tab1, tab2, tab3 tests.\nFixed\n\n> ======\n>\n> 99.\n> Also, I have attached a nitpicks diff for some of the cosmetic review\n> comments mentioned above. 
Please apply whatever of these that you\n> agree with.\nApplied the relevant changes\n\nI have attached the v14 patch to address the comments.\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Thu, 4 Jul 2024 15:58:35 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 25 Jun 2024 at 18:49, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Shlok,\n>\n> Thanks for updating patches! Below are my comments, maybe only for 0002.\n>\n> 01. General\n>\n> IIUC, we are not discussed why ALTER SUBSCRIPTION ... SET include_generated_columns\n> is prohibit. Previously, it seems okay because there are exclusive options. But now,\n> such restrictions are gone. Do you have a reason in your mind? It is just not considered\n> yet?\nWe do not support ALTER SUBSCRIPTION to alter\n'include_generated_columns'. Suppose initially the user has a logical\nreplication setup. The publisher has\ntable t1 with columns (c1 int, c2 int generated always as (c1*2)) and\nthe subscriber has table t1 with columns (c1 int, c2 int). And initially\n'include_generated_columns' is true.\nNow if we use 'ALTER SUBSCRIPTION' to set 'include_generated_columns' to\nfalse, the initially copied rows will have data for c2 on the subscriber\ntable, but rows replicated after the alter will not. This may be an\ninconsistent behaviour.\n\n\n> 02. General\n>\n> According to the doc, we allow to alter a column to non-generated one, by ALTER\n> TABLE ... ALTER COLUMN ... DROP EXPRESSION command. Not sure, what should be\n> when the command is executed on the subscriber while copying the data? Should\n> we continue the copy or restart? How do you think?\nCOPY of data happens in a single transaction, so if we execute the\n'ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION' command, it will\ntake effect only after the whole COPY command finishes. So I think it\nwill not create any issue.\n\n> 03. 
Test code\n>\n> IIUC, REFRESH PUBLICATION can also lead the table synchronization. Can you add\n> a test for that?\nAdded\n\n> 04. Test code (maybe for 0001)\n>\n> Please test the combination with TABLE ... ALTER COLUMN ... DROP EXPRESSION command.\nAdded\n\n> 05. logicalrep_rel_open\n>\n> ```\n> + /*\n> + * In case 'include_generated_columns' is 'false', we should skip the\n> + * check of missing attrs for generated columns.\n> + * In case 'include_generated_columns' is 'true', we should check if\n> + * corresponding column for the generated column in publication column\n> + * list is present in the subscription table.\n> + */\n> + if (!MySubscription->includegencols && attr->attgenerated)\n> + {\n> + entry->attrmap->attnums[i] = -1;\n> + continue;\n> + }\n> ```\n>\n> This comment is not very clear to me, because here we do not skip anything.\n> Can you clarify the reason why attnums[i] is set to -1 and how will it be used?\nThis part of the code is removed to address some comments.\n\n> 06. make_copy_attnamelist\n>\n> ```\n> + gencollist = palloc0(MaxTupleAttributeNumber * sizeof(bool));\n> ```\n>\n> I think this array is too large. Can we reduce a size to (desc->natts * sizeof(bool))?\n> Also, the free'ing should be done.\nI have changed the name 'gencollist' to 'localgenlist' to make the\nname more consistent. Also, the size should be\n(rel->remoterel.natts * sizeof(bool)), as I am recording whether a\ncolumn is generated like 'localgenlist[attnum] = true;',\nwhere 'attnum' is the corresponding attribute number on the publisher side.\n\n> 07. make_copy_attnamelist\n>\n> ```\n> + /* Loop to handle subscription table generated columns. */\n> + for (int i = 0; i < desc->natts; i++)\n> ```\n>\n> IIUC, the loop is needed to find generated columns on the subscriber side, right?\n> Can you clarify as comment?\nFixed\n\n> 08. 
copy_table\n>\n> ```\n> + /*\n> + * Regular table with no row filter and 'include_generated_columns'\n> + * specified as 'false' during creation of subscription.\n> + */\n> ```\n>\n> I think this comment is not correct. After patching, all tablesync command becomes\n> like COPY (SELECT ...) if include_genereted_columns is set to true. Is it right?\n> Can we restrict only when the table has generated ones?\nFixed\n\nPlease refer to v14 patch for the changes [1].\n\n\n[1]: https://www.postgresql.org/message-id/CANhcyEW95M_usF1OJDudeejs0L0%2BYOEb%3DdXmt_4Hs-70%3DCXa-g%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Thu, 4 Jul 2024 16:01:31 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, 26 Jun 2024 at 08:06, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shlok. Here are my review comments for patch v10-0003\n>\n> ======\n> General.\n>\n> 1.\n> The patch has lots of conditions like:\n> if (att->attgenerated && (att->attgenerated !=\n> ATTRIBUTE_GENERATED_STORED || !include_generated_columns))\n> continue;\n>\n> IMO these are hard to read. Although more verbose, please consider if\n> all those (for the sake of readability) would be better re-written\n> like below :\n>\n> if (att->generated)\n> {\n> if (!include_generated_columns)\n> continue;\n>\n> if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> continue;\n> }\nFixed\n\n> ======\n> contrib/test_decoding/test_decoding.c\n>\n> tuple_to_stringinfo:\n>\n> NITPICK = refactored the code and comments a bit here to make it easier\n> NITPICK - there is no need to mention \"virtual\". 
Instead, say we only\n> support STORED\nFixed\n\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> publication_translate_columns:\n>\n> NITPICK - introduced variable 'att' to simplify this code\nFixed\n\n> ~\n>\n> 2.\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> + errmsg(\"cannot use virtual generated column \\\"%s\\\" in publication\n> column list\",\n> + colname));\n>\n> Is it better to avoid referring to \"virtual\" at all? Instead, consider\n> rearranging the wording to say something like \"generated column \\\"%s\\\"\n> is not STORED so cannot be used in a publication column list\"\nFixed\n\n> ~~~\n>\n> pg_get_publication_tables:\n>\n> NITPICK - split the condition code for readability\nFixed\n\n> ======\n> src/backend/replication/logical/relation.c\n>\n> 3. logicalrep_rel_open\n>\n> + if (attr->attgenerated && attr->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> + continue;\n> +\n>\n> Isn't this missing some code to say \"entry->attrmap->attnums[i] =\n> -1;\", same as all the other nearby code is doing?\nFixed\n\n> ~~~\n>\n> 4.\n> I felt all the \"generated column\" logic should be kept together, so\n> this new condition should be combined with the other generated column\n> condition, like:\n>\n> if (attr->attgenerated)\n> {\n> /* comment... */\n> if (!MySubscription->includegencols)\n> {\n> entry->attrmap->attnums[i] = -1;\n> continue;\n> }\n>\n> /* comment... */\n> if (attr->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> {\n> entry->attrmap->attnums[i] = -1;\n> continue;\n> }\n> }\nFixed\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 5.\n> + if (gencols_allowed)\n> + {\n> + /* Replication of generated cols is supported, but not VIRTUAL cols. */\n> + appendStringInfo(&cmd, \" AND a.attgenerated != 'v'\");\n> + }\n>\n> Is it better here to use the ATTRIBUTE_GENERATED_VIRTUAL macro instead\n> of the hardwired 'v'? 
(Maybe add another TODO comment to revisit\n> this).\n>\n> Alternatively, consider if it is more future-proof to rearrange so it\n> just says what *is* supported instead of what *isn't* supported:\n> e.g. \"AND a.attgenerated IN ('', 's')\"\nI feel we should use the ATTRIBUTE_GENERATED_VIRTUAL macro. Added a TODO.\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> NITPICK - some comments are missing the word \"stored\"\n> NITPICK - sometimes \"col\" should be plural \"cols\"\n> NITPICK - typo \"GNERATED\"\nAdded the relevant changes.\n\n> ======\n>\n> 6.\n> In a previous review [1, comment #3] I mentioned that there should be\n> some docs updates on the \"Logical Replication Message Formats\" section\n> 53.9. So, I expected patch 0001 would make some changes and then patch\n> 0003 would have to update it again to say something about \"STORED\".\n> But all that is missing from the v10* patches.\n>\n> ======\nWill fix in an upcoming version\n\n>\n> 99.\n> See also my nitpicks diff which is a top-up patch addressing all the\n> nitpick comments mentioned above. Please apply all of these that you\n> agree with.\nApplied the relevant changes\n\nPlease refer to the v14 patch for the changes [1].\n\n\n[1]: https://www.postgresql.org/message-id/CANhcyEW95M_usF1OJDudeejs0L0%2BYOEb%3DdXmt_4Hs-70%3DCXa-g%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Thu, 4 Jul 2024 16:02:59 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are my review comments for v14-0002.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nmake_copy_attnamelist:\n\nnitpick - remove excessive parentheses in palloc0 call.\n\nnitpick - Code is fine AFAICT except it's not immediately obvious\nlocalgenlist is indexed by the *remote* attribute number. So I renamed\n'attrnum' variable in my nitpicks diff. 
OTOH, if you think no change\nis necessary, that is OK to (in that case maybe add a comment).\n\n~~~\n\n1. fetch_remote_table_info\n\n+ if ((server_version >= 120000 && server_version <= 160000) ||\n+ !MySubscription->includegencols)\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n\nShould this say < 180000 instead of <= 160000?\n\n~~~\n\ncopy_table:\n\nnitpick - uppercase in comment\n\nnitpick - missing space after \"if\"\n\n~~~\n\n2. copy_table\n\n+ attnamelist = make_copy_attnamelist(relmapentry, remotegenlist);\n+\n /* Start copy on the publisher. */\n initStringInfo(&cmd);\n\n- /* Regular table with no row filter */\n- if (lrel.relkind == RELKIND_RELATION && qual == NIL)\n+ /* check if remote column list has generated columns */\n+ if(MySubscription->includegencols)\n+ {\n+ for (int i = 0; i < relmapentry->remoterel.natts; i++)\n+ {\n+ if(remotegenlist[i])\n+ {\n+ remote_has_gencol = true;\n+ break;\n+ }\n+ }\n+ }\n+\n\nThere is some subtle logic going on here:\n\nFor example, the comment here says \"Check if the remote column list\nhas generated columns\", and it then proceeds to iterate the remote\nattributes checking the remotegenlist[i]. But the remotegenlist[] was\nreturned from a prior call to make_copy_attnamelist() and according to\nthe make_copy_attnamelist logic, it is NOT returning all remote\ngenerated-cols in that list. Specifically, it is stripping some of\nthem -- \"Do not include generated columns of the subscription table in\nthe [remotegenlist] column list.\".\n\nSo, actually this loop seems to be only finding cases (setting\nremote_has_gen = true) where the remote column is generated but the\nmatch local column is *not* generated. 
Maybe this was the intended\nlogic all along but then certainly the comment should be improved to\ndescribe it better.\n\n~~~\n\n3.\n+ /*\n+ * Regular table with no row filter and 'include_generated_columns'\n+ * specified as 'false' during creation of subscription.\n+ */\n+ if (lrel.relkind == RELKIND_RELATION && qual == NIL && !remote_has_gencol)\n\nnitpick - This comment also needs improving. For example, just because\nremote_has_gencol is false, it does not follow that\n'include_generated_columns' was specified as 'false' -- maybe the\nparameter was 'true' but the table just had no generated columns\nanyway... I've modified the comment already in my nitpicks diff, but\nprobably you can improve on that.\n\n~\n\nnitpick - \"else\" comment is modified slightly too. Please see the nitpicks diff.\n\n~\n\n4.\nIn hindsight, I felt your variable 'remote_has_gencol' was not\nwell-named because it is not for saying the remote table has a\ngenerated column -- it is saying the remote table has a generated\ncolumn **that we have to copy**. So, rather it should be named\nsomething like 'gencol_copy_needed' (but I didn't change this name in\nthe nitpick diffs...)\n\n======\nsrc/test/subscription/t/004_sync.pl\n\nnitpick - changes to comment style to make the test case separations\nmuch more obvious\nnitpick - minor comment wording tweaks\n\n5.\nHere, you are confirming we get an ERROR when replicating from a\nnon-generated column to a generated column. But I think your patch\nalso added exactly that same test scenario in the 011_generated (as\nthe sub5 test). So, maybe this one here should be removed?\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nnitpick - comment wrapping at 80 chars\nnitpick - add/remove blank lines for readability\nnitpick - typo /subsriber/subscriber/\nnitpick - prior to the ALTER test, tab6 is unsubscribed. 
So add\nanother test to verify its initial data\nnitpick - sometimes the msg 'add a new table to existing publication'\nis misplaced\nnitpick - the tests for tab6 and tab5 were in opposite to the expected\norder, so swapped them.\n\n======\n99.\nPlease see also the attached diff which implements all the nitpicks\ndescribed in this post.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 5 Jul 2024 18:17:22 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Jul 2, 2024 at 9:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jul 1, 2024 at 8:38 PM Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> >...\n> > > 8.\n> > > + else if (strcmp(elem->defname, \"include-generated-columns\") == 0)\n> > > + {\n> > > + if (elem->arg == NULL)\n> > > + data->include_generated_columns = true;\n> > >\n> > > Is there any way to test that \"elem->arg == NULL\" in the\n> > > generated.sql? OTOH, if it is not possible to get here then is the\n> > > code even needed?\n> > >\n> >\n> > Currently I could not find a case where the\n> > 'include_generated_columns' option is not specifying any value, but I\n> > was hesitant to remove this from here as the other options mentioned\n> > follow the same rules. Thoughts?\n> >\n>\n> If you do manage to find a scenario for this then I think a test for\n> it would be good. But, I agree that the code seems OK because now I\n> see it is the same pattern as similar nearby code.\n>\n> ~~~\n>\n> Thanks for the updated patch. 
Here are some review comments for patch v13-0001.\n>\n> ======\n> .../expected/generated_columns.out\n>\n> nitpicks (see generated_columns.sql)\n>\n> ======\n> .../test_decoding/sql/generated_columns.sql\n>\n> nitpick - use plural /column/columns/\n> nitpick - use consistent wording in the comments\n> nitpick - IMO it is better to INSERT different values for each of the tests\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> nitpick - I noticed that none of the other boolean parameters on this\n> page mention about a default, so maybe here we should do the same and\n> omit that information.\n>\n> ~~~\n>\n> 1.\n> - <para>\n> - Next, the following message part appears for each column included in\n> - the publication (except generated columns):\n> - </para>\n> -\n>\n> In a previous review [1 comment #11] I wrote that you can't just\n> remove this paragraph because AFAIK it is still meaningful. A minimal\n> change might be to just remove the \"(except generated columns)\" part.\n> Alternatively, you could give a more detailed explanation mentioning\n> the include_generated_columns protocol parameter.\n>\n> I provided some updated text for this paragraph in my NITPICKS top-up\n> patch, Please have a look at that for ideas.\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> It looks like pg_indent needs to be run on this file.\n>\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> nitpick - comment /publish/Publish/ for consistency\n>\n> ======\n> src/include/replication/walreceiver.h\n>\n> nitpick - comment /publish/Publish/ for consistency\n>\n> ======\n> src/test/regress/expected/subscription.out\n>\n> nitpicks - (see subscription.sql)\n>\n> ======\n> src/test/regress/sql/subscription.sql\n>\n> nitpick - combine the invalid option combinations test with all the\n> others (no special comment needed)\n> nitpick - rename subscription as 'regress_testsub2' same as all its peers.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> nitpick - 
add/remove blank lines\n>\n> ======\n> src/test/subscription/t/031_column_list.pl\n>\n> nitpick - rewording for a comment. This issue was not strictly caused\n> by this patch, but since you are modifying the same comment we can fix\n> this in passing.\n>\n> ======\n> 99.\n> Please also see the attached top-up patch for all those nitpicks\n> identified above.\n>\n> ======\n> [1] v11-0001 review\n> https://www.postgresql.org/message-id/CAHut%2BPv45gB4cV%2BSSs6730Kb8urQyqjdZ9PBVgmpwqCycr1Ybg%40mail.gmail.com\n\nAll the comments are handled.\n\nThe attached Patches contain all the suggested changes. Here, v15-0001\nis modified to fix the comments, v15-0002 is not modified and v15-0003\nis modified according to the changes in v15-0001 patch.\nThanks Shlok Kyal for modifying the v15-0003 Patch.\n\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 5 Jul 2024 17:31:11 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Jul 2, 2024 at 10:59 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham,\n>\n> As you can see, most of my recent review comments for patch 0001 are\n> only cosmetic nitpicks. But, there is still one long-unanswered design\n> question from a month ago [1, #G.2]\n>\n> A lot of the patch code of pgoutput.c and proto.c and logicalproto.h\n> is related to the introduction and passing everywhere of new\n> 'include_generated_columns' function parameters. 
These same functions\n> are also always passing \"BitMapSet *columns\" representing the\n> publication column list.\n>\n> My question was about whether we can't make use of the existing BMS\n> parameter instead of introducing all the new API parameters.\n>\n> The idea might go something like this:\n>\n> * If 'include_generated_columns' option is specified true and if no\n> column list was already specified then perhaps the relentry->columns\n> can be used for a \"dummy\" column list that has everything including\n> all the generated columns.\n>\n> * By doing this:\n> -- you may be able to avoid passing the extra\n> 'include_gernated_columns' everywhere\n> -- you may be able to avoid checking for generated columns deeper in\n> the code (since it is already checked up-front when building the\n> column list BMS)\n>\n> ~~\n>\n> I'm not saying this design idea is guaranteed to work, but it might be\n> worth considering, because if it does work then there is potential to\n> make the current 0001 patch significantly shorter.\n>\n> ======\n> [1] https://www.postgresql.org/message-id/CAHut%2BPsuJfcaeg6zst%3D6PE5uyJv_UxVRHU3ck7W2aHb1uQYKng%40mail.gmail.com\n\nI have fixed this issue in the latest Patches.\n\nPlease refer to the updated v15 Patches here in [1]. 
See [1] for the\nchanges added.\n\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2B%3Dhn--ALJQvzzu7meX3LuO3tJKppDS7eO1BGvNFYBAbg%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 5 Jul 2024 17:35:33 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are review comments for v15-0001\n\n======\ndoc/src/sgml/ddl.sgml\n\nnitpick - there was a comma (,) which should be a period (.)\n\n======\n.../libpqwalreceiver/libpqwalreceiver.c\n\n1.\n+ if (options->proto.logical.include_generated_columns &&\n+ PQserverVersion(conn->streamConn) >= 170000)\n+ appendStringInfoString(&cmd, \", include_generated_columns 'true'\");\n+\n\nShould now say >= 180000\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\nnitpick - comment wording for RelationSyncEntry.collist.\n\n~~\n\n2.\npgoutput_column_list_init:\n\nI found the current logic to be quite confusing. I assume the code is\nworking OK, because AFAIK there are plenty of tests and they are all\npassing, but the logic seems somewhat repetitive and there are also no\ncomments to explain it adding to my confusion.\n\nIIUC, PRIOR TO THIS PATCH:\n\nBMS field 'columns' represented the \"columns of the column list\" or it\nwas NULL if there was no publication column list (and it was also NULL\nif the column list contained every column).\n\nIIUC NOW, WITH THIS PATCH:\n\nThe BMS field 'columns' meaning is changed slightly to be something\nlike \"columns to be replicated\" or NULL if all columns are to be\nreplicated. This is almost the same thing except we are now handing\nthe generated columns up-front, so generated columns will or won't\nappear in the BMS according to the \"include_generated_columns\"\nparameter. 
See how this is all a bit subtle which is why copious new\ncomments are required to explain it...\n\nSo, although the test result evidence suggests this is working OK, I\nhave many questions/issues about it. Here are some to start with:\n\n2a. It needs a lot more (summary and detailed) comments explaining the\nlogic now that the meaning is slightly different.\n\n2b. What is the story with the FOR ALL TABLES case now? Previously,\nthere would always be NULL 'columns' for \"FOR ALL TABLES\" case -- the\ncomment still says so. But now you've tacked on a 2nd pass of\niterations to build the BMS outside of the \"if (!pub->alltables)\"\ncheck. Is that OK?\n\n2c. The following logic seemed unexpected:\n- if (bms_num_members(cols) == nliveatts)\n+ if (bms_num_members(cols) == nliveatts &&\n+ data->include_generated_columns)\n {\n bms_free(cols);\n cols = NULL;\n`\nI had thought the above code would look different -- more like:\nif (att->attgenerated && !data->include_generated_columns)\n continue;\n\nnliveatts++;\n...\n\n2d. Was so much duplicated code necessary? It feels like the whole\n\"Get the number of live attributes.\" and assignment of cols to NULL\nmight be made common to both code paths.\n\n2e. I'm beginning to question the pros/cons of the new BMS logic; I\nhad suggested trying this way (processing the generated columns\nup-front in the BMS 'columns' list) to reduce patch code and simplify\nall the subsequent API delegation of \"include_generated_cloumns\"\neverywhere like it was in v14-0001. Indeed, that part was a success\nand the patch is now smaller. But I don't like much that we've traded\nreduced code overall for increased confusing code in that BMS\nfunction. If all this BMS code can be refactored and commented to be\neasier to understand then maybe all will be well, but if it can't then\nmaybe this BMS change was a bridge too far. 
I haven't given up on it\njust yet, but I wonder what was your opinion about it, and do other\npeople have thoughts about whether this was the good direction to\ntake?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n3.\n+ if (fout->remoteVersion >= 170000)\n+ appendPQExpBufferStr(query,\n+ \" s.subincludegencols\\n\");\n+ else\n+ appendPQExpBufferStr(query,\n+ \" false AS subincludegencols\\n\");\n\nShould now say >= 180000\n\n======\nsrc/bin/psql/describe.c\n\n4.\n+ /* include_generated_columns is only supported in v18 and higher */\n+ if (pset.sversion >= 170000)\n+ appendPQExpBuffer(&buf,\n+ \", subincludegencols AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Include generated columns\"));\n+\n\nShould now say >= 180000\n\n======\nsrc/include/catalog/pg_subscription.h\n\nnitpick - let's make the comment the same as in WalRcvStreamOptions\n\n======\nsrc/include/replication/logicalproto.h\n\nnitpick - extern for logicalrep_write_update should be unchanged by this patch\n\n======\nsrc/test/regress/sql/subscription.sql\n\nnitpick = the comment \"include_generated_columns and copy_data = true\nare mutually exclusive\" is not necessary because this all falls under\nthe existing comment \"fail - invalid option combinations\"\n\nnitpick - let's explicitly put \"copy_data = true\" in the CREATE\nSUBSCRIPTION to make it more obvious\n\n======\n99. Please also refer to the attached 'diffs' patch which implements\nall of my nitpicks issues mentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 8 Jul 2024 15:22:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shlok, Here are some review comments for patch v15-0003.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n1. 
publication_translate_columns\n\nThe function comment says:\n * Translate a list of column names to an array of attribute numbers\n * and a Bitmapset with them; verify that each attribute is appropriate\n * to have in a publication column list (no system or generated attributes,\n * no duplicates). Additional checks with replica identity are done later;\n * see pub_collist_contains_invalid_column.\n\nThat part about \"[no] generated attributes\" seems to have gone stale\n-- e.g. not quite correct anymore. Should it say no VIRTUAL generated\nattributes?\n\n======\nsrc/backend/replication/logical/proto.c\n\n2. logicalrep_write_tuple and logicalrep_write_attrs\n\nI thought all the code fragments like this:\n\n+ if (att->attgenerated && att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n+ continue;\n+\n\ndon't need to be in the code anymore, because of the BitMapSet (BMS)\nprocessing done to make the \"column list\" for publication where\ndisallowed generated cols should already be excluded from the BMS,\nright?\n\nSo shouldn't all these be detected by the following statement:\nif (!column_in_column_list(att->attnum, columns))\n continue;\n\n======\nsrc/backend/replication/logical/tablesync.c\n3.\n+ if(server_version >= 120000)\n+ {\n+ bool gencols_allowed = server_version >= 170000 &&\nMySubscription->includegencols;\n+\n+ if (gencols_allowed)\n+ {\n\nShould say server_version >= 180000, instead of 170000\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n4. 
send_relation_and_attrs\n\n(this is a similar comment for #2 above)\n\nIIUC, one of the advantages of the BitMapSet (BMS) idea in patch 0001 to\nprocess the generated columns up-front is that there is no need to check\nthem again in code like this.\n\nThey should be discovered anyway in the subsequent check:\n/* Skip this attribute if it's not present in the column list */\nif (columns != NULL && !bms_is_member(att->attnum, columns))\n continue;\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n5.\nAFAICT there are still multiple comments (e.g. for the \"TEST tab<n>\"\ncomments) where it says \"generated\" instead of \"stored\ngenerated\". I did not make a \"nitpicks\" diff for these because those\ncomments are inherited from the prior patch 0002 which still has\noutstanding review comments on it too. Please just search/replace\nthem.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:50:26 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 5 Jul 2024 at 13:47, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v14-0002.\n>\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 2. copy_table\n>\n> + attnamelist = make_copy_attnamelist(relmapentry, remotegenlist);\n> +\n> /* Start copy on the publisher. 
*/\n> initStringInfo(&cmd);\n>\n> - /* Regular table with no row filter */\n> - if (lrel.relkind == RELKIND_RELATION && qual == NIL)\n> + /* check if remote column list has generated columns */\n> + if(MySubscription->includegencols)\n> + {\n> + for (int i = 0; i < relmapentry->remoterel.natts; i++)\n> + {\n> + if(remotegenlist[i])\n> + {\n> + remote_has_gencol = true;\n> + break;\n> + }\n> + }\n> + }\n> +\n>\n> There is some subtle logic going on here:\n>\n> For example, the comment here says \"Check if the remote column list\n> has generated columns\", and it then proceeds to iterate the remote\n> attributes checking the remotegenlist[i]. But the remotegenlist[] was\n> returned from a prior call to make_copy_attnamelist() and according to\n> the make_copy_attnamelist logic, it is NOT returning all remote\n> generated-cols in that list. Specifically, it is stripping some of\n> them -- \"Do not include generated columns of the subscription table in\n> the [remotegenlist] column list.\".\n>\n> So, actually this loop seems to be only finding cases (setting\n> remote_has_gen = true) where the remote column is generated but the\n> match local column is *not* generated. Maybe this was the intended\n> logic all along but then certainly the comment should be improved to\n> describe it better.\n\n'remotegenlist' is actually constructed in function 'fetch_remote_table_info'\nand it has an entry for every column in the column list specifying\nwhether a column is\ngenerated or not.\nIn the function 'make_copy_attnamelist' we are not modifying the list.\nSo, I think the current comment would be sufficient. Thoughts?\n\n> ======\n> src/test/subscription/t/004_sync.pl\n>\n> nitpick - changes to comment style to make the test case separations\n> much more obvious\n> nitpick - minor comment wording tweaks\n>\n> 5.\n> Here, you are confirming we get an ERROR when replicating from a\n> non-generated column to a generated column. 
But I think your patch\n> also added exactly that same test scenario in the 011_generated (as\n> the sub5 test). So, maybe this one here should be removed?\n\nFor 004_sync.pl, it is tested when 'include_generated_columns' is not\nspecified, whereas for the test in 011_generated\n'include_generated_columns = true' is specified.\nI thought we should have a test for both cases to verify that the\nerror message format is the same in both cases. Thoughts?\n\nI have attached the patches. I have addressed the rest of the\ncomments and added the changes in v16-0002. I have not modified the\nv16-0001 patch.\n\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Mon, 8 Jul 2024 17:33:14 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, 8 Jul 2024 at 13:20, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shlok, Here are some review comments for patch v15-0003.\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> 1. publication_translate_columns\n>\n> The function comment says:\n> * Translate a list of column names to an array of attribute numbers\n> * and a Bitmapset with them; verify that each attribute is appropriate\n> * to have in a publication column list (no system or generated attributes,\n> * no duplicates). Additional checks with replica identity are done later;\n> * see pub_collist_contains_invalid_column.\n>\n> That part about \"[no] generated attributes\" seems to have gone stale\n> -- e.g. not quite correct anymore. Should it say no VIRTUAL generated\n> attributes?\nYes, we should say no VIRTUAL generated attributes; I have modified it.\n\n> ======\n> src/backend/replication/logical/proto.c\n>\n> 2. 
logicalrep_write_tuple and logicalrep_write_attrs\n>\n> I thought all the code fragments like this:\n>\n> + if (att->attgenerated && att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> + continue;\n> +\n>\n> don't need to be in the code anymore, because of the BitMapSet (BMS)\n> processing done to make the \"column list\" for publication where\n> disallowed generated cols should already be excluded from the BMS,\n> right?\n>\n> So shouldn't all these be detected by the following statement:\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\nThe current BMS logic does not handle virtual generated columns.\nThere can be cases where we do not want a virtual generated column but\nit would still be present in the BMS.\nTo address this I have added the above logic. I have added this logic\nsimilar to the checks for 'attr->attisdropped'.\n\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 4. send_relation_and_attrs\n>\n> (this is a similar comment for #2 above)\n>\n> IIUC of the advantages of the BitMapSet (BMS) idea in patch 0001 to\n> process the generated columns up-front means there is no need to check\n> them again in code like this.\n>\n> They should be discovered anyway in the subsequent check:\n> /* Skip this attribute if it's not present in the column list */\n> if (columns != NULL && !bms_is_member(att->attnum, columns))\n> continue;\nSame explanation as above.\n\nI have addressed all the comments in the v16-0003 patch. Please refer to [1].\n[1]: https://www.postgresql.org/message-id/CANhcyEXw%3DBFFVUqohWES9EPkdq-ZMC5QRBVQqQPzrO%3DQ7uzFQw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:34:38 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shlok, Here are my review comments for v16-0002\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n1. 
fetch_remote_table_info\n\n+ if ((server_version >= 120000 && server_version < 180000) ||\n+ !MySubscription->includegencols)\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n\nI felt this condition was a bit complicated. It needs a comment to\nexplain that \"attgenerated\" has been supported only since >= PG12 and\n'include_generated_columns' is supported only since >= PG18. The more\nI look at this, the more I think it is a bug. For example, what happens if the\nserver is *before* PG12 and include_generated_cols is false; won't it\nthen try to build SQL using the \"attgenerated\" column which will cause\nan ERROR on the server?\n\nIIRC this condition is already written properly in your patch 0003.\nSo, most of that 0003 condition refactoring should be done here in\npatch 0002 instead.\n\n~~~\n\n2. copy_table\n\n> > So, actually this loop seems to be only finding cases (setting\n> > remote_has_gen = true) where the remote column is generated but the\n> > match local column is *not* generated. Maybe this was the intended\n> > logic all along but then certainly the comment should be improved to\n> > describe it better.\n>\n> 'remotegenlist' is actually constructed in function 'fetch_remote_table_info'\n> and it has an entry for every column in the column list specifying\n> whether a column is\n> generated or not.\n> In the function 'make_copy_attnamelist' we are not modifying the list.\n> So, I think the current comment would be sufficient. Thoughts?\n\nYes, I was mistaken thinking the list is \"modified\". OTOH, I still\nfeel the existing comment (\"Check if remote column list has any\ngenerated column\") is misleading because the remote table might have\ngenerated cols but we are not even interested in them if the\nequivalent subscriber column is also generated. 
Please see the nitpicks\ndiff for my suggestion on how to update this comment.\n\n~~~\n\nnitpick - add space after \"if\"\n\n======\nsrc/test/subscription/t/004_sync.pl\n\n> > 5.\n> > Here, you are confirming we get an ERROR when replicating from a\n> > non-generated column to a generated column. But I think your patch\n> > also added exactly that same test scenario in the 011_generated (as\n> > the sub5 test). So, maybe this one here should be removed?\n>\n> For 004_sync.pl, it is tested when 'include_generated_columns' is not\n> specified, whereas for the test in 011_generated\n> 'include_generated_columns = true' is specified.\n> I thought we should have a test for both cases to verify that the\n> error message format is the same in both cases. Thoughts?\n\n3.\nSorry, I missed that there was a parameter flag difference. Anyway,\nsince the code-path to reach this error is the same regardless of the\n'include_generated_columns' parameter value, IMO having too many tests\nmight be overkill. YMMV.\n\nAnyway, whether you decide to keep both test cases or not, I think all\ntesting related to generated column replication belongs in the new\n011_generated.pl TAP file -- not here in 004_sync.pl.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n4. Untested scenarios for \"missing col\"?\n\nI have seen (in 004_sync.pl) missing column test cases for:\n- publisher not-generated col ==> subscriber missing column\n\nMaybe I am mistaken, but I don't recall seeing any test cases for:\n- publisher generated-col ==> subscriber missing col\n\nUnless they are already done somewhere, I think this scenario should\nbe in 011_generated.pl. 
Furthermore, maybe it needs to be tested for\nboth include_generated_columns = true / false, because if the\nparameter is false it should be OK, but if the parameter is true it\nshould give ERROR.\n\n~~~\n\n5.\n-# publisher-side tab3 has generated col 'b' but subscriber-side tab3\nhas DIFFERENT COMPUTATION generated col 'b'.\n+# tab3:\n+# publisher-side tab3 has generated col 'b' but\n+# subscriber-side tab3 has DIFFERENT COMPUTATION generated col 'b'.\n\nI think this change is only improving a comment that was introduced by\npatch 0001. This all belongs back in patch 0001, then patch 0002 has\nnothing to do here.\n\n======\n99.\nPlease also refer to the attached diffs patch which implements any\nnitpicks mentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 9 Jul 2024 11:44:10 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shlok, here are my review comments for v16-0003.\n\n======\nsrc/backend/replication/logical/proto.c\n\n\nOn Mon, Jul 8, 2024 at 10:04 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> On Mon, 8 Jul 2024 at 13:20, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > 2. 
logicalrep_write_tuple and logicalrep_write_attrs\n> >\n> > I thought all the code fragments like this:\n> >\n> > + if (att->attgenerated && att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> > + continue;\n> > +\n> >\n> > don't need to be in the code anymore, because of the BitMapSet (BMS)\n> > processing done to make the \"column list\" for publication where\n> > disallowed generated cols should already be excluded from the BMS,\n> > right?\n> >\n> > So shouldn't all these be detected by the following statement:\n> > if (!column_in_column_list(att->attnum, columns))\n> > continue;\n> The current BMS logic do not handle the Virtual Generated Columns.\n> There can be cases where we do not want a virtual generated column but\n> it would be present in BMS.\n> To address this I have added the above logic. I have added this logic\n> similar to the checks of 'attr->attisdropped'.\n>\n\nHmm. I thought the BMS idea of patch 0001 is to discover what columns\nshould be replicated up-front. If they should not be replicated (e.g.\nvirtual generated columns cannot be) then they should never be in the\nBMS.\n\nSo what you said (\"There can be cases where we do not want a virtual\ngenerated column but it would be present in BMS\") should not be\nhappening. If that is happening then it sounds more like a bug in the\nnew BMS logic of pgoutput_column_list_init() function. In other words,\nif what you say is true, then it seems like the current extra\nconditions you have in patch 0004 are just a band-aid to cover a\nproblem of the BMS logic of patch 0001. Am I mistaken?\n\n> > ======\n> > src/backend/replication/pgoutput/pgoutput.c\n> >\n> > 4. 
send_relation_and_attrs\n> >\n> > (this is a similar comment for #2 above)\n> >\n> > IIUC of the advantages of the BitMapSet (BMS) idea in patch 0001 to\n> > process the generated columns up-front means there is no need to check\n> > them again in code like this.\n> >\n> > They should be discovered anyway in the subsequent check:\n> > /* Skip this attribute if it's not present in the column list */\n> > if (columns != NULL && !bms_is_member(att->attnum, columns))\n> > continue;\n> Same explanation as above.\n\nAs above.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nI'm not sure if you needed to say \"STORED\" generated cols for the\nsubscriber-side columns but anyway, whatever is done needs to be done\nconsistently. FYI, below you did *not* say STORED for subscriber-side\ngenerated cols, but in other comments for subscriber-side generated\ncolumns, you did say STORED.\n\n# tab3:\n# publisher-side tab3 has STORED generated col 'b' but\n# subscriber-side tab3 has DIFFERENT COMPUTATION generated col 'b'.\n\n~\n\n# tab4:\n# publisher-side tab4 has STORED generated cols 'b' and 'c' but\n# subscriber-side tab4 has non-generated col 'b', and generated-col 'c'\n# where columns on publisher/subscriber are in a different order\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 9 Jul 2024 14:22:46 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham/Shlok, I was thinking some more about the suggested new\nBitMapSet (BMS) idea of patch 0001 that changes the 'columns' meaning\nto include generated cols also where necessary.\n\nI feel it is a bit risky to change lots of code without being 100%\nconfident it will still be in the final push. 
It's also going to make\nthe reviewing job harder if stuff gets added and then later removed.\n\nIMO it might be better to revert all the patches (mostly 0001, but\nalso parts of subsequent patches) to their pre-BMS-change ~v14* state.\nThen all the BMS \"improvement\" can be kept isolated in a new patch\n0004.\n\nSome more reasons to split this off into a separate patch are:\n\n* The BMS change is essentially a redesign/cleanup of the code but is\nnothing to do with the actual *functionality* of the new \"generated\ncolumns\" feature.\n\n* Apart from the BMS change I think the rest of the patches are nearly\nstable now. So it might be good to get it all finished so the BMS\nchange can be tackled separately.\n\n* By isolating the BMS change, then we will be able to see exactly\nwhat is the code cost/benefit (e.g. removal of redundant code versus\nadding new logic) which is part of the judgement to decide whether to\ndo it this way or not.\n\n* By isolating the BMS change, then it makes it convenient for testing\nbefore/after in case there are any performance concerns\n\n* By isolating the BMS change, if some unexpected obstacle is\nencountered that makes it unfeasible then we can just throw away patch\n0004 and everything else (patches 0001,0002,0003) will still be good\nto go.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 10 Jul 2024 08:51:55 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jul 8, 2024 at 10:53 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are review comments for v15-0001\n>\n> ======\n> doc/src/sgml/ddl.sgml\n>\n> nitpick - there was a comma (,) which should be a period (.)\n>\n> ======\n> .../libpqwalreceiver/libpqwalreceiver.c\n>\n> 1.\n> + if (options->proto.logical.include_generated_columns &&\n> + PQserverVersion(conn->streamConn) >= 170000)\n> + 
appendStringInfoString(&cmd, \", include_generated_columns 'true'\");\n> +\n>\n> Should now say >= 180000\n>\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> nitpick - comment wording for RelationSyncEntry.collist.\n>\n> ~~\n>\n> 2.\n> pgoutput_column_list_init:\n>\n> I found the current logic to be quite confusing. I assume the code is\n> working OK, because AFAIK there are plenty of tests and they are all\n> passing, but the logic seems somewhat repetitive and there are also no\n> comments to explain it adding to my confusion.\n>\n> IIUC, PRIOR TO THIS PATCH:\n>\n> BMS field 'columns' represented the \"columns of the column list\" or it\n> was NULL if there was no publication column list (and it was also NULL\n> if the column list contained every column).\n>\n> IIUC NOW, WITH THIS PATCH:\n>\n> The BMS field 'columns' meaning is changed slightly to be something\n> like \"columns to be replicated\" or NULL if all columns are to be\n> replicated. This is almost the same thing except we are now handing\n> the generated columns up-front, so generated columns will or won't\n> appear in the BMS according to the \"include_generated_columns\"\n> parameter. See how this is all a bit subtle which is why copious new\n> comments are required to explain it...\n>\n> So, although the test result evidence suggests this is working OK, I\n> have many questions/issues about it. Here are some to start with:\n>\n> 2a. It needs a lot more (summary and detailed) comments explaining the\n> logic now that the meaning is slightly different.\n>\n> 2b. What is the story with the FOR ALL TABLES case now? Previously,\n> there would always be NULL 'columns' for \"FOR ALL TABLES\" case -- the\n> comment still says so. But now you've tacked on a 2nd pass of\n> iterations to build the BMS outside of the \"if (!pub->alltables)\"\n> check. Is that OK?\n>\n> 2c. 
The following logic seemed unexpected:\n> - if (bms_num_members(cols) == nliveatts)\n> + if (bms_num_members(cols) == nliveatts &&\n> + data->include_generated_columns)\n> {\n> bms_free(cols);\n> cols = NULL;\n> `\n> I had thought the above code would look different -- more like:\n> if (att->attgenerated && !data->include_generated_columns)\n> continue;\n>\n> nliveatts++;\n> ...\n>\n> 2d. Was so much duplicated code necessary? It feels like the whole\n> \"Get the number of live attributes.\" and assignment of cols to NULL\n> might be made common to both code paths.\n>\n> 2e. I'm beginning to question the pros/cons of the new BMS logic; I\n> had suggested trying this way (processing the generated columns\n> up-front in the BMS 'columns' list) to reduce patch code and simplify\n> all the subsequent API delegation of \"include_generated_cloumns\"\n> everywhere like it was in v14-0001. Indeed, that part was a success\n> and the patch is now smaller. But I don't like much that we've traded\n> reduced code overall for increased confusing code in that BMS\n> function. If all this BMS code can be refactored and commented to be\n> easier to understand then maybe all will be well, but if it can't then\n> maybe this BMS change was a bridge too far. I haven't given up on it\n> just yet, but I wonder what was your opinion about it, and do other\n> people have thoughts about whether this was the good direction to\n> take?\n\nI have created a separate patch(v17-0004) for this idea. 
Will address\nthis comment in the next version of patches.\n\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 3.\n> + if (fout->remoteVersion >= 170000)\n> + appendPQExpBufferStr(query,\n> + \" s.subincludegencols\\n\");\n> + else\n> + appendPQExpBufferStr(query,\n> + \" false AS subincludegencols\\n\");\n>\n> Should now say >= 180000\n>\n> ======\n> src/bin/psql/describe.c\n>\n> 4.\n> + /* include_generated_columns is only supported in v18 and higher */\n> + if (pset.sversion >= 170000)\n> + appendPQExpBuffer(&buf,\n> + \", subincludegencols AS \\\"%s\\\"\\n\",\n> + gettext_noop(\"Include generated columns\"));\n> +\n>\n> Should now say >= 180000\n>\n> ======\n> src/include/catalog/pg_subscription.h\n>\n> nitpick - let's make the comment the same as in WalRcvStreamOptions\n>\n> ======\n> src/include/replication/logicalproto.h\n>\n> nitpick - extern for logicalrep_write_update should be unchanged by this patch\n>\n> ======\n> src/test/regress/sql/subscription.sql\n>\n> nitpick - the comment \"include_generated_columns and copy_data = true\n> are mutually exclusive\" is not necessary because this all falls under\n> the existing comment \"fail - invalid option combinations\"\n>\n> nitpick - let's explicitly put \"copy_data = true\" in the CREATE\n> SUBSCRIPTION to make it more obvious\n>\n> ======\n> 99. Please also refer to the attached 'diffs' patch which implements\n> all of my nitpick issues mentioned above.\n\nThe attached Patches contain all the suggested changes. 
Here, v17-0001\nis modified to fix the comments, v17-0002 and v17-0003 are modified\naccording to the changes in v17-0001 patch and v17-0004 patch contains\nthe changes related to Bitmapset(BMS) idea that changes the 'columns'\nmeaning to include generated cols also where necessary.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 11 Jul 2024 11:42:58 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Jul 10, 2024 at 4:22 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham/Shlok, I was thinking some more about the suggested new\n> BitMapSet (BMS) idea of patch 0001 that changes the 'columns' meaning\n> to include generated cols also where necessary.\n>\n> I feel it is a bit risky to change lots of code without being 100%\n> confident it will still be in the final push. It's also going to make\n> the reviewing job harder if stuff gets added and then later removed.\n>\n> IMO it might be better to revert all the patches (mostly 0001, but\n> also parts of subsequent patches) to their pre-BMS-change ~v14* state.\n> Then all the BMS \"improvement\" can be kept isolated in a new patch\n> 0004.\n>\n> Some more reasons to split this off into a separate patch are:\n>\n> * The BMS change is essentially a redesign/cleanup of the code but is\n> nothing to do with the actual *functionality* of the new \"generated\n> columns\" feature.\n>\n> * Apart from the BMS change I think the rest of the patches are nearly\n> stable now. So it might be good to get it all finished so the BMS\n> change can be tackled separately.\n>\n> * By isolating the BMS change, then we will be able to see exactly\n> what is the code cost/benefit (e.g. 
removal of redundant code versus\n> adding new logic) which is part of the judgement to decide whether to\n> do it this way or not.\n>\n> * By isolating the BMS change, then it makes it convenient for testing\n> before/after in case there are any performance concerns\n>\n> * By isolating the BMS change, if some unexpected obstacle is\n> encountered that makes it unfeasible then we can just throw away patch\n> 0004 and everything else (patches 0001,0002,0003) will still be good\n> to go.\n\nAs suggested, I have created a separate patch for the Bitmapset(BMS)\nidea of patch 0001 that changes the 'columns' meaning to include\ngenerated cols also where necessary.\nPlease refer to the updated v17 Patches here in [1]. See [1] for the\nchanges added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJ0gAUd62PvBRXCPYy2oTNZWEY-Qe8cBNzQaJPVMZCeGA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 11 Jul 2024 11:46:51 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham.\n\nThanks for separating the new BMS 'columns' modification.\n\nHere are my review comments for the latest patch v17-0001.\n\n======\n\n1. src/backend/replication/pgoutput/pgoutput.c\n\n /*\n * Columns included in the publication, or NULL if all columns are\n * included implicitly. Note that the attnums in this bitmap are not\n+ * publication and include_generated_columns option: other reasons should\n+ * be checked at user side. Note that the attnums in this bitmap are not\n * shifted by FirstLowInvalidHeapAttributeNumber.\n */\n Bitmapset *columns;\nWith this latest 0001 there is now no change to the original\ninterpretation of RelationSyncEntry BMS 'columns'. So, I think this\nfield comment should remain unchanged; i.e. 
it should be the same as\nthe current HEAD comment.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nnitpick - comment changes for 'tab2' and 'tab3' to make them more consistent.\n\n======\n99.\nPlease refer to the attached diff patch which implements any nitpicks\ndescribed above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 12 Jul 2024 16:43:15 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some review comments about patch v17-0003\n\n======\n1.\nMissing a docs change?\n\nPreviously, (v16-0002) the patch included a change to\ndoc/src/sgml/protocol.sgml like below to say STORED generated instead\nof just generated.\n\n <para>\n- Boolean option to enable generated columns. This option controls\n- whether generated columns should be included in the string\n- representation of tuples during logical decoding in PostgreSQL.\n+ Boolean option to enable <literal>STORED</literal> generated columns.\n+ This option controls whether <literal>STORED</literal>\ngenerated columns\n+ should be included in the string representation of tuples\nduring logical\n+ decoding in PostgreSQL.\n </para>\n\nWhy is that v16 change no longer present in patch v17-0003?\n\n======\nsrc/backend/catalog/pg_publication.c\n\n2.\nPreviously, (v16-0003) this patch included a change to clarify the\nkind of generated cols that are allowed in a column list.\n\n * Translate a list of column names to an array of attribute numbers\n * and a Bitmapset with them; verify that each attribute is appropriate\n- * to have in a publication column list (no system or generated attributes,\n- * no duplicates). Additional checks with replica identity are done later;\n- * see pub_collist_contains_invalid_column.\n+ * to have in a publication column list (no system or virtual generated\n+ * attributes, no duplicates). 
Additional checks with replica identity\n+ * are done later; see pub_collist_contains_invalid_column.\n\nWhy is that v16 change no longer present in patch v17-0003?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n3. make_copy_attnamelist\n\n- if (!attr->attgenerated)\n+ if (attr->attgenerated != ATTRIBUTE_GENERATED_STORED)\n continue;\n\nIIUC this logic is checking to make sure the subscriber-side table\ncolumn was not a generated column (because we don't replicate on top\nof generated columns). So, does the distinction of STORED/VIRTUAL\nreally matter here?\n\n~~~\n\nfetch_remote_table_info:\nnitpick - Should not change any spaces unrelated to the patch\n\n======\n\nsend_relation_and_attrs:\n\n- if (att->attgenerated && !include_generated_columns)\n+ if (att->attgenerated && (att->attgenerated !=\nATTRIBUTE_GENERATED_STORED || !include_generated_columns))\n continue;\n\nnitpick - It seems over-complicated. Conditions can be split so the\ncode fragment looks the same as in other places in this patch.\n\n======\n99.\nPlease see the attached diffs patch that implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 15 Jul 2024 12:38:20 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, I had a quick look at the patch v17-0004 which is the split-off\nnew BMS logic.\n\nIIUC this 0004 is currently undergoing some refactoring and\ncleaning-up, so I won't comment much about it except to give the\nfollowing observation below.\n\n======\nsrc/backend/replication/logical/proto.c.\n\nI did not expect to see any code fragments that are still checking\ngenerated columns like below:\n\nlogicalrep_write_tuple:\n\n if (att->attgenerated)\n {\n- if (!include_generated_columns)\n- continue;\n\n if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n continue;\n~\n\n if (att->attgenerated)\n 
{\n- if (!include_generated_columns)\n- continue;\n\n if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n continue;\n\n~~~\n\nlogicalrep_write_attrs:\n\n if (att->attgenerated)\n {\n- if (!include_generated_columns)\n- continue;\n\n if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n continue;\n\n~\nif (att->attgenerated)\n {\n- if (!include_generated_columns)\n- continue;\n\n if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n continue;\n~~~\n\n\nAFAIK, now checking support of generated columns will be done when the\nBMS 'columns' is assigned, so the continuation code will be handled\nlike this:\n\nif (!column_in_column_list(att->attnum, columns))\n continue;\n\n======\n\nBTW there is a subtle but significant difference in this 0004 patch.\nIOW, we are introducing a difference between the list of published\ncolumns VERSUS a publication column list. So please make sure that all\ncode comments are adjusted appropriately so they are not misleading by\ncalling these \"column lists\" still.\n\nBEFORE: BMS 'columns' means \"columns of the column list\" or NULL if\nthere was no publication column list\nAFTER: BMS 'columns' means \"columns to be replicated\" or NULL if all\ncolumns are to be replicated\n\n======\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:39:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 9 Jul 2024 at 07:14, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shlok, Here are my review comments for v16-0002\n>\n> ======\n> src/test/subscription/t/004_sync.pl\n>\n> > > 5.\n> > > Here, you are confirming we get an ERROR when replicating from a\n> > > non-generated column to a generated column. But I think your patch\n> > > also added exactly that same test scenario in the 011_generated (as\n> > > the sub5 test). 
So, maybe this one here should be removed?\n> >\n> > For 0004_sync.pl, it is tested when 'include_generated_columns' is not\n> > specified. Whereas for the test in 011_generated\n> > 'include_generated_columns = true' is specified.\n> > I thought we should have a test for both cases to test if the error\n> > message format is the same for both cases. Thoughts?\n>\n> 3.\n> Sorry, I missed that there was a parameter flag difference. Anyway,\n> since the code-path to reach this error is the same regardless of the\n> 'include_generated_columns' parameter value IMO having too many tests\n> might be overkill. YMMV.\n>\n> Anyway, whether you decide to keep both test cases or not, I think all\n> testing related to generated column replication belongs in the new\n> 011_generated.pl TAP file -- not here in 004_sync.pl\nI have removed the test.\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 4. Untested scenarios for \"missing col\"?\n>\n> I have seen (in 004_sync.pl) missing column test cases for:\n> - publisher not-generated col ==> subscriber missing column\n>\n> Maybe I am mistaken, but I don't recall seeing any test cases for:\n> - publisher generated-col ==> subscriber missing col\n>\n> Unless they are already done somewhere, I think this scenario should\n> be in 011_generated.pl. Furthermore, maybe it needs to be tested for\n> both include_generated_columns = true / false, because if the\n> parameter is false it should be OK, but if the parameter is true it\n> should give ERROR.\n Have added the tests in 011_generated.pl\n\nI have also addressed the remaining comments. 
Please find the updated\nv18 patches:\n\nv18-0001 - Rebased the patch on HEAD\nv18-0002 - Addressed the comments\nv18-0003 - Addressed the comments\nv18-0004 - Rebased the patch\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Tue, 16 Jul 2024 14:27:44 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 9 Jul 2024 at 09:53, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shlok, here are my review comments for v16-0003.\n>\n> ======\n> src/backend/replication/logical/proto.c\n>\n>\n> On Mon, Jul 8, 2024 at 10:04 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> >\n> > On Mon, 8 Jul 2024 at 13:20, Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > >\n> > > 2. logicalrep_write_tuple and logicalrep_write_attrs\n> > >\n> > > I thought all the code fragments like this:\n> > >\n> > > + if (att->attgenerated && att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> > > + continue;\n> > > +\n> > >\n> > > don't need to be in the code anymore, because of the BitMapSet (BMS)\n> > > processing done to make the \"column list\" for publication where\n> > > disallowed generated cols should already be excluded from the BMS,\n> > > right?\n> > >\n> > > So shouldn't all these be detected by the following statement:\n> > > if (!column_in_column_list(att->attnum, columns))\n> > > continue;\n> > The current BMS logic does not handle the Virtual Generated Columns.\n> > There can be cases where we do not want a virtual generated column but\n> > it would be present in BMS.\n> > To address this, I have added the above logic, similar to the existing\n> > checks of 'attr->attisdropped'.\n> >\n>\n> Hmm. I thought the BMS idea of patch 0001 is to discover what columns\n> should be replicated up-front. If they should not be replicated (e.g.\n> 
If they should not be replicated (e.g.\n> virtual generated columns cannot be) then they should never be in the\n> BMS.\n>\n> So what you said (\"There can be cases where we do not want a virtual\n> generated column but it would be present in BMS\") should not be\n> happening. If that is happening then it sounds more like a bug in the\n> new BMS logic of pgoutput_column_list_init() function. In other words,\n> if what you say is true, then it seems like the current extra\n> conditions you have in patch 0004 are just a band-aid to cover a\n> problem of the BMS logic of patch 0001. Am I mistaken?\n>\nWe have created a 0004 patch to use the BMS approach. It will be\naddressed in the future 0004 patch.\n\n> > > ======\n> > > src/backend/replication/pgoutput/pgoutput.c\n> > >\n> > > 4. send_relation_and_attrs\n> > >\n> > > (this is a similar comment for #2 above)\n> > >\n> > > IIUC of the advantages of the BitMapSet (BMS) idea in patch 0001 to\n> > > process the generated columns up-front means there is no need to check\n> > > them again in code like this.\n> > >\n> > > They should be discovered anyway in the subsequent check:\n> > > /* Skip this attribute if it's not present in the column list */\n> > > if (columns != NULL && !bms_is_member(att->attnum, columns))\n> > > continue;\n> > Same explanation as above.\n>\n> As above.\n>\nWe have created a 0004 patch to use the BMS approach. It will be\naddressed in the future 0004 patch.\n\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> I'm not sure if you needed to say \"STORED\" generated cols for the\n> subscriber-side columns but anyway, whatever is done needs to be done\n> consistently. 
FYI, below you did *not* say STORED for subscriber-side\ngenerated cols, but in other comments for subscriber-side generated\ncolumns, you did say STORED.\n>\n> # tab3:\n> # publisher-side tab3 has STORED generated col 'b' but\n> # subscriber-side tab3 has DIFFERENT COMPUTATION generated col 'b'.\n>\n> ~\n>\n> # tab4:\n> # publisher-side tab4 has STORED generated cols 'b' and 'c' but\n> # subscriber-side tab4 has non-generated col 'b', and generated-col 'c'\n> # where columns on publisher/subscriber are in a different order\n>\nFixed\n\nPlease find the updated v18-0003 patch at [1].\n\n[1]: https://www.postgresql.org/message-id/CANhcyEW3LVJpRPScz6VBa%3DZipEMV7b-u76PDEALNcNDFURCYMA%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Tue, 16 Jul 2024 14:28:43 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, 15 Jul 2024 at 08:08, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some review comments about patch v17-0003\n\nI have addressed the comments in v18-0003 patch [1].\n\n[1]: https://www.postgresql.org/message-id/CANhcyEW3LVJpRPScz6VBa%3DZipEMV7b-u76PDEALNcNDFURCYMA%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Tue, 16 Jul 2024 14:34:12 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Jul 12, 2024 at 12:13 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham.\n>\n> Thanks for separating the new BMS 'columns' modification.\n>\n> Here are my review comments for the latest patch v17-0001.\n>\n> ======\n>\n> 1. src/backend/replication/pgoutput/pgoutput.c\n>\n> /*\n> * Columns included in the publication, or NULL if all columns are\n> * included implicitly. 
Note that the attnums in this bitmap are not\n> + * publication and include_generated_columns option: other reasons should\n> + * be checked at user side. Note that the attnums in this bitmap are not\n> * shifted by FirstLowInvalidHeapAttributeNumber.\n> */\n> Bitmapset *columns;\n> With this latest 0001 there is now no change to the original\n> interpretation of RelationSyncEntry BMS 'columns'. So, I think this\n> field comment should remain unchanged; i.e. it should be the same as\n> the current HEAD comment.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> nitpick - comment changes for 'tab2' and 'tab3' to make them more consistent.\n>\n> ======\n> 99.\n> Please refer to the attached diff patch which implements any nitpicks\n> described above.\n\nThe attached Patches contain all the suggested changes.\n\nv19-0001 - Addressed the comments.\nv19-0002 - Rebased the Patch.\nv19-0003 - Rebased the Patch.\nv19-0004- Addressed all the comments related to Bitmapset(BMS).\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Tue, 16 Jul 2024 16:49:42 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jul 15, 2024 at 11:09 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, I had a quick look at the patch v17-0004 which is the split-off\n> new BMS logic.\n>\n> IIUC this 0004 is currently undergoing some refactoring and\n> cleaning-up, so I won't comment much about it except to give the\n> following observation below.\n>\n> ======\n> src/backend/replication/logical/proto.c.\n>\n> I did not expect to see any code fragments that are still checking\n> generated columns like below:\n>\n> logicalrep_write_tuple:\n>\n> if (att->attgenerated)\n> {\n> - if (!include_generated_columns)\n> - continue;\n>\n> if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> continue;\n> ~\n>\n> if (att->attgenerated)\n> {\n> - if 
(!include_generated_columns)\n> - continue;\n>\n> if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> continue;\n>\n> ~~~\n>\n> logicalrep_write_attrs:\n>\n> if (att->attgenerated)\n> {\n> - if (!include_generated_columns)\n> - continue;\n>\n> if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> continue;\n>\n> ~\n> if (att->attgenerated)\n> {\n> - if (!include_generated_columns)\n> - continue;\n>\n> if (att->attgenerated != ATTRIBUTE_GENERATED_STORED)\n> continue;\n> ~~~\n>\n>\n> AFAIK, now checking support of generated columns will be done when the\n> BMS 'columns' is assigned, so the continuation code will be handled\n> like this:\n>\n> if (!column_in_column_list(att->attnum, columns))\n> continue;\n>\n> ======\n>\n> BTW there is a subtle but significant difference in this 0004 patch.\n> IOW, we are introducing a difference between the list of published\n> columns VERSUS a publication column list. So please make sure that all\n> code comments are adjusted appropriately so they are not misleading by\n> calling these \"column lists\" still.\n>\n> BEFORE: BMS 'columns' means \"columns of the column list\" or NULL if\n> there was no publication column list\n> AFTER: BMS 'columns' means \"columns to be replicated\" or NULL if all\n> columns are to be replicated\n\nI have addressed all the comments in v19-0004 patch.\nPlease refer to the updated v19-0004 Patch here in [1]. 
See [1] for\nthe changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2BR0cj%3Dz1bTMAgQKQWx1EKvkMEnV9QsHGvOqTdnLUQi1A%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Tue, 16 Jul 2024 16:53:45 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham, here are my review comments for patch v19-0001.\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n1.\n /*\n * Columns included in the publication, or NULL if all columns are\n * included implicitly. Note that the attnums in this bitmap are not\n+ * publication and include_generated_columns option: other reasons should\n+ * be checked at user side. Note that the attnums in this bitmap are not\n * shifted by FirstLowInvalidHeapAttributeNumber.\n */\n Bitmapset *columns;\nYou replied [1] \"The attached Patches contain all the suggested\nchanges.\" but as I previously commented [2, #1], since there is no\nchange to the interpretation of the 'columns' BMS caused by this\npatch, then I expected this comment would be unchanged (i.e. same as\nHEAD code). But this fix was missed in v19-0001.\n\nOTOH, if you do think there was a reason to change the comment then\nthe above is still not good because \"are not publication and\ninclude_generated_columns option\" wording doesn't make sense.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nObservation -- I added (in nitpicks diffs) some more comments for\n'tab1' (to make all comments consistent with the new tests added). But\nwhen I was doing that I observed that tab1 and tab3 test scenarios are\nvery similar. It seems only the subscription parameter is not\nspecified (so 'include_generated_cols' default wll be tested). IIRC\nthe default for that parameter is \"false\", so tab1 is not really\ntesting that properly -- e.g. 
I thought maybe to test the default\nparameter it's better the subscriber-side 'b' should be not-generated?\nBut doing that would make 'tab1' the same as 'tab2'. Anyway, something\nseems amiss -- it seems either something is not tested or is duplicate\ntested. Please revisit what the tab1 test intention was and make sure\nwe are doing the right thing for it...\n\n======\n99.\nThe attached nitpicks diff patch has some tweaked comments.\n\n======\n[1] https://www.postgresql.org/message-id/CAHv8Rj%2BR0cj%3Dz1bTMAgQKQWx1EKvkMEnV9QsHGvOqTdnLUQi1A%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHut%2BPtVfrbx0jb42LCmS%3D-LcMTtWxm%2BvhaoArkjg7Z0mvuXbg%40mail.gmail.com\n\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Thu, 18 Jul 2024 15:17:16 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some review comments for v19-0002\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nmake_copy_attnamelist:\nnitpick - tweak function comment\nnitpick - tweak other comments\n\n~~~\n\nfetch_remote_table_info:\nnitpick - add space after \"if\"\nnitpick - removed a comment because logic is self-evident from the variable name\n\n======\nsrc/test/subscription/t/004_sync.pl\n\n1.\nThis new test is not related to generated columns. IIRC, this is just\nsome test that we discovered missing during review of this thread. 
As\nsuch, I think this change can be posted/patched separately from this\nthread.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nnitpick - change some comment wording to be more consistent with patch 0001.\n\n======\n99.\nPlease see the nitpicks diff attachment which implements any nitpicks\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 18 Jul 2024 18:24:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are some review comments for patch v19-0003\n\n======\nsrc/backend/catalog/pg_publication.c\n\n1.\n/*\n * Translate a list of column names to an array of attribute numbers\n * and a Bitmapset with them; verify that each attribute is appropriate\n * to have in a publication column list (no system or generated attributes,\n * no duplicates). Additional checks with replica identity are done later;\n * see pub_collist_contains_invalid_column.\n *\n * Note that the attribute numbers are *not* offset by\n * FirstLowInvalidHeapAttributeNumber; system columns are forbidden so this\n * is okay.\n */\nstatic void\npublication_translate_columns(Relation targetrel, List *columns,\n int *natts, AttrNumber **attrs)\n\n~\n\nI thought the above comment ought to change: /or generated\nattributes/or virtual generated attributes/\n\nIIRC this was already addressed back in v16, but somehow that fix has\nbeen lost (???).\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nfetch_remote_table_info:\nnitpick - missing end space in this comment /* TODO: use\nATTRIBUTE_GENERATED_VIRTUAL*/\n\n======\n\n2.\n(in patch v19-0001)\n+# tab3:\n+# publisher-side tab3 has generated col 'b'.\n+# subscriber-side tab3 has generated col 'b', using a different computation.\n\n(here, in patch v19-0003)\n # tab3:\n-# publisher-side tab3 has generated col 'b'.\n-# subscriber-side tab3 has generated col 'b', using a 
different computation.\n+# publisher-side tab3 has stored generated col 'b' but\n+# subscriber-side tab3 has DIFFERENT COMPUTATION stored generated col 'b'.\n\nIt has become difficult to review these TAP tests, particularly when\ndifferent patches are modifying the same comment. e.g. I post\nsuggestions to modify comments for patch 0001. Those get addressed OK,\nonly to vanish in subsequent patches like has happened in the above\nexample.\n\nReally this patch 0003 was only supposed to add the word \"stored\", not\nrevert the entire comment to something from an earlier version. Please\ntake care that all comment changes are carried forward correctly from\none patch to the next.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Fri, 19 Jul 2024 09:29:02 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 18 Jul 2024 at 13:55, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some review comments for v19-0002\n> ======\n> src/test/subscription/t/004_sync.pl\n>\n> 1.\n> This new test is not related to generated columns. IIRC, this is just\n> some test that we discovered missing during review of this thread. 
As\n> such, I think this change can be posted/patched separately from this\n> thread.\n>\nI have removed the test for this thread.\n\nI have also addressed the remaining comments for v19-0002 patch.\nPlease find the latest patches.\n\nv20-0001 - not modified\nv20-0002 - Addressed the comments\nv20-0003 - Addressed the comments\nv20-0004 - Not modified\n\nThanks and Regards,\nShlok Kyal", "msg_date": "Fri, 19 Jul 2024 11:31:01 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 19 Jul 2024 at 04:59, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some review comments for patch v19-0003\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> 1.\n> /*\n> * Translate a list of column names to an array of attribute numbers\n> * and a Bitmapset with them; verify that each attribute is appropriate\n> * to have in a publication column list (no system or generated attributes,\n> * no duplicates). 
Additional checks with replica identity are done later;\n> * see pub_collist_contains_invalid_column.\n> *\n> * Note that the attribute numbers are *not* offset by\n> * FirstLowInvalidHeapAttributeNumber; system columns are forbidden so this\n> * is okay.\n> */\n> static void\n> publication_translate_columns(Relation targetrel, List *columns,\n> int *natts, AttrNumber **attrs)\n>\n> ~\n>\n> I though the above comment ought to change: /or generated\n> attributes/or virtual generated attributes/\n>\n> IIRC this was already addressed back in v16, but somehow that fix has\n> been lost (???).\nModified the comment\n\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> fetch_remote_table_info:\n> nitpick - missing end space in this comment /* TODO: use\n> ATTRIBUTE_GENERATED_VIRTUAL*/\n>\nFixed\n\n> ======\n>\n> 2.\n> (in patch v19-0001)\n> +# tab3:\n> +# publisher-side tab3 has generated col 'b'.\n> +# subscriber-side tab3 has generated col 'b', using a different computation.\n>\n> (here, in patch v19-0003)\n> # tab3:\n> -# publisher-side tab3 has generated col 'b'.\n> -# subscriber-side tab3 has generated col 'b', using a different computation.\n> +# publisher-side tab3 has stored generated col 'b' but\n> +# subscriber-side tab3 has DIFFERENT COMPUTATION stored generated col 'b'.\n>\n> It has become difficult to review these TAP tests, particularly when\n> different patches are modifying the same comment. e.g. I post\n> suggestions to modify comments for patch 0001. Those get addressed OK,\n> only to vanish in subsequent patches like has happened in the above\n> example.\n>\n> Really this patch 0003 was only supposed to add the word \"stored\", not\n> revert the entire comment to something from an earlier version. Please\n> take care that all comment changes are carried forward correctly from\n> one patch to the next.\nFixed\n\nI have addressed the comment in v20-0003 patch. 
Please refer [1].\n\n[1]: https://www.postgresql.org/message-id/CANhcyEUzUurrX38HGvG30gV92YDz6WmnnwNRYMVY4tiga-8KZg%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Fri, 19 Jul 2024 11:33:51 +0530", "msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Jul 19, 2024 at 4:01 PM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> On Thu, 18 Jul 2024 at 13:55, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi, here are some review comments for v19-0002\n> > ======\n> > src/test/subscription/t/004_sync.pl\n> >\n> > 1.\n> > This new test is not related to generated columns. IIRC, this is just\n> > some test that we discovered missing during review of this thread. As\n> > such, I think this change can be posted/patched separately from this\n> > thread.\n> >\n> I have removed the test for this thread.\n>\n> I have also addressed the remaining comments for v19-0002 patch.\n\nHi, I have no more review comments for patch v20-0002 at this time.\n\nI saw that the above test was removed from this thread as suggested,\nbut I could not find that any new thread was started to propose this\nvaluable missing test.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 23 Jul 2024 09:23:45 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Jul 18, 2024 at 10:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham, here are my review comments for patch v19-0001.\n>\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 1.\n> /*\n> * Columns included in the publication, or NULL if all columns are\n> * included implicitly. Note that the attnums in this bitmap are not\n> + * publication and include_generated_columns option: other reasons should\n> + * be checked at user side. 
Note that the attnums in this bitmap are not\n> * shifted by FirstLowInvalidHeapAttributeNumber.\n> */\n> Bitmapset *columns;\n> You replied [1] \"The attached Patches contain all the suggested\n> changes.\" but as I previously commented [2, #1], since there is no\n> change to the interpretation of the 'columns' BMS caused by this\n> patch, then I expected this comment would be unchanged (i.e. same as\n> HEAD code). But this fix was missed in v19-0001.\n>\n> OTOH, if you do think there was a reason to change the comment then\n> the above is still not good because \"are not publication and\n> include_generated_columns option\" wording doesn't make sense.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> Observation -- I added (in nitpicks diffs) some more comments for\n> 'tab1' (to make all comments consistent with the new tests added). But\n> when I was doing that I observed that tab1 and tab3 test scenarios are\n> very similar. It seems only the subscription parameter is not\n> specified (so 'include_generated_cols' default wll be tested). IIRC\n> the default for that parameter is \"false\", so tab1 is not really\n> testing that properly -- e.g. I thought maybe to test the default\n> parameter it's better the subscriber-side 'b' should be not-generated?\n> But doing that would make 'tab1' the same as 'tab2'. Anyway, something\n> seems amiss -- it seems either something is not tested or is duplicate\n> tested. 
Please revisit what the tab1 test intention was and make sure\n> we are doing the right thing for it...\n>\n> ======\n> 99.\n> The attached nitpicks diff patch has some tweaked comments.\n>\n> ======\n> [1] https://www.postgresql.org/message-id/CAHv8Rj%2BR0cj%3Dz1bTMAgQKQWx1EKvkMEnV9QsHGvOqTdnLUQi1A%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAHut%2BPtVfrbx0jb42LCmS%3D-LcMTtWxm%2BvhaoArkjg7Z0mvuXbg%40mail.gmail.com\n\nThe attached Patches contain all the suggested changes.\n\nv21-0001 - Addressed the comments.\nv21-0002 - Added the TAP Tests for 011_generated.pl file and modified\nthe patch accordingly.\nv21-0003 - Added the TAP Tests for 011_generated.pl file and modified\nthe patch accordingly.\nv21-0004 - Rebased the Patch.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 26 Jul 2024 17:53:46 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Thanks for the patch updates.\n\nHere are my review comments for v21-0001.\n\nI think this patch is mostly OK now except there are still some\ncomments about the TAP test.\n\n======\nCommit Message\n\n0.\nUsing Create Subscription:\nCREATE SUBSCRIPTION sub2_gen_to_gen CONNECTION '$publisher_connstr' PUBLICATION\npub1 WITH (include_generated_columns = true, copy_data = false)\"\n\nIf you are going to give an example, I think a gen-to-nogen example\nwould be a better choice. That's because the gen-to-gen (as you have\nhere) is not going to replicate anything due to the subscriber-side\ncolumn being generated.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n1.\n+$node_subscriber2->safe_psql('postgres',\n+ \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n* 22) STORED, c int)\"\n+);\n\nThe subscriber2 node was intended only for all the tables where we\nneed include_generated_columns to be true. Mostly that is the\ncombination tests. 
(tab_gen_to_nogen, tab_nogen_to_gen, etc) OTOH,\ntable 'tab1' already existed. I don't think we need to bother\nsubscribing to tab1 from subscriber2 because every combination is\nalready covered by the combination tests. Let's leave this one alone.\n\n\n~~~\n\n2.\nHuh? Where is the \"tab_nogen_to_gen\" combination test that I sent to\nyou off-list?\n\n~~~\n\n3.\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE tab_order (c int GENERATED ALWAYS AS (a * 22) STORED,\na int, b int)\"\n+);\n\nMaybe you can test 'tab_order' on both subscription nodes but I think\nit is overkill. IMO it is enough to test it on subscription2.\n\n~~~\n\n4.\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE TABLE tab_alter (a int, b int, c int GENERATED ALWAYS AS (a\n* 22) STORED)\"\n+);\n\nDitto above. Maybe you can test 'tab_alter' on both subscription nodes\nbut I think it is overkill. IMO it is enough to test it on\nsubscription2.\n\n~~~\n\n5.\nDon't forget to add initial data for the missing nogen_to_gen table/test.\n\n~~~\n\n6.\n $node_publisher->safe_psql('postgres',\n- \"CREATE PUBLICATION pub1 FOR ALL TABLES\");\n+ \"CREATE PUBLICATION pub1 FOR TABLE tab1, tab_gen_to_gen,\ntab_gen_to_nogen, tab_gen_to_missing, tab_missing_to_gen, tab_order\");\n+\n $node_subscriber->safe_psql('postgres',\n \"CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1\"\n );\n\nIt is not a bad idea to reduce the number of publications as you have\ndone, but IMO jamming all the tables into 1 publication is too much\nbecause it makes it less understandable instead of simpler.\n\nHow about this:\n- leave the 'pub1' just for 'tab1'.\n- have a 'pub_combo' for publishing all the gen_to_nogen,\nnogen_to_gen etc combination tests.\n- and a 'pub_misc' for any other misc tables like tab_order.\n\n~~~\n\n7.\n+#####################\n # Wait for initial sync of all subscriptions\n+#####################\n\nI think you should write a note here that you have deliberately set\ncopy_data = 
false because COPY and include_generated_columns are not\nallowed at the same time for patch 0001. And that is why all expected\nresults on subscriber2 will be empty. Also, say this limitation will\nbe changed in patch 0002.\n\n~~~\n\n(I didn't yet check 011_generated.pl file results beyond this point...\nI'll wait for v22-0001 to review further)\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 29 Jul 2024 17:26:46 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Jul 29, 2024 at 12:57 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for the patch updates.\n>\n> Here are my review comments for v21-0001.\n>\n> I think this patch is mostly OK now except there are still some\n> comments about the TAP test.\n>\n> ======\n> Commit Message\n>\n> 0.\n> Using Create Subscription:\n> CREATE SUBSCRIPTION sub2_gen_to_gen CONNECTION '$publisher_connstr' PUBLICATION\n> pub1 WITH (include_generated_columns = true, copy_data = false)\"\n>\n> If you are going to give an example, I think a gen-to-nogen example\n> would be a better choice. That's because the gen-to-gen (as you have\n> here) is not going to replicate anything due to the subscriber-side\n> column being generated.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 1.\n> +$node_subscriber2->safe_psql('postgres',\n> + \"CREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a\n> * 22) STORED, c int)\"\n> +);\n>\n> The subscriber2 node was intended only for all the tables where we\n> need include_generated_columns to be true. Mostly that is the\n> combination tests. (tab_gen_to_nogen, tab_nogen_to_gen, etc) OTOH,\n> table 'tab1' already existed. I don't think we need to bother\n> subscribing to tab1 from subscriber2 because every combination is\n> already covered by the combination tests. 
Let's leave this one alone.\n>\n>\n> ~~~\n>\n> 2.\n> Huh? Where is the \"tab_nogen_to_gen\" combination test that I sent to\n> you off-list?\n>\n> ~~~\n>\n> 3.\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE tab_order (c int GENERATED ALWAYS AS (a * 22) STORED,\n> a int, b int)\"\n> +);\n>\n> Maybe you can test 'tab_order' on both subscription nodes but I think\n> it is overkill. IMO it is enough to test it on subscription2.\n>\n> ~~~\n>\n> 4.\n> +$node_subscriber->safe_psql('postgres',\n> + \"CREATE TABLE tab_alter (a int, b int, c int GENERATED ALWAYS AS (a\n> * 22) STORED)\"\n> +);\n>\n> Ditto above. Maybe you can test 'tab_order' on both subscription nodes\n> but I think it is overkill. IMO it is enough to test it on\n> subscription2.\n>\n> ~~~\n>\n> 5.\n> Don't forget to add initial data for the missing nogen_to_gen table/test.\n>\n> ~~~\n>\n> 6.\n> $node_publisher->safe_psql('postgres',\n> - \"CREATE PUBLICATION pub1 FOR ALL TABLES\");\n> + \"CREATE PUBLICATION pub1 FOR TABLE tab1, tab_gen_to_gen,\n> tab_gen_to_nogen, tab_gen_to_missing, tab_missing_to_gen, tab_order\");\n> +\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1\"\n> );\n>\n> It is not a bad idea to reduce the number of publications as you have\n> done, but IMO jamming all the tables into 1 publication is too much\n> because it makes it less understandable instead of simpler.\n>\n> How about this:\n> - leave the 'pub1' just for 'tab1'.\n> - have a 'pub_combo' for publication all the gen_to_nogen,\n> nogen_to_gen etc combination tests.\n> - and a 'pub_misc' for any other misc tables like tab_order.\n>\n> ~~~\n>\n> 7.\n> +#####################\n> # Wait for initial sync of all subscriptions\n> +#####################\n>\n> I think you should write a note here that you have deliberately set\n> copy_data = false because COPY and include_generated_columns are not\n> allowed at the same time for patch 0001. 
And that is why all expected\n> results on subscriber2 will be empty. Also, say this limitation will\n> be changed in patch 0002.\n>\n> ~~~\n>\n> (I didn't yet check 011_generated.pl file results beyond this point...\n> I'll wait for v22-0001 to review further)\n\nThe attached Patches contain all the suggested changes.\n\nv22-0001 - Addressed the comments.\nv22-0002 - Rebased the Patch.\nv22-0003 - Rebased the Patch.\nv22-0004 - Rebased the Patch.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Wed, 31 Jul 2024 14:12:01 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are my review comments for patch v22-0001\n\nAll comments now are only for the TAP test.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\n1. I added all new code for the missing combination test case\n\"gen-to-missing\". See nitpicks diff.\n- create a separate publication for this \"tab_gen_to_missing\" table\nbecause the test gives subscription errors.\n- for the initial data\n- for the replicated data\n\n~~~\n\n2. I added sub1 and sub2 subscriptions for every combo test\n(previously some were absent). See nitpicks diff.\n\n~~~\n\n3. There was a missing test case for the nogen-to-gen combination, and\nafter experimenting with this I am getting a bit suspicious.\n\nCurrently, it seems that if a COPY is attempted then the error would\nbe like this:\n2024-08-01 17:16:45.110 AEST [18942] ERROR: column \"b\" is a generated column\n2024-08-01 17:16:45.110 AEST [18942] DETAIL: Generated columns cannot\nbe used in COPY.\n\nOTOH, if a COPY is not attempted (e.g. copy_data = false) then patch\n0001 allows replication to happen. And the generated value of the\nsubscriber \"b\" takes precedence.\n\nI have included these tests in the nitpicks diff of patch 0001.\n\nThose results weren't exactly what I was expecting. 
That is why it is\nso important to include *every* test combination in these TAP tests --\nbecause unless we know how it works today, we won't know if we are\naccidentally breaking the current behaviour with the other (0002,\n0003) patches.\n\nPlease experiment in patches 0001 and 0002 using tab_nogen_to_gen more\nto make sure the (new?) patch errors make sense and don't overstep by\ngiving ERRORs when they should not.\n\n~~~~\n\nAlso, many other smaller issues/changes were done:\n\n~~~\n\nCreating tables:\n\nnitpick - rearranged to keep all combo test SQLs in a consistent order\nthroughout this file\n1/ gen-to-gen\n2/ gen-to-nogen\n3/ gen-to-missing\n4/ missing-to-gen\n5/ nogen-to-gen\n\nnitpick - fixed the wrong comment for CREATE TABLE tab_nogen_to_gen.\n\nnitpick - tweaked some CREATE TABLE comments for consistency.\n\nnitpick - in the v22 patch many of the generated col 'b' use different\ncomputations for every test. It makes it unnecessarily difficult to\nread/review the expected results. So, I've made them all the same. Now\ncomputation is \"a * 2\" on the publisher side, and \"a * 22\" on the\nsubscriber side.\n\n~~~\n\nCreating Publications and Subscriptions:\n\n\nnitpick - added comment for all the CREATE PUBLICATION\n\nnitpick - added comment for all the CREATE SUBSCRIPTION\n\nnitpick - I moved the note about copy_data = false to where all the\nnode_subscriber2 subscriptions are created. Also, don't explicitly\nrefer to \"patch 000\" in the comment, because that will not make any\nsense after getting pushed.\n\nnitpick - I changed many subscriber names to consistently use \"sub1\"\nor \"sub2\" within the name (this is the visual cue of which\nnode_subscriber<n> they are on). e.g.\n/regress_sub_combo2/regress_sub2_combo/\n\n~~~\n\nInitial Sync tests:\n\nnitpick - not sure if it is possible to do the initial data tests for\n\"nogen_to_gen\" in the normal place. 
For now, it is just replaced by a\ncomment.\nNOTE - Maybe this should be refactored later to put all the initial\ndata checks in one place. I'll think about this point more in the next\nreview.\n\n~~~\n\nnitpick - Changed cleanup I drop subscriptions before publications.\n\nnitpick - remove the unnecessary blank line at the end.\n\n======\n\nPlease see the attached diffs patch (apply it atop patch 0001) which\nincludes all the nipick changes mentioned above.\n\n~~\n\nBTW, For a quicker turnaround and less churning please consider just\nposting the v23-0001 by itself instead of waiting to rebase all the\nsubsequent patches. When 0001 settles down some more then rebase the\nothers.\n\n~~\n\nAlso, please run the indentation tool over this code ASAP.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 1 Aug 2024 18:32:21 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Aug 1, 2024 at 2:02 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, Here are my review comments for patch v22-0001\n>\n> All comments now are only for the TAP test.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> 1. I added all new code for the missing combination test case\n> \"gen-to-missing\". See nitpicks diff.\n> - create a separate publication for this \"tab_gen_to_missing\" table\n> because the test gives subscription errors.\n> - for the initial data\n> - for the replicated data\n>\n> ~~~\n>\n> 2. I added sub1 and sub2 subscriptions for every combo test\n> (previously some were absent). See nitpicks diff.\n>\n> ~~~\n>\n> 3. 
There was a missing test case for nogen-to-gen combination, and\n> after experimenting with this I am getting a bit suspicious,\n>\n> Currently, it seems that if a COPY is attempted then the error would\n> be like this:\n> 2024-08-01 17:16:45.110 AEST [18942] ERROR: column \"b\" is a generated column\n> 2024-08-01 17:16:45.110 AEST [18942] DETAIL: Generated columns cannot\n> be used in COPY.\n>\n> OTOH, if a COPY is not attempted (e.g. copy_data = false) then patch\n> 0001 allows replication to happen. And the generated value of the\n> subscriber \"b\" takes precedence.\n>\n> I have included these tests in the nitpicks diff of patch 0001.\n>\n> Those results weren't exactly what I was expecting. That is why it is\n> so important to include *every* test combination in these TAP tests --\n> because unless we know how it works today, we won't know if we are\n> accidentally breaking the current behaviour with the other (0002,\n> 0003) patches.\n>\n> Please experiment in patches 0001 and 0002 using tab_nogen_to_gen more\n> to make sure the (new?) patch errors make sense and don't overstep by\n> giving ERRORs when they should not.\n>\n> ~~~~\n>\n> Also, many other smaller issues/changes were done:\n>\n> ~~~\n>\n> Creating tables:\n>\n> nitpick - rearranged to keep all combo test SQLs in a consistent order\n> throughout this file\n> 1/ gen-to-gen\n> 2/ gen-to-nogen\n> 3/ gen-to-missing\n> 4/ missing-to-gen\n> 5/ nogen-to-gen\n>\n> nitpick - fixed the wrong comment for CREATE TABLE tab_nogen_to_gen.\n>\n> nitpick - tweaked some CREATE TABLE comments for consistency.\n>\n> nitpick - in the v22 patch many of the generated col 'b' use different\n> computations for every test. It makes it unnecessarily difficult to\n> read/review the expected results. So, I've made them all the same. 
Now\n> computation is \"a * 2\" on the publisher side, and \"a * 22\" on the\n> subscriber side.\n>\n> ~~~\n>\n> Creating Publications and Subscriptions:\n>\n>\n> nitpick - added comment for all the CREATE PUBLICATION\n>\n> nitpick - added comment for all the CREATE SUBSCRIPTION\n>\n> nitpick - I moved the note about copy_data = false to where all the\n> node_subscriber2 subscriptions are created. Also, don't explicitly\n> refer to \"patch 000\" in the comment, because that will not make any\n> sense after getting pushed.\n>\n> nitpick - I changed many subscriber names to consistently use \"sub1\"\n> or \"sub2\" within the name (this is the visual cue of which\n> node_subscriber<n> they are on). e.g.\n> /regress_sub_combo2/regress_sub2_combo/\n>\n> ~~~\n>\n> Initial Sync tests:\n>\n> nitpick - not sure if it is possible to do the initial data tests for\n> \"nogen_to_gen\" in the normal place. For now, it is just replaced by a\n> comment.\n> NOTE - Maybe this should be refactored later to put all the initial\n> data checks in one place. I'll think about this point more in the next\n> review.\n>\n> ~~~\n>\n> nitpick - Changed cleanup I drop subscriptions before publications.\n>\n> nitpick - remove the unnecessary blank line at the end.\n>\n> ======\n>\n> Please see the attached diffs patch (apply it atop patch 0001) which\n> includes all the nipick changes mentioned above.\n>\n> ~~\n>\n> BTW, For a quicker turnaround and less churning please consider just\n> posting the v23-0001 by itself instead of waiting to rebase all the\n> subsequent patches. When 0001 settles down some more then rebase the\n> others.\n>\n> ~~\n>\n> Also, please run the indentation tool over this code ASAP.\n>\nI have fixed all the comments. 
The attached Patch(v23-0001) contains\nall the changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 2 Aug 2024 09:56:59 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham.\n\nHere are some more review comments for the v23-0001.\n\n======\n011_generated.pl b/src/test/subscription/t/011_generated.pl\n\nnitpick - renamed /regress_pub/regress_pub_tab1/ and\n/regress_sub1/regress_sub1_tab1/\nnitpick - typo /inital data /initial data/\nnitpick - typo /snode_subscriber2/node_subscriber2/\nnitpick - tweak the combo initial sync comments and messages\nnitpick - /#Cleanup/# cleanup/\nnitpick - tweak all the combo normal replication comments\nnitpick - removed blank line at the end\n\n~~~\n\n1. Refactor tab_gen_to_missing initial sync tests.\n\nI moved the tab_gen_to_missing initial sync for node_subscriber2 to be\nback where all the other initial sync tests are done.\nSee the nitpicks patch file.\n\n~~~\n\n2. Refactor tab_nogen_to_gen initial sync tests\n\nI moved all the tab_nogen_to_gen initial sync tests back to where the\nother initial sync tests are done.\nSee the nitpicks patch file.\n\n~~~\n\n3. Added another test case:\n\nBecause the (current PG17) nogen-to-gen initial sync test case (with\ncopy_data=true) gives an ERROR, I have added another combination to\ncover normal replication (e.g. using copy_data=false).\nSee the nitpicks patch file.\n\n(This has exposed an inconsistency which IMO might be a PG17 bug. I\nhave included TAP test comments about this, and plan to post a\nseparate thread for it later).\n\n~\n\n4. 
GUC\n\nMoving and adding more CREATE SUBSCRIPTION exceeded some default GUCs,\nso extra configuration was needed.\nSee the nitpick patch file.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 5 Aug 2024 12:40:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi,\n\nWriting many new test case combinations has exposed a possible bug in\npatch 0001.\n\nIn my previous post [1] there was questionable behaviour when\nreplicating from a normal (not generated) column on the publisher side\nto a generated column on the subscriber side. Initially, I thought the\ntest might have exposed a possible PG17 bug, but now I think it has\nreally found a bug in patch 0001.\n\n~~~\n\nPreviously (PG17) this would fail consistently both during COPY and\nduring normal replication. Now, patch 0001 has changed this behaviour\n-- it is not always failing anymore.\n\nThe patch should not be impacting this existing behaviour. It only\nintroduces a new 'include_generated_columns', but since the publisher\nside is not a generated column I do not expect there should be any\ndifference in behaviour for this test case. IMO the TAP test expected\nresults should be corrected for this scenario. And the bug should be\nfixed.\n\nBelow is an example demonstrating PG17 behaviour.\n\n======\n\n\nPublisher:\n----------\n\n(notice column \"b\" is not generated)\n\ntest_pub=# CREATE TABLE tab_nogen_to_gen (a int, b int);\nCREATE TABLE\ntest_pub=# INSERT INTO tab_nogen_to_gen VALUES (1,101),(2,102);\nINSERT 0 2\ntest_pub=# CREATE PUBLICATION pub1 for TABLE tab_nogen_to_gen;\nCREATE PUBLICATION\ntest_pub=#\n\nSubscriber:\n-----------\n\n(notice corresponding column \"b\" is generated)\n\ntest_sub=# CREATE TABLE tab_nogen_to_gen (a int, b int GENERATED\nALWAYS AS (a * 22) STORED);\nCREATE TABLE\ntest_sub=#\n\nTry to create a subscription. 
Notice we get the error: ERROR: logical\nreplication target relation \"public.tab_nogen_to_gen\" is missing\nreplicated column: \"b\"\n\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\nPUBLICATION pub1;\n2024-08-05 13:16:40.043 AEST [20957] WARNING: subscriptions created\nby regression test cases should have names starting with \"regress_\"\nWARNING: subscriptions created by regression test cases should have\nnames starting with \"regress_\"\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2024-08-05 13:16:40.105 AEST [29258] LOG: logical\nreplication apply worker for subscription \"sub1\" has started\n2024-08-05 13:16:40.117 AEST [29260] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table\n\"tab_nogen_to_gen\" has started\n2024-08-05 13:16:40.172 AEST [29260] ERROR: logical replication\ntarget relation \"public.tab_nogen_to_gen\" is missing replicated\ncolumn: \"b\"\n2024-08-05 13:16:40.173 AEST [20039] LOG: background worker \"logical\nreplication tablesync worker\" (PID 29260) exited with exit code 1\n2024-08-05 13:16:45.187 AEST [29400] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table\n\"tab_nogen_to_gen\" has started\n2024-08-05 13:16:45.285 AEST [29400] ERROR: logical replication\ntarget relation \"public.tab_nogen_to_gen\" is missing replicated\ncolumn: \"b\"\n2024-08-05 13:16:45.286 AEST [20039] LOG: background worker \"logical\nreplication tablesync worker\" (PID 29400) exited with exit code 1\n...\n\nCreate the subscription again, but this time with copy_data = false\n\ntest_sub=# CREATE SUBSCRIPTION sub1_nocopy CONNECTION\n'dbname=test_pub' PUBLICATION pub1 WITH (copy_data = false);\n2024-08-05 13:22:57.719 AEST [20957] WARNING: subscriptions created\nby regression test cases should have names starting with \"regress_\"\nWARNING: subscriptions created by regression test cases should have\nnames starting with 
\"regress_\"\nNOTICE: created replication slot \"sub1_nocopy\" on publisher\nCREATE SUBSCRIPTION\ntest_sub=# 2024-08-05 13:22:57.765 AEST [7012] LOG: logical\nreplication apply worker for subscription \"sub1_nocopy\" has started\n\ntest_sub=#\n\n~~~\n\nThen insert data from the publisher to see what happens for normal replication.\n\ntest_pub=#\ntest_pub=# INSERT INTO tab_nogen_to_gen VALUES (3,103),(4,104);\nINSERT 0 2\n\n~~~\n\nNotice the subscriber gets the same error as before: ERROR: logical\nreplication target relation \"public.tab_nogen_to_gen\" is missing\nreplicated column: \"b\"\n\n2024-08-05 13:25:14.897 AEST [20039] LOG: background worker \"logical\nreplication apply worker\" (PID 10957) exited with exit code 1\n2024-08-05 13:25:19.933 AEST [11095] LOG: logical replication apply\nworker for subscription \"sub1_nocopy\" has started\n2024-08-05 13:25:19.966 AEST [11095] ERROR: logical replication\ntarget relation \"public.tab_nogen_to_gen\" is missing replicated\ncolumn: \"b\"\n2024-08-05 13:25:19.966 AEST [11095] CONTEXT: processing remote data\nfor replication origin \"pg_16390\" during message type \"INSERT\" in\ntransaction 742, finished at 0/1967BB0\n2024-08-05 13:25:19.968 AEST [20039] LOG: background worker \"logical\nreplication apply worker\" (PID 11095) exited with exit code 1\n2024-08-05 13:25:24.917 AEST [11225] LOG: logical replication apply\nworker for subscription \"sub1_nocopy\" has started\n2024-08-05 13:25:24.926 AEST [11225] ERROR: logical replication\ntarget relation \"public.tab_nogen_to_gen\" is missing replicated\ncolumn: \"b\"\n2024-08-05 13:25:24.926 AEST [11225] CONTEXT: processing remote data\nfor replication origin \"pg_16390\" during message type \"INSERT\" in\ntransaction 742, finished at 0/1967BB0\n2024-08-05 13:25:24.927 AEST [20039] LOG: background worker \"logical\nreplication apply worker\" (PID 11225) exited with exit code 1\n...\n\n======\n[1] 
https://www.postgresql.org/message-id/CAHut%2BPvtT8fKOfvVYr4vANx_fr92vedas%2BZRbQxvMC097rks6w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 5 Aug 2024 13:44:41 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Aug 5, 2024 at 8:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham.\n>\n> Here are some more review comments for the v23-0001.\n>\n> ======\n> 011_generated.pl b/src/test/subscription/t/011_generated.pl\n>\n> nitpick - renamed /regress_pub/regress_pub_tab1/ and\n> /regress_sub1/regress_sub1_tab1/\n> nitpick - typo /inital data /initial data/\n> nitpick - typo /snode_subscriber2/node_subscriber2/\n> nitpick - tweak the combo initial sync comments and messages\n> nitpick - /#Cleanup/# cleanup/\n> nitpick - tweak all the combo normal replication comments\n> nitpick - removed blank line at the end\n>\n> ~~~\n>\n> 1. Refactor tab_gen_to_missing initial sync tests.\n>\n> I moved the tab_gen_to_missing initial sync for node_subscriber2 to be\n> back where all the other initial sync tests are done.\n> See the nitpicks patch file.\n>\n> ~~~\n>\n> 2. Refactor tab_nogen_to_gen initial sync tests\n>\n> I moved all the tab_nogen_to_gen initial sync tests back to where the\n> other initial sync tests are done.\n> See the nitpicks patch file.\n>\n> ~~~\n>\n> 3. Added another test case:\n>\n> Because the (current PG17) nogen-to-gen initial sync test case (with\n> copy_data=true) gives an ERROR, I have added another combination to\n> cover normal replication (e.g. using copy_data=false).\n> See the nitpicks patch file.\n>\n> (This has exposed an inconsistency which IMO might be a PG17 bug. I\n> have included TAP test comments about this, and plan to post a\n> separate thread for it later).\n>\n> ~\n>\n> 4. 
GUC\n>\n> Moving and adding more CREATE SUBSCRIPTION exceeded some default GUCs,\n> so extra configuration was needed.\n> See the nitpick patch file.\n>\n\nI have fixed all the comments. The attached Patch(v24-0001) contains\nall the changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Wed, 7 Aug 2024 10:56:16 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Aug 5, 2024 at 9:15 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> Writing many new test case combinations has exposed a possible bug in\n> patch 0001.\n>\n> In my previous post [1] there was questionable behaviour when\n> replicating from a normal (not generated) column on the publisher side\n> to a generated column on the subscriber side. Initially, I thought the\n> test might have exposed a possible PG17 bug, but now I think it has\n> really found a bug in patch 0001.\n>\n> ~~~\n>\n> Previously (PG17) this would fail consistently both during COPY and\n> during normal replication. Now, patch 0001 has changed this behaviour\n> -- it is not always failing anymore.\n>\n> The patch should not be impacting this existing behaviour. It only\n> introduces a new 'include_generated_columns', but since the publisher\n> side is not a generated column I do not expect there should be any\n> difference in behaviour for this test case. IMO the TAP test expected\n> results should be corrected for this scenario. 
And fix the bug.\n>\n> Below is an example demonstrating PG17 behaviour.\n>\n> ======\n>\n>\n> Publisher:\n> ----------\n>\n> (notice column \"b\" is not generated)\n>\n> test_pub=# CREATE TABLE tab_nogen_to_gen (a int, b int);\n> CREATE TABLE\n> test_pub=# INSERT INTO tab_nogen_to_gen VALUES (1,101),(2,102);\n> INSERT 0 2\n> test_pub=# CREATE PUBLICATION pub1 for TABLE tab_nogen_to_gen;\n> CREATE PUBLICATION\n> test_pub=#\n>\n> Subscriber:\n> -----------\n>\n> (notice corresponding column \"b\" is generated)\n>\n> test_sub=# CREATE TABLE tab_nogen_to_gen (a int, b int GENERATED\n> ALWAYS AS (a * 22) STORED);\n> CREATE TABLE\n> test_sub=#\n>\n> Try to create a subscription. Notice we get the error: ERROR: logical\n> replication target relation \"public.tab_nogen_to_gen\" is missing\n> replicated column: \"b\"\n>\n> test_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'dbname=test_pub'\n> PUBLICATION pub1;\n> 2024-08-05 13:16:40.043 AEST [20957] WARNING: subscriptions created\n> by regression test cases should have names starting with \"regress_\"\n> WARNING: subscriptions created by regression test cases should have\n> names starting with \"regress_\"\n> NOTICE: created replication slot \"sub1\" on publisher\n> CREATE SUBSCRIPTION\n> test_sub=# 2024-08-05 13:16:40.105 AEST [29258] LOG: logical\n> replication apply worker for subscription \"sub1\" has started\n> 2024-08-05 13:16:40.117 AEST [29260] LOG: logical replication table\n> synchronization worker for subscription \"sub1\", table\n> \"tab_nogen_to_gen\" has started\n> 2024-08-05 13:16:40.172 AEST [29260] ERROR: logical replication\n> target relation \"public.tab_nogen_to_gen\" is missing replicated\n> column: \"b\"\n> 2024-08-05 13:16:40.173 AEST [20039] LOG: background worker \"logical\n> replication tablesync worker\" (PID 29260) exited with exit code 1\n> 2024-08-05 13:16:45.187 AEST [29400] LOG: logical replication table\n> synchronization worker for subscription \"sub1\", table\n> \"tab_nogen_to_gen\" has 
started\n> 2024-08-05 13:16:45.285 AEST [29400] ERROR: logical replication\n> target relation \"public.tab_nogen_to_gen\" is missing replicated\n> column: \"b\"\n> 2024-08-05 13:16:45.286 AEST [20039] LOG: background worker \"logical\n> replication tablesync worker\" (PID 29400) exited with exit code 1\n> ...\n>\n> Create the subscription again, but this time with copy_data = false\n>\n> test_sub=# CREATE SUBSCRIPTION sub1_nocopy CONNECTION\n> 'dbname=test_pub' PUBLICATION pub1 WITH (copy_data = false);\n> 2024-08-05 13:22:57.719 AEST [20957] WARNING: subscriptions created\n> by regression test cases should have names starting with \"regress_\"\n> WARNING: subscriptions created by regression test cases should have\n> names starting with \"regress_\"\n> NOTICE: created replication slot \"sub1_nocopy\" on publisher\n> CREATE SUBSCRIPTION\n> test_sub=# 2024-08-05 13:22:57.765 AEST [7012] LOG: logical\n> replication apply worker for subscription \"sub1_nocopy\" has started\n>\n> test_sub=#\n>\n> ~~~\n>\n> Then insert data from the publisher to see what happens for normal replication.\n>\n> test_pub=#\n> test_pub=# INSERT INTO tab_nogen_to_gen VALUES (3,103),(4,104);\n> INSERT 0 2\n>\n> ~~~\n>\n> Notice the subscriber gets the same error as before: ERROR: logical\n> replication target relation \"public.tab_nogen_to_gen\" is missing\n> replicated column: \"b\"\n>\n> 2024-08-05 13:25:14.897 AEST [20039] LOG: background worker \"logical\n> replication apply worker\" (PID 10957) exited with exit code 1\n> 2024-08-05 13:25:19.933 AEST [11095] LOG: logical replication apply\n> worker for subscription \"sub1_nocopy\" has started\n> 2024-08-05 13:25:19.966 AEST [11095] ERROR: logical replication\n> target relation \"public.tab_nogen_to_gen\" is missing replicated\n> column: \"b\"\n> 2024-08-05 13:25:19.966 AEST [11095] CONTEXT: processing remote data\n> for replication origin \"pg_16390\" during message type \"INSERT\" in\n> transaction 742, finished at 0/1967BB0\n> 2024-08-05 
13:25:19.968 AEST [20039] LOG: background worker \"logical\n> replication apply worker\" (PID 11095) exited with exit code 1\n> 2024-08-05 13:25:24.917 AEST [11225] LOG: logical replication apply\n> worker for subscription \"sub1_nocopy\" has started\n> 2024-08-05 13:25:24.926 AEST [11225] ERROR: logical replication\n> target relation \"public.tab_nogen_to_gen\" is missing replicated\n> column: \"b\"\n> 2024-08-05 13:25:24.926 AEST [11225] CONTEXT: processing remote data\n> for replication origin \"pg_16390\" during message type \"INSERT\" in\n> transaction 742, finished at 0/1967BB0\n> 2024-08-05 13:25:24.927 AEST [20039] LOG: background worker \"logical\n> replication apply worker\" (PID 11225) exited with exit code 1\n>\nThis is expected behaviour. The error message here has been improved.\nThis error is consistent and it is being handled in the 0002 patch.\nBelow are the logs for the same:\n2024-08-07 10:47:45.977 IST [29756] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table\n\"tab_nogen_to_gen\" has started\n2024-08-07 10:47:46.116 IST [29756] ERROR: logical replication target\nrelation \"public.tab_nogen_to_gen\" has a generated column \"b\" but\ncorresponding column on source relation is not a generated column\n0002 Patch needs to be applied to get rid of this error.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Wed, 7 Aug 2024 11:07:20 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\n\nHere are my review comments for patch v24-0001\n\nI think the TAP tests have incorrect expected results for the nogen-to-gen case.\n\nWhereas the HEAD code will cause \"ERROR\" for this test scenario, patch\n0001 does not. IMO the behaviour should be unchanged for this scenario\nwhich has no generated column on the publisher side. 
So it seems this\nis a bug in patch 0001.\n\nFYI, I have included \"FIXME\" comments in the attached top-up diff\npatch to show which test cases I think are expecting wrong results.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 7 Aug 2024 18:00:37 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Aug 7, 2024 at 1:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham,\n>\n> Here are my review comments for patch v24-0001\n>\n> I think the TAP tests have incorrect expected results for the nogen-to-gen case.\n>\n> Whereas the HEAD code will cause \"ERROR\" for this test scenario, patch\n> 0001 does not. IMO the behaviour should be unchanged for this scenario\n> which has no generated column on the publisher side. So it seems this\n> is a bug in patch 0001.\n>\n> FYI, I have included \"FIXME\" comments in the attached top-up diff\n> patch to show which test cases I think are expecting wrong results.\n>\n\nFixed all the comments. The attached Patch(v25-0001) contains all the changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 8 Aug 2024 10:53:07 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\n\nI think the v25-0001 patch only half-fixes the problems reported in my\nv24-0001 review.\n\n~\n\nBackground (from the commit message):\nThis commit enables support for the 'include_generated_columns' option\nin logical replication, allowing the transmission of generated column\ninformation and data alongside regular table changes.\n\n~\n\nThe broken TAP test scenario in question is replicating from a\n\"not-generated\" column to a \"generated\" column. 
As the generated\ncolumn is not on the publishing side, IMO the\n'include_generated_columns' option should have zero effect here.\n\nIn other words, I expect this TAP test for 'include_generated_columns\n= true' case should also be failing, as I wrote already yesterday:\n\n+# FIXME\n+# Since there is no generated column on the publishing side this should give\n+# the same result as the previous test. -- e.g. something like:\n+# ERROR: logical replication target relation\n\"public.tab_nogen_to_gen\" is missing\n+# replicated column: \"b\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 8 Aug 2024 17:13:15 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 8 Aug 2024 at 10:53, Shubham Khanna <khannashubham1197@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 1:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Shubham,\n> >\n> > Here are my review comments for patch v24-0001\n> >\n> > I think the TAP tests have incorrect expected results for the nogen-to-gen case.\n> >\n> > Whereas the HEAD code will cause \"ERROR\" for this test scenario, patch\n> > 0001 does not. IMO the behaviour should be unchanged for this scenario\n> > which has no generated column on the publisher side. So it seems this\n> > is a bug in patch 0001.\n> >\n> > FYI, I have included \"FIXME\" comments in the attached top-up diff\n> > patch to show which test cases I think are expecting wrong results.\n> >\n>\n> Fixed all the comments. 
The attached Patch(v25-0001) contains all the changes.\n\nFew comments:\n1) Can we add one test with replica identity full to show that\ngenerated column is included in case of update operation with\ntest_decoding.\n\n2) At the end of the file generated_columns.sql a newline is missing:\n+-- when 'include-generated-columns' = '0' the generated column 'b'\nvalues will not be replicated\n+INSERT INTO gencoltable (a) VALUES (7), (8), (9);\n+SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'include-generated-columns', '0');\n+\n+DROP TABLE gencoltable;\n+\n+SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n\\ No newline at end of file\n\n3)\n3.a)This can be changed:\n+-- when 'include-generated-columns' is not set the generated column\n'b' values will be replicated\n+INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n\nto:\n-- By default, 'include-generated-columns' is enabled, so the values\nfor the generated column 'b' will be replicated even if it is not\nexplicitly specified.\n\n3.b) This can be changed:\n-- when 'include-generated-columns' = '1' the generated column 'b'\nvalues will be replicated\nto:\n-- when 'include-generated-columns' is enabled, the values of the\ngenerated column 'b' will be replicated.\n\n3.c) This can be changed:\n-- when 'include-generated-columns' = '0' the generated column 'b'\nvalues will not be replicated\nto:\n-- when 'include-generated-columns' is disabled, the values of the\ngenerated column 'b' will not be replicated.\n\n4) I did not see any test for dump, can we add one test for this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 10 Aug 2024 19:53:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Jul 23, 2024 at 9:23 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Jul 19, 2024 at 4:01 PM Shlok Kyal 
<shlok.kyal.oss@gmail.com> wrote:\n> >\n> > On Thu, 18 Jul 2024 at 13:55, Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi, here are some review comments for v19-0002\n> > > ======\n> > > src/test/subscription/t/004_sync.pl\n> > >\n> > > 1.\n> > > This new test is not related to generated columns. IIRC, this is just\n> > > some test that we discovered missing during review of this thread. As\n> > > such, I think this change can be posted/patched separately from this\n> > > thread.\n> > >\n> > I have removed the test for this thread.\n> >\n> > I have also addressed the remaining comments for v19-0002 patch.\n>\n> Hi, I have no more review comments for patch v20-0002 at this time.\n>\n> I saw that the above test was removed from this thread as suggested,\n> but I could not find that any new thread was started to propose this\n> valuable missing test.\n>\n\nI still did not find any new thread for adding the missing test case,\nso I started one myself [1].\n\n======\n[1] https://www.postgresql.org/message-id/CAHut+PtX8P0EGhsk9p=hQGUHrzxeCSzANXSMKOvYiLX-EjdyNw@mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 15 Aug 2024 17:39:30 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Aug 8, 2024 at 12:43 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham,\n>\n> I think the v25-0001 patch only half-fixes the problems reported in my\n> v24-0001 review.\n>\n> ~\n>\n> Background (from the commit message):\n> This commit enables support for the 'include_generated_columns' option\n> in logical replication, allowing the transmission of generated column\n> information and data alongside regular table changes.\n>\n> ~\n>\n> The broken TAP test scenario in question is replicating from a\n> \"not-generated\" column to a \"generated\" column. 
As the generated\n> column is not on the publishing side, IMO the\n> 'include_generated_columns' option should have zero effect here.\n>\n> In other words, I expect this TAP test for 'include_generated_columns\n> = true' case should also be failing, as I wrote already yesterday:\n>\n> +# FIXME\n> +# Since there is no generated column on the publishing side this should give\n> +# the same result as the previous test. -- e.g. something like:\n> +# ERROR: logical replication target relation\n> \"public.tab_nogen_to_gen\" is missing\n> +# replicated column: \"b\"\n\nI have fixed the given comments. The attached v26-0001 Patch contains\nthe required changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 16 Aug 2024 10:04:46 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 16 Aug 2024 at 10:04, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Thu, Aug 8, 2024 at 12:43 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Shubham,\n> >\n> > I think the v25-0001 patch only half-fixes the problems reported in my\n> > v24-0001 review.\n> >\n> > ~\n> >\n> > Background (from the commit message):\n> > This commit enables support for the 'include_generated_columns' option\n> > in logical replication, allowing the transmission of generated column\n> > information and data alongside regular table changes.\n> >\n> > ~\n> >\n> > The broken TAP test scenario in question is replicating from a\n> > \"not-generated\" column to a \"generated\" column. 
As the generated\n> > column is not on the publishing side, IMO the\n> > 'include_generated_columns' option should have zero effect here.\n> >\n> > In other words, I expect this TAP test for 'include_generated_columns\n> > = true' case should also be failing, as I wrote already yesterday:\n> >\n> > +# FIXME\n> > +# Since there is no generated column on the publishing side this should give\n> > +# the same result as the previous test. -- e.g. something like:\n> > +# ERROR: logical replication target relation\n> > \"public.tab_nogen_to_gen\" is missing\n> > +# replicated column: \"b\"\n>\n> I have fixed the given comments. The attached v26-0001 Patch contains\n> the required changes.\n\nFew comments:\n1) There's no need to pass include_generated_columns in this case; we\ncan retrieve it from ctx->data instead:\n@@ -749,7 +764,7 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n static void\n send_relation_and_attrs(Relation relation, TransactionId xid,\n LogicalDecodingContext *ctx,\n- Bitmapset *columns)\n+ Bitmapset *columns,\nbool include_generated_columns)\n {\n TupleDesc desc = RelationGetDescr(relation);\n int i;\n@@ -766,7 +781,10 @@ send_relation_and_attrs(Relation relation,\nTransactionId xid,\n\n2) Commit message:\nIf the subscriber-side column is also a generated column then this option\nhas no effect; the replicated data will be ignored and the subscriber\ncolumn will be filled as normal with the subscriber-side computed or\ndefault data.\n\nAn error will occur in this case, so the message should be updated accordingly.\n\n3) The current test is structured as follows: a) Create all required\ntables b) Insert data into tables c) Create publications d) Create\nsubscriptions e) Perform inserts and verify\nThis approach can make reviewing and maintenance somewhat challenging.\n\nInstead, could you modify it to: a) Create the required table for a\nsingle test b) Insert data for this test c) Create the publication for\nthis test d) Create the subscriptions for 
this test e) Perform inserts\nand verify f) Clean up\n\n4) We can maintain the test as a separate 0002 patch, as it may need a\nfew rounds of review and final adjustments. Once it's fully completed,\nwe can merge it back in.\n\n5) Once we create and drop publication/subscriptions for individual\ntests, we won't need such extensive configuration; we should be able\nto run them with default values:\n+$node_publisher->append_conf(\n+ 'postgresql.conf',\n+ \"max_wal_senders = 20\n+ max_replication_slots = 20\");\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 16 Aug 2024 14:47:39 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Sat, Aug 10, 2024 at 7:53 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 8 Aug 2024 at 10:53, Shubham Khanna <khannashubham1197@gmail.com> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 1:31 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi Shubham,\n> > >\n> > > Here are my review comments for patch v24-0001\n> > >\n> > > I think the TAP tests have incorrect expected results for the nogen-to-gen case.\n> > >\n> > > Whereas the HEAD code will cause \"ERROR\" for this test scenario, patch\n> > > 0001 does not. IMO the behaviour should be unchanged for this scenario\n> > > which has no generated column on the publisher side. So it seems this\n> > > is a bug in patch 0001.\n> > >\n> > > FYI, I have included \"FIXME\" comments in the attached top-up diff\n> > > patch to show which test cases I think are expecting wrong results.\n> > >\n> >\n> > Fixed all the comments. 
The attached Patch(v25-0001) contains all the changes.\n>\n> Few comments:\n> 1) Can we add one test with replica identity full to show that\n> generated column is included in case of update operation with\n> test_decoding.\n>\n> 2) At the end of the file generated_columns.sql a newline is missing:\n> +-- when 'include-generated-columns' = '0' the generated column 'b'\n> values will not be replicated\n> +INSERT INTO gencoltable (a) VALUES (7), (8), (9);\n> +SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n> 'include-generated-columns', '0');\n> +\n> +DROP TABLE gencoltable;\n> +\n> +SELECT 'stop' FROM pg_drop_replication_slot('regression_slot');\n> \\ No newline at end of file\n>\n> 3)\n> 3.a)This can be changed:\n> +-- when 'include-generated-columns' is not set the generated column\n> 'b' values will be replicated\n> +INSERT INTO gencoltable (a) VALUES (1), (2), (3);\n>\n> to:\n> -- By default, 'include-generated-columns' is enabled, so the values\n> for the generated column 'b' will be replicated even if it is not\n> explicitly specified.\n>\n> 3.b) This can be changed:\n> -- when 'include-generated-columns' = '1' the generated column 'b'\n> values will be replicated\n> to:\n> -- when 'include-generated-columns' is enabled, the values of the\n> generated column 'b' will be replicated.\n>\n> 3.c) This can be changed:\n> -- when 'include-generated-columns' = '0' the generated column 'b'\n> values will not be replicated\n> to:\n> -- when 'include-generated-columns' is disabled, the values of the\n> generated column 'b' will not be replicated.\n>\n> 4) I did not see any test for dump, can we add one test for this.\n\nFixed all the given comments. The attached Patch v27-0001 contains all\nthe required changes. 
Also, I have created a separate Patch (v27-0002)\nfor TAP Tests related to 'include-generated-columns'\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 16 Aug 2024 16:40:33 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, Here are my review comments for v27-0001.\n\n======\ncontrib/test_decoding/expected/generated_columns.out\ncontrib/test_decoding/sql/generated_columns.sql\n\n+-- By default, 'include-generated-columns' is enabled, so the values\nfor the generated column 'b' will be replicated even if it is not\nexplicitly specified.\n\nnit - The \"default\" is only like this for \"test_decoding\" (e.g., the\nCREATE SUBSCRIPTION option is the opposite), so let's make the comment\nclearer about that.\nnit - Use sentence case in the comments.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 19 Aug 2024 15:31:13 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham, here are my review comments for the TAP tests patch v27-0002\n\n======\nCommit message\n\nTap tests for 'include-generated-columns'\n\n~\n\nBut, it's more than that-- these are the TAP tests for all\ncombinations of replication related to generated columns. i.e. 
both\nwith and without 'include_generated_columns' option enabled.\n\n======\nsrc/test/subscription/t/011_generated.pl\n\nI was mistaken, thinking that the v27-0002 had already been refactored\naccording to Vignesh's last review but it is not done yet, so I am not\ngoing to post detailed review comments until the restructuring is\ncompleted.\n\n~\n\nOTOH, there are some problems I felt have crept into v26-0001 (TAP\ntest is same as v27-0002), so maybe try to also take care of them (see\nbelow) in v28-0002.\n\nIn no particular order:\n\n* I felt it is almost useless now to have the \"combo\" (\n\"regress_pub_combo\") publication. It used to have many tables when\nyou first created it but with every version posted it is publishing\nless and less so now there are only 2 tables in it. Better to have a\nspecific publication for each table now and forget about \"combos\"\n\n* The \"TEST tab_gen_to_gen initial sync\" seems to be not even checking\nthe table data. Why not? e.g. Even if you expect no data, you should\ntest for it.\n\n* The \"TEST tab_gen_to_gen replication\" seems to be not even checking\nthe table data. Why not?\n\n* Multiple XXX comments like \"... it needs more study to determine if\nthe above result was actually correct, or a PG17 bug...\" should be\nremoved. AFAIK we should well understand the expected results for all\ncombinations by now.\n\n* The \"TEST tab_order replication\" is now getting an error saying\n<missing replicated column: \"c\">. Now, that may now be the correct\nerror for this situation, but in that case, then I think the test is\nno longer testing what it was intended to test (i.e. that column\norder does not matter....) Probably the table definition needs\nadjusting to make sure we are testing what we want to test, and not\njust making some random scenario \"PASS\".\n\n* The test \"# TEST tab_alter\" expected empty result also seems\nunhelpful. 
It might be related to the previous bullet.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 19 Aug 2024 17:10:27 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Aug 16, 2024 at 2:47 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 16 Aug 2024 at 10:04, Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Thu, Aug 8, 2024 at 12:43 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi Shubham,\n> > >\n> > > I think the v25-0001 patch only half-fixes the problems reported in my\n> > > v24-0001 review.\n> > >\n> > > ~\n> > >\n> > > Background (from the commit message):\n> > > This commit enables support for the 'include_generated_columns' option\n> > > in logical replication, allowing the transmission of generated column\n> > > information and data alongside regular table changes.\n> > >\n> > > ~\n> > >\n> > > The broken TAP test scenario in question is replicating from a\n> > > \"not-generated\" column to a \"generated\" column. As the generated\n> > > column is not on the publishing side, IMO the\n> > > 'include_generated_columns' option should have zero effect here.\n> > >\n> > > In other words, I expect this TAP test for 'include_generated_columns\n> > > = true' case should also be failing, as I wrote already yesterday:\n> > >\n> > > +# FIXME\n> > > +# Since there is no generated column on the publishing side this should give\n> > > +# the same result as the previous test. -- e.g. something like:\n> > > +# ERROR: logical replication target relation\n> > > \"public.tab_nogen_to_gen\" is missing\n> > > +# replicated column: \"b\"\n> >\n> > I have fixed the given comments. 
The attached v26-0001 Patch contains\n> > the required changes.\n>\n> Few comments:\n> 1) There's no need to pass include_generated_columns in this case; we\n> can retrieve it from ctx->data instead:\n> @@ -749,7 +764,7 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n> static void\n> send_relation_and_attrs(Relation relation, TransactionId xid,\n> LogicalDecodingContext *ctx,\n> - Bitmapset *columns)\n> + Bitmapset *columns,\n> bool include_generated_columns)\n> {\n> TupleDesc desc = RelationGetDescr(relation);\n> int i;\n> @@ -766,7 +781,10 @@ send_relation_and_attrs(Relation relation,\n> TransactionId xid,\n>\n> 2) Commit message:\n> If the subscriber-side column is also a generated column then this option\n> has no effect; the replicated data will be ignored and the subscriber\n> column will be filled as normal with the subscriber-side computed or\n> default data.\n>\n> An error will occur in this case, so the message should be updated accordingly.\n>\n> 3) The current test is structured as follows: a) Create all required\n> tables b) Insert data into tables c) Create publications d) Create\n> subscriptions e) Perform inserts and verify\n> This approach can make reviewing and maintenance somewhat challenging.\n>\n> Instead, could you modify it to: a) Create the required table for a\n> single test b) Insert data for this test c) Create the publication for\n> this test d) Create the subscriptions for this test e) Perform inserts\n> and verify f) Clean up\n>\n> 4) We can maintain the test as a separate 0002 patch, as it may need a\n> few rounds of review and final adjustments. Once it's fully completed,\n> we can merge it back in.\n>\n> 5) Once we create and drop publication/subscriptions for individual\n> tests, we won't need such extensive configuration; we should be able\n> to run them with default values:\n> +$node_publisher->append_conf(\n> + 'postgresql.conf',\n> + \"max_wal_senders = 20\n> + max_replication_slots = 20\");\n\nFixed all the given comments. 
The attached patches contain the\nsuggested changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 22 Aug 2024 10:22:03 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Aug 19, 2024 at 11:01 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, Here are my review comments for v27-0001.\n>\n> ======\n> contrib/test_decoding/expected/generated_columns.out\n> contrib/test_decoding/sql/generated_columns.sql\n>\n> +-- By default, 'include-generated-columns' is enabled, so the values\n> for the generated column 'b' will be replicated even if it is not\n> explicitly specified.\n>\n> nit - The \"default\" is only like this for \"test_decoding\" (e.g., the\n> CREATE SUBSCRIPTION option is the opposite), so let's make the comment\n> clearer about that.\n> nit - Use sentence case in the comments.\n\nI have addressed all the comments in the v28-0001 Patch. Please refer\nto the updated v28-0001 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjL7rkxk6qSroRPg5ZARWMdK2Nd4-QyYNeoc2vhBm3cdDg%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Thu, 22 Aug 2024 10:26:24 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Aug 19, 2024 at 12:40 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Shubham, here are my review comments for the TAP tests patch v27-0002\n>\n> ======\n> Commit message\n>\n> Tap tests for 'include-generated-columns'\n>\n> ~\n>\n> But, it's more than that-- these are the TAP tests for all\n> combinations of replication related to generated columns. i.e. 
both\n> with and without 'include_generated_columns' option enabled.\n>\n> ======\n> src/test/subscription/t/011_generated.pl\n>\n> I was mistaken, thinking that the v27-0002 had already been refactored\n> according to Vignesh's last review but it is not done yet, so I am not\n> going to post detailed review comments until the restructuring is\n> completed.\n>\n> ~\n>\n> OTOH, there are some problems I felt have crept into v26-0001 (TAP\n> test is same as v27-0002), so maybe try to also take care of them (see\n> below) in v28-0002.\n>\n> In no particular order:\n>\n> * I felt it is almost useless now to have the \"combo\" (\n> \"regress_pub_combo\") publication. It used to have many tables when\n> you first created it but with every version posted it is publishing\n> less and less so now there are only 2 tables in it. Better to have a\n> specific publication for each table now and forget about \"combos\"\n>\n> * The \"TEST tab_gen_to_gen initial sync\" seems to be not even checking\n> the table data. Why not? e.g. Even if you expect no data, you should\n> test for it.\n>\n> * The \"TEST tab_gen_to_gen replication\" seems to be not even checking\n> the table data. Why not?\n>\n> * Multiple XXX comments like \"... it needs more study to determine if\n> the above result was actually correct, or a PG17 bug...\" should be\n> removed. AFAIK we should well understand the expected results for all\n> combinations by now.\n>\n> * The \"TEST tab_order replication\" is now getting an error saying\n> <missing replicated column: \"c\">. Now, that may now be the correct\n> error for this situation, but in that case, then I think the test is\n> no longer testing what it was intended to test (i.e. that column\n> order does not matter....) Probably the table definition needs\n> adjusting to make sure we are testing what we want to test, and not\n> just making some random scenario \"PASS\".\n>\n> * The test \"# TEST tab_alter\" expected empty result also seems\n> unhelpful. 
It might be related to the previous bullet.\n\nI have addressed all the comments in the v-28-0002 Patch. Please refer\nto the updated v28-0002 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjL7rkxk6qSroRPg5ZARWMdK2Nd4-QyYNeoc2vhBm3cdDg%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 22 Aug 2024 10:28:16 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 22 Aug 2024 at 10:22, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Fri, Aug 16, 2024 at 2:47 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, 16 Aug 2024 at 10:04, Shubham Khanna\n> > <khannashubham1197@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 8, 2024 at 12:43 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > Hi Shubham,\n> > > >\n> > > > I think the v25-0001 patch only half-fixes the problems reported in my\n> > > > v24-0001 review.\n> > > >\n> > > > ~\n> > > >\n> > > > Background (from the commit message):\n> > > > This commit enables support for the 'include_generated_columns' option\n> > > > in logical replication, allowing the transmission of generated column\n> > > > information and data alongside regular table changes.\n> > > >\n> > > > ~\n> > > >\n> > > > The broken TAP test scenario in question is replicating from a\n> > > > \"not-generated\" column to a \"generated\" column. As the generated\n> > > > column is not on the publishing side, IMO the\n> > > > 'include_generated_columns' option should have zero effect here.\n> > > >\n> > > > In other words, I expect this TAP test for 'include_generated_columns\n> > > > = true' case should also be failing, as I wrote already yesterday:\n> > > >\n> > > > +# FIXME\n> > > > +# Since there is no generated column on the publishing side this should give\n> > > > +# the same result as the previous test. 
-- e.g. something like:\n> > > > +# ERROR: logical replication target relation\n> > > > \"public.tab_nogen_to_gen\" is missing\n> > > > +# replicated column: \"b\"\n> > >\n> > > I have fixed the given comments. The attached v26-0001 Patch contains\n> > > the required changes.\n> >\n> > Few comments:\n> > 1) There's no need to pass include_generated_columns in this case; we\n> > can retrieve it from ctx->data instead:\n> > @@ -749,7 +764,7 @@ maybe_send_schema(LogicalDecodingContext *ctx,\n> > static void\n> > send_relation_and_attrs(Relation relation, TransactionId xid,\n> > LogicalDecodingContext *ctx,\n> > - Bitmapset *columns)\n> > + Bitmapset *columns,\n> > bool include_generated_columns)\n> > {\n> > TupleDesc desc = RelationGetDescr(relation);\n> > int i;\n> > @@ -766,7 +781,10 @@ send_relation_and_attrs(Relation relation,\n> > TransactionId xid,\n> >\n> > 2) Commit message:\n> > If the subscriber-side column is also a generated column then this option\n> > has no effect; the replicated data will be ignored and the subscriber\n> > column will be filled as normal with the subscriber-side computed or\n> > default data.\n> >\n> > An error will occur in this case, so the message should be updated accordingly.\n> >\n> > 3) The current test is structured as follows: a) Create all required\n> > tables b) Insert data into tables c) Create publications d) Create\n> > subscriptions e) Perform inserts and verify\n> > This approach can make reviewing and maintenance somewhat challenging.\n> >\n> > Instead, could you modify it to: a) Create the required table for a\n> > single test b) Insert data for this test c) Create the publication for\n> > this test d) Create the subscriptions for this test e) Perform inserts\n> > and verify f) Clean up\n> >\n> > 4) We can maintain the test as a separate 0002 patch, as it may need a\n> > few rounds of review and final adjustments. 
Once it's fully completed,\n> > we can merge it back in.\n> >\n> > 5) Once we create and drop publication/subscriptions for individual\n> > tests, we won't need such extensive configuration; we should be able\n> > to run them with default values:\n> > +$node_publisher->append_conf(\n> > + 'postgresql.conf',\n> > + \"max_wal_senders = 20\n> > + max_replication_slots = 20\");\n>\n> Fixed all the given comments. The attached patches contain the\n> suggested changes.\n\nFew comments:\n1) This is already been covered in the first existing test case, may\nbe this can be removed:\n# =============================================================================\n# Testcase start: Subscriber table with a generated column (b) on the\n# subscriber, where column (b) is not present on the publisher.\n\nThis existing test:\n$node_publisher->safe_psql(\n'postgres', qq(\nCREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\nINSERT INTO tab1 (a) VALUES (1), (2), (3);\nCREATE PUBLICATION pub1 FOR ALL TABLES;\n));\n\n$node_subscriber->safe_psql(\n'postgres', qq(\nCREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a *\n22) STORED, c int);\nCREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1;\n));\n\n2) Can we have this test verified with include_generated_columns =\ntrue too like how others are done:\nmy $publisher_connstr = $node_publisher->connstr . 
' dbname=postgres';\n\n$node_publisher->safe_psql(\n'postgres', qq(\nCREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\nINSERT INTO tab1 (a) VALUES (1), (2), (3);\nCREATE PUBLICATION pub1 FOR ALL TABLES;\n));\n\n$node_subscriber->safe_psql(\n'postgres', qq(\nCREATE TABLE tab1 (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a *\n22) STORED, c int);\nCREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION pub1;\n));\n\n3) There is a typo in this comment:\n3.a) # Testcase start: Publisher table with a generated column (b)\nand subscriber\n# table a with regular column (b).\n\nIt should be:\n# Testcase start: Publisher table with a generated column (b) and subscriber\n# table with a regular column (b).\n\n3.b) similarly here too:\n# Testcase end: Publisher table with a generated column (b) and subscriber\n# table a with regular column (b).\n\n3.c) The comments are not consistent, sometimes mentioned as\ncolumn(b) and sometimes as column (b). We can keep it consistent.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Aug 2024 17:28:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\n\nI have reviewed v28* and posted updated v29 versions of patches 0001 and 0002.\n\nIf you are OK with these changes, the next task would be to pg_indent\nthem, then rebase the remaining patches (0003 etc.) 
and include those\nwith the next patchset version.\n\n//////////\n\nPatch v29-0001 changes:\n\nnit - Fixed typo in comments.\nnit - Removed an unnecessary format change for the unchanged\nsend_relation_and_attrs declaration.\n\n//////////\n\nPatch v29-0002 changes:\n\n1.\nMade fixes to address Vignesh's review comments [1].\n\n2.\nAdded the missing test cases for tab_gen_to_gen, and tab_alter.\n\n3.\nMultiple other modifications include:\nnit - Renamed the test database /test/test_igc_true/ because 'test'\nwas too vague.\nnit - This patch does not need to change most of the existing 'tab1'\ntest. So we should not be reformatting the existing test code for no\nreason.\nnit - I added a summary comment to describe the test combinations\nnit - The \"Testcase end\" comments were unnecessary and prone to error,\nso I removed them.\nnit - Change comments /incremental sync/incremental replication/\nnit - Added XXX notes about copy_data=false. These are reminders\nto change code in later TAP patches\nnit - Rearranged test steps so the publisher does not do incremental\nINSERT until all initial sync tests are done\nnit - Added initial sync tests even if copy_data=false. 
This is for\ncompleteness - these will be handled in a later TAP patch\nnit - The table names are self-explanatory, so some of the test\n\"messages\" were simplified\n\n======\n[1] https://www.postgresql.org/message-id/CALDaNm31LZQfeR8Vv1qNCOREGffvZbgGDrTp%3D3h%3DEHiHTEO2pQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 23 Aug 2024 20:56:24 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 8, 2024 at 4:14 PM Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Wed, May 8, 2024 at 11:39 AM Rajendra Kumar Dangwal\n> > <dangwalrajendra888@gmail.com> wrote:\n> > >\n> > > Hi PG Hackers.\n> > >\n> > > We are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns.\n> > > Could you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? Any insights or assistance you can provide on this matter would be greatly appreciated.\n> >\n> > The attached patch has the changes to support capturing generated\n> > column data using ‘pgoutput’ and’ test_decoding’ plugin. Now if the\n> > ‘include_generated_columns’ option is specified, the generated column\n> > information and generated column data also will be sent.\n>\n> As Euler mentioned earlier, I think it's a decision not to replicate\n> generated columns because we don't know the target table on the\n> subscriber has the same expression and there could be locale issues\n> even if it looks the same. I can see that a benefit of this proposal\n> would be to save cost to compute generated column values if the user\n> wants the target table on the subscriber to have exactly the same data\n> as the publisher's one. 
Are there other benefits or use cases?\n>\n\nThe cost is one but the other is the user may not want the data to be\ndifferent based on volatile functions like timeofday() or the table on\nsubscriber won't have the column marked as generated. Now, considering\nsuch use cases, is providing a subscription-level option a good idea\nas the patch is doing? I understand that this can serve the purpose\nbut it could also lead to having the same behavior for all the tables\nin all the publications for a subscription which may or may not be\nwhat the user expects. This could lead to some performance overhead\n(due to always sending generated columns for all the tables) for cases\nwhere the user needs it only for a subset of tables.\n\nI think we should consider it as a table-level option while defining\npublication in some way. A few ideas could be: (a) We ask users to\nexplicitly mention the generated column in the columns list while\ndefining publication. This has a drawback such that users need to\nspecify the column list even when all columns need to be replicated.\n(b) We can have some new syntax to indicate the same like: CREATE\nPUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\nINCLUDE ..., t5;. I haven't analyzed the feasibility of this, so there\ncould be some challenges but we can at least investigate it.\n\nYet another idea is to keep this as a publication option\n(include_generated_columns or publish_generated_columns) similar to\n\"publish_via_partition_root\". 
Normally, \"publish_via_partition_root\"\nis used when tables on either side have different partition\nhierarchies which is somewhat the case here.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Aug 2024 11:36:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 8, 2024 at 4:14 PM Shubham Khanna\n> > <khannashubham1197@gmail.com> wrote:\n> > >\n> > > On Wed, May 8, 2024 at 11:39 AM Rajendra Kumar Dangwal\n> > > <dangwalrajendra888@gmail.com> wrote:\n> > > >\n> > > > Hi PG Hackers.\n> > > >\n> > > > We are interested in enhancing the functionality of the pgoutput plugin by adding support for generated columns.\n> > > > Could you please guide us on the necessary steps to achieve this? Additionally, do you have a platform for tracking such feature requests? Any insights or assistance you can provide on this matter would be greatly appreciated.\n> > >\n> > > The attached patch has the changes to support capturing generated\n> > > column data using ‘pgoutput’ and’ test_decoding’ plugin. Now if the\n> > > ‘include_generated_columns’ option is specified, the generated column\n> > > information and generated column data also will be sent.\n> >\n> > As Euler mentioned earlier, I think it's a decision not to replicate\n> > generated columns because we don't know the target table on the\n> > subscriber has the same expression and there could be locale issues\n> > even if it looks the same. I can see that a benefit of this proposal\n> > would be to save cost to compute generated column values if the user\n> > wants the target table on the subscriber to have exactly the same data\n> > as the publisher's one. 
Are there other benefits or use cases?\n> >\n>\n> The cost is one but the other is the user may not want the data to be\n> different based on volatile functions like timeofday()\n\nShouldn't the generation expression be immutable?\n\n> or the table on\n> subscriber won't have the column marked as generated.\n\nYeah, it would be another use case.\n\n> Now, considering\n> such use cases, is providing a subscription-level option a good idea\n> as the patch is doing? I understand that this can serve the purpose\n> but it could also lead to having the same behavior for all the tables\n> in all the publications for a subscription which may or may not be\n> what the user expects. This could lead to some performance overhead\n> (due to always sending generated columns for all the tables) for cases\n> where the user needs it only for a subset of tables.\n\nYeah, it's a downside and I think it's less flexible. For example, if\nusers want to send both tables with generated columns and tables\nwithout generated columns, they would have to create at least two\nsubscriptions. Also, they would have to include a different set of\ntables to two publications.\n\n>\n> I think we should consider it as a table-level option while defining\n> publication in some way. A few ideas could be: (a) We ask users to\n> explicitly mention the generated column in the columns list while\n> defining publication. This has a drawback such that users need to\n> specify the column list even when all columns need to be replicated.\n> (b) We can have some new syntax to indicate the same like: CREATE\n> PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> INCLUDE ..., t5;. 
I haven't analyzed the feasibility of this, so there\n> could be some challenges but we can at least investigate it.\n\nI think we can create a publication for a single table, so what we can\ndo with this feature can be done also by the idea you described below.\n\n> Yet another idea is to keep this as a publication option\n> (include_generated_columns or publish_generated_columns) similar to\n> \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> is used when tables on either side have different partition\n> hierarchies which is somewhat the case here.\n\nIt sounds more useful to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Aug 2024 22:13:31 -0500", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > generated columns because we don't know the target table on the\n> > > subscriber has the same expression and there could be locale issues\n> > > even if it looks the same. I can see that a benefit of this proposal\n> > > would be to save cost to compute generated column values if the user\n> > > wants the target table on the subscriber to have exactly the same data\n> > > as the publisher's one. 
Are there other benefits or use cases?\n> > >\n> >\n> > The cost is one but the other is the user may not want the data to be\n> > different based on volatile functions like timeofday()\n>\n> Shouldn't the generation expression be immutable?\n>\n\nYes, I missed that point.\n\n> > or the table on\n> > subscriber won't have the column marked as generated.\n>\n> Yeah, it would be another use case.\n>\n\nRight, apart from that I am not aware of other use cases. If they\nhave, I would request Euler or Rajendra to share any other use case.\n\n> > Now, considering\n> > such use cases, is providing a subscription-level option a good idea\n> > as the patch is doing? I understand that this can serve the purpose\n> > but it could also lead to having the same behavior for all the tables\n> > in all the publications for a subscription which may or may not be\n> > what the user expects. This could lead to some performance overhead\n> > (due to always sending generated columns for all the tables) for cases\n> > where the user needs it only for a subset of tables.\n>\n> Yeah, it's a downside and I think it's less flexible. For example, if\n> users want to send both tables with generated columns and tables\n> without generated columns, they would have to create at least two\n> subscriptions.\n>\n\nAgreed and that would consume more resources.\n\n> Also, they would have to include a different set of\n> tables to two publications.\n>\n> >\n> > I think we should consider it as a table-level option while defining\n> > publication in some way. A few ideas could be: (a) We ask users to\n> > explicitly mention the generated column in the columns list while\n> > defining publication. This has a drawback such that users need to\n> > specify the column list even when all columns need to be replicated.\n> > (b) We can have some new syntax to indicate the same like: CREATE\n> > PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> > INCLUDE ..., t5;. 
I haven't analyzed the feasibility of this, so there\n> > could be some challenges but we can at least investigate it.\n>\n> I think we can create a publication for a single table, so what we can\n> do with this feature can be done also by the idea you described below.\n>\n> > Yet another idea is to keep this as a publication option\n> > (include_generated_columns or publish_generated_columns) similar to\n> > \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> > is used when tables on either side have different partition\n> > hierarchies which is somewhat the case here.\n>\n> It sounds more useful to me.\n>\n\nFair enough. Let's see if anyone else has any preference among the\nproposed methods or can think of a better way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Aug 2024 11:46:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Aug 29, 2024 at 11:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > > generated columns because we don't know the target table on the\n> > > > subscriber has the same expression and there could be locale issues\n> > > > even if it looks the same. I can see that a benefit of this proposal\n> > > > would be to save cost to compute generated column values if the user\n> > > > wants the target table on the subscriber to have exactly the same data\n> > > > as the publisher's one. 
Are there other benefits or use cases?\n> > > >\n> > >\n> > > The cost is one but the other is the user may not want the data to be\n> > > different based on volatile functions like timeofday()\n> >\n> > Shouldn't the generation expression be immutable?\n> >\n>\n> Yes, I missed that point.\n>\n> > > or the table on\n> > > subscriber won't have the column marked as generated.\n> >\n> > Yeah, it would be another use case.\n> >\n>\n> Right, apart from that I am not aware of other use cases. If they\n> have, I would request Euler or Rajendra to share any other use case.\n>\n> > > Now, considering\n> > > such use cases, is providing a subscription-level option a good idea\n> > > as the patch is doing? I understand that this can serve the purpose\n> > > but it could also lead to having the same behavior for all the tables\n> > > in all the publications for a subscription which may or may not be\n> > > what the user expects. This could lead to some performance overhead\n> > > (due to always sending generated columns for all the tables) for cases\n> > > where the user needs it only for a subset of tables.\n> >\n> > Yeah, it's a downside and I think it's less flexible. For example, if\n> > users want to send both tables with generated columns and tables\n> > without generated columns, they would have to create at least two\n> > subscriptions.\n> >\n>\n> Agreed and that would consume more resources.\n>\n> > Also, they would have to include a different set of\n> > tables to two publications.\n> >\n> > >\n> > > I think we should consider it as a table-level option while defining\n> > > publication in some way. A few ideas could be: (a) We ask users to\n> > > explicitly mention the generated column in the columns list while\n> > > defining publication. 
This has a drawback such that users need to\n> > > specify the column list even when all columns need to be replicated.\n> > > (b) We can have some new syntax to indicate the same like: CREATE\n> > > PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> > > INCLUDE ..., t5;. I haven't analyzed the feasibility of this, so there\n> > > could be some challenges but we can at least investigate it.\n> >\n> > I think we can create a publication for a single table, so what we can\n> > do with this feature can be done also by the idea you described below.\n> >\n> > > Yet another idea is to keep this as a publication option\n> > > (include_generated_columns or publish_generated_columns) similar to\n> > > \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> > > is used when tables on either side have different partitions\n> > > hierarchies which is somewhat the case here.\n> >\n> > It sounds more useful to me.\n> >\n>\n> Fair enough. Let's see if anyone else has any preference among the\n> proposed methods or can think of a better way.\n\nI have fixed the current issue. 
I have added the option\n'publish_generated_columns' to the publisher side and created the new\ntest cases accordingly.\nThe attached patches contain the desired changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Mon, 9 Sep 2024 15:08:19 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Sep 9, 2024 at 2:38 AM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Thu, Aug 29, 2024 at 11:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > > > generated columns because we don't know the target table on the\n> > > > > subscriber has the same expression and there could be locale issues\n> > > > > even if it looks the same. I can see that a benefit of this proposal\n> > > > > would be to save cost to compute generated column values if the user\n> > > > > wants the target table on the subscriber to have exactly the same data\n> > > > > as the publisher's one. Are there other benefits or use cases?\n> > > > >\n> > > >\n> > > > The cost is one but the other is the user may not want the data to be\n> > > > different based on volatile functions like timeofday()\n> > >\n> > > Shouldn't the generation expression be immutable?\n> > >\n> >\n> > Yes, I missed that point.\n> >\n> > > > or the table on\n> > > > subscriber won't have the column marked as generated.\n> > >\n> > > Yeah, it would be another use case.\n> > >\n> >\n> > Right, apart from that I am not aware of other use cases. 
If they\n> > have, I would request Euler or Rajendra to share any other use case.\n> >\n> > > > Now, considering\n> > > > such use cases, is providing a subscription-level option a good idea\n> > > > as the patch is doing? I understand that this can serve the purpose\n> > > > but it could also lead to having the same behavior for all the tables\n> > > > in all the publications for a subscription which may or may not be\n> > > > what the user expects. This could lead to some performance overhead\n> > > > (due to always sending generated columns for all the tables) for cases\n> > > > where the user needs it only for a subset of tables.\n> > >\n> > > Yeah, it's a downside and I think it's less flexible. For example, if\n> > > users want to send both tables with generated columns and tables\n> > > without generated columns, they would have to create at least two\n> > > subscriptions.\n> > >\n> >\n> > Agreed and that would consume more resources.\n> >\n> > > Also, they would have to include a different set of\n> > > tables to two publications.\n> > >\n> > > >\n> > > > I think we should consider it as a table-level option while defining\n> > > > publication in some way. A few ideas could be: (a) We ask users to\n> > > > explicitly mention the generated column in the columns list while\n> > > > defining publication. This has a drawback such that users need to\n> > > > specify the column list even when all columns need to be replicated.\n> > > > (b) We can have some new syntax to indicate the same like: CREATE\n> > > > PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> > > > INCLUDE ..., t5;. 
I haven't analyzed the feasibility of this, so there\n> > > > could be some challenges but we can at least investigate it.\n> > >\n> > > I think we can create a publication for a single table, so what we can\n> > > do with this feature can be done also by the idea you described below.\n> > >\n> > > > Yet another idea is to keep this as a publication option\n> > > > (include_generated_columns or publish_generated_columns) similar to\n> > > > \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> > > > is used when tables on either side have different partitions\n> > > > hierarchies which is somewhat the case here.\n> > >\n> > > It sounds more useful to me.\n> > >\n> >\n> > Fair enough. Let's see if anyone else has any preference among the\n> > proposed methods or can think of a better way.\n>\n> I have fixed the current issue. I have added the option\n> 'publish_generated_columns' to the publisher side and created the new\n> test cases accordingly.\n> The attached patches contain the desired changes.\n>\n\nThank you for updating the patches. I have some comments:\n\nDo we really need to add this option to test_decoding? I think it\nwould be good if this improves the test coverage. Otherwise, I'm not\nsure we need this part. If we want to add it, I think it would be\nbetter to have it in a separate patch.\n\n---\n+ <para>\n+ If the publisher-side column is also a generated column\nthen this option\n+ has no effect; the publisher column will be filled as normal with the\n+ publisher-side computed or default data.\n+ </para>\n\nI don't understand this description. Why does this option have no\neffect if the publisher-side column is a generated column?\n\n---\n+ <para>\n+ This parameter can only be set <literal>true</literal> if\n<literal>copy_data</literal> is\n+ set to <literal>false</literal>.\n+ </para>\n\nIf I understand this patch correctly, it doesn't disallow to set\ncopy_data to true when the publish_generated_columns option is\nspecified. 
But do we want to disallow it? I think it would be more\nuseful and understandable if we allow to use both\npublish_generated_columns (publisher option) and copy_data (subscriber\noption) at the same time.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 14:20:44 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 10, 2024 at 2:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 9, 2024 at 2:38 AM Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n>\n> Thank you for updating the patches. I have some comments:\n>\n> Do we really need to add this option to test_decoding?\n>\n\nI don't see any reason to have such an option in test_decoding,\notherwise, we need a separate option for each publication option. I\nguess this is leftover of the previous subscriber-side approach.\n\n> I think it\n> would be good if this improves the test coverage. Otherwise, I'm not\n> sure we need this part. If we want to add it, I think it would be\n> better to have it in a separate patch.\n>\n\nRight.\n\n> ---\n> + <para>\n> + If the publisher-side column is also a generated column\n> then this option\n> + has no effect; the publisher column will be filled as normal with the\n> + publisher-side computed or default data.\n> + </para>\n>\n> I don't understand this description. 
Why does this option have no\n> effect if the publisher-side column is a generated column?\n>\n\nShouldn't it be subscriber-side?\n\nI have one additional comment:\n/*\n- * If the publication is FOR ALL TABLES then it is treated the same as\n- * if there are no column lists (even if other publications have a\n- * list).\n+ * If the publication is FOR ALL TABLES and include generated columns\n+ * then it is treated the same as if there are no column lists (even\n+ * if other publications have a list).\n */\n- if (!pub->alltables)\n+ if (!pub->alltables || !pub->pubgencolumns)\n\nWhy do we treat pubgencolumns at the same level as the FOR ALL TABLES\ncase? I thought that if the user has provided a column list, we only\nneed to publish the specified columns even when the\npublish_generated_columns option is set.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 10 Sep 2024 09:44:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "IIUC, previously there was a subscriber side option\n'include_generated_columns', but now since v30* there is a publisher\nside option 'publish_generated_columns'.\n\nFair enough, but in the v30* patches I can still see remnants of the\nold name 'include_generated_columns' all over the place:\n- in the commit message\n- in the code (struct field names, param names etc)\n- in the comments\n- in the docs\n\nIf the decision is to call the new PUBLICATION option\n'publish_generated_columns', then can't we please use that one name\n*everywhere* -- e.g. 
replace all cases where any old name is still\nlurking?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 10 Sep 2024 18:19:37 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are a some more review comments for patch v30-0001.\n\n======\nsrc/sgml/ref/create_publication.sgml\n\n1.\n+ <para>\n+ If the publisher-side column is also a generated column\nthen this option\n+ has no effect; the publisher column will be filled as normal with the\n+ publisher-side computed or default data.\n+ </para>\n\nIt should say \"subscriber-side\"; not \"publisher-side\". The same was\nalready reported by Sawada-San [1].\n\n~~~\n\n2.\n+ <para>\n+ This parameter can only be set <literal>true</literal> if\n<literal>copy_data</literal> is\n+ set to <literal>false</literal>.\n+ </para>\n\nIMO this limitation should be addressed by patch 0001 like it was\nalready done in the previous patches (e.g. v22-0002). I think\nSawada-san suggested the same [1].\n\nAnyway, 'copy_data' is not a PUBLICATION option, so the fact it is\nmentioned like this without any reference to the SUBSCRIPTION seems\nlike a cut/paste error from the previous implementation.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n3. pub_collist_validate\n- if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n- ereport(ERROR,\n- errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n- colname));\n-\n\nInstead of just removing this ERROR entirely here, I thought it would\nbe more user-friendly to give a WARNING if the PUBLICATION's explicit\ncolumn list includes generated cols when the option\n\"publish_generated_columns\" is false. 
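Something like the following is the case I have in mind (a sketch only;
the table definition is assumed to be along the lines of the existing
regression tests, where 'd' is the generated column, and the publication
name is made up):

```sql
-- assumed table shape; the generation expression is illustrative
CREATE TABLE testpub_tbl5 (a int PRIMARY KEY,
    d int GENERATED ALWAYS AS (a + 1) STORED);

-- the explicit column list names d, yet the option excludes generated
-- columns -- IMO this could emit a WARNING instead of silently dropping d
CREATE PUBLICATION testpub_gencols FOR TABLE testpub_tbl5 (a, d)
    WITH (publish_generated_columns = false);
```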
This combination doesn't seem\nlike something a user would do intentionally, so just silently\nignoring it (like the current patch does) is likely going to give\nsomeone unexpected results/grief.\n\n======\nsrc/backend/replication/logical/proto.c\n\n4. logicalrep_write_tuple, and logicalrep_write_attrs:\n\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\nWhy aren't you also checking the new PUBLICATION option here and\nskipping all gencols if the \"publish_generated_columns\" option is\nfalse? Or is the BMS of pgoutput_column_list_init handling this case?\nMaybe there should be an Assert for this?\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n5. send_relation_and_attrs\n\n- if (att->attisdropped || att->attgenerated)\n+ if (att->attisdropped)\n continue;\n\nSame question as #4.\n\n~~~\n\n6. prepare_all_columns_bms and pgoutput_column_list_init\n\n+ if (att->attgenerated && !pub->pubgencolumns)\n+ cols = bms_del_member(cols, i + 1);\n\nIIUC, the algorithm seems overly tricky filling the BMS with all\ncolumns, before straight away conditionally removing the generated\ncolumns. Can't it be refactored to assign all the correct columns\nup-front, to avoid calling bms_del_member()?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n7. getPublications\n\nIIUC, there is lots of missing SQL code here (for all older versions)\nthat should be saying \"false AS pubgencolumns\".\ne.g. compare the SQL with how \"false AS pubviaroot\" is used.\n\n======\nsrc/bin/pg_dump/t/002_pg_dump.pl\n\n8. Missing tests?\n\nI expected to see a pg_dump test for this new PUBLICATION option.\n\n======\nsrc/test/regress/sql/publication.sql\n\n9. Missing tests?\n\nHow about adding another test case that checks this new option must be\n\"Boolean\"?\n\n~~~\n\n10. 
Missing tests?\n\n--- error: generated column \"d\" can't be in list\n+-- ok: generated columns can be in the list too\n ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n+ALTER PUBLICATION testpub_fortable DROP TABLE testpub_tbl5;\n\n(see my earlier comment #3)\n\nIMO there should be another test case for a WARNING here if the user\nattempts to include generated column 'd' in an explicit PUBLICATION\ncolumn list while the \"publish_generated-columns\" is false.\n\n======\n[1] https://www.postgresql.org/message-id/CAD21AoA-tdTz0G-vri8KM2TXeFU8RCDsOpBXUBCgwkfokF7%3DjA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 11 Sep 2024 13:24:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham,\n\nHere are my general comments about the v30-0002 TAP test patch.\n\n======\n\n1.\nAs mentioned in a previous post [1] there are still several references\nto the old 'include_generated_columns' option remaining in this patch.\nThey need replacing.\n\n~~~\n\n2.\n+# Furthermore, all combinations are tested for publish_generated_columns=false\n+# (see subscription sub1 of database 'postgres'), and\n+# publish_generated_columns=true (see subscription sub2 of database\n+# 'test_igc_true').\n\nThose 'see subscription' notes and 'test_igc_true' are from the old\nimplementation. Those need fixing. 
BTW, 'test_pgc_true' is a better\nname for the database now that the option name is changed.\n\nIn the previous implementation, the TAP test environment was:\n- a common publication pub, on the 'postgres' database\n- a subscription sub1 with option include_generated_columns=false, on\nthe 'postgres' database\n- a subscription sub2 with option include_generated_columns=true, on\nthe 'test_igc_true' database\n\nNow it is like:\n- a publication pub1, on the 'postgres' database, with option\npublish_generated_columns=false\n- a publication pub2, on the 'postgres' database, with option\npublish_generated_columns=true\n- a subscription sub1, on the 'postgres' database for publication pub1\n- a subscription sub2, on the 'test_pgc_true' database for publication pub2\n\nIt would be good to document that above convention because knowing how\nthe naming/numbering works makes it a lot easier to read the\nsubsequent test cases. Of course, it is really important to\nname/number everything consistently otherwise these tests become hard\nto follow. 
AFAICT it is mostly OK, but the generated -> generated\npublication should be called 'regress_pub2_gen_to_gen'\n\n~~~\n\n3.\n+# Create table.\n+$node_publisher->safe_psql(\n+ 'postgres', qq(\n+ CREATE TABLE tab_gen_to_nogen (a int, b int GENERATED ALWAYS AS (a *\n2) STORED);\n+ INSERT INTO tab_gen_to_nogen (a) VALUES (1), (2), (3);\n+));\n+\n+# Create publication with publish_generated_columns=false.\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION regress_pub1_gen_to_nogen FOR TABLE\ntab_gen_to_nogen WITH (publish_generated_columns = false)\"\n+);\n+\n+# Create table and subscription with copy_data=true.\n+$node_subscriber->safe_psql(\n+ 'postgres', qq(\n+ CREATE TABLE tab_gen_to_nogen (a int, b int);\n+ CREATE SUBSCRIPTION regress_sub1_gen_to_nogen CONNECTION\n'$publisher_connstr' PUBLICATION regress_pub1_gen_to_nogen WITH\n(copy_data = true);\n+));\n+\n+# Create publication with publish_generated_columns=true.\n+$node_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION regress_pub2_gen_to_nogen FOR TABLE\ntab_gen_to_nogen WITH (publish_generated_columns = true)\"\n+);\n+\n\nThe code can be restructured to be simpler. Both publications are\nalways created on the 'postgres' database at the publisher node, so\nlet's just create them at the same time as the creating the publisher\ntable. 
It also makes readability much better e.g.\n\n# Create table, and publications\n$node_publisher->safe_psql(\n'postgres', qq(\nCREATE TABLE tab_gen_to_nogen (a int, b int GENERATED ALWAYS AS (a * 2) STORED);\nINSERT INTO tab_gen_to_nogen (a) VALUES (1), (2), (3);\nCREATE PUBLICATION regress_pub1_gen_to_nogen FOR TABLE\ntab_gen_to_nogen WITH (publish_generated_columns = false);\nCREATE PUBLICATION regress_pub2_gen_to_nogen FOR TABLE\ntab_gen_to_nogen WITH (publish_generated_columns = true);\n));\n\nIFAICT this same simplification can be repeated multiple times in this TAP file.\n\n~~\n\nSimilarly, it would be neater to combine DROP PUBLICATION's together too.\n\n~~~\n\n4.\nHopefully, the generated column 'copy_data' can be implemented again\nsoon for subscriptions, and then the initial sync tests here can be\nproperly implemented instead of the placeholders currently in patch\n0002.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPuDJToG%3DV-ogTi9_6fnhhn2S0%2BsVRGPynhcf9mEh0Q%3DLA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 11 Sep 2024 17:06:10 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Because this feature is now being implemented as a PUBLICATION option,\nthere is another scenario that might need consideration; I am thinking\nabout where the same table is published by multiple PUBLICATIONS (with\ndifferent option settings) that are subscribed by a single\nSUBSCRIPTION.\n\ne.g.1\n-----\nCREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\nCREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\nCREATE SUBSCRIPTION sub ... 
PUBLICATIONS pub1,pub2;\n-----\n\ne.g.2\n-----\nCREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\nCREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\nCREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n-----\n\nDo you know if this case is supported? If yes, then which publication\noption value wins?\n\nThe CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\nseveral publications in which the same table has been published with\ndifferent column lists are not supported.\"\n\nPerhaps the user is supposed to deduce that the example above would\nwork OK if table 't1' has no generated cols. OTOH, if it did have\ngenerated cols then the PUBLICATION column lists must be different and\ntherefore it is \"not supported\" (??).\n\nI have not tried this to see what happens, but even if it behaves as\nexpected, there should probably be some comments/docs/tests for this\nscenario to clarify it for the user.\n\nNotice that \"publish_via_partition_root\" has a similar conundrum, but\nin that case, the behaviour is documented in the CREATE PUBLICATION\ndocs [2]. 
So, maybe \"publish_generated_columns\" should be documented\na bit like that.\n\n======\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html\n[2] https://www.postgresql.org/docs/devel/sql-createpublication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 12 Sep 2024 15:30:23 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 10, 2024 at 2:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 9, 2024 at 2:38 AM Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Thu, Aug 29, 2024 at 11:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > > > > generated columns because we don't know the target table on the\n> > > > > > subscriber has the same expression and there could be locale issues\n> > > > > > even if it looks the same. I can see that a benefit of this proposal\n> > > > > > would be to save cost to compute generated column values if the user\n> > > > > > wants the target table on the subscriber to have exactly the same data\n> > > > > > as the publisher's one. 
Are there other benefits or use cases?\n> > > > > >\n> > > > >\n> > > > > The cost is one but the other is the user may not want the data to be\n> > > > > different based on volatile functions like timeofday()\n> > > >\n> > > > Shouldn't the generation expression be immutable?\n> > > >\n> > >\n> > > Yes, I missed that point.\n> > >\n> > > > > or the table on\n> > > > > subscriber won't have the column marked as generated.\n> > > >\n> > > > Yeah, it would be another use case.\n> > > >\n> > >\n> > > Right, apart from that I am not aware of other use cases. If they\n> > > have, I would request Euler or Rajendra to share any other use case.\n> > >\n> > > > > Now, considering\n> > > > > such use cases, is providing a subscription-level option a good idea\n> > > > > as the patch is doing? I understand that this can serve the purpose\n> > > > > but it could also lead to having the same behavior for all the tables\n> > > > > in all the publications for a subscription which may or may not be\n> > > > > what the user expects. This could lead to some performance overhead\n> > > > > (due to always sending generated columns for all the tables) for cases\n> > > > > where the user needs it only for a subset of tables.\n> > > >\n> > > > Yeah, it's a downside and I think it's less flexible. For example, if\n> > > > users want to send both tables with generated columns and tables\n> > > > without generated columns, they would have to create at least two\n> > > > subscriptions.\n> > > >\n> > >\n> > > Agreed and that would consume more resources.\n> > >\n> > > > Also, they would have to include a different set of\n> > > > tables to two publications.\n> > > >\n> > > > >\n> > > > > I think we should consider it as a table-level option while defining\n> > > > > publication in some way. A few ideas could be: (a) We ask users to\n> > > > > explicitly mention the generated column in the columns list while\n> > > > > defining publication. 
This has a drawback such that users need to\n> > > > > specify the column list even when all columns need to be replicated.\n> > > > > (b) We can have some new syntax to indicate the same like: CREATE\n> > > > > PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> > > > > INCLUDE ..., t5;. I haven't analyzed the feasibility of this, so there\n> > > > > could be some challenges but we can at least investigate it.\n> > > >\n> > > > I think we can create a publication for a single table, so what we can\n> > > > do with this feature can be done also by the idea you described below.\n> > > >\n> > > > > Yet another idea is to keep this as a publication option\n> > > > > (include_generated_columns or publish_generated_columns) similar to\n> > > > > \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> > > > > is used when tables on either side have different partitions\n> > > > > hierarchies which is somewhat the case here.\n> > > >\n> > > > It sounds more useful to me.\n> > > >\n> > >\n> > > Fair enough. Let's see if anyone else has any preference among the\n> > > proposed methods or can think of a better way.\n> >\n> > I have fixed the current issue. I have added the option\n> > 'publish_generated_columns' to the publisher side and created the new\n> > test cases accordingly.\n> > The attached patches contain the desired changes.\n> >\n>\n> Thank you for updating the patches. I have some comments:\n>\n> Do we really need to add this option to test_decoding? I think it\n> would be good if this improves the test coverage. Otherwise, I'm not\n> sure we need this part. 
If we want to add it, I think it would be\n> better to have it in a separate patch.\n>\n\nI have removed the option from the test_decoding file.\n\n> ---\n> + <para>\n> + If the publisher-side column is also a generated column\n> then this option\n> + has no effect; the publisher column will be filled as normal with the\n> + publisher-side computed or default data.\n> + </para>\n>\n> I don't understand this description. Why does this option have no\n> effect if the publisher-side column is a generated column?\n>\n\nThe documentation was incorrect. Currently, replicating from a\npublisher table with a generated column to a subscriber table with a\ngenerated column will result in an error. This has now been updated.\n\n> ---\n> + <para>\n> + This parameter can only be set <literal>true</literal> if\n> <literal>copy_data</literal> is\n> + set to <literal>false</literal>.\n> + </para>\n>\n> If I understand this patch correctly, it doesn't disallow to set\n> copy_data to true when the publish_generated_columns option is\n> specified. But do we want to disallow it? I think it would be more\n> useful and understandable if we allow to use both\n> publish_generated_columns (publisher option) and copy_data (subscriber\n> option) at the same time.\n>\n\nSupport for tablesync with generated columns was not included in the\ninitial patch, and this was reflected in the documentation. The\nfunctionality for syncing generated column data has been introduced\nwith the 0002 patch.\n\nThe attached v31 patches contain the changes for the same. I won't be\nposting the test patch for now. 
I will share it once this patch has\nbeen stabilized.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 13 Sep 2024 17:04:06 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, 10 Sept 2024 at 09:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 2:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 9, 2024 at 2:38 AM Shubham Khanna\n> > <khannashubham1197@gmail.com> wrote:\n> > >\n> >\n> > Thank you for updating the patches. I have some comments:\n> >\n> > Do we really need to add this option to test_decoding?\n> >\n>\n> I don't see any reason to have such an option in test_decoding,\n> otherwise, we need a separate option for each publication option. I\n> guess this is leftover of the previous subscriber-side approach.\n>\n> > I think it\n> > would be good if this improves the test coverage. Otherwise, I'm not\n> > sure we need this part. If we want to add it, I think it would be\n> > better to have it in a separate patch.\n> >\n>\n> Right.\n>\n> > ---\n> > + <para>\n> > + If the publisher-side column is also a generated column\n> > then this option\n> > + has no effect; the publisher column will be filled as normal with the\n> > + publisher-side computed or default data.\n> > + </para>\n> >\n> > I don't understand this description. 
Why does this option have no\n> > effect if the publisher-side column is a generated column?\n> >\n>\n> Shouldn't it be subscriber-side?\n>\n> I have one additional comment:\n> /*\n> - * If the publication is FOR ALL TABLES then it is treated the same as\n> - * if there are no column lists (even if other publications have a\n> - * list).\n> + * If the publication is FOR ALL TABLES and include generated columns\n> + * then it is treated the same as if there are no column lists (even\n> + * if other publications have a list).\n> */\n> - if (!pub->alltables)\n> + if (!pub->alltables || !pub->pubgencolumns)\n>\n> Why do we treat pubgencolumns at the same level as the FOR ALL TABLES\n> case? I thought that if the user has provided a column list, we only\n> need to publish the specified columns even when the\n> publish_generated_columns option is set.\n\nTo handle cases where the publish_generated_columns option isn't\nspecified for all tables in a publication, the pubgencolumns check\nneeds to be performed. In such cases, we must create a column list\nthat excludes generated columns. This process involves:\na) Retrieving all columns for the table and adding them to the column\nlist. b) Iterating through this column list and removing generated\ncolumns. c) Checking if the remaining column count matches the total\nnumber of columns. If they match, set the relation entry's column list\nto NULL, so we don’t need to check columns during data replication. 
If\nthey do not match, update the column list to include only the relevant\ncolumns, allowing pgoutput to replicate data for these specific\ncolumns.\n\nThis step is necessary because some tables in the publication may\ninclude generated columns.\nFor tables where publish_generated_columns is set, the column list\nwill be set to NULL, eliminating the need for a column list check\nduring data publication.\nHowever, modifying the column list based on publish_generated_columns\nis not required, this is addressed in the v31 patch posted by Shubham\nat [1].\n\n[1] - https://www.postgresql.org/message-id/CAHv8Rj%2BinrG6EU0rpDJxih8mmYLhCUP6ouTAmMN2RDnT9tE_Gg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 13 Sep 2024 17:26:52 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Sep 13, 2024 at 9:34 PM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 2:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 9, 2024 at 2:38 AM Shubham Khanna\n> > <khannashubham1197@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 29, 2024 at 11:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, May 20, 2024 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > > > > > generated columns because we don't know the target table on the\n> > > > > > > subscriber has the same expression and there could be locale issues\n> > > > > > > even if it looks the same. 
I can see that a benefit of this proposal\n> > > > > > > would be to save cost to compute generated column values if the user\n> > > > > > > wants the target table on the subscriber to have exactly the same data\n> > > > > > > as the publisher's one. Are there other benefits or use cases?\n> > > > > > >\n> > > > > >\n> > > > > > The cost is one but the other is the user may not want the data to be\n> > > > > > different based on volatile functions like timeofday()\n> > > > >\n> > > > > Shouldn't the generation expression be immutable?\n> > > > >\n> > > >\n> > > > Yes, I missed that point.\n> > > >\n> > > > > > or the table on\n> > > > > > subscriber won't have the column marked as generated.\n> > > > >\n> > > > > Yeah, it would be another use case.\n> > > > >\n> > > >\n> > > > Right, apart from that I am not aware of other use cases. If they\n> > > > have, I would request Euler or Rajendra to share any other use case.\n> > > >\n> > > > > > Now, considering\n> > > > > > such use cases, is providing a subscription-level option a good idea\n> > > > > > as the patch is doing? I understand that this can serve the purpose\n> > > > > > but it could also lead to having the same behavior for all the tables\n> > > > > > in all the publications for a subscription which may or may not be\n> > > > > > what the user expects. This could lead to some performance overhead\n> > > > > > (due to always sending generated columns for all the tables) for cases\n> > > > > > where the user needs it only for a subset of tables.\n> > > > >\n> > > > > Yeah, it's a downside and I think it's less flexible. 
For example, if\n> > > > > users want to send both tables with generated columns and tables\n> > > > > without generated columns, they would have to create at least two\n> > > > > subscriptions.\n> > > > >\n> > > >\n> > > > Agreed and that would consume more resources.\n> > > >\n> > > > > Also, they would have to include a different set of\n> > > > > tables to two publications.\n> > > > >\n> > > > > >\n> > > > > > I think we should consider it as a table-level option while defining\n> > > > > > publication in some way. A few ideas could be: (a) We ask users to\n> > > > > > explicitly mention the generated column in the columns list while\n> > > > > > defining publication. This has a drawback such that users need to\n> > > > > > specify the column list even when all columns need to be replicated.\n> > > > > > (b) We can have some new syntax to indicate the same like: CREATE\n> > > > > > PUBLICATION pub1 FOR TABLE t1 INCLUDE GENERATED COLS, t2, t3, t4\n> > > > > > INCLUDE ..., t5;. I haven't analyzed the feasibility of this, so there\n> > > > > > could be some challenges but we can at least investigate it.\n> > > > >\n> > > > > I think we can create a publication for a single table, so what we can\n> > > > > do with this feature can be done also by the idea you described below.\n> > > > >\n> > > > > > Yet another idea is to keep this as a publication option\n> > > > > > (include_generated_columns or publish_generated_columns) similar to\n> > > > > > \"publish_via_partition_root\". Normally, \"publish_via_partition_root\"\n> > > > > > is used when tables on either side have different partitions\n> > > > > > hierarchies which is somewhat the case here.\n> > > > >\n> > > > > It sounds more useful to me.\n> > > > >\n> > > >\n> > > > Fair enough. Let's see if anyone else has any preference among the\n> > > > proposed methods or can think of a better way.\n> > >\n> > > I have fixed the current issue. 
I have added the option\n> > > 'publish_generated_columns' to the publisher side and created the new\n> > > test cases accordingly.\n> > > The attached patches contain the desired changes.\n> > >\n> >\n> > Thank you for updating the patches. I have some comments:\n> >\n> > Do we really need to add this option to test_decoding? I think it\n> > would be good if this improves the test coverage. Otherwise, I'm not\n> > sure we need this part. If we want to add it, I think it would be\n> > better to have it in a separate patch.\n> >\n>\n> I have removed the option from the test_decoding file.\n>\n> > ---\n> > + <para>\n> > + If the publisher-side column is also a generated column\n> > then this option\n> > + has no effect; the publisher column will be filled as normal with the\n> > + publisher-side computed or default data.\n> > + </para>\n> >\n> > I don't understand this description. Why does this option have no\n> > effect if the publisher-side column is a generated column?\n> >\n>\n> The documentation was incorrect. Currently, replicating from a\n> publisher table with a generated column to a subscriber table with a\n> generated column will result in an error. This has now been updated.\n>\n> > ---\n> > + <para>\n> > + This parameter can only be set <literal>true</literal> if\n> > <literal>copy_data</literal> is\n> > + set to <literal>false</literal>.\n> > + </para>\n> >\n> > If I understand this patch correctly, it doesn't disallow to set\n> > copy_data to true when the publish_generated_columns option is\n> > specified. But do we want to disallow it? I think it would be more\n> > useful and understandable if we allow to use both\n> > publish_generated_columns (publisher option) and copy_data (subscriber\n> > option) at the same time.\n> >\n>\n> Support for tablesync with generated columns was not included in the\n> initial patch, and this was reflected in the documentation. 
The\n> functionality for syncing generated column data has been introduced\n> with the 0002 patch.\n>\n\nSince nothing was said otherwise, I assumed my v30-0001 comments were\naddressed in v31, but the new code seems to have quite a few of my\nsuggested changes missing. If you haven't addressed my review comments\nfor patch 0001 yet, please say so. OTOH, please give reasons for any\nrejected comments.\n\n> The attached v31 patches contain the changes for the same. I won't be\n> posting the test patch for now. I will share it once this patch has\n> been stabilized.\n\nHow can the patch become \"stabilized\" without associated tests to\nverify the behaviour is not broken? e.g. I can write a stable function\nthat says 2+2=5.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 16 Sep 2024 17:41:44 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Sep 11, 2024 at 10:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Because this feature is now being implemented as a PUBLICATION option,\n> there is another scenario that might need consideration; I am thinking\n> about where the same table is published by multiple PUBLICATIONS (with\n> different option settings) that are subscribed by a single\n> SUBSCRIPTION.\n>\n> e.g.1\n> -----\n> CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> -----\n>\n> e.g.2\n> -----\n> CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> -----\n>\n> Do you know if this case is supported? 
If yes, then which publication\n> option value wins?\n\nI would expect these option values are processed with OR. That is, we\npublish changes of the generated columns if at least one publication\nsets publish_generated_columns to true. It seems to me that we treat\nmultiple row filters in the same way.\n\n>\n> The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> several publications in which the same table has been published with\n> different column lists are not supported.\"\n>\n> Perhaps the user is supposed to deduce that the example above would\n> work OK if table 't1' has no generated cols. OTOH, if it did have\n> generated cols then the PUBLICATION column lists must be different and\n> therefore it is \"not supported\" (??).\n\nWith the patch, how should this feature work when users specify a\ngenerated column to the column list and set publish_generated_column =\nfalse, in the first place? raise an error (as we do today)? or always\nsend NULL?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 16 Sep 2024 14:01:32 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 17, 2024 at 7:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 11, 2024 at 10:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Because this feature is now being implemented as a PUBLICATION option,\n> > there is another scenario that might need consideration; I am thinking\n> > about where the same table is published by multiple PUBLICATIONS (with\n> > different option settings) that are subscribed by a single\n> > SUBSCRIPTION.\n> >\n> > e.g.1\n> > -----\n> > CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > CREATE SUBSCRIPTION sub ... 
PUBLICATIONS pub1,pub2;\n> > -----\n> >\n> > e.g.2\n> > -----\n> > CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > -----\n> >\n> > Do you know if this case is supported? If yes, then which publication\n> > option value wins?\n>\n> I would expect these option values are processed with OR. That is, we\n> publish changes of the generated columns if at least one publication\n> sets publish_generated_columns to true. It seems to me that we treat\n> multiple row filters in the same way.\n>\n\nI thought that the option \"publish_generated_columns\" is more related\nto \"column lists\" than \"row filters\".\n\nLet's say table 't1' has columns 'a', 'b', 'c', 'gen1', 'gen2'.\n\nThen:\nPUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\nis equivalent to\nPUBLICATION pub1 FOR TABLE t1(a,b,c,gen1,gen2);\n\nAnd\nPUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\nis equivalent to\nPUBLICATION pub2 FOR TABLE t1(a,b,c);\n\nSo, I would expect this to fail because the SUBSCRIPTION docs say\n\"Subscriptions having several publications in which the same table has\nbeen published with different column lists are not supported.\"\n\n~~\n\nHere's another example:\nPUBLICATION pub3 FOR TABLE t1(a,b);\nPUBLICATION pub4 FOR TABLE t1(c);\n\nWon't it be strange (e.g. 
difficult to explain) why pub1 and pub2\ntable column lists are allowed to be combined in one subscription, but\npub3 and pub4 in one subscription are not supported due to the\ndifferent column lists?\n\n> >\n> > The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> > several publications in which the same table has been published with\n> > different column lists are not supported.\"\n> >\n> > Perhaps the user is supposed to deduce that the example above would\n> > work OK if table 't1' has no generated cols. OTOH, if it did have\n> > generated cols then the PUBLICATION column lists must be different and\n> > therefore it is \"not supported\" (??).\n>\n> With the patch, how should this feature work when users specify a\n> generated column to the column list and set publish_generated_column =\n> false, in the first place? raise an error (as we do today)? or always\n> send NULL?\n\nFor this scenario, I suggested (see [1] #3) that the code could give a\nWARNING. As I wrote up-thread: This combination doesn't seem\nlike something a user would do intentionally, so just silently\nignoring it (which the current patch does) is likely going to give\nsomeone unexpected results/grief.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPuaitgE4tu3nfaR%3DPCQEKjB%3DmpDtZ1aWkbwb%3DJZE8YvqQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Tue, 17 Sep 2024 13:09:29 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Sep 16, 2024 at 8:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I thought that the option \"publish_generated_columns\" is more related\n> to \"column lists\" than \"row filters\".\n>\n> Let's say table 't1' has columns 'a', 'b', 'c', 'gen1', 'gen2'.\n>\n\n> And\n> PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> is equivalent to\n> PUBLICATION pub2 
FOR TABLE t1(a,b,c);\n\nThis makes sense to me as it preserves the current behavior.\n\n> Then:\n> PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> is equivalent to\n> PUBLICATION pub1 FOR TABLE t1(a,b,c,gen1,gen2);\n\nThis also makes sense. It would also include future generated columns.\n\n> So, I would expect this to fail because the SUBSCRIPTION docs say\n> \"Subscriptions having several publications in which the same table has\n> been published with different column lists are not supported.\"\n\nSo I agree that it would raise an error if users subscribe to both\npub1 and pub2.\n\nAnd looking back at your examples,\n\n> > > e.g.1\n> > > -----\n> > > CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > -----\n> > >\n> > > e.g.2\n> > > -----\n> > > CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > -----\n\nBoth examples would not be supported.\n\n> > >\n> > > The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> > > several publications in which the same table has been published with\n> > > different column lists are not supported.\"\n> > >\n> > > Perhaps the user is supposed to deduce that the example above would\n> > > work OK if table 't1' has no generated cols. OTOH, if it did have\n> > > generated cols then the PUBLICATION column lists must be different and\n> > > therefore it is \"not supported\" (??).\n> >\n> > With the patch, how should this feature work when users specify a\n> > generated column to the column list and set publish_generated_column =\n> > false, in the first place? raise an error (as we do today)? 
or always\n> > send NULL?\n>\n> For this scenario, I suggested (see [1] #3) that the code could give a\n> WARNING. As I wrote up-thread: This combination doesn't seem\n> like something a user would do intentionally, so just silently\n> ignoring it (which the current patch does) is likely going to give\n> someone unexpected results/grief.\n\nIt gives a WARNING, and then publishes the specified generated column\ndata (even if publish_generated_column = false)? If so, it would mean\nthat specifying the generated column to the column list means to\npublish its data regardless of the publish_generated_column parameter\nvalue.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 16 Sep 2024 23:15:07 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 17, 2024 at 4:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2024 at 8:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > I thought that the option \"publish_generated_columns\" is more related\n> > to \"column lists\" than \"row filters\".\n> >\n> > Let's say table 't1' has columns 'a', 'b', 'c', 'gen1', 'gen2'.\n> >\n>\n> > And\n> > PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > is equivalent to\n> > PUBLICATION pub2 FOR TABLE t1(a,b,c);\n>\n> This makes sense to me as it preserves the current behavior.\n>\n> > Then:\n> > PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > is equivalent to\n> > PUBLICATION pub1 FOR TABLE t1(a,b,c,gen1,gen2);\n>\n> This also makes sense. 
It would also include future generated columns.\n>\n> > So, I would expect this to fail because the SUBSCRIPTION docs say\n> > \"Subscriptions having several publications in which the same table has\n> > been published with different column lists are not supported.\"\n>\n> So I agree that it would raise an error if users subscribe to both\n> pub1 and pub2.\n>\n> And looking back at your examples,\n>\n> > > > e.g.1\n> > > > -----\n> > > > CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > -----\n> > > >\n> > > > e.g.2\n> > > > -----\n> > > > CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > -----\n>\n> Both examples would not be supported.\n>\n> > > >\n> > > > The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> > > > several publications in which the same table has been published with\n> > > > different column lists are not supported.\"\n> > > >\n> > > > Perhaps the user is supposed to deduce that the example above would\n> > > > work OK if table 't1' has no generated cols. OTOH, if it did have\n> > > > generated cols then the PUBLICATION column lists must be different and\n> > > > therefore it is \"not supported\" (??).\n> > >\n> > > With the patch, how should this feature work when users specify a\n> > > generated column to the column list and set publish_generated_column =\n> > > false, in the first place? raise an error (as we do today)? or always\n> > > send NULL?\n> >\n> > For this scenario, I suggested (see [1] #3) that the code could give a\n> > WARNING. 
As I wrote up-thread: This combination doesn't seem\n> > like something a user would do intentionally, so just silently\n> > ignoring it (which the current patch does) is likely going to give\n> > someone unexpected results/grief.\n> >\n> > It gives a WARNING, and then publishes the specified generated column\n> > data (even if publish_generated_column = false)? If so, it would mean\n> > that specifying the generated column to the column list means to\n> > publish its data regardless of the publish_generated_column parameter\n> > value.\n> >\n>\n\nNo. I meant only it can give the WARNING to tell the user \"Hey,\nthere is a conflict here because you said publish_generated_column=\nfalse, but you also specified gencols in the column list\".\n\nBut always it is the option \"publish_generated_column\" that determines the\nfinal publishing behaviour. So if it says\npublish_generated_column=false then it would NOT publish generated\ncolumns even if there are gencols in the column list. I think this\nmakes sense because when there is no column list specified then that\nimplicitly means \"all columns\" and the table might have some gencols,\nbut still 'publish_generated_columns' is what determines the\nbehaviour.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 17 Sep 2024 16:34:09 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are some review comments for v31-0001 (for the docs only)\n\nThere may be some overlap here with some comments already made for\nv30-0001 which are not yet addressed in v31-0001.\n\n======\nCommit message\n\n1.\nWhen introducing the 'publish_generated_columns' parameter, you must\nalso say this is a PUBLICATION parameter.\n\n~~~\n\n2.\nWith this enhancement, users can now include the 'include_generated_columns'\noption when querying logical replication slots using either the pgoutput\nplugin or the 
test_decoding plugin. This option, when set to 'true' or '1',\ninstructs the replication system to include generated column information\nand data in the replication stream.\n\n~\n\nThe above is stale information because it still refers to the old name\n'include_generated_columns', and to test_decoding which was already\nremoved in this patch.\n\n======\ndoc/src/sgml/ddl.sgml\n\n3.\n+ Generated columns may be skipped during logical replication\naccording to the\n+ <command>CREATE PUBLICATION</command> option\n+ <link linkend=\"sql-createpublication-params-with-include-generated-columns\">\n+ <literal>publish_generated_columns</literal></link>.\n\n3a.\nnit - The linkend is based on the old name instead of the new name.\n\n3b.\nnit - Better to call this a parameter instead of an option because\nthat is what the CREATE PUBLICATION docs call it.\n\n======\ndoc/src/sgml/protocol.sgml\n\n4.\n+ <varlistentry>\n+ <term>publish_generated_columns</term>\n+ <listitem>\n+ <para>\n+ Boolean option to enable generated columns. This option controls\n+ whether generated columns should be included in the string\n+ representation of tuples during logical decoding in PostgreSQL.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n\nIs this even needed anymore? Now that the implementation is using a\nPUBLICATION parameter, isn't everything determined just by that\nparameter? I don't see the reason why a protocol change is needed\nanymore. 
And, if there is no protocol change needed, then this\ndocumentation change is also not needed.\n\n~~~~\n\n5.\n <para>\n- Next, the following message part appears for each column included in\n- the publication (except generated columns):\n+ Next, the following message parts appear for each column included in\n+ the publication (generated columns are excluded unless the parameter\n+ <link linkend=\"protocol-logical-replication-params\">\n+ <literal>publish_generated_columns</literal></link> specifies otherwise):\n </para>\n\nLike the previous comment above, I think everything is now determined\nby the PUBLICATION parameter. So maybe this should just be referring\nto that instead.\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n6.\n+ <varlistentry\nid=\"sql-createpublication-params-with-include-generated-columns\">\n+ <term><literal>publish_generated_columns</literal>\n(<type>boolean</type>)</term>\n+ <listitem>\n\nnit - the ID is based on the old parameter name.\n\n~\n\n7.\n+ <para>\n+ This option is only available for replicating generated\ncolumn data from the publisher\n+ to a regular, non-generated column in the subscriber.\n+ </para>\n\nIMO remove this paragraph. I really don't think you should be\nmentioning the subscriber here at all. AFAIK this parameter is only\nfor determining if the generated column will be published or not. What\nhappens at the other end (e.g. logic whether it gets ignored or not by\nthe subscriber) is more like a matrix of behaviours that could be\ndocumented in the \"Logical Replication\" section. But not here.\n\n(I removed this in my nitpicks attachment)\n\n~~~\n\n8.\n+ <para>\n+ This parameter can only be set <literal>true</literal> if\n<literal>copy_data</literal> is\n+ set to <literal>false</literal>.\n+ </para>\n\nIMO remove this paragraph too. The user can create a PUBLICATION\nbefore a SUBSCRIPTION even exists so to say it \"can only be set...\" is\nnot correct. 
Sure, your patch 0001 does not support the COPY of\ngenerated columns but if you want to document that then it should be\ndocumented in the CREATE SUBSCRIBER docs. But not here.\n\n(I removed this in my nitpicks attachment)\n\nTBH, it would be better if patches 0001 and 0002 were merged then you\ncan avoid all this. IIUC they were only separate in the first place\nbecause 2 different people wrote them. It is not making reviews easier\nwith them split.\n\n======\n\nPlease see the attachment which implements some of the nits above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 17 Sep 2024 17:43:36 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Review comments for v31-0001.\n\n(I tried to give only new comments, but there might be some overlap\nwith comments I previously made for v30-0001)\n\n======\nsrc/backend/catalog/pg_publication.c\n\n1.\n+\n+ if (publish_generated_columns_given)\n+ {\n+ values[Anum_pg_publication_pubgencolumns - 1] =\nBoolGetDatum(publish_generated_columns);\n+ replaces[Anum_pg_publication_pubgencolumns - 1] = true;\n+ }\n\nnit - unnecessary whitespace above here.\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n2. prepare_all_columns_bms\n\n+ /* Iterate the cols until generated columns are found. 
*/\n+ cols = bms_add_member(cols, i + 1);\n\nHow does the comment relate to the statement that follows it?\n\n~~~\n\n3.\n+ * Skip generated column if pubgencolumns option was not\n+ * specified.\n\nnit - /pubgencolumns option/publish_generated_columns parameter/\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n4.\ngetPublications:\n\nnit - /i_pub_gencolumns/i_pubgencols/ (it's the same information but simpler)\n\n======\nsrc/bin/pg_dump/pg_dump.h\n\n5.\n+ bool pubgencolumns;\n } PublicationInfo;\n\nnit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n\n======\nvsrc/bin/psql/describe.c\n\n6.\n bool has_pubviaroot;\n+ bool has_pubgencol;\n\nnit - /has_pubgencol/has_pubgencols/ (plural consistency)\n\n======\nsrc/include/catalog/pg_publication.h\n\n7.\n+ /* true if generated columns data should be published */\n+ bool pubgencolumns;\n } FormData_pg_publication;\n\nnit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n\n~~~\n\n8.\n+ bool pubgencolumns;\n PublicationActions pubactions;\n } Publication;\n\nnit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n\n======\nsrc/test/regress/sql/publication.sql\n\n9.\n+-- Test the publication with or without 'PUBLISH_GENERATED_COLUMNS' parameter\n+SET client_min_messages = 'ERROR';\n+CREATE PUBLICATION pub1 FOR ALL TABLES WITH (PUBLISH_GENERATED_COLUMNS=1);\n+\\dRp+ pub1\n+\n+CREATE PUBLICATION pub2 FOR ALL TABLES WITH (PUBLISH_GENERATED_COLUMNS=0);\n+\\dRp+ pub2\n\n9a.\nnit - Use lowercase for the parameters.\n\n~\n\n9b.\nnit - Fix the comment to say what the test is actually doing:\n\"Test the publication 'publish_generated_columns' parameter enabled or disabled\"\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\n10.\nLater I think you should add another test here to cover the scenario\nthat I was discussing with Sawada-San -- e.g. 
when there are 2\npublications for the same table subscribed by just 1 subscription but\nhaving different values of the 'publish_generated_columns' for the\npublications.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 17 Sep 2024 19:42:22 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here are my review comments for patch v31-0002.\n\n======\n\n1. General.\n\nIMO patches 0001 and 0002 should be merged when next posted. IIUC the\nreason for the split was only because there were 2 different authors\nbut that seems to be not relevant anymore.\n\n======\nCommit message\n\n2.\nWhen 'copy_data' is true, during the initial sync, the data is replicated from\nthe publisher to the subscriber using the COPY command. The normal COPY\ncommand does not copy generated columns, so when 'publish_generated_columns'\nis true, we need to copy using the syntax:\n'COPY (SELECT column_name FROM table_name) TO STDOUT'.\n\n~\n\n2a.\nShould clarify that 'copy_data' is a SUBSCRIPTION parameter.\n\n2b.\nShould clarify that 'publish_generated_columns' is a PUBLICATION parameter.\n\n======\nsrc/backend/replication/logical/tablesync.c\n\nmake_copy_attnamelist:\n\n3.\n- for (i = 0; i < rel->remoterel.natts; i++)\n+ desc = RelationGetDescr(rel->localrel);\n+ localgenlist = palloc0(rel->remoterel.natts * sizeof(bool));\n\nEach time I review this code I am tricked into thinking it is wrong to\nuse rel->remoterel.natts here for the localgenlist. AFAICT the code is\nactually fine because you do not store *all* the subscriber gencols in\n'localgenlist' -- you only store those with matching names on the\npublisher table. 
It might be good if you could add an explanatory\ncomment about that to prevent any future doubts.\n\n~~~\n\n4.\n+ if (!remotegenlist[remote_attnum])\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"logical replication target relation \\\"%s.%s\\\" has a\ngenerated column \\\"%s\\\" \"\n+ \"but corresponding column on source relation is not a generated column\",\n+ rel->remoterel.nspname, rel->remoterel.relname, NameStr(attr->attname))));\n\nThis error message has lots of good information. OTOH, I think when\ncopy_data=false the error would report the subscriber column just as\n\"missing\", which is maybe less helpful. Perhaps that other\ncopy_data=false \"missing\" case can be improved to share the same error\nmessage that you have here.\n\n~~~\n\nfetch_remote_table_info:\n\n5.\nIIUC, this logic needs to be more sophisticated to handle the case\nthat was being discussed earlier with Sawada-san [1]. e.g. when the\nsame table has gencols but there are multiple subscribed publications\nwhere the 'publish_generated_columns' parameter differs.\n\nAlso, you'll need test cases for this scenario, because it is too\ndifficult to judge correctness just by visual inspection of the code.\n\n~~~~\n\n6.\nnit - Change 'hasgencolpub' to 'has_pub_with_pubgencols' for\nreadability, and initialize it to 'false' to make it easy to use\nlater.\n\n~~~\n\n7.\n- * Get column lists for each relation.\n+ * Get column lists for each relation and check if any of the publication\n+ * has generated column option.\n\nand\n\n+ /* Check if any of the publication has generated column option */\n+ if (server_version >= 180000)\n\nnit - tweak the comments to name the publication parameter properly.\n\n~~~\n\n8.\nforeach(lc, MySubscription->publications)\n{\nif (foreach_current_index(lc) > 0)\nappendStringInfoString(&pub_names, \", \");\nappendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));\n}\n\nI know this is existing code, but shouldn't all 
this be done by using\nthe purpose-built function 'get_publications_str'\n\n~~~\n\n9.\n+ ereport(ERROR,\n+ errcode(ERRCODE_CONNECTION_FAILURE),\n+ errmsg(\"could not fetch gencolumns information from publication list: %s\",\n+ pub_names.data));\n\nand\n\n+ errcode(ERRCODE_UNDEFINED_OBJECT),\n+ errmsg(\"failed to fetch tuple for gencols from publication list: %s\",\n+ pub_names.data));\n\nnit - /gencolumns information/generated column publication\ninformation/ to make the errmsg more human-readable\n\n~~~\n\n10.\n+ bool gencols_allowed = server_version >= 180000 && hasgencolpub;\n+\n+ if (!gencols_allowed)\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n\nCan the 'gencols_allowed' var be removed, and the condition just be\nreplaced with if (!has_pub_with_pubgencols)? It seems equivalent\nunless I am mistaken.\n\n======\n\nPlease refer to the attachment which implements some of the nits\nmentioned above.\n\n======\n[1] https://www.postgresql.org/message-id/CAD21AoBun9crSWaxteMqyu8A_zme2ppa2uJvLJSJC2E3DJxQVA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 18 Sep 2024 13:28:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Aug 29, 2024 at 8:44 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 28, 2024 at 1:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > As Euler mentioned earlier, I think it's a decision not to replicate\n> > > generated columns because we don't know the target table on the\n> > > subscriber has the same expression and there could be locale issues\n> > > even if it looks the same. I can see that a benefit of this proposal\n> > > would be to save cost to compute generated column values if the user\n> > > wants the target table on the subscriber to have exactly the same data\n> > > as the publisher's one. 
Are there other benefits or use cases?\n> > >\n> >\n> > The cost is one but the other is the user may not want the data to be\n> > different based on volatile functions like timeofday()\n>\n> Shouldn't the generation expression be immutable?\n>\n> > or the table on\n> > subscriber won't have the column marked as generated.\n>\n> Yeah, it would be another use case.\n>\n\nWhile speaking with one of the decoding output plugin users, I learned\nthat this feature will be useful when replicating data to a\nnon-postgres database using the plugin output, especially when the\nother database doesn't have a generated column concept.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Sep 2024 14:10:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 17, 2024 at 12:04 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2024 at 4:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 16, 2024 at 8:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > I thought that the option \"publish_generated_columns\" is more related\n> > > to \"column lists\" than \"row filters\".\n> > >\n> > > Let's say table 't1' has columns 'a', 'b', 'c', 'gen1', 'gen2'.\n> > >\n> >\n> > > And\n> > > PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > is equivalent to\n> > > PUBLICATION pub2 FOR TABLE t1(a,b,c);\n> >\n> > This makes sense to me as it preserves the current behavior.\n> >\n> > > Then:\n> > > PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > is equivalent to\n> > > PUBLICATION pub1 FOR TABLE t1(a,b,c,gen1,gen2);\n> >\n> > This also makes sense. 
It would also include future generated columns.\n> >\n> > > So, I would expect this to fail because the SUBSCRIPTION docs say\n> > > \"Subscriptions having several publications in which the same table has\n> > > been published with different column lists are not supported.\"\n> >\n> > So I agree that it would raise an error if users subscribe to both\n> > pub1 and pub2.\n> >\n> > And looking back at your examples,\n> >\n> > > > > e.g.1\n> > > > > -----\n> > > > > CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > > -----\n> > > > >\n> > > > > e.g.2\n> > > > > -----\n> > > > > CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> > > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > > -----\n> >\n> > Both examples would not be supported.\n> >\n> > > > >\n> > > > > The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> > > > > several publications in which the same table has been published with\n> > > > > different column lists are not supported.\"\n> > > > >\n> > > > > Perhaps the user is supposed to deduce that the example above would\n> > > > > work OK if table 't1' has no generated cols. OTOH, if it did have\n> > > > > generated cols then the PUBLICATION column lists must be different and\n> > > > > therefore it is \"not supported\" (??).\n> > > >\n> > > > With the patch, how should this feature work when users specify a\n> > > > generated column to the column list and set publish_generated_column =\n> > > > false, in the first place? raise an error (as we do today)? or always\n> > > > send NULL?\n> > >\n> > > For this scenario, I suggested (see [1] #3) that the code could give a\n> > > WARNING. 
As I wrote up-thread: This combination doesn't seem\n> > > like something a user would do intentionally, so just silently\n> > > ignoring it (which the current patch does) is likely going to give\n> > > someone unexpected results/grief.\n> >\n> > It gives a WARNING, and then publishes the specified generated column\n> > data (even if publish_generated_column = false)?\n\n\nI think that the column list should take priority and we should\npublish the generated column if it is mentioned in irrespective of\nthe option.\n\n> > If so, it would mean\n> > that specifying the generated column to the column list means to\n> > publish its data regardless of the publish_generated_column parameter\n> > value.\n> >\n>\n> No. I meant only it can give the WARNING to tell the user user \"Hey,\n> there is a conflict here because you said publish_generated_column=\n> false, but you also specified gencols in the column list\".\n>\n\nUsers can use a publication like \"create publication pub1 for table\nt1(c1, c2), t2;\" where they want t1's generated column to be published\nbut not for t2. 
They can specify the generated column name in the\ncolumn list of t1 in that case even though the rest of the tables\nwon't publish generated columns.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Sep 2024 15:02:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2024 at 12:04 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Sep 17, 2024 at 4:15 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 16, 2024 at 8:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > I thought that the option \"publish_generated_columns\" is more related\n> > > > to \"column lists\" than \"row filters\".\n> > > >\n> > > > Let's say table 't1' has columns 'a', 'b', 'c', 'gen1', 'gen2'.\n> > > >\n> > >\n> > > > And\n> > > > PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > is equivalent to\n> > > > PUBLICATION pub2 FOR TABLE t1(a,b,c);\n> > >\n> > > This makes sense to me as it preserves the current behavior.\n> > >\n> > > > Then:\n> > > > PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > > is equivalent to\n> > > > PUBLICATION pub1 FOR TABLE t1(a,b,c,gen1,gen2);\n> > >\n> > > This also makes sense. 
It would also include future generated columns.\n> > >\n> > > > So, I would expect this to fail because the SUBSCRIPTION docs say\n> > > > \"Subscriptions having several publications in which the same table has\n> > > > been published with different column lists are not supported.\"\n> > >\n> > > So I agree that it would raise an error if users subscribe to both\n> > > pub1 and pub2.\n> > >\n> > > And looking back at your examples,\n> > >\n> > > > > > e.g.1\n> > > > > > -----\n> > > > > > CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> > > > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > > > -----\n> > > > > >\n> > > > > > e.g.2\n> > > > > > -----\n> > > > > > CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> > > > > > CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> > > > > > CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> > > > > > -----\n> > >\n> > > Both examples would not be supported.\n> > >\n> > > > > >\n> > > > > > The CREATE SUBSCRIPTION docs [1] only says \"Subscriptions having\n> > > > > > several publications in which the same table has been published with\n> > > > > > different column lists are not supported.\"\n> > > > > >\n> > > > > > Perhaps the user is supposed to deduce that the example above would\n> > > > > > work OK if table 't1' has no generated cols. OTOH, if it did have\n> > > > > > generated cols then the PUBLICATION column lists must be different and\n> > > > > > therefore it is \"not supported\" (??).\n> > > > >\n> > > > > With the patch, how should this feature work when users specify a\n> > > > > generated column to the column list and set publish_generated_column =\n> > > > > false, in the first place? raise an error (as we do today)? 
or always\n> > > > > send NULL?\n> > > >\n> > > > For this scenario, I suggested (see [1] #3) that the code could give a\n> > > > WARNING. As I wrote up-thread: This combination doesn't seem\n> > > > like something a user would do intentionally, so just silently\n> > > > ignoring it (which the current patch does) is likely going to give\n> > > > someone unexpected results/grief.\n> > >\n> > > It gives a WARNING, and then publishes the specified generated column\n> > > data (even if publish_generated_column = false)?\n>\n>\n> I think that the column list should take priority and we should\n> publish the generated column if it is mentioned in irrespective of\n> the option.\n\nAgreed.\n\n>\n> > > If so, it would mean\n> > > that specifying the generated column to the column list means to\n> > > publish its data regardless of the publish_generated_column parameter\n> > > value.\n> > >\n> >\n> > No. I meant only it can give the WARNING to tell the user user \"Hey,\n> > there is a conflict here because you said publish_generated_column=\n> > false, but you also specified gencols in the column list\".\n> >\n>\n> Users can use a publication like \"create publication pub1 for table\n> t1(c1, c2), t2;\" where they want t1's generated column to be published\n> but not for t2. They can specify the generated column name in the\n> column list of t1 in that case even though the rest of the tables\n> won't publish generated columns.\n\nAgreed.\n\nI think that users can use the publish_generated_column option when\nthey want to publish all generated columns, instead of specifying all\nthe columns in the column list. It's another advantage of this option\nthat it will also include the future generated columns.\n\nGiven that we publish the generated columns if they are mentioned in\nthe column list, can we separate the patch into two if it helps\nreviews? One is to allow logical replication to publish generated\ncolumns if they are explicitly mentioned in the column list. 
The\nsecond patch is to introduce the publish_generated_columns option.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 10:25:42 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n...\n> > I think that the column list should take priority and we should\n> > publish the generated column if it is mentioned in irrespective of\n> > the option.\n>\n> Agreed.\n>\n> >\n...\n> >\n> > Users can use a publication like \"create publication pub1 for table\n> > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > but not for t2. They can specify the generated column name in the\n> > column list of t1 in that case even though the rest of the tables\n> > won't publish generated columns.\n>\n> Agreed.\n>\n> I think that users can use the publish_generated_column option when\n> they want to publish all generated columns, instead of specifying all\n> the columns in the column list. It's another advantage of this option\n> that it will also include the future generated columns.\n>\n\nOK. Let me give some examples below to help understand this idea.\n\nPlease correct me if these are incorrect.\n\n======\n\nAssuming these tables:\n\nt1(a,b,gen1,gen2)\nt2(c,d,gen1,gen2)\n\nExamples, when publish_generated_columns=false:\n\nCREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n(publish_generated_columns=false)\nt1 -> publishes a, b, gen2 (e.g. what column list says)\nt2 -> publishes c, d\n\nCREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=false)\nt1 -> publishes a, b\nt2 -> publishes gen1 (e.g. 
what column list says)\n\nCREATE PUBLICATION pub1 FOR t1, t2 WITH (publish_generated_columns=false)\nt1 -> publishes a, b\nt2 -> publishes c, d\n\nCREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=false)\nt1 -> publishes a, b\nt2 -> publishes c, d\n\n~~\n\nExamples, when publish_generated_columns=true:\n\nCREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n(publish_generated_columns=true)\nt1 -> publishes a, b, gen2 (e.g. what column list says)\nt2 -> publishes c, d + ALSO gen1, gen2\n\nCREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\nt1 -> publishes a, b + ALSO gen1, gen2\nt2 -> publishes gen1 (e.g. what column list says)\n\nCREATE PUBLICATION pub1 FOR t1, t2 WITH (publish_generated_columns=true)\nt1 -> publishes a, b + ALSO gen1, gen2\nt2 -> publishes c, d + ALSO gen1, gen2\n\nCREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=true)\nt1 -> publishes a, b + ALSO gen1, gen2\nt2 -> publishes c, d + ALSO gen1, gen2\n\n======\n\nThe idea LGTM, although now the parameter name\n('publish_generated_columns') seems a bit misleading since sometimes\ngenerated columns get published \"irrespective of the option\".\n\nSo, I think the original parameter name 'include_generated_columns'\nmight be better here because IMO \"include\" seems more like \"add them\nif they are not already specified\", which is exactly what this idea is\ndoing.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 20 Sep 2024 08:46:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Sep 20, 2024 at 4:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Users can use a 
publication like \"create publication pub1 for table\n> > > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > > but not for t2. They can specify the generated column name in the\n> > > column list of t1 in that case even though the rest of the tables\n> > > won't publish generated columns.\n> >\n> > Agreed.\n> >\n> > I think that users can use the publish_generated_column option when\n> > they want to publish all generated columns, instead of specifying all\n> > the columns in the column list. It's another advantage of this option\n> > that it will also include the future generated columns.\n> >\n>\n> OK. Let me give some examples below to help understand this idea.\n>\n> Please correct me if these are incorrect.\n>\n> Examples, when publish_generated_columns=true:\n>\n> CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> (publish_generated_columns=true)\n> t1 -> publishes a, b, gen2 (e.g. what column list says)\n> t2 -> publishes c, d + ALSO gen1, gen2\n>\n> CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> t1 -> publishes a, b + ALSO gen1, gen2\n> t2 -> publishes gen1 (e.g. what column list says)\n>\n\nThese two could be controversial because one could expect that if\n\"publish_generated_columns=true\" then publish generated columns\nirrespective of whether they are mentioned in column_list. 
I am of the\nopinion that column_list should take priority and the results should be as\nmentioned by you but let us see if anyone thinks otherwise.\n\n>\n> ======\n>\n> The idea LGTM, although now the parameter name\n> ('publish_generated_columns') seems a bit misleading since sometimes\n> generated columns get published \"irrespective of the option\".\n>\n> So, I think the original parameter name 'include_generated_columns'\n> might be better here because IMO \"include\" seems more like \"add them\n> if they are not already specified\", which is exactly what this idea is\n> doing.\n>\n\nI still prefer 'publish_generated_columns' because it matches with\nother publication option names. One can also deduce from\n'include_generated_columns' that it adds all the generated columns even\nwhen some of them are specified in column_list.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 20 Sep 2024 09:55:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Sep 19, 2024 at 10:56 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> Given that we publish the generated columns if they are mentioned in\n> the column list, can we separate the patch into two if it helps\n> reviews? One is to allow logical replication to publish generated\n> columns if they are explicitly mentioned in the column list. The\n> second patch is to introduce the publish_generated_columns option.\n>\n\nIt sounds like a reasonable idea to me but I haven't looked at the\nfeasibility of the same. 
So, if it is possible without much effort, we\nshould split the patch as per your suggestion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 20 Sep 2024 10:02:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Sep 11, 2024 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are a some more review comments for patch v30-0001.\n>\n> ======\n> src/sgml/ref/create_publication.sgml\n>\n> 1.\n> + <para>\n> + If the publisher-side column is also a generated column\n> then this option\n> + has no effect; the publisher column will be filled as normal with the\n> + publisher-side computed or default data.\n> + </para>\n>\n> It should say \"subscriber-side\"; not \"publisher-side\". The same was\n> already reported by Sawada-San [1].\n>\n> ~~~\n>\n> 2.\n> + <para>\n> + This parameter can only be set <literal>true</literal> if\n> <literal>copy_data</literal> is\n> + set to <literal>false</literal>.\n> + </para>\n>\n> IMO this limitation should be addressed by patch 0001 like it was\n> already done in the previous patches (e.g. v22-0002). I think\n> Sawada-san suggested the same [1].\n>\n> Anyway, 'copy_data' is not a PUBLICATION option, so the fact it is\n> mentioned like this without any reference to the SUBSCRIPTION seems\n> like a cut/paste error from the previous implementation.\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> 3. 
pub_collist_validate\n> - if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> - ereport(ERROR,\n> - errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> - errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n> - colname));\n> -\n>\n> Instead of just removing this ERROR entirely here, I thought it would\n> be more user-friendly to give a WARNING if the PUBLICATION's explicit\n> column list includes generated cols when the option\n> \"publish_generated_columns\" is false. This combination doesn't seem\n> like something a user would do intentionally, so just silently\n> ignoring it (like the current patch does) is likely going to give\n> someone unexpected results/grief.\n>\n> ======\n> src/backend/replication/logical/proto.c\n>\n> 4. logicalrep_write_tuple, and logicalrep_write_attrs:\n>\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> Why aren't you also checking the new PUBLICATION option here and\n> skipping all gencols if the \"publish_generated_columns\" option is\n> false? Or is the BMS of pgoutput_column_list_init handling this case?\n> Maybe there should be an Assert for this?\n>\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 5. send_relation_and_attrs\n>\n> - if (att->attisdropped || att->attgenerated)\n> + if (att->attisdropped)\n> continue;\n>\n> Same question as #4.\n>\n> ~~~\n>\n> 6. prepare_all_columns_bms and pgoutput_column_list_init\n>\n> + if (att->attgenerated && !pub->pubgencolumns)\n> + cols = bms_del_member(cols, i + 1);\n>\n> IIUC, the algorithm seems overly tricky filling the BMS with all\n> columns, before straight away conditionally removing the generated\n> columns. Can't it be refactored to assign all the correct columns\n> up-front, to avoid calling bms_del_member()?\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 7. 
getPublications\n>\n> IIUC, there is lots of missing SQL code here (for all older versions)\n> that should be saying \"false AS pubgencolumns\".\n> e.g. compare the SQL with how \"false AS pubviaroot\" is used.\n>\n> ======\n> src/bin/pg_dump/t/002_pg_dump.pl\n>\n> 8. Missing tests?\n>\n> I expected to see a pg_dump test for this new PUBLICATION option.\n>\n> ======\n> src/test/regress/sql/publication.sql\n>\n> 9. Missing tests?\n>\n> How about adding another test case that checks this new option must be\n> \"Boolean\"?\n>\n> ~~~\n>\n> 10. Missing tests?\n>\n> --- error: generated column \"d\" can't be in list\n> +-- ok: generated columns can be in the list too\n> ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n> +ALTER PUBLICATION testpub_fortable DROP TABLE testpub_tbl5;\n>\n> (see my earlier comment #3)\n>\n> IMO there should be another test case for a WARNING here if the user\n> attempts to include generated column 'd' in an explicit PUBLICATION\n> column list while the \"publish_generated-columns\" is false.\n>\n> ======\n> [1] https://www.postgresql.org/message-id/CAD21AoA-tdTz0G-vri8KM2TXeFU8RCDsOpBXUBCgwkfokF7%3DjA%40mail.gmail.com\n>\n\nI have fixed all the comments. 
The attached patches contain the desired changes.\nAlso the merging of 0001 and 0002 can be done once there are no\ncomments on the patch to help in reviewing.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 20 Sep 2024 17:15:27 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 17, 2024 at 1:14 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for v31-0001 (for the docs only)\n>\n> There may be some overlap here with some comments already made for\n> v30-0001 which are not yet addressed in v31-0001.\n>\n> ======\n> Commit message\n>\n> 1.\n> When introducing the 'publish_generated_columns' parameter, you must\n> also say this is a PUBLICATION parameter.\n>\n> ~~~\n>\n> 2.\n> With this enhancement, users can now include the 'include_generated_columns'\n> option when querying logical replication slots using either the pgoutput\n> plugin or the test_decoding plugin. 
This option, when set to 'true' or '1',\n> instructs the replication system to include generated column information\n> and data in the replication stream.\n>\n> ~\n>\n> The above is stale information because it still refers to the old name\n> 'include_generated_columns', and to test_decoding which was already\n> removed in this patch.\n>\n> ======\n> doc/src/sgml/ddl.sgml\n>\n> 3.\n> + Generated columns may be skipped during logical replication\n> according to the\n> + <command>CREATE PUBLICATION</command> option\n> + <link linkend=\"sql-createpublication-params-with-include-generated-columns\">\n> + <literal>publish_generated_columns</literal></link>.\n>\n> 3a.\n> nit - The linkend is based on the old name instead of the new name.\n>\n> 3b.\n> nit - Better to call this a parameter instead of an option because\n> that is what the CREATE PUBLICATION docs call it.\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 4.\n> + <varlistentry>\n> + <term>publish_generated_columns</term>\n> + <listitem>\n> + <para>\n> + Boolean option to enable generated columns. This option controls\n> + whether generated columns should be included in the string\n> + representation of tuples during logical decoding in PostgreSQL.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n>\n> Is this even needed anymore? Now that the implementation is using a\n> PUBLICATION parameter, isn't everything determined just by that\n> parameter? I don't see the reason why a protocol change is needed\n> anymore. 
And, if there is no protocol change needed, then this\n> documentation change is also not needed.\n>\n> ~~~~\n>\n> 5.\n> <para>\n> - Next, the following message part appears for each column included in\n> - the publication (except generated columns):\n> + Next, the following message parts appear for each column included in\n> + the publication (generated columns are excluded unless the parameter\n> + <link linkend=\"protocol-logical-replication-params\">\n> + <literal>publish_generated_columns</literal></link> specifies otherwise):\n> </para>\n>\n> Like the previous comment above, I think everything is now determined\n> by the PUBLICATION parameter. So maybe this should just be referring\n> to that instead.\n>\n> ======\n> doc/src/sgml/ref/create_publication.sgml\n>\n> 6.\n> + <varlistentry\n> id=\"sql-createpublication-params-with-include-generated-columns\">\n> + <term><literal>publish_generated_columns</literal>\n> (<type>boolean</type>)</term>\n> + <listitem>\n>\n> nit - the ID is based on the old parameter name.\n>\n> ~\n>\n> 7.\n> + <para>\n> + This option is only available for replicating generated\n> column data from the publisher\n> + to a regular, non-generated column in the subscriber.\n> + </para>\n>\n> IMO remove this paragraph. I really don't think you should be\n> mentioning the subscriber here at all. AFAIK this parameter is only\n> for determining if the generated column will be published or not. What\n> happens at the other end (e.g. logic whether it gets ignored or not by\n> the subscriber) is more like a matrix of behaviours that could be\n> documented in the \"Logical Replication\" section. But not here.\n>\n> (I removed this in my nitpicks attachment)\n>\n> ~~~\n>\n> 8.\n> + <para>\n> + This parameter can only be set <literal>true</literal> if\n> <literal>copy_data</literal> is\n> + set to <literal>false</literal>.\n> + </para>\n>\n> IMO remove this paragraph too. 
The user can create a PUBLICATION\n> before a SUBSCRIPTION even exists so to say it \"can only be set...\" is\n> not correct. Sure, your patch 0001 does not support the COPY of\n> generated columns but if you want to document that then it should be\n> documented in the CREATE SUBSCRIBER docs. But not here.\n>\n> (I removed this in my nitpicks attachment)\n>\n> TBH, it would be better if patches 0001 and 0002 were merged then you\n> can avoid all this. IIUC they were only separate in the first place\n> because 2 different people wrote them. It is not making reviews easier\n> with them split.\n>\n> ======\n>\n> Please see the attachment which implements some of the nits above.\n>\n\nI have addressed all the comments in the v32-0001 Patch. Please refer\nto the updated v32-0001 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjKkoaS1oMsFvPRFB9nPSVC5p_D4Kgq5XB9Y2B2xU7smbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 20 Sep 2024 17:21:59 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 17, 2024 at 3:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Review comments for v31-0001.\n>\n> (I tried to give only new comments, but there might be some overlap\n> with comments I previously made for v30-0001)\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> 1.\n> +\n> + if (publish_generated_columns_given)\n> + {\n> + values[Anum_pg_publication_pubgencolumns - 1] =\n> BoolGetDatum(publish_generated_columns);\n> + replaces[Anum_pg_publication_pubgencolumns - 1] = true;\n> + }\n>\n> nit - unnecessary whitespace above here.\n>\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 2. prepare_all_columns_bms\n>\n> + /* Iterate the cols until generated columns are found. 
*/\n> + cols = bms_add_member(cols, i + 1);\n>\n> How does the comment relate to the statement that follows it?\n>\n> ~~~\n>\n> 3.\n> + * Skip generated column if pubgencolumns option was not\n> + * specified.\n>\n> nit - /pubgencolumns option/publish_generated_columns parameter/\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 4.\n> getPublications:\n>\n> nit - /i_pub_gencolumns/i_pubgencols/ (it's the same information but simpler)\n>\n> ======\n> src/bin/pg_dump/pg_dump.h\n>\n> 5.\n> + bool pubgencolumns;\n> } PublicationInfo;\n>\n> nit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n>\n> ======\n> vsrc/bin/psql/describe.c\n>\n> 6.\n> bool has_pubviaroot;\n> + bool has_pubgencol;\n>\n> nit - /has_pubgencol/has_pubgencols/ (plural consistency)\n>\n> ======\n> src/include/catalog/pg_publication.h\n>\n> 7.\n> + /* true if generated columns data should be published */\n> + bool pubgencolumns;\n> } FormData_pg_publication;\n>\n> nit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n>\n> ~~~\n>\n> 8.\n> + bool pubgencolumns;\n> PublicationActions pubactions;\n> } Publication;\n>\n> nit - /pubgencolumns/pubgencols/ (it's the same information but simpler)\n>\n> ======\n> src/test/regress/sql/publication.sql\n>\n> 9.\n> +-- Test the publication with or without 'PUBLISH_GENERATED_COLUMNS' parameter\n> +SET client_min_messages = 'ERROR';\n> +CREATE PUBLICATION pub1 FOR ALL TABLES WITH (PUBLISH_GENERATED_COLUMNS=1);\n> +\\dRp+ pub1\n> +\n> +CREATE PUBLICATION pub2 FOR ALL TABLES WITH (PUBLISH_GENERATED_COLUMNS=0);\n> +\\dRp+ pub2\n>\n> 9a.\n> nit - Use lowercase for the parameters.\n>\n> ~\n>\n> 9b.\n> nit - Fix the comment to say what the test is actually doing:\n> \"Test the publication 'publish_generated_columns' parameter enabled or disabled\"\n>\n> ======\n> src/test/subscription/t/031_column_list.pl\n>\n> 10.\n> Later I think you should add another test here to cover the scenario\n> that I was discussing with Sawada-San -- e.g. 
when there are 2\n> publications for the same table subscribed by just 1 subscription but\n> having different values of the 'publish_generated_columns' for the\n> publications.\n>\n\nI have addressed all the comments in the v32-0001 Patch. Please refer\nto the updated v32-0001 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjKkoaS1oMsFvPRFB9nPSVC5p_D4Kgq5XB9Y2B2xU7smbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 20 Sep 2024 17:23:50 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Sep 18, 2024 at 8:58 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are my review comments for patch v31-0002.\n>\n> ======\n>\n> 1. General.\n>\n> IMO patches 0001 and 0002 should be merged when next posted. IIUC the\n> reason for the split was only because there were 2 different authors\n> but that seems to be not relevant anymore.\n>\n> ======\n> Commit message\n>\n> 2.\n> When 'copy_data' is true, during the initial sync, the data is replicated from\n> the publisher to the subscriber using the COPY command. 
The normal COPY\n> command does not copy generated columns, so when 'publish_generated_columns'\n> is true, we need to copy using the syntax:\n> 'COPY (SELECT column_name FROM table_name) TO STDOUT'.\n>\n> ~\n>\n> 2a.\n> Should clarify that 'copy_data' is a SUBSCRIPTION parameter.\n>\n> 2b.\n> Should clarify that 'publish_generated_columns' is a PUBLICATION parameter.\n>\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> make_copy_attnamelist:\n>\n> 3.\n> - for (i = 0; i < rel->remoterel.natts; i++)\n> + desc = RelationGetDescr(rel->localrel);\n> + localgenlist = palloc0(rel->remoterel.natts * sizeof(bool));\n>\n> Each time I review this code I am tricked into thinking it is wrong to\n> use rel->remoterel.natts here for the localgenlist. AFAICT the code is\n> actually fine because you do not store *all* the subscriber gencols in\n> 'localgenlist' -- you only store those with matching names on the\n> publisher table. It might be good if you could add an explanatory\n> comment about that to prevent any future doubts.\n>\n> ~~~\n>\n> 4.\n> + if (!remotegenlist[remote_attnum])\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"logical replication target relation \\\"%s.%s\\\" has a\n> generated column \\\"%s\\\" \"\n> + \"but corresponding column on source relation is not a generated column\",\n> + rel->remoterel.nspname, rel->remoterel.relname, NameStr(attr->attname))));\n>\n> This error message has lots of good information. OTOH, I think when\n> copy_data=false the error would report the subscriber column just as\n> \"missing\", which is maybe less helpful. Perhaps that other\n> copy_data=false \"missing\" case can be improved to share the same error\n> message that you have here.\n>\n\nThis comment is still open. 
Will fix this in the next set of patches.\n\n> ~~~\n>\n> fetch_remote_table_info:\n>\n> 5.\n> IIUC, this logic needs to be more sophisticated to handle the case\n> that was being discussed earlier with Sawada-san [1]. e.g. when the\n> same table has gencols but there are multiple subscribed publications\n> where the 'publish_generated_columns' parameter differs.\n>\n> Also, you'll need test cases for this scenario, because it is too\n> difficult to judge correctness just by visual inspection of the code.\n>\n> ~~~~\n>\n> 6.\n> nit - Change 'hasgencolpub' to 'has_pub_with_pubgencols' for\n> readability, and initialize it to 'false' to make it easy to use\n> later.\n>\n> ~~~\n>\n> 7.\n> - * Get column lists for each relation.\n> + * Get column lists for each relation and check if any of the publication\n> + * has generated column option.\n>\n> and\n>\n> + /* Check if any of the publication has generated column option */\n> + if (server_version >= 180000)\n>\n> nit - tweak the comments to name the publication parameter properly.\n>\n> ~~~\n>\n> 8.\n> foreach(lc, MySubscription->publications)\n> {\n> if (foreach_current_index(lc) > 0)\n> appendStringInfoString(&pub_names, \", \");\n> appendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));\n> }\n>\n> I know this is existing code, but shouldn't all this be done by using\n> the purpose-built function 'get_publications_str'\n>\n> ~~~\n>\n> 9.\n> + ereport(ERROR,\n> + errcode(ERRCODE_CONNECTION_FAILURE),\n> + errmsg(\"could not fetch gencolumns information from publication list: %s\",\n> + pub_names.data));\n>\n> and\n>\n> + errcode(ERRCODE_UNDEFINED_OBJECT),\n> + errmsg(\"failed to fetch tuple for gencols from publication list: %s\",\n> + pub_names.data));\n>\n> nit - /gencolumns information/generated column publication\n> information/ to make the errmsg more human-readable\n>\n> ~~~\n>\n> 10.\n> + bool gencols_allowed = server_version >= 180000 && hasgencolpub;\n> +\n> + if (!gencols_allowed)\n> + 
appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n>\n> Can the 'gencols_allowed' var be removed, and the condition just be\n> replaced with if (!has_pub_with_pubgencols)? It seems equivalent\n> unless I am mistaken.\n>\n> ======\n>\n> Please refer to the attachment which implements some of the nits\n> mentioned above.\n>\n> ======\n> [1] https://www.postgresql.org/message-id/CAD21AoBun9crSWaxteMqyu8A_zme2ppa2uJvLJSJC2E3DJxQVA%40mail.gmail.com\n>\n\nI have addressed the comments in the v32-0002 Patch. Please refer to\nthe updated v32-0002 Patch here in [1]. See [1] for the changes added.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjKkoaS1oMsFvPRFB9nPSVC5p_D4Kgq5XB9Y2B2xU7smbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 20 Sep 2024 17:25:57 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, Sep 19, 2024 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 4:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Users can use a publication like \"create publication pub1 for table\n> > > > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > > > but not for t2. They can specify the generated column name in the\n> > > > column list of t1 in that case even though the rest of the tables\n> > > > won't publish generated columns.\n> > >\n> > > Agreed.\n> > >\n> > > I think that users can use the publish_generated_column option when\n> > > they want to publish all generated columns, instead of specifying all\n> > > the columns in the column list. 
It's another advantage of this option\n> > > that it will also include the future generated columns.\n> > >\n> >\n> > OK. Let me give some examples below to help understand this idea.\n> >\n> > Please correct me if these are incorrect.\n> >\n> > Examples, when publish_generated_columns=true:\n> >\n> > CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> > (publish_generated_columns=true)\n> > t1 -> publishes a, b, gen2 (e.g. what column list says)\n> > t2 -> publishes c, d + ALSO gen1, gen2\n> >\n> > CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> > t1 -> publishes a, b + ALSO gen1, gen2\n> > t2 -> publishes gen1 (e.g. what column list says)\n> >\n>\n> These two could be controversial because one could expect that if\n> \"publish_generated_columns=true\" then publish generated columns\n> irrespective of whether they are mentioned in column_list. I am of the\n> opinion that column_list should take priority the results should be as\n> mentioned by you but let us see if anyone thinks otherwise.\n\nI agree with Amit. 
We also publish t2's future generated column in the\nfirst example and t1's future generated columns in the second example.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Sep 2024 14:49:17 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, Sep 20, 2024 at 2:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 4:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Users can use a publication like \"create publication pub1 for table\n> > > > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > > > but not for t2. They can specify the generated column name in the\n> > > > column list of t1 in that case even though the rest of the tables\n> > > > won't publish generated columns.\n> > >\n> > > Agreed.\n> > >\n> > > I think that users can use the publish_generated_column option when\n> > > they want to publish all generated columns, instead of specifying all\n> > > the columns in the column list. It's another advantage of this option\n> > > that it will also include the future generated columns.\n> > >\n> >\n> > OK. Let me give some examples below to help understand this idea.\n> >\n> > Please correct me if these are incorrect.\n> >\n> > Examples, when publish_generated_columns=true:\n> >\n> > CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> > (publish_generated_columns=true)\n> > t1 -> publishes a, b, gen2 (e.g. 
what column list says)\n> > t2 -> publishes c, d + ALSO gen1, gen2\n> >\n> > CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> > t1 -> publishes a, b + ALSO gen1, gen2\n> > t2 -> publishes gen1 (e.g. what column list says)\n> >\n>\n> These two could be controversial because one could expect that if\n> \"publish_generated_columns=true\" then publish generated columns\n> irrespective of whether they are mentioned in column_list. I am of the\n> opinion that column_list should take priority the results should be as\n> mentioned by you but let us see if anyone thinks otherwise.\n>\n> >\n> > ======\n> >\n> > The idea LGTM, although now the parameter name\n> > ('publish_generated_columns') seems a bit misleading since sometimes\n> > generated columns get published \"irrespective of the option\".\n> >\n> > So, I think the original parameter name 'include_generated_columns'\n> > might be better here because IMO \"include\" seems more like \"add them\n> > if they are not already specified\", which is exactly what this idea is\n> > doing.\n> >\n>\n> I still prefer 'publish_generated_columns' because it matches with\n> other publication option names. One can also deduce from\n> 'include_generated_columns' that add all the generated columns even\n> when some of them are specified in column_list.\n>\n\nFair point. Anyway, to avoid surprises it will be important for the\nprecedence rules to be documented clearly (probably with some\nexamples),\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 23 Sep 2024 08:39:53 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Sat, Sep 21, 2024 at 3:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2024 at 9:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > OK. 
Let me give some examples below to help understand this idea.\n> > >\n> > > Please correct me if these are incorrect.\n> > >\n> > > Examples, when publish_generated_columns=true:\n> > >\n> > > CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> > > (publish_generated_columns=true)\n> > > t1 -> publishes a, b, gen2 (e.g. what column list says)\n> > > t2 -> publishes c, d + ALSO gen1, gen2\n> > >\n> > > CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> > > t1 -> publishes a, b + ALSO gen1, gen2\n> > > t2 -> publishes gen1 (e.g. what column list says)\n> > >\n> >\n> > These two could be controversial because one could expect that if\n> > \"publish_generated_columns=true\" then publish generated columns\n> > irrespective of whether they are mentioned in column_list. I am of the\n> > opinion that column_list should take priority the results should be as\n> > mentioned by you but let us see if anyone thinks otherwise.\n>\n> I agree with Amit. We also publish t2's future generated column in the\n> first example and t1's future generated columns in the second example.\n>\n\nRight, it would be good to have at least one test that shows future\ngenerated columns also get published wherever applicable (like where\ncolumn_list is not given and publish_generated_columns is true).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Sep 2024 08:15:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Sep 23, 2024 at 4:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 2:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 4:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> 
wrote:\n> > > > >\n> > > > >\n> > > > > Users can use a publication like \"create publication pub1 for table\n> > > > > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > > > > but not for t2. They can specify the generated column name in the\n> > > > > column list of t1 in that case even though the rest of the tables\n> > > > > won't publish generated columns.\n> > > >\n> > > > Agreed.\n> > > >\n> > > > I think that users can use the publish_generated_column option when\n> > > > they want to publish all generated columns, instead of specifying all\n> > > > the columns in the column list. It's another advantage of this option\n> > > > that it will also include the future generated columns.\n> > > >\n> > >\n> > > OK. Let me give some examples below to help understand this idea.\n> > >\n> > > Please correct me if these are incorrect.\n> > >\n> > > Examples, when publish_generated_columns=true:\n> > >\n> > > CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> > > (publish_generated_columns=true)\n> > > t1 -> publishes a, b, gen2 (e.g. what column list says)\n> > > t2 -> publishes c, d + ALSO gen1, gen2\n> > >\n> > > CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> > > t1 -> publishes a, b + ALSO gen1, gen2\n> > > t2 -> publishes gen1 (e.g. what column list says)\n> > >\n> >\n> > These two could be controversial because one could expect that if\n> > \"publish_generated_columns=true\" then publish generated columns\n> > irrespective of whether they are mentioned in column_list. 
I am of the\n> > opinion that column_list should take priority the results should be as\n> > mentioned by you but let us see if anyone thinks otherwise.\n> >\n> > >\n> > > ======\n> > >\n> > > The idea LGTM, although now the parameter name\n> > > ('publish_generated_columns') seems a bit misleading since sometimes\n> > > generated columns get published \"irrespective of the option\".\n> > >\n> > > So, I think the original parameter name 'include_generated_columns'\n> > > might be better here because IMO \"include\" seems more like \"add them\n> > > if they are not already specified\", which is exactly what this idea is\n> > > doing.\n> > >\n> >\n> > I still prefer 'publish_generated_columns' because it matches with\n> > other publication option names. One can also deduce from\n> > 'include_generated_columns' that add all the generated columns even\n> > when some of them are specified in column_list.\n> >\n>\n> Fair point. Anyway, to avoid surprises it will be important for the\n> precedence rules to be documented clearly (probably with some\n> examples),\n>\n\nYeah, one or two examples would be good, but we can have a separate\ndoc patch that has clearly mentioned all the rules.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Sep 2024 08:17:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Thu, 12 Sept 2024 at 11:01, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Because this feature is now being implemented as a PUBLICATION option,\n> there is another scenario that might need consideration; I am thinking\n> about where the same table is published by multiple PUBLICATIONS (with\n> different option settings) that are subscribed by a single\n> SUBSCRIPTION.\n>\n> e.g.1\n> -----\n> CREATE PUBLICATION pub1 FOR TABLE t1 WITH (publish_generated_columns = true);\n> CREATE PUBLICATION pub2 FOR TABLE t1 WITH 
(publish_generated_columns = false);\n> CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> -----\n>\n> e.g.2\n> -----\n> CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns = true);\n> CREATE PUBLICATION pub2 FOR TABLE t1 WITH (publish_generated_columns = false);\n> CREATE SUBSCRIPTION sub ... PUBLICATIONS pub1,pub2;\n> -----\n>\n> Do you know if this case is supported? If yes, then which publication\n> option value wins?\n\nI have verified the various scenarios discussed here and the patch\nworks as expected:\nTest presetup:\n-- publisher\nCREATE TABLE t1 (a int PRIMARY KEY, b int, c int, gen1 int GENERATED\nALWAYS AS (a * 2) STORED, gen2 int GENERATED ALWAYS AS (a * 2)\nSTORED);\n-- Subscriber\nCREATE TABLE t1 (a int PRIMARY KEY, b int, c int, d int, e int);\n\nTest1: Subscriber will have only non-generated columns a,b,c\nreplicated from publisher:\ncreate publication pub1 for all tables with (\npublish_generated_columns = false);\nINSERT INTO t1 (a,b,c) VALUES (1,1,1);\n\n--Subscriber will have only non-generated columns a,b,c replicated\nfrom publisher:\nsubscriber=# select * from t1;\n a | b | c | d | e\n---+---+---+---+---\n 1 | 1 | 1 | |\n(1 row)\n\nTest2: Subscriber will include generated columns a,b,c replicated from\npublisher:\ncreate publication pub1 for all tables with ( publish_generated_columns = true);\nINSERT INTO t1 (a,b,c) VALUES (1,1,1);\n\n-- Subscriber will include generated columns a,b,c replicated from publisher:\nsubscriber=# select * from t1;\n a | b | c | d | e\n---+---+---+---+---\n 1 | 1 | 1 | 2 | 2\n(1 row)\n\nTest3: Cannot have subscription subscribing to publication with\npublish_generated_columns as true and false\n-- publisher\ncreate publication pub1 for all tables with (publish_generated_columns = false);\ncreate publication pub2 for all tables with (publish_generated_columns = true);\n\n-- subscriber\nsubscriber=# create subscription sub1 connection 'dbname=postgres\nhost=localhost port=5432' publication 
pub1,pub2;\nERROR: cannot use different column lists for table \"public.t1\" in\ndifferent publications\n\nTest4a: Warning thrown when a generated column is specified in column\nlist along with publish_generated_columns as false\n-- publisher\npostgres=# create publication pub1 for table t1(a,b,gen1) with (\npublish_generated_columns = false);\nWARNING: specified generated column \"gen1\" in publication column list\nfor publication with publish_generated_columns as false\nCREATE PUBLICATION\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 23 Sep 2024 17:27:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 20 Sept 2024 at 04:16, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Sep 19, 2024 at 2:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> ...\n> > > I think that the column list should take priority and we should\n> > > publish the generated column if it is mentioned in irrespective of\n> > > the option.\n> >\n> > Agreed.\n> >\n> > >\n> ...\n> > >\n> > > Users can use a publication like \"create publication pub1 for table\n> > > t1(c1, c2), t2;\" where they want t1's generated column to be published\n> > > but not for t2. They can specify the generated column name in the\n> > > column list of t1 in that case even though the rest of the tables\n> > > won't publish generated columns.\n> >\n> > Agreed.\n> >\n> > I think that users can use the publish_generated_column option when\n> > they want to publish all generated columns, instead of specifying all\n> > the columns in the column list. It's another advantage of this option\n> > that it will also include the future generated columns.\n> >\n>\n> OK. 
Let me give some examples below to help understand this idea.\n>\n> Please correct me if these are incorrect.\n>\n> ======\n>\n> Assuming these tables:\n>\n> t1(a,b,gen1,gen2)\n> t2(c,d,gen1,gen2)\n>\n> Examples, when publish_generated_columns=false:\n>\n> CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> (publish_generated_columns=false)\n> t1 -> publishes a, b, gen2 (e.g. what column list says)\n> t2 -> publishes c, d\n>\n> CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=false)\n> t1 -> publishes a, b\n> t2 -> publishes gen1 (e.g. what column list says)\n>\n> CREATE PUBLICATION pub1 FOR t1, t2 WITH (publish_generated_columns=false)\n> t1 -> publishes a, b\n> t2 -> publishes c, d\n>\n> CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=false)\n> t1 -> publishes a, b\n> t2 -> publishes c, d\n>\n> ~~\n>\n> Examples, when publish_generated_columns=true:\n>\n> CREATE PUBLICATION pub1 FOR t1(a,b,gen2), t2 WITH\n> (publish_generated_columns=true)\n> t1 -> publishes a, b, gen2 (e.g. what column list says)\n> t2 -> publishes c, d + ALSO gen1, gen2\n>\n> CREATE PUBLICATION pub1 FOR t1, t2(gen1) WITH (publish_generated_columns=true)\n> t1 -> publishes a, b + ALSO gen1, gen2\n> t2 -> publishes gen1 (e.g. 
what column list says)\n>\n> CREATE PUBLICATION pub1 FOR t1, t2 WITH (publish_generated_columns=true)\n> t1 -> publishes a, b + ALSO gen1, gen2\n> t2 -> publishes c, d + ALSO gen1, gen2\n>\n> CREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=true)\n> t1 -> publishes a, b + ALSO gen1, gen2\n> t2 -> publishes c, d + ALSO gen1, gen2\n>\n> ======\n>\n> The idea LGTM, although now the parameter name\n> ('publish_generated_columns') seems a bit misleading since sometimes\n> generated columns get published \"irrespective of the option\".\n>\n> So, I think the original parameter name 'include_generated_columns'\n> might be better here because IMO \"include\" seems more like \"add them\n> if they are not already specified\", which is exactly what this idea is\n> doing.\n>\n> Thoughts?\n\nI have verified the various scenarios discussed here and the patch\nworks as expected with v32 version patch shared at [1]:\n\nTest presetup:\n-- publisher\nCREATE TABLE t1 (a int PRIMARY KEY, b int, gen1 int GENERATED ALWAYS\nAS (a * 2) STORED, gen2 int GENERATED ALWAYS AS (a * 2) STORED);\nCREATE TABLE t2 (c int PRIMARY KEY, d int, gen1 int GENERATED ALWAYS\nAS (c * 2) STORED, gen2 int GENERATED ALWAYS AS (d * 2) STORED);\n\n-- subscriber\nCREATE TABLE t1 (a int PRIMARY KEY, b int, gen1 int, gen2 int);\nCREATE TABLE t2 (c int PRIMARY KEY, d int, gen1 int, gen2 int);\n\nTest1: Publisher replicates the column list data including generated\ncolumns even though publish_generated_columns option is false:\nPublisher:\nCREATE PUBLICATION pub1 FOR table t1, t2(gen1) WITH\n(publish_generated_columns=false)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n--t1 -> publishes a, b\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | |\n(1 row)\n\n--t2 -> publishes gen1 (e.g. 
what column list says)\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n | | 2 |\n(1 row)\n\nTest2: Publisher does not replicate gen column if\npublish_generated_columns option is false\nPublisher:\nCREATE PUBLICATION pub1 FOR table t1, t2 WITH (publish_generated_columns=false)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n--t1 -> publishes a, b\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | |\n(1 row)\n\n-- t2 -> publishes c, d\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n 1 | 1 | |\n(1 row)\n\nTest3: Publisher does not replicate gen column if\npublish_generated_columns option is false\nPublisher:\nCREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=false)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n--t1 -> publishes a, b\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | |\n(1 row)\n\n-- t2 -> publishes c, d\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n 1 | 1 | |\n(1 row)\n\nTest4: Publisher publishes only the data of the columns specified in\ncolumn list skipping other generated/non-generated columns:\nPublisher:\nCREATE PUBLICATION pub1 FOR table t1(a,b,gen2), t2 WITH\n(publish_generated_columns=true)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n-- t1 -> publishes a, b, gen2 (e.g. 
what column list says)\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | | 2\n(1 row)\n\n-- t2 -> publishes c, d + ALSO gen1, gen2\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 row)\n\n\nTest5: Publisher publishes only the data of the columns specified in\ncolumn list skipping other generated/non-generated columns:\nPublisher:\nCREATE PUBLICATION pub1 FOR table t1, t2(gen1) WITH\n(publish_generated_columns=true)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n-- t1 -> publishes a, b + ALSO gen1, gen2\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 row)\n\n-- t2 -> publishes gen1 (e.g. what column list says)\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n | | 2 |\n(1 row)\n\nTest6: Publisher replicates all columns if publish_generated_columns\nis enabled without column list\nPublisher:\nCREATE PUBLICATION pub1 FOR table t1, t2 WITH (publish_generated_columns=true)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n-- t1 -> publishes a, b + ALSO gen1, gen2\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 row)\n\n-- t2 -> publishes c, d + ALSO gen1, gen2\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 row)\n\nTest7: Publisher replicates all columns if publish_generated_columns\nis enabled without column list\nPublisher:\nCREATE PUBLICATION pub1 FOR ALL TABLES WITH (publish_generated_columns=true)\ninsert into t1 values(1,1);\ninsert into t2 values(1,1);\n\nSubscriber:\n-- t1 -> publishes a, b + ALSO gen1, gen2\nsubscriber=# select * from t1;\n a | b | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 row)\n\n-- t2 -> publishes c, d + ALSO gen1, gen2\nsubscriber=# select * from t2;\n c | d | gen1 | gen2\n---+---+------+------\n 1 | 1 | 2 | 2\n(1 
row)\n\n[1] - https://www.postgresql.org/message-id/CAHv8RjKkoaS1oMsFvPRFB9nPSVC5p_D4Kgq5XB9Y2B2xU7smbA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 23 Sep 2024 17:40:58 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 20 Sept 2024 at 17:15, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Wed, Sep 11, 2024 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are a some more review comments for patch v30-0001.\n> >\n> > ======\n> > src/sgml/ref/create_publication.sgml\n> >\n> > 1.\n> > + <para>\n> > + If the publisher-side column is also a generated column\n> > then this option\n> > + has no effect; the publisher column will be filled as normal with the\n> > + publisher-side computed or default data.\n> > + </para>\n> >\n> > It should say \"subscriber-side\"; not \"publisher-side\". The same was\n> > already reported by Sawada-San [1].\n> >\n> > ~~~\n> >\n> > 2.\n> > + <para>\n> > + This parameter can only be set <literal>true</literal> if\n> > <literal>copy_data</literal> is\n> > + set to <literal>false</literal>.\n> > + </para>\n> >\n> > IMO this limitation should be addressed by patch 0001 like it was\n> > already done in the previous patches (e.g. v22-0002). I think\n> > Sawada-san suggested the same [1].\n> >\n> > Anyway, 'copy_data' is not a PUBLICATION option, so the fact it is\n> > mentioned like this without any reference to the SUBSCRIPTION seems\n> > like a cut/paste error from the previous implementation.\n> >\n> > ======\n> > src/backend/catalog/pg_publication.c\n> >\n> > 3. 
pub_collist_validate\n> > - if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> > - ereport(ERROR,\n> > - errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> > - errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n> > - colname));\n> > -\n> >\n> > Instead of just removing this ERROR entirely here, I thought it would\n> > be more user-friendly to give a WARNING if the PUBLICATION's explicit\n> > column list includes generated cols when the option\n> > \"publish_generated_columns\" is false. This combination doesn't seem\n> > like something a user would do intentionally, so just silently\n> > ignoring it (like the current patch does) is likely going to give\n> > someone unexpected results/grief.\n> >\n> > ======\n> > src/backend/replication/logical/proto.c\n> >\n> > 4. logicalrep_write_tuple, and logicalrep_write_attrs:\n> >\n> > - if (att->attisdropped || att->attgenerated)\n> > + if (att->attisdropped)\n> > continue;\n> >\n> > Why aren't you also checking the new PUBLICATION option here and\n> > skipping all gencols if the \"publish_generated_columns\" option is\n> > false? Or is the BMS of pgoutput_column_list_init handling this case?\n> > Maybe there should be an Assert for this?\n> >\n> > ======\n> > src/backend/replication/pgoutput/pgoutput.c\n> >\n> > 5. send_relation_and_attrs\n> >\n> > - if (att->attisdropped || att->attgenerated)\n> > + if (att->attisdropped)\n> > continue;\n> >\n> > Same question as #4.\n> >\n> > ~~~\n> >\n> > 6. prepare_all_columns_bms and pgoutput_column_list_init\n> >\n> > + if (att->attgenerated && !pub->pubgencolumns)\n> > + cols = bms_del_member(cols, i + 1);\n> >\n> > IIUC, the algorithm seems overly tricky filling the BMS with all\n> > columns, before straight away conditionally removing the generated\n> > columns. Can't it be refactored to assign all the correct columns\n> > up-front, to avoid calling bms_del_member()?\n> >\n> > ======\n> > src/bin/pg_dump/pg_dump.c\n> >\n> > 7. 
getPublications\n> >\n> > IIUC, there is lots of missing SQL code here (for all older versions)\n> > that should be saying \"false AS pubgencolumns\".\n> > e.g. compare the SQL with how \"false AS pubviaroot\" is used.\n> >\n> > ======\n> > src/bin/pg_dump/t/002_pg_dump.pl\n> >\n> > 8. Missing tests?\n> >\n> > I expected to see a pg_dump test for this new PUBLICATION option.\n> >\n> > ======\n> > src/test/regress/sql/publication.sql\n> >\n> > 9. Missing tests?\n> >\n> > How about adding another test case that checks this new option must be\n> > \"Boolean\"?\n> >\n> > ~~~\n> >\n> > 10. Missing tests?\n> >\n> > --- error: generated column \"d\" can't be in list\n> > +-- ok: generated columns can be in the list too\n> > ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n> > +ALTER PUBLICATION testpub_fortable DROP TABLE testpub_tbl5;\n> >\n> > (see my earlier comment #3)\n> >\n> > IMO there should be another test case for a WARNING here if the user\n> > attempts to include generated column 'd' in an explicit PUBLICATION\n> > column list while the \"publish_generated-columns\" is false.\n> >\n> > ======\n> > [1] https://www.postgresql.org/message-id/CAD21AoA-tdTz0G-vri8KM2TXeFU8RCDsOpBXUBCgwkfokF7%3DjA%40mail.gmail.com\n> >\n>\n> I have fixed all the comments. The attached patches contain the desired changes.\n> Also the merging of 0001 and 0002 can be done once there are no\n> comments on the patch to help in reviewing.\n\nThe warning message appears to be incorrect. 
Even though\npublish_generated_columns is set to true, the warning indicates that\nit is false.\nCREATE TABLE t1 (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);\npostgres=# CREATE PUBLICATION pub1 FOR table t1(gen1) WITH\n(publish_generated_columns=true);\nWARNING: specified generated column \"gen1\" in publication column list\nfor publication with publish_generated_columns as false\nCREATE PUBLICATION\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 23 Sep 2024 17:46:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Fri, 20 Sept 2024 at 17:15, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Wed, Sep 11, 2024 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I have fixed all the comments. The attached patches contain the desired changes.\n> Also the merging of 0001 and 0002 can be done once there are no\n> comments on the patch to help in reviewing.\n\nFew comments:\n1) This commit message seems wrong, currently irrespective of\npublish_generated_columns, the column specified in column list takes\nprecedence:\nWhen 'publish_generated_columns' is false, generated columns are not\nreplicated, even when present in a PUBLICATION col-list.\n\n2) Since we have added pubgencols to pg_publication.h we can specify\n"Bump catversion" in the commit message.\n\n3) In create publication column list/publish_generated_columns\ndocumentation we should mention that if generated column is mentioned\nin column list, generated columns mentioned in column list will be\nreplicated irrespective of publish_generated_columns option.\n\n4) This warning should be mentioned only if publish_generated_columns is false:\n if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n- ereport(ERROR,\n+ ereport(WARNING,\n\nerrcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"cannot use generated\ncolumn \\\"%s\\\" in publication column list\",\n+ errmsg(\"specified 
generated\ncolumn \\\"%s\\\" in publication column list for publication with\npublish_generated_columns as false\",\n colname));\n\n5) These tests are not required for this feature:\n+ 'ALTER PUBLICATION pub5 ADD TABLE test_table WHERE (col1 > 0);' => {\n+ create_order => 51,\n+ create_sql =>\n+ 'ALTER PUBLICATION pub5 ADD TABLE\ndump_test.test_table WHERE (col1 > 0);',\n+ regexp => qr/^\n+ \\QALTER PUBLICATION pub5 ADD TABLE ONLY\ndump_test.test_table WHERE ((col1 > 0));\\E\n+ /xm,\n+ like => { %full_runs, section_post_data => 1, },\n+ unlike => {\n+ exclude_dump_test_schema => 1,\n+ exclude_test_table => 1,\n+ },\n+ },\n+\n+ 'ALTER PUBLICATION pub5 ADD TABLE test_second_table WHERE\n(col2 = \\'test\\');'\n+ => {\n+ create_order => 52,\n+ create_sql =>\n+ 'ALTER PUBLICATION pub5 ADD TABLE\ndump_test.test_second_table WHERE (col2 = \\'test\\');',\n+ regexp => qr/^\n+ \\QALTER PUBLICATION pub5 ADD TABLE ONLY\ndump_test.test_second_table WHERE ((col2 = 'test'::text));\\E\n+ /xm,\n+ like => { %full_runs, section_post_data => 1, },\n+ unlike => { exclude_dump_test_schema => 1, },\n+ },\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 23 Sep 2024 18:09:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi. Here are my review comments for v32-0001\n\nYou wrote: \"I have addressed all the comments in the v32-0001 Patch.\",\nhowever, I found multiple old review comments not addressed. 
Please\ngive a reason if a comment is deliberately left out, otherwise, I will\nassume they are omitted by accident and so keep repeating them.\n\nThere were also still some unanswered questions from previous reviews,\nso I have reminded you about those again here.\n\n======\nCommit message\n\n1.\nThis commit enables support for the 'publish_generated_columns' option\nin logical replication, allowing the transmission of generated column\ninformation and data alongside regular table changes. The option\n'publish_generated_columns' is a PUBLICATION parameter.\n\n~\n\nThat PUBLICATION info in the 2nd sentence would be easier to say in\nthe 1st sentence.\nSUGGESTION:\nThis commit supports the transmission of generated column information\nand data alongside regular table changes. This behaviour is controlled\nby a new PUBLICATION parameter ('publish_generated_columns').\n\n~~~\n\n2.\nWhen 'publish_generated_columns' is false, generated columns are not\nreplicated, even when present in a PUBLICATION col-list.\n\nHm. This contradicts the behaviour that Amit wanted, (e.g.\n\"column-list takes precedence\"). So I am not sure if this patch is\nalready catering for the behaviour suggested by Amit or if that is yet\nto come in v33. For now, I am assuming that 32* has not caught up with\nthe latest behaviour requirements, but that might be a wrong\nassumption; perhaps it is only this commit message that is bogus.\n\n~~~\n\n3. General.\n\nOn the same subject, there is lots of code, like:\n\nif (att->attgenerated && !pub->pubgencols)\ncontinue;\n\nI suspect that might not be quite what you want for the \"column-list\ntakes precedence\" behaviour, but I am not going to identify all those\nduring this review. It needs lots of combinations of column list tests\nto verify it.\n\n======\ndoc/src/sgml/ddl.sgml\n\n4ab.\nnit - Huh?? Not changed the linkend as told in a previous review [1-#3a]\nnit - Huh?? 
Not changed to call this a \"parameter\" instead of an\n\"option\" as told in a previous review [1-#3b]\n\n======\ndoc/src/sgml/protocol.sgml\n\n5.\n- <para>\n- Next, the following message part appears for each column included in\n- the publication (except generated columns):\n- </para>\n-\n\nnit -- Huh?? I don't think you can just remove this whole paragraph.\nBut, probably you can just remove the \"except generated columns\" part.\nI posted this same comment [4 #11] 20 patch versions back.\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n6abc.\nnit - Huh?? Not changed the parameter ID as told in a previous review [1-#6]\nnit - Huh?? Not removed paragraph \"This option is only available...\"\nas told in a previous review. See [1-#7]\nnit - Huh?? Not removed paragraph \"This parameter can only be set\" as\ntold in a previous review. See [1-#8]\n\n======\nsrc/backend/catalog/pg_publication.c\n\n7.\n if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n- ereport(ERROR,\n+ ereport(WARNING,\n errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n+ errmsg(\"specified generated column \\\"%s\\\" in publication column list\nfor publication with publish_generated_columns as false\",\n colname));\n\nI did not understand how this WARNING can know\n\"publish_generated_columns as false\"? Should the code be checking the\nfunction parameter 'pubgencols'?\n\nThe errmsg also seemed a bit verbose. How about:\n\"specified generated column \\\"%s\\\" in publication column list when\npublish_generated_columns = false\"\n\n======\nsrc/backend/replication/logical/proto.c\n\n8.\nlogicalrep_write_tuple:\nlogicalrep_write_attrs:\n\nReminder. I think I have multiple questions about this code from\nprevious reviews that may be still unanswered. See [2 #4]. 
Maybe when\nyou implement Amit's \"column list takes precedence\" behaviour then\nthis code is fine as-is (because the replication message might include\ngencols or not-gencols regardless of the 'publish_generated_columns'\nvalue). But I don't think that is the current implementation, so\nsomething did not quite seem right. I am not sure. If you say it is\nfine then I will believe it, but the question [2 #4] remains\nunanswered.\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n9.\nsend_relation_and_attrs:\n\nReminder: Here is another question that was answered from [2 #5]. I\ndid not really trust it for the current implementation, but for the\n\"column list takes precedence\" behaviour probably it will be ok.\n\n~~~\n\n10.\n+/*\n+ * Prepare new column list bitmap. This includes all the columns of the table.\n+ */\n+static Bitmapset *\n+prepare_all_columns_bms(PGOutputData *data, RelationSyncEntry *entry,\n+ TupleDesc desc)\n+{\n\nThis function needs a better comment with more explanation about what\nthis is REALLY doing. e.g. it says \"includes all columns of the\ntable\", but the implementation is skipping generated cols, so clearly\nit is not \"all columns of the table\".\n\n~~~\n\n11. pgoutput_column_list_init\n\nTBH, I struggle to read the logic of this function. Rewriting some\nparts, inverting some variables, and adding more commentary might help\na lot.\n\n11a.\nThere are too many \"negatives\" (with ! operator and with the word \"no\"\nin the variable).\n\ne.g. 
code is written in a backward way like:\nif (!pub_no_list)\ncols = pub_collist_to_bitmapset(cols, cfdatum, entry->entry_cxt);\nelse\ncols = prepare_all_columns_bms(data, entry, desc);\n\ninstead of what could have been said:\nif (pub_rel_has_collist)\ncols = pub_collist_to_bitmapset(cols, cfdatum, entry->entry_cxt);\nelse\ncols = prepare_all_columns_bms(data, entry, desc);\n\n~\n\n11b.\n- * If the publication is FOR ALL TABLES then it is treated the same as\n- * if there are no column lists (even if other publications have a\n- * list).\n+ * If the publication is FOR ALL TABLES and include generated columns\n+ * then it is treated the same as if there are no column lists (even\n+ * if other publications have a list).\n */\n- if (!pub->alltables)\n+ if (!pub->alltables || !pub->pubgencols)\n\nThe code does not appear to match the comment (\"If the publication is\nFOR ALL TABLES and include generated columns\"). If it did it should\nlook like \"if (pub->alltables && pub->pubgencols)\".\n\nAlso, should \"and include generated column\" be properly referring to\nthe new PUBLICATION parameter name?\n\nAlso, the comment is somewhat confusing. I saw in the thread Vignesh\nwrote an explanation like \"To handle cases where the\npublish_generated_columns option isn't specified for all tables in a\npublication, the pubgencolumns check needs to be performed. In such\ncases, we must create a column list that excludes generated columns\"\n[3]. IMO that was clearer information so something similar should be\nwritten in this code comment.\n~\n\n11c.\n+ /* Build the column list bitmap in the per-entry context. */\n+ if (!pub_no_list || !pub->pubgencols) /* when not null */\n\nI don't know what \"when not null\" means here. Aren't those both\nbooleans? How can it be \"null\"?\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n12. getPublications:\n\nHuh?? 
The code has not changed to address an old review comment I had\nposted to say there seem to be multiple code fragments missing that should\nsay \"false AS pubgencols\". Refer to [2 #7].\n\n======\nsrc/bin/pg_dump/t/002_pg_dump.pl\n\n13.\n'ALTER PUBLICATION pub5 ADD TABLE test_table WHERE (col1 > 0);' => {\n+ create_order => 51,\n+ create_sql =>\n+ 'ALTER PUBLICATION pub5 ADD TABLE dump_test.test_table WHERE (col1 > 0);',\n+ regexp => qr/^\n+ \\QALTER PUBLICATION pub5 ADD TABLE ONLY dump_test.test_table WHERE\n((col1 > 0));\\E\n+ /xm,\n+ like => { %full_runs, section_post_data => 1, },\n+ unlike => {\n+ exclude_dump_test_schema => 1,\n+ exclude_test_table => 1,\n+ },\n+ },\n+\n+ 'ALTER PUBLICATION pub5 ADD TABLE test_second_table WHERE (col2 = \\'test\\');'\n+ => {\n+ create_order => 52,\n+ create_sql =>\n+ 'ALTER PUBLICATION pub5 ADD TABLE dump_test.test_second_table\nWHERE (col2 = \\'test\\');',\n+ regexp => qr/^\n+ \\QALTER PUBLICATION pub5 ADD TABLE ONLY dump_test.test_second_table\nWHERE ((col2 = 'test'::text));\\E\n+ /xm,\n+ like => { %full_runs, section_post_data => 1, },\n+ unlike => { exclude_dump_test_schema => 1, },\n+ },\n+\n\nIt wasn't clear to me how these tests are related to the patch.\nShouldn't there instead be some ALTER tests for trying to modify the\n'publish_generated_columns' parameter?\n\n======\nsrc/test/regress/expected/publication.out\nsrc/test/regress/sql/publication.sql\n\n14.\n--- error: generated column \"d\" can't be in list\n+-- ok: generated columns can be in the list too\n ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n-ERROR: cannot use generated column \"d\" in publication column list\n+WARNING: specified generated column \"d\" in publication column list\nfor publication with publish_generated_columns as false\n\nI think these tests for the WARNING scenario need to be a bit more\ndeliberate. This seems to have happened as a side-effect. 
For example,\nI was expecting more testing like:\n\nComments about various combinations to say what you are doing and what\nyou are expecting:\n- gencols in column list with publish_generated_columns=false, expecting WARNING\n- gencols in column list with publish_generated_columns=true, NOT\nexpecting WARNING\n- gencols in column list with publish_generated_columns=true, then\nALTER PUBLICATION setting publication_generate_columns=false,\nexpecting WARNING\n- NO gencols in column list with publish_generated_columns=false, then\nALTER PUBLICATION to add gencols to column list, expecting WARNING\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\n15.\n-# TEST: Generated and dropped columns are not considered for the column list.\n+# TEST: Dropped columns are not considered for the column list.\n # So, the publication having a column list except for those columns and a\n-# publication without any column (aka all columns as part of the columns\n+# publication without any column list (aka all columns as part of the column\n # list) are considered to have the same column list.\n $node_publisher->safe_psql(\n 'postgres', qq(\n CREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int\nGENERATED ALWAYS AS (a + 1) STORED);\n ALTER TABLE test_mix_4 DROP COLUMN c;\n\n- CREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 (a, b);\n- CREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4;\n+ CREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 WITH\n(publish_generated_columns = true);\n+ CREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4 WITH\n(publish_generated_columns = false);\n\nI felt the comment for this test ought to be saying something more\nabout what you are doing with the 'publish_generated_columns'\nparameters and what behaviour it was expecting.\n\n======\nPlease refer to the attachment which addresses some of the nit\ncomments mentioned above.\n\n======\n[1] my review of 
v31-0001:\nhttps://www.postgresql.org/message-id/CAHut%2BPsv-neEP_ftvBUBahh%2BKCWw%2BqQMF9N3sGU3YHWPEzFH-Q%40mail.gmail.com\n[2] my review of v30-0001:\nhttps://www.postgresql.org/message-id/CAHut%2BPuaitgE4tu3nfaR%3DPCQEKjB%3DmpDtZ1aWkbwb%3DJZE8YvqQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CALDaNm1c7xPBodHw6LKp9e8hvGVJHcKH%3DDHK0iXmZuXKPnxZ3Q%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/CAHut%2BPv45gB4cV%2BSSs6730Kb8urQyqjdZ9PBVgmpwqCycr1Ybg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 24 Sep 2024 09:46:30 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi. Here are my v32-0002 review comments:\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n1. fetch_remote_table_info\n\n /*\n- * Get column lists for each relation.\n+ * Get column lists for each relation, and check if any of the\n+ * publications have the 'publish_generated_columns' parameter enabled.\n\nI am not 100% sure about this logic anymore. 
Maybe it is OK, but it\nrequires careful testing because with Amit's \"column lists take\nprecedence\" it is now possible for the publication to say\n'publish_generated_columns=false', but the publication can still\npublish gencols *anyway* if they were specified in a column list.\n\n~~~\n\n2.\n /*\n * Fetch info about column lists for the relation (from all the\n * publications).\n */\n+ StringInfo pub_names = makeStringInfo();\n+\n+ get_publications_str(MySubscription->publications, pub_names, true);\n resetStringInfo(&cmd);\n appendStringInfo(&cmd,\n~\n\nnit - The comment here seems misplaced.\n\n~~~\n\n3.\n+ if (server_version >= 120000)\n+ {\n+ has_pub_with_pubgencols = server_version >= 180000 && has_pub_with_pubgencols;\n+\n+ if (!has_pub_with_pubgencols)\n+ appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n+ }\n\nMy previous review comment about this [1 #10] was:\nCan the 'gencols_allowed' var be removed, and the condition just be\nreplaced with if (!has_pub_with_pubgencols)? It seems equivalent\nunless I am mistaken.\n\nnit - So the current v32 code is not what I was expecting. What I\nmeant was 'has_pub_with_pubgencols' can only be true if server_version\n>= 180000, so I thought there was no reason to check it again. For\nreference, I've changed it to like I meant in the nitpicks attachment.\nPlease see if that works the same.\n\n======\n[1] my review of v31-0002.\nhttps://www.postgresql.org/message-id/CAHut%2BPusbhvPrL1uN1TKY%3DFd4zu3h63eDebZvsF%3Duy%2BLBKTwgA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 24 Sep 2024 11:37:49 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, I have written a new patch to document this feature.\n\nThe patch adds a new section to the \"Logical Replication\" chapter. 
It\napplies atop the existing patches.\n\nv33-0001 (same as v32-0001)\nv33-0002 (same as v32-0002)\nv33-0003 (new DOCS)\n\nReview comments are welcome.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 26 Sep 2024 16:14:55 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Wed, Sep 25, 2024 at 11:15 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, I have written a new patch to document this feature.\n>\n> The patch adds a new section to the \"Logical Replication\" chapter. It\n> applies atop the existing patches.\n>\n> v33-0001 (same as v32-0001)\n> v33-0002 (same as v32-0002)\n> v33-0003 (new DOCS)\n>\n> Review comments are welcome.\n\nThank you for updating the patch!\n\nI think that the patch doesn't have regression tests to check if\ngenerated column data is replicated to the subscriber as expected. I\nthink we should include some tests for this feature (especially with\nother features such as column list).\n\nAlso, when testing this feature, I got the following warning message\neven if the publication has publish_generated_columns = true:\n\n=# create publication pub for table test (a, c) with\n(publish_generated_columns = true);\nWARNING: specified generated column \"c\" in publication column list\nfor publication with publish_generated_columns as false\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Sep 2024 17:03:14 -0700", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Sep 23, 2024 at 5:56 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 20 Sept 2024 at 17:15, Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Wed, Sep 11, 2024 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > 
Here are some more review comments for patch v30-0001.\n> > >\n> > > ======\n> > > src/sgml/ref/create_publication.sgml\n> > >\n> > > 1.\n> > > + <para>\n> > > + If the publisher-side column is also a generated column\n> > > then this option\n> > > + has no effect; the publisher column will be filled as normal with the\n> > > + publisher-side computed or default data.\n> > > + </para>\n> > >\n> > > It should say \"subscriber-side\"; not \"publisher-side\". The same was\n> > > already reported by Sawada-San [1].\n> > >\n> > > ~~~\n> > >\n> > > 2.\n> > > + <para>\n> > > + This parameter can only be set <literal>true</literal> if\n> > > <literal>copy_data</literal> is\n> > > + set to <literal>false</literal>.\n> > > + </para>\n> > >\n> > > IMO this limitation should be addressed by patch 0001 like it was\n> > > already done in the previous patches (e.g. v22-0002). I think\n> > > Sawada-san suggested the same [1].\n> > >\n> > > Anyway, 'copy_data' is not a PUBLICATION option, so the fact it is\n> > > mentioned like this without any reference to the SUBSCRIPTION seems\n> > > like a cut/paste error from the previous implementation.\n> > >\n> > > ======\n> > > src/backend/catalog/pg_publication.c\n> > >\n> > > 3. pub_collist_validate\n> > > - if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> > > - ereport(ERROR,\n> > > - errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> > > - errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n> > > - colname));\n> > > -\n> > >\n> > > Instead of just removing this ERROR entirely here, I thought it would\n> > > be more user-friendly to give a WARNING if the PUBLICATION's explicit\n> > > column list includes generated cols when the option\n> > > \"publish_generated_columns\" is false. 
This combination doesn't seem\n> > > like something a user would do intentionally, so just silently\n> > > ignoring it (like the current patch does) is likely going to give\n> > > someone unexpected results/grief.\n> > >\n> > > ======\n> > > src/backend/replication/logical/proto.c\n> > >\n> > > 4. logicalrep_write_tuple, and logicalrep_write_attrs:\n> > >\n> > > - if (att->attisdropped || att->attgenerated)\n> > > + if (att->attisdropped)\n> > > continue;\n> > >\n> > > Why aren't you also checking the new PUBLICATION option here and\n> > > skipping all gencols if the \"publish_generated_columns\" option is\n> > > false? Or is the BMS of pgoutput_column_list_init handling this case?\n> > > Maybe there should be an Assert for this?\n> > >\n> > > ======\n> > > src/backend/replication/pgoutput/pgoutput.c\n> > >\n> > > 5. send_relation_and_attrs\n> > >\n> > > - if (att->attisdropped || att->attgenerated)\n> > > + if (att->attisdropped)\n> > > continue;\n> > >\n> > > Same question as #4.\n> > >\n> > > ~~~\n> > >\n> > > 6. prepare_all_columns_bms and pgoutput_column_list_init\n> > >\n> > > + if (att->attgenerated && !pub->pubgencolumns)\n> > > + cols = bms_del_member(cols, i + 1);\n> > >\n> > > IIUC, the algorithm seems overly tricky filling the BMS with all\n> > > columns, before straight away conditionally removing the generated\n> > > columns. Can't it be refactored to assign all the correct columns\n> > > up-front, to avoid calling bms_del_member()?\n> > >\n> > > ======\n> > > src/bin/pg_dump/pg_dump.c\n> > >\n> > > 7. getPublications\n> > >\n> > > IIUC, there is lots of missing SQL code here (for all older versions)\n> > > that should be saying \"false AS pubgencolumns\".\n> > > e.g. compare the SQL with how \"false AS pubviaroot\" is used.\n> > >\n> > > ======\n> > > src/bin/pg_dump/t/002_pg_dump.pl\n> > >\n> > > 8. 
Missing tests?\n> > >\n> > > I expected to see a pg_dump test for this new PUBLICATION option.\n> > >\n> > > ======\n> > > src/test/regress/sql/publication.sql\n> > >\n> > > 9. Missing tests?\n> > >\n> > > How about adding another test case that checks this new option must be\n> > > \"Boolean\"?\n> > >\n> > > ~~~\n> > >\n> > > 10. Missing tests?\n> > >\n> > > --- error: generated column \"d\" can't be in list\n> > > +-- ok: generated columns can be in the list too\n> > > ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n> > > +ALTER PUBLICATION testpub_fortable DROP TABLE testpub_tbl5;\n> > >\n> > > (see my earlier comment #3)\n> > >\n> > > IMO there should be another test case for a WARNING here if the user\n> > > attempts to include generated column 'd' in an explicit PUBLICATION\n> > > column list while the \"publish_generated-columns\" is false.\n> > >\n> > > ======\n> > > [1] https://www.postgresql.org/message-id/CAD21AoA-tdTz0G-vri8KM2TXeFU8RCDsOpBXUBCgwkfokF7%3DjA%40mail.gmail.com\n> > >\n> >\n> > I have fixed all the comments. The attached patches contain the desired changes.\n> > Also the merging of 0001 and 0002 can be done once there are no\n> > comments on the patch to help in reviewing.\n>\n> The warning message appears to be incorrect. Even though\n> publish_generated_columns is set to true, the warning indicates that\n> it is false.\n> CREATE TABLE t1 (a int, gen1 int GENERATED ALWAYS AS (a * 2) STORED);\n> postgres=# CREATE PUBLICATION pub1 FOR table t1(gen1) WITH\n> (publish_generated_columns=true);\n> WARNING: specified generated column \"gen1\" in publication column list\n> for publication with publish_generated_columns as false\n>\n\nI have fixed the warning message. 
The attached patches contain the\ndesired changes.\n\nThanks and Regards,\nShubham Khanna.", "msg_date": "Fri, 27 Sep 2024 20:01:28 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Mon, Sep 23, 2024 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 20 Sept 2024 at 17:15, Shubham Khanna\n> <khannashubham1197@gmail.com> wrote:\n> >\n> > On Wed, Sep 11, 2024 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > I have fixed all the comments. The attached patches contain the desired changes.\n> > Also the merging of 0001 and 0002 can be done once there are no\n> > comments on the patch to help in reviewing.\n>\n> Few comments:\n> 1) This commit message seems wrong, currently irrespective of\n> publish_generated_columns, the column specified in column list takes\n> precedence:\n> When 'publish_generated_columns' is false, generated columns are not\n> replicated, even when present in a PUBLICATION col-list.\n>\n> 2) Since we have added pubgencols to pg_publication.h we can specify\n> \"Bump catversion\" in the commit message.\n>\n> 3) In create publication column list/publish_generated_columns\n> documentation we should mention that if generated column is mentioned\n> in column list, generated columns mentioned in column list will be\n> replicated irrespective of publish_generated_columns option.\n>\n> 4) This warning should be mentioned only if publish_generated_columns is false:\n> if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> - ereport(ERROR,\n> + ereport(WARNING,\n>\n> errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> - errmsg(\"cannot use generated\n> column \\\"%s\\\" in publication column list\",\n> + errmsg(\"specified generated\n> column \\\"%s\\\" in publication column list for publication with\n> publish_generated_columns as false\",\n> colname));\n>\n> 5) These tests are not required for this 
feature:\n> + 'ALTER PUBLICATION pub5 ADD TABLE test_table WHERE (col1 > 0);' => {\n> + create_order => 51,\n> + create_sql =>\n> + 'ALTER PUBLICATION pub5 ADD TABLE\n> dump_test.test_table WHERE (col1 > 0);',\n> + regexp => qr/^\n> + \\QALTER PUBLICATION pub5 ADD TABLE ONLY\n> dump_test.test_table WHERE ((col1 > 0));\\E\n> + /xm,\n> + like => { %full_runs, section_post_data => 1, },\n> + unlike => {\n> + exclude_dump_test_schema => 1,\n> + exclude_test_table => 1,\n> + },\n> + },\n> +\n> + 'ALTER PUBLICATION pub5 ADD TABLE test_second_table WHERE\n> (col2 = \\'test\\');'\n> + => {\n> + create_order => 52,\n> + create_sql =>\n> + 'ALTER PUBLICATION pub5 ADD TABLE\n> dump_test.test_second_table WHERE (col2 = \\'test\\');',\n> + regexp => qr/^\n> + \\QALTER PUBLICATION pub5 ADD TABLE ONLY\n> dump_test.test_second_table WHERE ((col2 = 'test'::text));\\E\n> + /xm,\n> + like => { %full_runs, section_post_data => 1, },\n> + unlike => { exclude_dump_test_schema => 1, },\n> + },\n>\n\nI have addressed all the comments in the v34-0001 Patch. Please refer\nto the updated v34-0001 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJkUdYCdK_bL3yvEV%3DzKrA2dsnZYa1VMT2H5v0%2BqbaGbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 27 Sep 2024 20:09:58 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 24, 2024 at 5:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi. Here are my review comments for v32-0001\n>\n> You wrote: \"I have addressed all the comments in the v32-0001 Patch.\",\n> however, I found multiple old review comments not addressed. 
Please\n> give a reason if a comment is deliberately left out, otherwise, I will\n> assume they are omitted by accident and so keep repeating them.\n>\n> There were also still some unanswered questions from previous reviews,\n> so I have reminded you about those again here.\n>\n> ======\n> Commit message\n>\n> 1.\n> This commit enables support for the 'publish_generated_columns' option\n> in logical replication, allowing the transmission of generated column\n> information and data alongside regular table changes. The option\n> 'publish_generated_columns' is a PUBLICATION parameter.\n>\n> ~\n>\n> That PUBLICATION info in the 2nd sentence would be easier to say in\n> the 1st sentence.\n> SUGGESTION:\n> This commit supports the transmission of generated column information\n> and data alongside regular table changes. This behaviour is controlled\n> by a new PUBLICATION parameter ('publish_generated_columns').\n>\n> ~~~\n>\n> 2.\n> When 'publish_generated_columns' is false, generated columns are not\n> replicated, even when present in a PUBLICATION col-list.\n>\n> Hm. This contradicts the behaviour that Amit wanted, (e.g.\n> \"column-list takes precedence\"). So I am not sure if this patch is\n> already catering for the behaviour suggested by Amit or if that is yet\n> to come in v33. For now, I am assuming that 32* has not caught up with\n> the latest behaviour requirements, but that might be a wrong\n> assumption; perhaps it is only this commit message that is bogus.\n>\n> ~~~\n>\n> 3. General.\n>\n> On the same subject, there is lots of code, like:\n>\n> if (att->attgenerated && !pub->pubgencols)\n> continue;\n>\n> I suspect that might not be quite what you want for the \"column-list\n> takes precedence\" behaviour, but I am not going to identify all those\n> during this review. It needs lots of combinations of column list tests\n> to verify it.\n>\n> ======\n> doc/src/sgml/ddl.sgml\n>\n> 4ab.\n> nit - Huh?? 
Not changed the linkend as told in a previous review [1-#3a]\n> nit - Huh?? Not changed to call this a \"parameter\" instead of an\n> \"option\" as told in a previous review [1-#3b]\n>\n> ======\n> doc/src/sgml/protocol.sgml\n>\n> 5.\n> - <para>\n> - Next, the following message part appears for each column included in\n> - the publication (except generated columns):\n> - </para>\n> -\n>\n> nit -- Huh?? I don't think you can just remove this whole paragraph.\n> But, probably you can just remove the \"except generated columns\" part.\n> I posted this same comment [4 #11] 20 patch versions back.\n>\n> ======\n> doc/src/sgml/ref/create_publication.sgml\n>\n> 6abc.\n> nit - Huh?? Not changed the parameter ID as told in a previous review [1-#6]\n> nit - Huh?? Not removed paragraph \"This option is only available...\"\n> as told in a previous review. See [1-#7]\n> nit - Huh?? Not removed paragraph \"This parameter can only be set\" as\n> told in a previous review. See [1-#8]\n>\n> ======\n> src/backend/catalog/pg_publication.c\n>\n> 7.\n> if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n> - ereport(ERROR,\n> + ereport(WARNING,\n> errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n> - errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n> + errmsg(\"specified generated column \\\"%s\\\" in publication column list\n> for publication with publish_generated_columns as false\",\n> colname));\n>\n> I did not understand how this WARNING can know\n> \"publish_generated_columns as false\"? Should the code be checking the\n> function parameter 'pubgencols'?\n>\n> The errmsg also seemed a bit verbose. How about:\n> \"specified generated column \\\"%s\\\" in publication column list when\n> publish_generated_columns = false\"\n>\n> ======\n> src/backend/replication/logical/proto.c\n>\n> 8.\n> logicalrep_write_tuple:\n> logicalrep_write_attrs:\n>\n> Reminder. 
I think I have multiple questions about this code from\n> previous reviews that may be still unanswered. See [2 #4]. Maybe when\n> you implement Amit's \"column list takes precedence\" behaviour then\n> this code is fine as-is (because the replication message might include\n> gencols or not-gecols regardless of the 'publish_generated_columns'\n> value). But I don't think that is the current implementation, so\n> something did not quite seem right. I am not sure. If you say it is\n> fine then I will believe it, but the question [2 #4] remains\n> unanswered.\n>\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 9.\n> send_relation_and_attrs:\n>\n> Reminder: Here is another question that was answered from [2 #5]. I\n> did not really trust it for the current implementation, but for the\n> \"column list takes precedence\" behaviour probably it will be ok.\n>\n> ~~~\n>\n> 10.\n> +/*\n> + * Prepare new column list bitmap. This includes all the columns of the table.\n> + */\n> +static Bitmapset *\n> +prepare_all_columns_bms(PGOutputData *data, RelationSyncEntry *entry,\n> + TupleDesc desc)\n> +{\n>\n> This function needs a better comment with more explanation about what\n> this is REALLY doing. e.g. it says \"includes all columns of the\n> table\", but tthe implementation is skipping generated cols, so clearly\n> it is not \"all columns of the table\".\n>\n> ~~~\n>\n> 11. pgoutput_column_list_init\n>\n> TBH, I struggle to read the logic of this function. Rewriting some\n> parts, inverting some variables, and adding more commentary might help\n> a lot.\n>\n> 11a.\n> There are too many \"negatives\" (with ! operator and with the word \"no\"\n> in the variable).\n>\n> e.g. 
code is written in a backward way like:\n> if (!pub_no_list)\n> cols = pub_collist_to_bitmapset(cols, cfdatum, entry->entry_cxt);\n> else\n> cols = prepare_all_columns_bms(data, entry, desc);\n>\n> instead of what could have been said:\n> if (pub_rel_has_collist)\n> cols = pub_collist_to_bitmapset(cols, cfdatum, entry->entry_cxt);\n> else\n> cols = prepare_all_columns_bms(data, entry, desc);\n>\n> ~\n>\n> 11b.\n> - * If the publication is FOR ALL TABLES then it is treated the same as\n> - * if there are no column lists (even if other publications have a\n> - * list).\n> + * If the publication is FOR ALL TABLES and include generated columns\n> + * then it is treated the same as if there are no column lists (even\n> + * if other publications have a list).\n> */\n> - if (!pub->alltables)\n> + if (!pub->alltables || !pub->pubgencols)\n>\n> The code does not appear to match the comment (\"If the publication is\n> FOR ALL TABLES and include generated columns\"). If it did it should\n> look like \"if (pub->alltables && pub->pubgencols)\".\n>\n> Also, should \"and include generated column\" be properly referring to\n> the new PUBLICATION parameter name?\n>\n> Also, the comment is somewhat confusing. I saw in the thread Vignesh\n> wrote an explanation like \"To handle cases where the\n> publish_generated_columns option isn't specified for all tables in a\n> publication, the pubgencolumns check needs to be performed. In such\n> cases, we must create a column list that excludes generated columns\"\n> [3]. IMO that was clearer information so something similar should be\n> written in this code comment.\n> ~\n>\n> 11c.\n> + /* Build the column list bitmap in the per-entry context. */\n> + if (!pub_no_list || !pub->pubgencols) /* when not null */\n>\n> I don't know what \"when not null\" means here. Aren't those both\n> booleans? How can it be \"null\"?\n>\n> ======\n> src/bin/pg_dump/pg_dump.c\n>\n> 12. getPublications:\n>\n> Huh?? 
The code has not changed to address an old review comment I had\n> posted to say there seem multiple code fragments missing that should\n> say \"false AS pubgencols\". Refer to [2 #7].\n>\n> ======\n> src/bin/pg_dump/t/002_pg_dump.pl\n>\n> 13.\n> 'ALTER PUBLICATION pub5 ADD TABLE test_table WHERE (col1 > 0);' => {\n> + create_order => 51,\n> + create_sql =>\n> + 'ALTER PUBLICATION pub5 ADD TABLE dump_test.test_table WHERE (col1 > 0);',\n> + regexp => qr/^\n> + \\QALTER PUBLICATION pub5 ADD TABLE ONLY dump_test.test_table WHERE\n> ((col1 > 0));\\E\n> + /xm,\n> + like => { %full_runs, section_post_data => 1, },\n> + unlike => {\n> + exclude_dump_test_schema => 1,\n> + exclude_test_table => 1,\n> + },\n> + },\n> +\n> + 'ALTER PUBLICATION pub5 ADD TABLE test_second_table WHERE (col2 = \\'test\\');'\n> + => {\n> + create_order => 52,\n> + create_sql =>\n> + 'ALTER PUBLICATION pub5 ADD TABLE dump_test.test_second_table\n> WHERE (col2 = \\'test\\');',\n> + regexp => qr/^\n> + \\QALTER PUBLICATION pub5 ADD TABLE ONLY dump_test.test_second_table\n> WHERE ((col2 = 'test'::text));\\E\n> + /xm,\n> + like => { %full_runs, section_post_data => 1, },\n> + unlike => { exclude_dump_test_schema => 1, },\n> + },\n> +\n>\n> It wasn't clear to me how these tests are related to the patch.\n> Shouldn't there instead be some ALTER tests for trying to modify the\n> 'publish_generate_columns' parameter?\n>\n> ======\n> src/test/regress/expected/publication.out\n> src/test/regress/sql/publication.sql\n>\n> 14.\n> --- error: generated column \"d\" can't be in list\n> +-- ok: generated columns can be in the list too\n> ALTER PUBLICATION testpub_fortable ADD TABLE testpub_tbl5 (a, d);\n> -ERROR: cannot use generated column \"d\" in publication column list\n> +WARNING: specified generated column \"d\" in publication column list\n> for publication with publish_generated_columns as false\n>\n> I think these tests for the WARNING scenario need to be a bit more\n> deliberate. 
This seems to have happened as a side-effect. For example,\n> I was expecting more testing like:\n>\n> Comments about various combinations to say what you are doing and what\n> you are expecting:\n> - gencols in column list with publish_generated_columns=false, expecting WARNING\n> - gencols in column list with publish_generated_columns=true, NOT\n> expecting WARNING\n> - gencols in column list with publish_generated_columns=true, then\n> ALTER PUBLICATION setting publication_generate_columns=false,\n> expecting WARNING\n> - NO gencols in column list with publish_generated_columns=false, then\n> ALTER PUBLICATION to add gencols to column list, expecting WARNING\n>\n> ======\n> src/test/subscription/t/031_column_list.pl\n>\n> 15.\n> -# TEST: Generated and dropped columns are not considered for the column list.\n> +# TEST: Dropped columns are not considered for the column list.\n> # So, the publication having a column list except for those columns and a\n> -# publication without any column (aka all columns as part of the columns\n> +# publication without any column list (aka all columns as part of the column\n> # list) are considered to have the same column list.\n> $node_publisher->safe_psql(\n> 'postgres', qq(\n> CREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int\n> GENERATED ALWAYS AS (a + 1) STORED);\n> ALTER TABLE test_mix_4 DROP COLUMN c;\n>\n> - CREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 (a, b);\n> - CREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4;\n> + CREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 WITH\n> (publish_generated_columns = true);\n> + CREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4 WITH\n> (publish_generated_columns = false);\n>\n> I felt the comment for this test ought to be saying something more\n> about what you are doing with the 'publish_generated_columns'\n> parameters and what behaviour it was expecting.\n>\n> ======\n> Please refer to the attachment which addresses some of the nit\n> comments mentioned 
above.\n>\n> ======\n> [1] my review of v31-0001:\n> https://www.postgresql.org/message-id/CAHut%2BPsv-neEP_ftvBUBahh%2BKCWw%2BqQMF9N3sGU3YHWPEzFH-Q%40mail.gmail.com\n> [2] my review of v30-0001:\n> https://www.postgresql.org/message-id/CAHut%2BPuaitgE4tu3nfaR%3DPCQEKjB%3DmpDtZ1aWkbwb%3DJZE8YvqQ%40mail.gmail.com\n> [3] https://www.postgresql.org/message-id/CALDaNm1c7xPBodHw6LKp9e8hvGVJHcKH%3DDHK0iXmZuXKPnxZ3Q%40mail.gmail.com\n> [4] https://www.postgresql.org/message-id/CAHut%2BPv45gB4cV%2BSSs6730Kb8urQyqjdZ9PBVgmpwqCycr1Ybg%40mail.gmail.com\n>\n\nI have addressed all the comments in the v34-0001 Patch. Please refer\nto the updated v34-0001 Patch here in [1]. See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJkUdYCdK_bL3yvEV%3DzKrA2dsnZYa1VMT2H5v0%2BqbaGbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 27 Sep 2024 20:12:42 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "On Tue, Sep 24, 2024 at 7:08 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi. Here are my v32-0002 review comments:\n>\n> ======\n> src/backend/replication/logical/tablesync.c\n>\n> 1. fetch_remote_table_info\n>\n> /*\n> - * Get column lists for each relation.\n> + * Get column lists for each relation, and check if any of the\n> + * publications have the 'publish_generated_columns' parameter enabled.\n>\n> I am not 100% sure about this logic anymore. Maybe it is OK, but it\n> requires careful testing because with Amit's \"column lists take\n> precedence\" it is now possible for the publication to say\n> 'publish_generated_columns=false', but the publication can still\n> publish gencols *anyway* if they were specified in a column list.\n>\n> ~~~\n>\n\nThis comment is still open. 
Will fix this in the next version of patches.\n\n> 2.\n> /*\n> * Fetch info about column lists for the relation (from all the\n> * publications).\n> */\n> + StringInfo pub_names = makeStringInfo();\n> +\n> + get_publications_str(MySubscription->publications, pub_names, true);\n> resetStringInfo(&cmd);\n> appendStringInfo(&cmd,\n> ~\n>\n> nit - The comment here seems misplaced.\n>\n> ~~~\n>\n> 3.\n> + if (server_version >= 120000)\n> + {\n> + has_pub_with_pubgencols = server_version >= 180000 && has_pub_with_pubgencols;\n> +\n> + if (!has_pub_with_pubgencols)\n> + appendStringInfo(&cmd, \" AND a.attgenerated = ''\");\n> + }\n>\n> My previous review comment about this [1 #10] was:\n> Can the 'gencols_allowed' var be removed, and the condition just be\n> replaced with if (!has_pub_with_pubgencols)? It seems equivalent\n> unless I am mistaken.\n>\n> nit - So the current v32 code is not what I was expecting. What I\n> meant was 'has_pub_with_pubgencols' can only be true if server_version\n> >= 180000, so I thought there was no reason to check it again. For\n> reference, I've changed it to like I meant in the nitpicks attachment.\n> Please see if that works the same.\n>\n> ======\n> [1] my review of v31-0002.\n> https://www.postgresql.org/message-id/CAHut%2BPusbhvPrL1uN1TKY%3DFd4zu3h63eDebZvsF%3Duy%2BLBKTwgA%40mail.gmail.com\n>\n\nI have addressed all the comments in the v34-0002 Patch. Please refer\nto the updated v34-0002 Patch here in [1]. 
See [1] for the changes\nadded.\n\n[1] https://www.postgresql.org/message-id/CAHv8RjJkUdYCdK_bL3yvEV%3DzKrA2dsnZYa1VMT2H5v0%2BqbaGbA%40mail.gmail.com\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Fri, 27 Sep 2024 20:16:54 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Here are my review comments for patch v34-0001\n\n======\ndoc/src/sgml/ddl.sgml\n\n1.\n- Generated columns are skipped for logical replication and cannot be\n- specified in a <command>CREATE PUBLICATION</command> column list.\n+ Generated columns may be skipped during logical replication\naccording to the\n+ <command>CREATE PUBLICATION</command> parameter\n+ <link linkend=\"sql-createpublication-params-with-publish-generated-columns\">\n+ <literal>publish_generated_columns</literal></link>.\n\nThis information is not quite correct because it makes no mention of\nPUBLICATION column lists. OTOH I replaced all this paragraph in the\n0003 patch anyhow, so maybe it is not worth worrying about this review\ncomment.\n\n======\nsrc/backend/catalog/pg_publication.c\n\n2.\n- if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated)\n- ereport(ERROR,\n+ if (TupleDescAttr(tupdesc, attnum - 1)->attgenerated && !pubgencols)\n+ ereport(WARNING,\n errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"cannot use generated column \\\"%s\\\" in publication column list\",\n+ errmsg(\"specified generated column \\\"%s\\\" in publication column list\nwhen publish_generated_columns as false\",\n colname));\n\nBack when I proposed to have this WARNING I don't think there existed\nthe idea that the PUBLICATION column list would override the\n'publish_generated_columns' parameter. But now that this is the\nimplementation, I am no longer 100% sure if a warning should be given\nat all because this can be a perfectly valid combination. 
What do\nothers think?\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n3. prepare_all_columns_bms\n\n3a.\nnit - minor rewording function comment\n\n~\n\n3b.\nI am not sure this is a good function name, particularly since it does\nnot return \"all\" columns. Can you name it more for what it does?\n\n~~~\n\n4. pgoutput_column_list_init\n\n /*\n- * If the publication is FOR ALL TABLES then it is treated the same as\n- * if there are no column lists (even if other publications have a\n- * list).\n+ * To handle cases where the publish_generated_columns option isn't\n+ * specified for all tables in a publication, the pubgencolumns check\n+ * needs to be performed. In such cases, we must create a column list\n+ * that excludes generated columns.\n */\n- if (!pub->alltables)\n+ if (!pub->alltables || !pub->pubgencols)\n\n4a.\nThat comment still doesn't make much sense to me:\n\ne.g.1. \"To handle cases where the publish_generated_columns option\nisn't specified for all tables in a publication\". What is this trying\nto say? A publication parameter is per-publication, so it is always\n\"for all tables in a publication\".\n\ne.g.2. \" the pubgencolumns check\" -- what is the pubgencols check?\n\nPlease rewrite the comment more clearly to explain the logic: What is\nit doing? Why is it doing it?\n\n~\n\n4b.\n+ if (!pub->alltables || !pub->pubgencols)\n\nAs mentioned in a previous review, there is too much negativity in\nconditions like this. I think anything you can do to reverse all the\nnegativity will surely improve the readability of this function. See\n[1 - #11a]\n\n\n~\n\n5.\n- cols = pub_collist_to_bitmapset(cols, cfdatum,\n- entry->entry_cxt);\n+ if (!pub_rel_has_collist)\n+ cols = pub_collist_to_bitmapset(cols, cfdatum, entry->entry_cxt);\n+ else\n+ cols = prepare_all_columns_bms(data, entry, desc);\n\nHm. Is that correct? The if/else is all backwards from what I would\nhave expected. 
IIUC the variable 'pub_rel_has_collist' is assigned to\nthe opposite value of what the name says, so then you are using it all\nbackwards to make it work.\n\nnit - I have changed the code in the attachment how I think it should\nbe. Please check it makes sense.\n\n~\n\n6.\n+ /* Get the number of live attributes. */\n+ for (i = 0; i < desc->natts; i++)\n\nnit - use a for-loop variable 'i'.\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n7.\n+ else if (fout->remoteVersion >= 130000)\n+ appendPQExpBufferStr(query,\n+ \"SELECT p.tableoid, p.oid, p.pubname, \"\n+ \"p.pubowner, \"\n+ \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete,\np.pubtruncate, p.pubviaroot, false AS pubviagencols \"\n \"FROM pg_publication p\");\n else if (fout->remoteVersion >= 110000)\n appendPQExpBufferStr(query,\n \"SELECT p.tableoid, p.oid, p.pubname, \"\n \"p.pubowner, \"\n- \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete,\np.pubtruncate, false AS pubviaroot \"\n+ \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete,\np.pubtruncate, false AS pubviaroot, false AS pubviagencols \"\n \"FROM pg_publication p\");\n else\n appendPQExpBufferStr(query,\n \"SELECT p.tableoid, p.oid, p.pubname, \"\n \"p.pubowner, \"\n- \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS\npubtruncate, false AS pubviaroot \"\n+ \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS\npubtruncate, false AS pubviaroot, false AS pubviagencols \"\n \"FROM pg_publication p\");\n\nThese changes are all wrong due to a typo. There is no such column as\n'pubviagencols'. I made these changes in the nit attachment. 
Please\ncheck it for correctness.\n\ns/pubviagencols/pubgencols/\n\n======\nsrc/test/regress/sql/publication.sql\n\n8.\n--- error: generated column \"d\" can't be in list\n+-- ok: generated columns can be in the list too\n\nnit - name the generated column \"d\", to clarify this comment\n\n~~~\n\n9.\n+-- Test the publication 'publish_generated_columns' parameter enabled\nor disabled for different combinations.\n\nnit - add another \"======\" separator comment before the new test\nnit - some minor adjustments to whitespace and comments\n\n~~~\n\n10.\n+-- No gencols in column list with 'publish_generated_columns'=false.\n+CREATE PUBLICATION pub3 WITH (publish_generated_columns=false);\n+-- ALTER PUBLICATION to add gencols to column list.\n+ALTER PUBLICATION pub3 ADD TABLE gencols(a, gen1);\n\nnit - by adding and removing collist for the existing table, we don't\nneed to have a 'pub3'.\n\n======\nsrc/test/subscription/t/031_column_list.pl\n\n11.\nIt is not clear to me -- is there, or is there not yet any test case\nfor the multiple publication issues that were discussed previously?\ne.g. when the same table has gencols but there are multiple\npublications for the same subscription and the\n'publish_generated_columns' parameter or column lists conflict.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPt9ArtYc1Vtb1DqzEEFwjcoDHMyRCnUqkHQeFJuMCDkvQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 30 Sep 2024 16:17:05 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Shubham. Here are my review comment for patch v34-0002.\n\n======\ndoc/src/sgml/ref/create_publication.sgml\n\n1.\n+ <para>\n+ This parameter can only be set <literal>true</literal> if\n<literal>copy_data</literal> is\n+ set to <literal>false</literal>.\n+ </para>\n\nHuh? 
AFAIK the patch implements COPY for generated columns, so why are\nyou saying this limitation?\n\n======\nsrc/backend/replication/logical/tablesync.c\n\n2. reminder\n\nPreviously (18/9) [1 #4] I wrote maybe that other copy_data=false\n\"missing\" case error can be improved to share the same error message\nthat you have in make_copy_attnamelist. And you replied [2] it would\nbe addressed in the next patchset, but that was at least 2 versions\nback and I don't see any change yet.\n\n======\n[1] 18/9 review\nhttps://www.postgresql.org/message-id/CAHut%2BPusbhvPrL1uN1TKY%3DFd4zu3h63eDebZvsF%3Duy%2BLBKTwgA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAHv8RjJ5_dmyCH58xQ0StXMdPt9gstemMMWytR79%2BLfOMAHdLw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Sep 2024 17:26:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi, here is an updated patch set v35:\n\nv35-0001 (same as v34-0001)\nv35-0002 (same as v34-0002)\n\nv35-0003 (updated)\n- fixed a linkend problem of v34-0003.\n- added info in \"Column Lists\" about generated cols and linked that to\nthe new docs section.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 30 Sep 2024 18:33:39 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" }, { "msg_contents": "Hi Vignesh,\n\nOn Mon, Sep 23, 2024 at 10:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\n> 3) In create publication column list/publish_generated_columns\n> documentation we should mention that if generated column is mentioned\n> in column list, generated columns mentioned in column list will be\n> replication irrespective of publish_generated_columns option.\n>\n\nv34-0003 introduced a new Chapter 29 (Logical Replication) section for\n\"Generated Column 
Replication\"\n- This version also added a link from CREATE PUBLICATION\n'publish_generated_column' parameter to this new section\n\nTo address your column list point, in v35-0003 I added more\ninformation about Generate Columns in the Chapter 29 section \"Column\nList\". The CREATE PUBLICATION column lists docs already linked to\nthat. See [1]\n\n======\n[1] v35-0003 - https://www.postgresql.org/message-id/CAHut%2BPvoQS9HjcGFZrTHrUQZ8vzyfAcSgeTgQEoO_-f8CrhW4A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:45:54 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pgoutput not capturing the generated columns" } ]
[ { "msg_contents": "add the missing leading `l` for log_statement_sample_rate\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Tue, 1 Aug 2023 16:09:32 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] [zh_CN.po] fix a typo in simplified Chinese translation file" }, { "msg_contents": ">\r\n> add the missing leading `l` for log_statement_sample_rate\r\n>\r\n> --\r\n> Regards\r\n> Junwang Zhao\r\n\r\n-msgstr \"如果值设置为0,那么打印出所有查询,以og_statement_sample_rate为准. 如果设置为-1,那么将把这个功能特性关闭.\"\r\n+msgstr \"如果值设置为0,那么打印出所有查询,以log_statement_sample_rate为准. 如果设置为-1,那么将把这个功能特性关闭.\"\r\n\r\nI think it's pretty obvious. Anyway, I created a commitfest entry.\r\nhttps://commitfest.postgresql.org/44/4470/\r\n", "msg_date": "Wed, 2 Aug 2023 11:45:06 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [zh_CN.po] fix a typo in simplified Chinese translation\n file" }, { "msg_contents": "On 01.08.23 10:09, Junwang Zhao wrote:\n> add the missing leading `l` for log_statement_sample_rate\n\nI have committed this fix to the translations repository.\n\n\n\n\n", "msg_date": "Wed, 2 Aug 2023 09:55:18 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [zh_CN.po] fix a typo in simplified Chinese translation\n file" }, { "msg_contents": "On Wed, 2 Aug 2023 at 15:45, jian he <jian.universality@gmail.com> wrote:\n> I think it's pretty obvious. Anyway, I created a commitfest entry.\n> https://commitfest.postgresql.org/44/4470/\n\nI saw that there were two CF entries for this patch. I marked one as\ncommitted and the other as withdrawn.\n\nFor the future, I believe the discussion for translations goes on in\nthe pgsql-translators mailing list [1]. 
Ordinarily, there'd be no CF\nentry for translation updates.\n\nDavid\n\n[1] https://www.postgresql.org/list/pgsql-translators/\n\n\n", "msg_date": "Tue, 8 Aug 2023 14:25:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] [zh_CN.po] fix a typo in simplified Chinese translation\n file" } ]
[ { "msg_contents": "For paths of type 'T_Path', reparameterize_path_by_child just does the\nflat-copy but does not adjust the expressions that have lateral\nreferences. This would have problems for partitionwise-join. As an\nexample, consider\n\nregression=# explain (costs off)\nselect * from prt1 t1 join lateral\n (select * from prt1 t2 TABLESAMPLE SYSTEM (t1.a)) s\non t1.a = s.a;\n QUERY PLAN\n-----------------------------------------\n Append\n -> Nested Loop\n -> Seq Scan on prt1_p1 t1_1\n -> Sample Scan on prt1_p1 t2_1\n Sampling: system (t1.a)\n Filter: (t1_1.a = a)\n -> Nested Loop\n -> Seq Scan on prt1_p2 t1_2\n -> Sample Scan on prt1_p2 t2_2\n Sampling: system (t1.a)\n Filter: (t1_2.a = a)\n -> Nested Loop\n -> Seq Scan on prt1_p3 t1_3\n -> Sample Scan on prt1_p3 t2_3\n Sampling: system (t1.a)\n Filter: (t1_3.a = a)\n(16 rows)\n\nNote that the lateral references in the sampling info are not\nreparameterized correctly. They are supposed to reference the child\nrelations, but as we can see from the plan they are still referencing\nthe top parent relation. Running this plan would lead to executor\ncrash.\n\nregression=# explain (analyze, costs off)\nselect * from prt1 t1 join lateral\n (select * from prt1 t2 TABLESAMPLE SYSTEM (t1.a)) s\non t1.a = s.a;\nserver closed the connection unexpectedly\n\nIn this case what we need to do is to adjust the TableSampleClause to\nrefer to the correct child relations. We can do that with the help of\nadjust_appendrel_attrs_multilevel(). One problem is that the\nTableSampleClause is stored in RangeTblEntry, and it does not seem like\na good practice to alter RangeTblEntry in this place. What if we end up\nchoosing the non-partitionwise-join path as the cheapest one? 
In that\ncase altering the RangeTblEntry here would cause a problem of the\nopposite side: the lateral references in TableSampleClause should refer\nto the top parent relation but they are referring to the child\nrelations.\n\nSo what I'm thinking is that maybe we can add a new type of path, named\nSampleScanPath, to have the TableSampleClause per path. Then we can\nsafely reparameterize the TableSampleClause as needed for each\nSampleScanPath. That's what the attached patch does.\n\nThere are some other plan types that do not have a separate path type\nbut may have lateral references in expressions stored in RangeTblEntry,\nsuch as FunctionScan, TableFuncScan and ValuesScan. But it's not clear\nto me if there are such problems with them too.\n\nThanks\nRichard", "msg_date": "Tue, 1 Aug 2023 16:44:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Oversight in reparameterize_path_by_child leading to executor crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> In this case what we need to do is to adjust the TableSampleClause to\n> refer to the correct child relations. We can do that with the help of\n> adjust_appendrel_attrs_multilevel(). One problem is that the\n> TableSampleClause is stored in RangeTblEntry, and it does not seem like\n> a good practice to alter RangeTblEntry in this place.\n\nUgh. That's why we didn't think to adjust it, obviously. You are right\nthat we can't just modify the RTE on the fly, since it's shared with\nother non-parameterized paths.\n\n> So what I'm thinking is that maybe we can add a new type of path, named\n> SampleScanPath, to have the TableSampleClause per path. Then we can\n> safely reparameterize the TableSampleClause as needed for each\n> SampleScanPath. That's what the attached patch does.\n\nAlternatively, could we postpone the reparameterization until\ncreateplan.c? 
Having to build reparameterized expressions we might\nnot use seems expensive, and it would also contribute to the memory\nbloat being complained of in nearby threads.\n\n> There are some other plan types that do not have a separate path type\n> but may have lateral references in expressions stored in RangeTblEntry,\n> such as FunctionScan, TableFuncScan and ValuesScan. But it's not clear\n> to me if there are such problems with them too.\n\nHmm, well, not for partition-wise join anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Aug 2023 09:20:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Aug 1, 2023 at 9:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > So what I'm thinking is that maybe we can add a new type of path, named\n> > SampleScanPath, to have the TableSampleClause per path. Then we can\n> > safely reparameterize the TableSampleClause as needed for each\n> > SampleScanPath. That's what the attached patch does.\n>\n> Alternatively, could we postpone the reparameterization until\n> createplan.c? Having to build reparameterized expressions we might\n> not use seems expensive, and it would also contribute to the memory\n> bloat being complained of in nearby threads.\n\n\nI did think about this option but figured out that it seems beyond the\nscope of just fixing SampleScan. But if we want to optimize the\nreparameterization mechanism besides fixing this crash, I think this\noption is much better. 
I drafted a patch as attached.\n\nThanks\nRichard", "msg_date": "Wed, 2 Aug 2023 19:02:38 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Aug 1, 2023 at 6:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Alternatively, could we postpone the reparameterization until\n> createplan.c? Having to build reparameterized expressions we might\n> not use seems expensive, and it would also contribute to the memory\n> bloat being complained of in nearby threads.\n>\n\nAs long as the translation happens only once, it should be fine. It's\nthe repeated translations which should be avoided.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 3 Aug 2023 10:08:39 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Thu, Aug 3, 2023 at 12:38 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Tue, Aug 1, 2023 at 6:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alternatively, could we postpone the reparameterization until\n> > createplan.c? Having to build reparameterized expressions we might\n> > not use seems expensive, and it would also contribute to the memory\n> > bloat being complained of in nearby threads.\n>\n> As long as the translation happens only once, it should be fine. It's\n> the repeated translations which should be avoided.\n\n\nSometimes it would help to avoid the translation at all if we postpone\nthe reparameterization until createplan.c, such as in the case that a\nnon-parameterized path wins at last. 
For instance, for the query below\n\nregression=# explain (costs off)\nselect * from prt1 t1 join prt1 t2 on t1.a = t2.a;\n QUERY PLAN\n--------------------------------------------\n Append\n -> Hash Join\n Hash Cond: (t1_1.a = t2_1.a)\n -> Seq Scan on prt1_p1 t1_1\n -> Hash\n -> Seq Scan on prt1_p1 t2_1\n -> Hash Join\n Hash Cond: (t1_2.a = t2_2.a)\n -> Seq Scan on prt1_p2 t1_2\n -> Hash\n -> Seq Scan on prt1_p2 t2_2\n -> Hash Join\n Hash Cond: (t1_3.a = t2_3.a)\n -> Seq Scan on prt1_p3 t1_3\n -> Hash\n -> Seq Scan on prt1_p3 t2_3\n(16 rows)\n\nOur current code would reparameterize each inner paths although at last\nwe choose the non-parameterized paths. If we do the reparameterization\nin createplan.c, we would be able to avoid all the translations.\n\nThanks\nRichard", "msg_date": "Thu, 3 Aug 2023 18:43:52 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Thu, Aug 3, 2023 at 4:14 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Thu, Aug 3, 2023 at 12:38 PM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n\n\n> Sometimes it would help to avoid the translation at all if we postpone\n> the reparameterization until createplan.c, such as in the case that a\n> non-parameterized path wins at last. 
For instance, for the query below\n>\n> regression=# explain (costs off)\n> select * from prt1 t1 join prt1 t2 on t1.a = t2.a;\n> QUERY PLAN\n> --------------------------------------------\n> Append\n> -> Hash Join\n> Hash Cond: (t1_1.a = t2_1.a)\n> -> Seq Scan on prt1_p1 t1_1\n> -> Hash\n> -> Seq Scan on prt1_p1 t2_1\n> -> Hash Join\n> Hash Cond: (t1_2.a = t2_2.a)\n> -> Seq Scan on prt1_p2 t1_2\n> -> Hash\n> -> Seq Scan on prt1_p2 t2_2\n> -> Hash Join\n> Hash Cond: (t1_3.a = t2_3.a)\n> -> Seq Scan on prt1_p3 t1_3\n> -> Hash\n> -> Seq Scan on prt1_p3 t2_3\n> (16 rows)\n>\n> Our current code would reparameterize each inner paths although at last\n> we choose the non-parameterized paths. If we do the reparameterization\n> in createplan.c, we would be able to avoid all the translations.\n>\n>\nI agree. The costs do not change because of reparameterization. The process\nof creating paths is to estimate costs of different plans. So I think it\nmakes sense to delay anything which doesn't contribute to costing till the\nfinal plan is decided.\n\nHowever, if reparameterization can *not* happen at the planning time, we\nhave chosen a plan which can not be realised meeting a dead end. So as long\nas we can ensure that the reparameterization is possible we can delay\nactual translations. I think it should be possible to decide whether\nreparameterization is possible based on the type of path alone. So seems\ndoable.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 3 Aug 2023 16:50:05 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Thu, Aug 3, 2023 at 7:20 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> However, if reparameterization can *not* happen at the planning time, we\n> have chosen a plan which can not be realised meeting a dead end. 
So as long\n> as we can ensure that the reparameterization is possible we can delay\n> actual translations. I think it should be possible to decide whether\n> reparameterization is possible based on the type of path alone. So seems\n> doable.\n>\n\nThat has been done in v2 patch. See the new added function\npath_is_reparameterizable_by_child().\n\nThanks\nRichard", "msg_date": "Fri, 4 Aug 2023 08:38:36 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Aug 4, 2023 at 6:08 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Thu, Aug 3, 2023 at 7:20 PM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n>\n>> However, if reparameterization can *not* happen at the planning time, we\n>> have chosen a plan which can not be realised meeting a dead end. So as long\n>> as we can ensure that the reparameterization is possible we can delay\n>> actual translations. I think it should be possible to decide whether\n>> reparameterization is possible based on the type of path alone. So seems\n>> doable.\n>>\n>\n> That has been done in v2 patch. See the new added function\n> path_is_reparameterizable_by_child().\n>\n\nThanks. The patch looks good overall. I like it because of its potential to\nreduce memory consumption in reparameterization. 
That's cake on top of\ncream :)\n\nSome comments here.\n\n+ {\n+ FLAT_COPY_PATH(new_path, path, Path);\n+\n+ if (path->pathtype == T_SampleScan)\n+ {\n+ Index scan_relid = path->parent->relid;\n+ RangeTblEntry *rte;\n+\n+ /* it should be a base rel with a tablesample clause... */\n+ Assert(scan_relid > 0);\n+ rte = planner_rt_fetch(scan_relid, root);\n+ Assert(rte->rtekind == RTE_RELATION);\n+ Assert(rte->tablesample != NULL);\n+\n+ ADJUST_CHILD_EXPRS(rte->tablesample, TableSampleClause *);\n+ }\n+ }\n\nThis change makes this function usable only after the final plan has been\nselected. If some code accidently uses it earlier, it would corrupt\nrte->tablesample without getting detected easily. I think we should mention\nthis in the function prologue and move it somewhere else. Other option is\nto a.\nleave rte->tablesample unchanged here with a comment as to why we aren't\nchanging it b. reparameterize tablesample expression in\ncreate_samplescan_plan() if the path is parameterized by the parent. We call\nreparameterize_path_by_child() in create_nestloop_plan() as this patch does\nright now. I like that we are delaying reparameterization. It saves a bunch\nof\nmemory. I haven't measured it but from my recent experiments I know it will\nbe\na lot.\n\n * If the inner path is parameterized, it is parameterized by the\n- * topmost parent of the outer rel, not the outer rel itself. Fix\n- * that.\n+ * topmost parent of the outer rel, not the outer rel itself. We will\n\nThere's something wrong with the original sentence (which was probably\nwritten\nby me back then :)). I think it should have read \"If the inner path is\nparameterized by the topmost parent of the outer rel instead of the outer\nrel\nitself, fix that.\". 
If you agree, the new comment should change it\naccordingly.\n\n@@ -2430,6 +2430,16 @@ create_nestloop_path(PlannerInfo *root,\n {\n NestPath *pathnode = makeNode(NestPath);\n Relids inner_req_outer = PATH_REQ_OUTER(inner_path);\n+ Relids outerrelids;\n+\n+ /*\n+ * Paths are parameterized by top-level parents, so run parameterization\n+ * tests on the parent relids.\n+ */\n+ if (outer_path->parent->top_parent_relids)\n+ outerrelids = outer_path->parent->top_parent_relids;\n+ else\n+ outerrelids = outer_path->parent->relids;\n\nThis looks like an existing bug. Did partitionwise join paths ended up with\nextra restrict clauses without this fix? I am not able to see why this\nchange\nis needed by rest of the changes in the patch?\n\nAnyway, let's rename outerrelids to something else e.g. outer_param_relids\nto reflect\nits true meaning.\n\n+bool\n+path_is_reparameterizable_by_child(Path *path)\n+{\n+ switch (nodeTag(path))\n+ {\n+ case T_Path:\n... snip ...\n+ case T_GatherPath:\n+ return true;\n+ default:\n+\n+ /* We don't know how to reparameterize this path. */\n+ return false;\n+ }\n+\n+ return false;\n+}\n\nHow do we make sure that any changes to reparameterize_path_by_child() are\nreflected here? One way is to call this function from\nreparameterize_path_by_child() the first thing and return if the path can\nnot\nbe reparameterized.\n\nI haven't gone through each path's translation to make sure that it's\npossible\nto do that during create_nestloop_plan(). I will do that in my next round of\nreview if time permits. 
But AFAIR, each of the translations are possible\nduring\ncreate_nestloop_plan() when it happens in this patch.\n\n-#define ADJUST_CHILD_ATTRS(node) \\\n+#define ADJUST_CHILD_EXPRS(node, fieldtype) \\\n ((node) = \\\n- (List *) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \\\n- child_rel, \\\n- child_rel->top_parent))\n+ (fieldtype) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \\\n+ child_rel, \\\n+ child_rel->top_parent))\n+\n+#define ADJUST_CHILD_ATTRS(node) ADJUST_CHILD_EXPRS(node, List *)\n\nIIRC, ATTRS here is taken from the function being called. I think we should\njust change the macro definition, not its name. If you would like to have a\nseparate macro for List nodes, create ADJUST_CHILD_ATTRS_IN_LIST or some\nsuch.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Aug 4, 2023 at 6:08 AM Richard Guo <guofenglinux@gmail.com> wrote:On Thu, Aug 3, 2023 at 7:20 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:However, if reparameterization can *not* happen at the planning time, we have chosen a plan which can not be realised meeting a dead end. So as long as we can ensure that the reparameterization is possible we can delay actual translations. I think it should be possible to decide whether reparameterization is possible based on the type of path alone. So seems doable.That has been done in v2 patch.  See the new added functionpath_is_reparameterizable_by_child().Thanks. The patch looks good overall. I like it because of its potential to reduce memory consumption in reparameterization. That's cake on top of cream :)Some comments here.+\t\t\t{+\t\t\t\tFLAT_COPY_PATH(new_path, path, Path);++\t\t\t\tif (path->pathtype == T_SampleScan)+\t\t\t\t{+\t\t\t\t\tIndex\t\tscan_relid = path->parent->relid;+\t\t\t\t\tRangeTblEntry *rte;++\t\t\t\t\t/* it should be a base rel with a tablesample clause... 
*/+\t\t\t\t\tAssert(scan_relid > 0);+\t\t\t\t\trte = planner_rt_fetch(scan_relid, root);+\t\t\t\t\tAssert(rte->rtekind == RTE_RELATION);+\t\t\t\t\tAssert(rte->tablesample != NULL);++\t\t\t\t\tADJUST_CHILD_EXPRS(rte->tablesample, TableSampleClause *);+\t\t\t\t}+\t\t\t}This change makes this function usable only after the final plan has beenselected. If some code accidently uses it earlier, it would corruptrte->tablesample without getting detected easily. I think we should mentionthis in the function prologue and move it somewhere else. Other option is to a.leave rte->tablesample unchanged here with a comment as to why we aren'tchanging it b. reparameterize tablesample expression increate_samplescan_plan() if the path is parameterized by the parent. We callreparameterize_path_by_child() in create_nestloop_plan() as this patch doesright now. I like that we are delaying reparameterization. It saves a bunch ofmemory. I haven't measured it but from my recent experiments I know it will bea lot. \t\t * If the inner path is parameterized, it is parameterized by the-\t\t * topmost parent of the outer rel, not the outer rel itself.  Fix-\t\t * that.+\t\t * topmost parent of the outer rel, not the outer rel itself.  We willThere's something wrong with the original sentence (which was probably writtenby me back then :)). I think it should have read \"If the inner path isparameterized by the topmost parent of the outer rel instead of the outer relitself, fix that.\". 
If you agree, the new comment should change it accordingly.@@ -2430,6 +2430,16 @@ create_nestloop_path(PlannerInfo *root, { \tNestPath   *pathnode = makeNode(NestPath); \tRelids\t\tinner_req_outer = PATH_REQ_OUTER(inner_path);+\tRelids\t\touterrelids;++\t/*+\t * Paths are parameterized by top-level parents, so run parameterization+\t * tests on the parent relids.+\t */+\tif (outer_path->parent->top_parent_relids)+\t\touterrelids = outer_path->parent->top_parent_relids;+\telse+\t\touterrelids = outer_path->parent->relids;This looks like an existing bug. Did partitionwise join paths ended up withextra restrict clauses without this fix? I am not able to see why this changeis needed by rest of the changes in the patch?Anyway, let's rename outerrelids to something else e.g.  outer_param_relids to reflectits true meaning.+bool+path_is_reparameterizable_by_child(Path *path)+{+\tswitch (nodeTag(path))+\t{+\t\tcase T_Path:... snip ...+\t\tcase T_GatherPath:+\t\t\treturn true;+\t\tdefault:++\t\t\t/* We don't know how to reparameterize this path. */+\t\t\treturn false;+\t}++\treturn false;+}How do we make sure that any changes to reparameterize_path_by_child() arereflected here? One way is to call this function fromreparameterize_path_by_child() the first thing and return if the path can notbe reparameterized.I haven't gone through each path's translation to make sure that it's possibleto do that during create_nestloop_plan(). I will do that in my next round ofreview if time permits. 
But AFAIR, each of the translations is possible during
create_nestloop_plan() when it happens in this patch.

-#define ADJUST_CHILD_ATTRS(node) \
+#define ADJUST_CHILD_EXPRS(node, fieldtype) \
 	((node) = \
-	 (List *) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \
-												child_rel, \
-												child_rel->top_parent))
+	 (fieldtype) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \
+												   child_rel, \
+												   child_rel->top_parent))
+
+#define ADJUST_CHILD_ATTRS(node) ADJUST_CHILD_EXPRS(node, List *)

IIRC, ATTRS here is taken from the function being called. I think we should
just change the macro definition, not its name. If you would like to have a
separate macro for List nodes, create ADJUST_CHILD_ATTRS_IN_LIST or some such.

-- 
Best Wishes,
Ashutosh Bapat
If some code accidentally uses it earlier, it would corrupt
> rte->tablesample without getting detected easily. I think we should mention
> this in the function prologue and move it somewhere else. Other options are
> to a. leave rte->tablesample unchanged here, with a comment as to why we
> aren't changing it, or b. reparameterize the tablesample expression in
> create_samplescan_plan() if the path is parameterized by the parent. We
> call reparameterize_path_by_child() in create_nestloop_plan() as this patch
> does right now. I like that we are delaying reparameterization. It saves a
> bunch of memory. I haven't measured it, but from my recent experiments I
> know it will be a lot.
>

I agree that we should mention in the function's comment that
reparameterize_path_by_child can only be used after the best path has
been selected, because the RTE might be modified by this function. I'm
not sure if it's a good idea to move the reparameterization of
tablesample to create_samplescan_plan(). That would leave the
reparameterization work scattered across different places. And in the
future we may find more cases where we need to modify RTEs or RELs for
reparameterization. It seems better to me that we keep all the work in
one place.

>  * If the inner path is parameterized, it is parameterized by the
> - * topmost parent of the outer rel, not the outer rel itself.  Fix
> - * that.
> + * topmost parent of the outer rel, not the outer rel itself.  We will
>
> There's something wrong with the original sentence (which was probably
> written by me back then :)). I think it should have read "If the inner
> path is parameterized by the topmost parent of the outer rel instead of
> the outer rel itself, fix that." If you agree, the new comment should
> change it accordingly.
>

Right.
Will do that.

> @@ -2430,6 +2430,16 @@ create_nestloop_path(PlannerInfo *root,
>  {
>  	NestPath   *pathnode = makeNode(NestPath);
>  	Relids		inner_req_outer = PATH_REQ_OUTER(inner_path);
> +	Relids		outerrelids;
> +
> +	/*
> +	 * Paths are parameterized by top-level parents, so run parameterization
> +	 * tests on the parent relids.
> +	 */
> +	if (outer_path->parent->top_parent_relids)
> +		outerrelids = outer_path->parent->top_parent_relids;
> +	else
> +		outerrelids = outer_path->parent->relids;
>
> This looks like an existing bug. Did partitionwise join paths end up with
> extra restrict clauses without this fix? I am not able to see why this
> change is needed by the rest of the changes in the patch.
>

This is not an existing bug. Our current code (without this patch)
would always call create_nestloop_path() after the reparameterization
work has been done for the inner path. So we can safely check against
the outer rel (not its topmost parent) in create_nestloop_path(). But now
with this patch, the reparameterization is postponed until createplan.c,
so we have to check against the topmost parent of the outer rel in
create_nestloop_path(), otherwise we'd have duplicate clauses in the
final plan.
For instance, without this change you'd get a plan like

regression=# explain (costs off)
select * from prt1 t1 left join lateral
    (select t1.a as t1a, t2.* from prt1 t2) s on t1.a = s.a;
                       QUERY PLAN
---------------------------------------------------------
 Append
   ->  Nested Loop Left Join
         Join Filter: (t1_1.a = t2_1.a)
         ->  Seq Scan on prt1_p1 t1_1
         ->  Index Scan using iprt1_p1_a on prt1_p1 t2_1
               Index Cond: (a = t1_1.a)
   ->  Nested Loop Left Join
         Join Filter: (t1_2.a = t2_2.a)
         ->  Seq Scan on prt1_p2 t1_2
         ->  Index Scan using iprt1_p2_a on prt1_p2 t2_2
               Index Cond: (a = t1_2.a)
   ->  Nested Loop Left Join
         Join Filter: (t1_3.a = t2_3.a)
         ->  Seq Scan on prt1_p3 t1_3
         ->  Index Scan using iprt1_p3_a on prt1_p3 t2_3
               Index Cond: (a = t1_3.a)
(16 rows)

Note that the clauses in Join Filter are duplicated.

> Anyway, let's rename outerrelids to something else, e.g.
> outer_param_relids, to reflect its true meaning.
>

Hmm, I'm not sure this is a good idea. Here the 'outerrelids' is just
the relids of the outer rel or the outer rel's topmost parent. I think it's
better to keep it as-is, and this is consistent with 'outerrelids' in
try_nestloop_path().

> +bool
> +path_is_reparameterizable_by_child(Path *path)
> +{
> +	switch (nodeTag(path))
> +	{
> +		case T_Path:
> ... snip ...
> +		case T_GatherPath:
> +			return true;
> +		default:
> +
> +			/* We don't know how to reparameterize this path. */
> +			return false;
> +	}
> +
> +	return false;
> +}
>
> How do we make sure that any changes to reparameterize_path_by_child() are
> reflected here? One way is to call this function from
> reparameterize_path_by_child() as the first thing, and return if the path
> cannot be reparameterized.
>

Good question. But I don't think calling this function at the beginning
of reparameterize_path_by_child() can solve this problem.
Even if we do
that, if we find that the path cannot be reparameterized in
reparameterize_path_by_child(), it would be too late for us to take any
measures. So we need to make sure that such a situation cannot happen. I
think we can emphasize this point in the comments of the two functions,
and meanwhile add an Assert in reparameterize_path_by_child() that the
path must be reparameterizable.

> -#define ADJUST_CHILD_ATTRS(node) \
> +#define ADJUST_CHILD_EXPRS(node, fieldtype) \
>  	((node) = \
> -	 (List *) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \
> -												child_rel, \
> -												child_rel->top_parent))
> +	 (fieldtype) adjust_appendrel_attrs_multilevel(root, (Node *) (node), \
> +												   child_rel, \
> +												   child_rel->top_parent))
> +
> +#define ADJUST_CHILD_ATTRS(node) ADJUST_CHILD_EXPRS(node, List *)
>
> IIRC, ATTRS here is taken from the function being called. I think we should
> just change the macro definition, not its name. If you would like to have a
> separate macro for List nodes, create ADJUST_CHILD_ATTRS_IN_LIST or some
> such.
>

Agreed. I introduced the new macro ADJUST_CHILD_EXPRS just to keep
macro ADJUST_CHILD_ATTRS as-is, so callers of ADJUST_CHILD_ATTRS could
remain unchanged. It seems better to just change ADJUST_CHILD_ATTRS as
well as its callers. Will do that.

Attaching v3 patch to address all the reviews above.

Thanks
Richard
I'm\n> not sure if it's a good idea to move the reparameterization of\n> tablesample to create_samplescan_plan(). That would make the\n> reparameterization work separated in different places. And in the\n> future we may find more cases where we need modify RTEs or RELs for\n> reparameterization. It seems better to me that we keep all the work in\n> one place.\n\nLet's see what a committer thinks. Let's leave it like that for now.\n\n>\n>\n> This is not an existing bug. Our current code (without this patch)\n> would always call create_nestloop_path() after the reparameterization\n> work has been done for the inner path. So we can safely check against\n> outer rel (not its topmost parent) in create_nestloop_path(). But now\n> with this patch, the reparameterization is postponed until createplan.c,\n> so we have to check against the topmost parent of outer rel in\n> create_nestloop_path(), otherwise we'd have duplicate clauses in the\n> final plan.\n\nHmm. I am worried about the impact this will have when the nestloop\npath is created for two child relations. Your code will still pick the\ntop_parent_relids which seems odd. Have you tested that case?\n\n> For instance, without this change you'd get a plan like\n>\n> regression=# explain (costs off)\n> select * from prt1 t1 left join lateral\n> (select t1.a as t1a, t2.* from prt1 t2) s on t1.a = s.a;\n> QUERY PLAN\n... snip ...\n>\n> Note that the clauses in Join Filter are duplicate.\n\nThanks for the example.\n\n>\n> Good question. But I don't think calling this function at the beginning\n> of reparameterize_path_by_child() can solve this problem. Even if we do\n> that, if we find that the path cannot be reparameterized in\n> reparameterize_path_by_child(), it would be too late for us to take any\n> measures. So we need to make sure that such situation cannot happen. 
> I think we can emphasize this point in the comments of the two functions,
> and meanwhile add an Assert in reparameterize_path_by_child() that the
> path must be reparameterizable.

Works for me.

> Attaching v3 patch to address all the reviews above.

The patch looks good to me.

Please add this to the next commitfest.

-- 
Best Wishes,
Ashutosh Bapat
That's why we check against
the outer rel itself, not its top parent, there.

With this patch, the reparameterize_path_by_child work is postponed
until createplan time, so in create_nestloop_path() the inner path is
still parameterized by the top parent. So we have to check against the top
parent of the outer rel.

I think the query shown upthread is sufficient to verify this theory.

regression=# explain (costs off)
select * from prt1 t1 left join lateral
    (select t1.a as t1a, t2.* from prt1 t2) s on t1.a = s.a;
                       QUERY PLAN
---------------------------------------------------------
 Append
   ->  Nested Loop Left Join
         ->  Seq Scan on prt1_p1 t1_1
         ->  Index Scan using iprt1_p1_a on prt1_p1 t2_1
               Index Cond: (a = t1_1.a)
   ->  Nested Loop Left Join
         ->  Seq Scan on prt1_p2 t1_2
         ->  Index Scan using iprt1_p2_a on prt1_p2 t2_2
               Index Cond: (a = t1_2.a)
   ->  Nested Loop Left Join
         ->  Seq Scan on prt1_p3 t1_3
         ->  Index Scan using iprt1_p3_a on prt1_p3 t2_3
               Index Cond: (a = t1_3.a)
(13 rows)

> Please add this to the next commitfest.

Done here https://commitfest.postgresql.org/44/4477/

Thanks
Richard
Well, the way it's working is that for child rels all parameterized
paths arriving at try_nestloop_path (or try_partial_nestloop_path) are
parameterized by top parents at first. Then our current code (without
this patch) applies reparameterize_path_by_child to the inner path
before calling create_nestloop_path(). So in create_nestloop_path() the
inner path is parameterized by the child rel.
> createplan time, so in create_nestloop_path() the inner path is
> still parameterized by the top parent. So we have to check against the top
> parent of the outer rel.
>

Your changes in create_nestloop_path() only affect the check for
parameterized paths, not the unparameterized paths. So I think we are
good there. My worry was misplaced.

-- 
Best Wishes,
Ashutosh Bapat
> I think it's going in generally
> the right direction, but there is a huge oversight:
> path_is_reparameterizable_by_child does not come close to modeling
> the success/failure conditions of reparameterize_path_by_child.
> In all the cases where reparameterize_path_by_child can recurse,
> path_is_reparameterizable_by_child has to do so also, to check
> whether the child path is reparameterizable. (I'm somewhat
> disturbed that apparently we have no test cases that caught that
> oversight; can we add one cheaply?)

Thanks for pointing this out. It's my oversight. We have to check the
child path(s) recursively to tell if a path is reparameterizable or not.
I've fixed this in the v4 patch, along with a test case. In the test case
we have a MemoizePath whose subpath is a TidPath, which is not
reparameterizable. This test case would trigger the Assert in the v3 patch.

> BTW, I also note from the cfbot that 9e9931d2b broke this patch by
> adding more ADJUST_CHILD_ATTRS calls that need to be modified.
> I wonder if we could get away with having that macro cast to "void *"
> to avoid needing to change all its call sites. I'm not sure whether
> pickier compilers might warn about doing it that way.

Agreed, and I have done that in the v4 patch.

Thanks
Richard
>> I think it's going in generally
>> the right direction, but there is a huge oversight:
>> path_is_reparameterizable_by_child does not come close to modeling
>> the success/failure conditions of reparameterize_path_by_child.
>> In all the cases where reparameterize_path_by_child can recurse,
>> path_is_reparameterizable_by_child has to do so also, to check
>> whether the child path is reparameterizable. (I'm somewhat
>> disturbed that apparently we have no test cases that caught that
>> oversight; can we add one cheaply?)
>
> Thanks for pointing this out. It's my oversight. We have to check the
> child path(s) recursively to tell if a path is reparameterizable or not.
> I've fixed this in the v4 patch, along with a test case. In the test case
> we have a MemoizePath whose subpath is a TidPath, which is not
> reparameterizable. This test case would trigger the Assert in the v3 patch.

This goes back to my question about how we keep
path_is_reparameterizable_by_child() and
reparameterize_path_by_child() in sync with each other. This makes it
even harder to do so. One idea I have is to use the same function
(reparameterize_path_by_child()) in two modes: 1. actual
reparameterization, and 2. assessing whether reparameterization is
possible. Essentially, combining reparameterize_path_by_child() and
path_is_reparameterizable_by_child(). The name of such a function can
be chosen appropriately. The mode will be controlled by a fourth
argument to the function. When assessing a path, no translations happen
and no extra memory is allocated.

-- 
Best Wishes,
Ashutosh Bapat

I spent some time reviewing the v4 patch.
I noted that
path_is_reparameterizable_by_child still wasn't modeling the pass/fail
behavior of reparameterize_path_by_child very well, because that
function checks this at every recursion level:

    /*
     * If the path is not parameterized by the parent of the given relation,
     * it doesn't need reparameterization.
     */
    if (!path->param_info ||
        !bms_overlap(PATH_REQ_OUTER(path), child_rel->top_parent_relids))
        return path;

So we might have a sub-path (eg a join input rel) that is not one of
the supported kinds, and yet we can succeed because parameterization
appears only in other sub-paths. The patch as written will not crash
in such a case, but it might refuse to use a path that we previously
would have allowed. So I think we need to put the same test into
path_is_reparameterizable_by_child, which requires adding child_rel
to its param list but is otherwise simple enough.

I also realized that this test is equivalent to
PATH_PARAM_BY_PARENT(), which makes it really unnecessary for
createplan.c to test PATH_PARAM_BY_PARENT, so we don't need to
expose those macros globally after all.

(On the same logic, we could skip PATH_PARAM_BY_PARENT at the
call sites of path_is_reparameterizable_by_child. I didn't
do that in the attached, mainly because it seems to make it
harder to understand/explain what is being tested.)

Another change I made is to put the path_is_reparameterizable_by_child
tests before the initial_cost_nestloop/add_path_precheck steps, on
the grounds that they're now cheap enough that we might as well do
them first. The existing ordering of these steps was sensible when
we were doing the expensive reparameterization, but it seems a bit
unnatural IMO.

Lastly, although I'd asked for a test case demonstrating detection of
an unparameterizable sub-path, I ended up not using that, because
it seemed pretty fragile.
If somebody decides that
reparameterize_path_by_child ought to cover TidPaths, the test won't
prove anything any more.

So that led me to the attached v5, which seemed committable to me, so I
set about back-patching it ... and it fell over immediately in v15, as
shown in the attached regression diffs from v15. It looks to me like
we are now failing to recognize that reparameterized quals are
redundant with not-reparameterized ones, so this patch is somehow
dependent on restructuring that happened during the v16 cycle.
I don't have time to dig deeper than that, and I'm not sure that that
is an area we'd want to mess with in a back-patched bug fix.

What I'm thinking we ought to do instead for the back branches
is just refuse to generate a reparameterized path for tablesample
scans. A minimal fix like that could be as little as

         case T_Path:
+            if (path->pathtype == T_SampleScan)
+                return NULL;
             FLAT_COPY_PATH(new_path, path, Path);
             break;

This rejects more than it absolutely has to, because the
parameterization (that we know exists) might be in the path's
regular quals or tlist rather than in the tablesample.
So we could add something to see if the tablesample is
parameter-free, but I'm quite unsure that it's worth the
trouble. There must be just about nobody using such cases,
or we'd have heard of this bug long ago.
Another issue\nis that I do not think we can change reparameterize_path_by_child's\nAPI contract in the back branches, because we advertise it as\nsomething that FDWs and custom scan providers can use.)\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 21 Aug 2023 16:48:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Aug 22, 2023 at 4:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I spent some time reviewing the v4 patch. I noted that\n> path_is_reparameterizable_by_child still wasn't modeling the pass/fail\n> behavior of reparameterize_path_by_child very well, because that\n> function checks this at every recursion level:\n>\n> /*\n> * If the path is not parameterized by the parent of the given\n> relation,\n> * it doesn't need reparameterization.\n> */\n> if (!path->param_info ||\n> !bms_overlap(PATH_REQ_OUTER(path), child_rel->top_parent_relids))\n> return path;\n>\n> So we might have a sub-path (eg a join input rel) that is not one of\n> the supported kinds, and yet we can succeed because parameterization\n> appears only in other sub-paths. The patch as written will not crash\n> in such a case, but it might refuse to use a path that we previously\n> would have allowed. So I think we need to put the same test into\n> path_is_reparameterizable_by_child, which requires adding child_rel\n> to its param list but is otherwise simple enough.\n\n\nsigh.. I overlooked this check. You're right. We have to have this\nsame test in path_is_reparameterizable_by_child.\n\n\n> So that led me to the attached v5, which seemed committable to me so I\n> set about back-patching it ... and it fell over immediately in v15, as\n> shown in the attached regression diffs from v15. 
It looks to me like\n> we are now failing to recognize that reparameterized quals are\n> redundant with not-reparameterized ones, so this patch is somehow\n> dependent on restructuring that happened during the v16 cycle.\n> I don't have time to dig deeper than that, and I'm not sure that that\n> is an area we'd want to mess with in a back-patched bug fix.\n\n\nI looked at this and I think the culprit is that in create_nestloop_path\nwe are failing to recognize those restrict_clauses that have been moved\ninto the inner path. In v16, we have the new serial number stuff to\nhelp detect such clauses and that works very well. In v15, we are still\nusing join_clause_is_movable_into() for that purpose. It does not work\nwell with the patch because now in create_nestloop_path the inner path\nis still parameterized by top-level parents, while the restrict_clauses\nalready have been adjusted to refer to the child rels. So the check\nperformed by join_clause_is_movable_into() would always fail.\n\nI'm wondering if we can instead adjust the 'inner_req_outer' in\ncreate_nestloop_path before we perform the check to work around this\nissue for the back branches, something like\n\n@@ -2418,6 +2419,15 @@ create_nestloop_path(PlannerInfo *root,\n NestPath *pathnode = makeNode(NestPath);\n Relids inner_req_outer = PATH_REQ_OUTER(inner_path);\n\n+ /*\n+ * Adjust the parameterization information, which refers to the topmost\n+ * parent.\n+ */\n+ inner_req_outer =\n+ adjust_child_relids_multilevel(root, inner_req_outer,\n+ outer_path->parent->relids,\n+\n outer_path->parent->top_parent_relids);\n+\n\nAnd with it we do not need the changes as in the patch for\ncreate_nestloop_path any more.\n\nThanks\nRichard\n\nOn Tue, Aug 22, 2023 at 4:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I spent some time reviewing the v4 patch.  
I noted that\npath_is_reparameterizable_by_child still wasn't modeling the pass/fail\nbehavior of reparameterize_path_by_child very well, because that\nfunction checks this at every recursion level:\n\n    /*\n     * If the path is not parameterized by the parent of the given relation,\n     * it doesn't need reparameterization.\n     */\n    if (!path->param_info ||\n        !bms_overlap(PATH_REQ_OUTER(path), child_rel->top_parent_relids))\n        return path;\n\nSo we might have a sub-path (eg a join input rel) that is not one of\nthe supported kinds, and yet we can succeed because parameterization\nappears only in other sub-paths.  The patch as written will not crash\nin such a case, but it might refuse to use a path that we previously\nwould have allowed.  So I think we need to put the same test into\npath_is_reparameterizable_by_child, which requires adding child_rel\nto its param list but is otherwise simple enough.sigh.. I overlooked this check.  You're right.  We have to have thissame test in path_is_reparameterizable_by_child. \nSo that led me to the attached v5, which seemed committable to me so I\nset about back-patching it ... and it fell over immediately in v15, as\nshown in the attached regression diffs from v15.  It looks to me like\nwe are now failing to recognize that reparameterized quals are\nredundant with not-reparameterized ones, so this patch is somehow\ndependent on restructuring that happened during the v16 cycle.\nI don't have time to dig deeper than that, and I'm not sure that that\nis an area we'd want to mess with in a back-patched bug fix.I looked at this and I think the culprit is that in create_nestloop_pathwe are failing to recognize those restrict_clauses that have been movedinto the inner path.  In v16, we have the new serial number stuff tohelp detect such clauses and that works very well.  In v15, we are stillusing join_clause_is_movable_into() for that purpose.  
It does not workwell with the patch because now in create_nestloop_path the inner pathis still parameterized by top-level parents, while the restrict_clausesalready have been adjusted to refer to the child rels.  So the checkperformed by join_clause_is_movable_into() would always fail.I'm wondering if we can instead adjust the 'inner_req_outer' increate_nestloop_path before we perform the check to work around thisissue for the back branches, something like@@ -2418,6 +2419,15 @@ create_nestloop_path(PlannerInfo *root,    NestPath   *pathnode = makeNode(NestPath);    Relids      inner_req_outer = PATH_REQ_OUTER(inner_path);+   /*+    * Adjust the parameterization information, which refers to the topmost+    * parent.+    */+   inner_req_outer =+       adjust_child_relids_multilevel(root, inner_req_outer,+                                      outer_path->parent->relids,+                                      outer_path->parent->top_parent_relids);+And with it we do not need the changes as in the patch forcreate_nestloop_path any more.ThanksRichard", "msg_date": "Tue, 22 Aug 2023 11:12:07 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Tue, Aug 22, 2023 at 4:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So that led me to the attached v5, which seemed committable to me so I\n>> set about back-patching it ... and it fell over immediately in v15, as\n>> shown in the attached regression diffs from v15. 
It looks to me like\n>> we are now failing to recognize that reparameterized quals are\n>> redundant with not-reparameterized ones, so this patch is somehow\n>> dependent on restructuring that happened during the v16 cycle.\n>> I don't have time to dig deeper than that, and I'm not sure that that\n>> is an area we'd want to mess with in a back-patched bug fix.\n\n> I looked at this and I think the culprit is that in create_nestloop_path\n> we are failing to recognize those restrict_clauses that have been moved\n> into the inner path. In v16, we have the new serial number stuff to\n> help detect such clauses and that works very well. In v15, we are still\n> using join_clause_is_movable_into() for that purpose. It does not work\n> well with the patch because now in create_nestloop_path the inner path\n> is still parameterized by top-level parents, while the restrict_clauses\n> already have been adjusted to refer to the child rels. So the check\n> performed by join_clause_is_movable_into() would always fail.\n\nAh.\n\n> I'm wondering if we can instead adjust the 'inner_req_outer' in\n> create_nestloop_path before we perform the check to work around this\n> issue for the back branches, something like\n> + /*\n> + * Adjust the parameterization information, which refers to the topmost\n> + * parent.\n> + */\n> + inner_req_outer =\n> + adjust_child_relids_multilevel(root, inner_req_outer,\n> + outer_path->parent->relids,\n> + outer_path->parent->top_parent_relids);\n\nMmm ... at the very least you'd need to not do that when top_parent_relids\nis unset. But I think the risk/reward ratio for messing with this in the\nback branches is unattractive in any case: to fix a corner case that\napparently nobody uses in the field, we risk breaking any number of\nmainstream parameterized-path cases. 
I'm content to commit the v5 patch\n(or a successor) into HEAD, but at this point I'm not sure I even want\nto risk it in v16, let alone perform delicate surgery to get it to work\nin older branches. I think we ought to go with the \"tablesample scans\ncan't be reparameterized\" approach in v16 and before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Aug 2023 10:39:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "I wrote:\n> ... I think the risk/reward ratio for messing with this in the\n> back branches is unattractive in any case: to fix a corner case that\n> apparently nobody uses in the field, we risk breaking any number of\n> mainstream parameterized-path cases. I'm content to commit the v5 patch\n> (or a successor) into HEAD, but at this point I'm not sure I even want\n> to risk it in v16, let alone perform delicate surgery to get it to work\n> in older branches. I think we ought to go with the \"tablesample scans\n> can't be reparameterized\" approach in v16 and before.\n\nConcretely, about like this for v16, and similarly in older branches.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 22 Aug 2023 11:43:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Hi Tom,\n\nOn Tue, Aug 22, 2023 at 2:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> (BTW, I did look at Ashutosh's idea of merging the\n> reparameterize_path_by_child and path_is_reparameterizable_by_child\n> functions, but I didn't think that would be an improvement,\n> because we'd have to clutter reparameterize_path_by_child with\n> a lot of should-I-skip-this-step tests. Some of that could be\n> hidden in the macros, but a lot would not be. 
Another issue\n> is that I do not think we can change reparameterize_path_by_child's\n> API contract in the back branches, because we advertise it as\n> something that FDWs and custom scan providers can use.)\n\nHere are two patches:\n0001 is the same as your v5 patch.\n0002 implements what I had in mind - combining the assessment and\nreparameterization in a single function.\n\nI don't know whether you had something similar to 0002 when you\nconsidered that approach, hence I implemented it quickly. The names of\nthe functions and arguments can be changed. But I think this version\nwill help us keep assessment and actual reparameterization in sync,\nvery tightly. Most of the conditionals have been pushed into macros,\nexcept for two which need to be outside the macros and actually look\nbetter to me. Let me know what you think of this.\n\nFWIW I ran make check and all the tests pass.\n\nI agree that we cannot consider this for back branches, but we are\nthinking of disabling reparameterization for tablesample scans there\nanyway.\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 22 Aug 2023 21:37:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Aug 22, 2023 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > I'm wondering if we can instead adjust the 'inner_req_outer' in\n> > create_nestloop_path before we perform the check to work around this\n> > issue for the back branches, something like\n> > + /*\n> > + * Adjust the parameterization information, which refers to the\n> topmost\n> > + * parent.\n> > + */\n> > + inner_req_outer =\n> > + adjust_child_relids_multilevel(root, inner_req_outer,\n> > + outer_path->parent->relids,\n> > +\n> outer_path->parent->top_parent_relids);\n>\n> Mmm ... 
at the very least you'd need to not do that when top_parent_relids\n> is unset.\n\n\nHmm, adjust_child_relids_multilevel would just return the given relids\nunchanged when top_parent_relids is unset, so I think it should be safe\nto call it even for non-other rel.\n\n\n> ... But I think the risk/reward ratio for messing with this in the\n> back branches is unattractive in any case: to fix a corner case that\n> apparently nobody uses in the field, we risk breaking any number of\n> mainstream parameterized-path cases. I'm content to commit the v5 patch\n> (or a successor) into HEAD, but at this point I'm not sure I even want\n> to risk it in v16, let alone perform delicate surgery to get it to work\n> in older branches. I think we ought to go with the \"tablesample scans\n> can't be reparameterized\" approach in v16 and before.\n\n\nIf we go with the \"tablesample scans can't be reparameterized\" approach\nin the back branches, I'm a little concerned about what happens if we\nfind more cases in the future where we need to modify RTEs for\nreparameterization.\nSo I spent some time seeking and have managed to find one: there might\nbe lateral references in a scan path's restriction clauses, and\ncurrently reparameterize_path_by_child fails to adjust them.\n\nregression=# explain (verbose, costs off)\nselect count(*) from prt1 t1 left join lateral\n (select t1.b as t1b, t2.* from prt2 t2) s\n on t1.a = s.b where s.t1b = s.a;\n QUERY PLAN\n----------------------------------------------------------------------\n Aggregate\n Output: count(*)\n -> Append\n -> Nested Loop\n -> Seq Scan on public.prt1_p1 t1_1\n Output: t1_1.a, t1_1.b\n -> Index Scan using iprt2_p1_b on public.prt2_p1 t2_1\n Output: t2_1.b\n Index Cond: (t2_1.b = t1_1.a)\n Filter: (t1.b = t2_1.a)\n -> Nested Loop\n -> Seq Scan on public.prt1_p2 t1_2\n Output: t1_2.a, t1_2.b\n -> Index Scan using iprt2_p2_b on public.prt2_p2 t2_2\n Output: t2_2.b\n Index Cond: (t2_2.b = t1_2.a)\n Filter: (t1.b = t2_2.a)\n -> Nested Loop\n 
-> Seq Scan on public.prt1_p3 t1_3\n Output: t1_3.a, t1_3.b\n -> Index Scan using iprt2_p3_b on public.prt2_p3 t2_3\n Output: t2_3.b\n Index Cond: (t2_3.b = t1_3.a)\n Filter: (t1.b = t2_3.a)\n(24 rows)\n\nThe clauses in 'Filter' are not adjusted correctly. Var 't1.b' still\nrefers to the top parent.\n\nFor this query it would just give wrong results.\n\nregression=# set enable_partitionwise_join to off;\nSET\nregression=# select count(*) from prt1 t1 left join lateral\n (select t1.b as t1b, t2.* from prt2 t2) s\n on t1.a = s.b where s.t1b = s.a;\n count\n-------\n 100\n(1 row)\n\nregression=# set enable_partitionwise_join to on;\nSET\nregression=# select count(*) from prt1 t1 left join lateral\n (select t1.b as t1b, t2.* from prt2 t2) s\n on t1.a = s.b where s.t1b = s.a;\n count\n-------\n 5\n(1 row)\n\n\nFor another query it would error out during planning.\n\nregression=# explain (verbose, costs off)\nselect count(*) from prt1 t1 left join lateral\n (select t1.b as t1b, t2.* from prt2 t2) s\n on t1.a = s.b where s.t1b = s.b;\nERROR: variable not found in subplan target list\n\nTo fix it we may need to modify RelOptInfos for Path, BitmapHeapPath,\nForeignPath and CustomPath, and modify IndexOptInfos for IndexPath. 
It\nseems that that is not easily done without postponing reparameterization\nof paths until createplan.c.\n\nAttached is a patch which is v5 + fix for this new issue.\n\nThanks\nRichard", "msg_date": "Wed, 23 Aug 2023 13:37:46 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Wed, Aug 23, 2023 at 11:07 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Tue, Aug 22, 2023 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Richard Guo <guofenglinux@gmail.com> writes:\n>> > I'm wondering if we can instead adjust the 'inner_req_outer' in\n>> > create_nestloop_path before we perform the check to work around this\n>> > issue for the back branches, something like\n>> > + /*\n>> > + * Adjust the parameterization information, which refers to the topmost\n>> > + * parent.\n>> > + */\n>> > + inner_req_outer =\n>> > + adjust_child_relids_multilevel(root, inner_req_outer,\n>> > + outer_path->parent->relids,\n>> > + outer_path->parent->top_parent_relids);\n>>\n>> Mmm ... at the very least you'd need to not do that when top_parent_relids\n>> is unset.\n>\n>\n> Hmm, adjust_child_relids_multilevel would just return the given relids\n> unchanged when top_parent_relids is unset, so I think it should be safe\n> to call it even for non-other rel.\n>\n>>\n>> ... But I think the risk/reward ratio for messing with this in the\n>> back branches is unattractive in any case: to fix a corner case that\n>> apparently nobody uses in the field, we risk breaking any number of\n>> mainstream parameterized-path cases. I'm content to commit the v5 patch\n>> (or a successor) into HEAD, but at this point I'm not sure I even want\n>> to risk it in v16, let alone perform delicate surgery to get it to work\n>> in older branches. 
I think we ought to go with the \"tablesample scans\n>> can't be reparameterized\" approach in v16 and before.\n>\n>\n> If we go with the \"tablesample scans can't be reparameterized\" approach\n> in the back branches, I'm a little concerned that what if we find more\n> cases in the futrue where we need modify RTEs for reparameterization.\n> So I spent some time seeking and have managed to find one: there might\n> be lateral references in a scan path's restriction clauses, and\n> currently reparameterize_path_by_child fails to adjust them.\n>\n> regression=# explain (verbose, costs off)\n> select count(*) from prt1 t1 left join lateral\n> (select t1.b as t1b, t2.* from prt2 t2) s\n> on t1.a = s.b where s.t1b = s.a;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Aggregate\n> Output: count(*)\n> -> Append\n> -> Nested Loop\n> -> Seq Scan on public.prt1_p1 t1_1\n> Output: t1_1.a, t1_1.b\n> -> Index Scan using iprt2_p1_b on public.prt2_p1 t2_1\n> Output: t2_1.b\n> Index Cond: (t2_1.b = t1_1.a)\n> Filter: (t1.b = t2_1.a)\n> -> Nested Loop\n> -> Seq Scan on public.prt1_p2 t1_2\n> Output: t1_2.a, t1_2.b\n> -> Index Scan using iprt2_p2_b on public.prt2_p2 t2_2\n> Output: t2_2.b\n> Index Cond: (t2_2.b = t1_2.a)\n> Filter: (t1.b = t2_2.a)\n> -> Nested Loop\n> -> Seq Scan on public.prt1_p3 t1_3\n> Output: t1_3.a, t1_3.b\n> -> Index Scan using iprt2_p3_b on public.prt2_p3 t2_3\n> Output: t2_3.b\n> Index Cond: (t2_3.b = t1_3.a)\n> Filter: (t1.b = t2_3.a)\n> (24 rows)\n>\n> The clauses in 'Filter' are not adjusted correctly. 
Var 't1.b' still\n> refers to the top parent.\n>\n> For this query it would just give wrong results.\n>\n> regression=# set enable_partitionwise_join to off;\n> SET\n> regression=# select count(*) from prt1 t1 left join lateral\n> (select t1.b as t1b, t2.* from prt2 t2) s\n> on t1.a = s.b where s.t1b = s.a;\n> count\n> -------\n> 100\n> (1 row)\n>\n> regression=# set enable_partitionwise_join to on;\n> SET\n> regression=# select count(*) from prt1 t1 left join lateral\n> (select t1.b as t1b, t2.* from prt2 t2) s\n> on t1.a = s.b where s.t1b = s.a;\n> count\n> -------\n> 5\n> (1 row)\n>\n>\n> For another query it would error out during planning.\n>\n> regression=# explain (verbose, costs off)\n> select count(*) from prt1 t1 left join lateral\n> (select t1.b as t1b, t2.* from prt2 t2) s\n> on t1.a = s.b where s.t1b = s.b;\n> ERROR: variable not found in subplan target list\n>\n> To fix it we may need to modify RelOptInfos for Path, BitmapHeapPath,\n> ForeignPath and CustomPath, and modify IndexOptInfos for IndexPath. It\n> seems that that is not easily done without postponing reparameterization\n> of paths until createplan.c.\n\nMaybe I am missing something here but why aren't we copying these\nrestrictinfos in the path somewhere? 
I have a similar question for\ntablesample clause as well.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 23 Aug 2023 21:45:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> If we go with the \"tablesample scans can't be reparameterized\" approach\n> in the back branches, I'm a little concerned that what if we find more\n> cases in the futrue where we need modify RTEs for reparameterization.\n> So I spent some time seeking and have managed to find one: there might\n> be lateral references in a scan path's restriction clauses, and\n> currently reparameterize_path_by_child fails to adjust them.\n\nHmm, this seems completely wrong to me. By definition, such clauses\nought to be join clauses not restriction clauses, so how are we getting\ninto this state? IOW, I agree this is clearly buggy but I think the\nbug is someplace else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Aug 2023 13:44:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Thu, Aug 24, 2023 at 1:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > If we go with the \"tablesample scans can't be reparameterized\" approach\n> > in the back branches, I'm a little concerned that what if we find more\n> > cases in the futrue where we need modify RTEs for reparameterization.\n> > So I spent some time seeking and have managed to find one: there might\n> > be lateral references in a scan path's restriction clauses, and\n> > currently reparameterize_path_by_child fails to adjust them.\n>\n> Hmm, this seems completely wrong to me. 
By definition, such clauses\n> ought to be join clauses not restriction clauses, so how are we getting\n> into this state? IOW, I agree this is clearly buggy but I think the\n> bug is someplace else.\n\n\nIf the clause contains PHVs that syntactically belong to a rel and\nmeanwhile have lateral references to other rels, then it may become a\nrestriction clause with lateral references. Take the query shown\nupthread as an example,\n\nselect count(*) from prt1 t1 left join lateral\n (select t1.b as t1b, t2.* from prt2 t2) s\n on t1.a = s.b where s.t1b = s.a;\n\nThe clause 's.t1b = s.a' would become 'PHV(t1.b) = t2.a' after we have\npulled up the subquery. The PHV in it syntactically belongs to 't2' and\nlaterally refers to 't1'. So this clause is actually a restriction\nclause for rel 't2', and will be put into the baserestrictinfo of t2\nrel. But it also has lateral reference to rel 't1', which we need to\nadjust in reparameterize_path_by_child for partitionwise join.\n\nThanks\nRichard", "msg_date": "Thu, 24 Aug 2023 10:47:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Thu, Aug 24, 2023 at 8:17 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Thu, Aug 24, 2023 at 1:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Richard Guo <guofenglinux@gmail.com> writes:\n>> > If we go with the \"tablesample scans can't be reparameterized\" approach\n>> > in the back branches, I'm a little concerned that what if we find more\n>> > cases in the futrue where we need modify RTEs for reparameterization.\n>> > So I spent some time seeking and have managed to find one: there might\n>> > be lateral references in a scan path's restriction clauses, and\n>> > currently reparameterize_path_by_child fails to adjust them.\n>>\n>> Hmm, this seems completely wrong to me. By definition, such clauses\n>> ought to be join clauses not restriction clauses, so how are we getting\n>> into this state? IOW, I agree this is clearly buggy but I think the\n>> bug is someplace else.\n>\n>\n> If the clause contains PHVs that syntactically belong to a rel and\n> meanwhile have lateral references to other rels, then it may become a\n> restriction clause with lateral references. 
Take the query shown\n> upthread as an example,\n>\n> select count(*) from prt1 t1 left join lateral\n> (select t1.b as t1b, t2.* from prt2 t2) s\n> on t1.a = s.b where s.t1b = s.a;\n>\n> The clause 's.t1b = s.a' would become 'PHV(t1.b) = t2.a' after we have\n> pulled up the subquery. The PHV in it syntactically belongs to 't2' and\n> laterally refers to 't1'. So this clause is actually a restriction\n> clause for rel 't2', and will be put into the baserestrictinfo of t2\n> rel. But it also has lateral reference to rel 't1', which we need to\n> adjust in reparameterize_path_by_child for partitionwise join.\n\nWhen the clause s.t1b = s.a is presented to distribute_qual_to_rels()\nit has the form PHV(t1.b) = t2.a. The PHV's ph_eval_at is 4, which is what\nis returned as varno to pull_varnos(). The other Var in the clause has\nvarno = 4 already so pull_varnos() returns a SINGLETON relids (b 4).\nThe clause is an equality clause, so it is used to create an\nEquivalence class.\ngenerate_base_implied_equalities_no_const() then constructs the same\nRestrictInfo again and adds to baserestrictinfo of Rel with relid = 4\ni.e. t2's baserestrictinfo. I don't know whether that's the right\nthing to do. After the subquery has been pulled up, t1 and t2 can be\njoined at the same level and thus the clause makes more sense as a\njoininfo in both t1 as well as t2. So I tend to agree with Tom. That's\nhow it will move up the join tree and be evaluated at the appropriate\nlevel. But then why the query returns the right results is a mystery.\n\nI am not sure where we are taking the original bug fix with this\ninvestigation. 
Is it required to fix this problem in order to fix the\noriginal problem, or should we commit the fix for the original problem\nand then investigate this further?\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 8 Sep 2023 12:34:24 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Sep 8, 2023 at 3:04 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> When the clause s.t1b = s.a is presented to distribute_qual_to_rels()\n> it has the form PHV(t1.b) = t2.a. The PHV's ph_eval_at is 4, which is what\n> is returned as varno to pull_varnos(). The other Var in the clause has\n> varno = 4 already so pull_varnos() returns a SINGLETON relids (b 4).\n> The clause is an equality clause, so it is used to create an\n> Equivalence class.\n> generate_base_implied_equalities_no_const() then constructs the same\n> RestrictInfo again and adds to baserestrictinfo of Rel with relid = 4\n> i.e. t2's baserestrictinfo. I don't know whether that's the right\n> thing to do.\n\n\nWell, I think that's what PHVs are supposed to do. Quoting the README:\n\n... Note that even with this restriction, pullup of a LATERAL\nsubquery can result in creating PlaceHolderVars that contain lateral\nreferences to relations outside their syntactic scope. We still evaluate\nsuch PHVs at their syntactic location or lower, but the presence of such a\nPHV in the quals or targetlist of a plan node requires that node to appear\non the inside of a nestloop join relative to the rel(s) supplying the\nlateral reference.\n\n\n> I am not sure where we are taking the original bug fix with this\n> investigation. Is it required to fix this problem in order to fix the\n> original problem, or should we commit the fix for the original problem\n> and then investigate this further?\n\n\nFair point. 
This seems a separate problem from the original, so I'm\nokay we fix them separately.\n\nThanks\nRichard", "msg_date": "Mon, 11 Sep 2023 10:05:36 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On 23/8/2023 12:37, Richard Guo wrote:\n> If we go with the \"tablesample scans can't be reparameterized\" approach\n> in the back branches, I'm a little concerned that what if we find more\n> cases in the futrue where we need modify RTEs for reparameterization.\n> So I spent some time seeking and have managed to find one: there might\n> be lateral references in a scan path's restriction clauses, and\n> currently reparameterize_path_by_child fails to adjust them.\nIt may help you somehow: in [1], we designed a feature where the \npartitionwise join technique can be applied to a JOIN of partitioned and \nnon-partitioned tables. Unfortunately, it is out of community \ndiscussions, but we still support it for sharding usage - it is helpful \nfor the implementation of 'global' tables in a distributed \nconfiguration. And there we were stuck into the same problem with \nlateral relids adjustment. So you can build a more general view of the \nproblem with this patch.\n\n[1] Asymmetric partition-wise JOIN\nhttps://www.postgresql.org/message-id/flat/CAOP8fzaVL_2SCJayLL9kj5pCA46PJOXXjuei6-3aFUV45j4LJQ%40mail.gmail.com\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 20 Sep 2023 14:18:20 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On 23/8/2023 12:37, Richard Guo wrote:\n> To fix it we may need to modify RelOptInfos for Path, BitmapHeapPath,\n> ForeignPath and CustomPath, and modify IndexOptInfos for IndexPath.  
It\n> seems that that is not easily done without postponing reparameterization\n> of paths until createplan.c.\n> \n> Attached is a patch which is v5 + fix for this new issue.\n\nHaving looked into the patch for a while, I couldn't answer some \nstupid questions for myself:\n1. If we postpone parameterization until the plan creation, why do we \nstill copy the path node in the FLAT_COPY_PATH macros? Do we really need it?\n2. I see big switches on path nodes. May it be time to create a \npath_walker function? I recall some thread where such a suggestion was \ndeclined, but I don't remember why.\n3. Clause changing is postponed. Why does it not influence the \ncalc_joinrel_size_estimate? We will use statistics on the parent table \nhere. Am I wrong?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 13 Oct 2023 17:18:19 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Oct 13, 2023 at 6:18 PM Andrei Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 23/8/2023 12:37, Richard Guo wrote:\n> > To fix it we may need to modify RelOptInfos for Path, BitmapHeapPath,\n> > ForeignPath and CustomPath, and modify IndexOptInfos for IndexPath. It\n> > seems that that is not easily done without postponing reparameterization\n> > of paths until createplan.c.\n> >\n> > Attached is a patch which is v5 + fix for this new issue.\n>\n> Having looked into the patch for a while, I couldn't answer some\n> stupid questions for myself:\n\n\nThanks for reviewing this patch! I think these are great questions.\n\n\n> 1. If we postpone parameterization until the plan creation, why do we\n> still copy the path node in the FLAT_COPY_PATH macros? Do we really need\n> it?\n\n\nGood point. 
The NestPath's origin inner path should not be referenced\nany more after the reparameterization, so it seems safe to adjust the\npath itself, without the need of a flat-copy.  I've done that in v8\npatch.\n\n\n> 2. I see big switches on path nodes. May it be time to create a\n> path_walker function? I recall some thread where such a suggestion was\n> declined, but I don't remember why.\n\n\nI'm not sure.  But this seems a separate topic, so maybe it's better to\ndiscuss it separately?\n\n\n> 3. Clause changing is postponed. Why does it not influence the\n> calc_joinrel_size_estimate? We will use statistics on the parent table\n> here. Am I wrong?\n\n\nHmm, I think the reparameterization does not change the estimated\nsize/costs.  Quoting the comment\n\n* The cost, number of rows, width and parallel path properties depend upon\n* path->parent, which does not change during the translation.\n\n\nHi Tom, I'm wondering if you've had a chance to follow this issue.  What\ndo you think about the proposed patch?\n\nThanks\nRichard", "msg_date": "Wed, 18 Oct 2023 14:39:41 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On 18/10/2023 13:39, Richard Guo wrote:\n> \n> On Fri, Oct 13, 2023 at 6:18 PM Andrei Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> \n> On 23/8/2023 12:37, Richard Guo wrote:\n> > To fix it we may need to modify RelOptInfos for Path, BitmapHeapPath,\n> > ForeignPath and CustomPath, and modify IndexOptInfos for\n> IndexPath.  It\n> > seems that that is not easily done without postponing\n> reparameterization\n> > of paths until createplan.c.\n> >\n> > Attached is a patch which is v5 + fix for this new issue.\n> \n> Having looked into the patch for a while, I couldn't answer some\n> stupid questions for myself:\n> \n> \n> Thanks for reviewing this patch!  
I think these are great questions.\n> \n> 1. If we postpone parameterization until the plan creation, why do we\n> still copy the path node in the FLAT_COPY_PATH macros? Do we really\n> need it?\n> \n> \n> Good point.  The NestPath's origin inner path should not be referenced\n> any more after the reparameterization, so it seems safe to adjust the\n> path itself, without the need of a flat-copy.  I've done that in v8\n> patch.\n> \n> 2. I see big switches on path nodes. May it be time to create a\n> path_walker function? I recall some thread where such a suggestion was\n> declined, but I don't remember why.\n> \n> \n> I'm not sure.  But this seems a separate topic, so maybe it's better to\n> discuss it separately?\n\nAgree.\n\n> 3. Clause changing is postponed. Why does it not influence the\n> calc_joinrel_size_estimate? We will use statistics on the parent table\n> here. Am I wrong?\n> \n> \n> Hmm, I think the reparameterization does not change the estimated\n> size/costs.  Quoting the comment\n\nAgree. I have looked at the code and figured it out - you're right. But \nit seems strange: maybe I don't understand something. 
Why not estimate \nselectivity for parameterized clauses based on leaf partition statistics, \nnot the parent's?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 18 Oct 2023 16:41:05 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Hi!\n\nThank you for your work on the subject.\n\n\nWhile reviewing your patch I didn't understand why you are checking that \nthe variable is path and not new_path of type T_SamplePath (I \nhighlighted it)?\n\nPath *\nreparameterize_path_by_child(PlannerInfo *root, Path *path,\n  RelOptInfo *child_rel)\n\n...\nswitch (nodeTag(path))\n     {\n         case T_Path:\n             new_path = path;\nADJUST_CHILD_ATTRS(new_path->parent->baserestrictinfo);\n             if (*path*->pathtype == T_SampleScan)\n             {\n\nIs it a typo and should it be new_path?\n\n\nBesides, it may not be important, but while reviewing your code I kept \nstumbling on the wording of this comment (I \nhighlighted it too). 
This:\n\n* path_is_reparameterizable_by_child\n  *         Given a path parameterized by the parent of the given child \nrelation,\n  *         see if it can be translated to be parameterized by the child \nrelation.\n  *\n  * This must return true if *and only if *reparameterize_path_by_child()\n  * would succeed on this path.\n\nMaybe is it better to rewrite it simplier:\n\n  * This must return true *only if *reparameterize_path_by_child()\n  * would succeed on this path.\n\n\nAnd can we add assert in reparameterize_pathlist_by_child function that \npathlist is not a NIL, because according to the comment it needs to be \nadded there:\n\nReturns NIL to indicate failure, so pathlist had better not be NIL.\n\n-- \nRegards,\nAlena Rybakina
", "msg_date": "Thu, 19 Oct 2023 21:52:00 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Oct 20, 2023 at 2:52 AM Alena Rybakina <lena.ribackina@yandex.ru>\nwrote:\n\n> Thank you for your work on the subject.\n>\nThanks for taking an interest in this patch.\n\n\n> During review your patch I didn't understand why are you checking that the\n> variable is path and not new_path of type T_SamplePath (I highlighted it)?\n>\n> Is it a typo and should be new_path?\n>\nI don't think there is any difference: path and new_path are the same\npointer in this case.\n\n\n> Besides, it may not be important, but reviewing your code all the time\n> stumbled on the statement of the comments while reading it (I highlighted\n> it too). 
This:\n>\n> * path_is_reparameterizable_by_child\n>  *         Given a path parameterized by the parent of the given child \nrelation,\n>  *         see if it can be translated to be parameterized by the child \nrelation.\n>  *\n>  * This must return true if *and only if *reparameterize_path_by_child()\n>  * would succeed on this path.\n>\n> Maybe is it better to rewrite it simplier:\n>\n>  * This must return true *only if *reparameterize_path_by_child()\n>  * would succeed on this path.\n>\nI don't think so. \"if and only if\" is more accurate to me.\n\n\n> And can we add assert in reparameterize_pathlist_by_child function that\n> pathlist is not a NIL, because according to the comment it needs to be\n> added there:\n>\nHmm, I'm not sure, as in REPARAMETERIZE_CHILD_PATH_LIST we have already\nexplicitly checked that the pathlist is not NIL.\n\nThanks\nRichard
", "msg_date": "Wed, 6 Dec 2023 14:51:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "I've self-reviewed this patch again and I think it's now in a\ncommittable state. I'm wondering if we can mark it as 'Ready for\nCommitter' now, or we need more review comments/feedbacks.\n\nTo recap, this patch postpones reparameterization of paths until\ncreateplan.c, which would help avoid building the reparameterized\nexpressions we might not use. More importantly, it makes it possible to\nmodify the expressions in RTEs (e.g. tablesample) and in RelOptInfos\n(e.g. baserestrictinfo) for reparameterization. Failing to do that can\nlead to executor crashes, wrong results, or planning errors, as we have\nalready seen.\n\nThanks\nRichard
", "msg_date": "Wed, 6 Dec 2023 15:30:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On 06.12.2023 10:30, Richard Guo wrote:\n> I've self-reviewed this patch again and I think it's now in a\n> committable state.  I'm wondering if we can mark it as 'Ready for\n> Committer' now, or we need more review comments/feedbacks.\n>\n> To recap, this patch postpones reparameterization of paths until\n> createplan.c, which would help avoid building the reparameterized\n> expressions we might not use.  More importantly, it makes it possible to\n> modify the expressions in RTEs (e.g. tablesample) and in RelOptInfos\n> (e.g. baserestrictinfo) for reparameterization.  Failing to do that can\n> lead to executor crashes, wrong results, or planning errors, as we have\n> already seen.\nI marked it as 'Ready for Committer'. I think it is ready.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 8 Dec 2023 12:38:58 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Dec 8, 2023 at 5:39 PM Alena Rybakina <lena.ribackina@yandex.ru>\nwrote:\n\n> On 06.12.2023 10:30, Richard Guo wrote:\n> > I've self-reviewed this patch again and I think it's now in a\n> > committable state.  I'm wondering if we can mark it as 'Ready for\n> > Committer' now, or we need more review comments/feedbacks.\n> >\n> > To recap, this patch postpones reparameterization of paths until\n> > createplan.c, which would help avoid building the reparameterized\n> > expressions we might not use. 
More importantly, it makes it possible to\n> > modify the expressions in RTEs (e.g. tablesample) and in RelOptInfos\n> > (e.g. baserestrictinfo) for reparameterization. Failing to do that can\n> > lead to executor crashes, wrong results, or planning errors, as we have\n> > already seen.\n\nI marked it as 'Ready for Committer'. I think it is ready.\n\n\nThank you. Appreciate that.\n\nThanks\nRichard", "msg_date": "Mon, 11 Dec 2023 11:02:24 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On 6/12/2023 14:30, Richard Guo wrote:\n> I've self-reviewed this patch again and I think it's now in a\n> committable state.  I'm wondering if we can mark it as 'Ready for\n> Committer' now, or we need more review comments/feedbacks.\n> \n> To recap, this patch postpones reparameterization of paths until\n> createplan.c, which would help avoid building the reparameterized\n> expressions we might not use.  
More importantly, it makes it possible to\n> modify the expressions in RTEs (e.g. tablesample) and in RelOptInfos\n> (e.g. baserestrictinfo) for reparameterization.  Failing to do that can\n> lead to executor crashes, wrong results, or planning errors, as we have\n> already seen.\n\nAs I see, this patch contains the following modifications:\n1. Changes in the create_nestloop_path: here, it arranges all tests to \nthe value of top_parent_relids, if any. It is ok for me.\n2. Changes in reparameterize_path_by_child and coupled new function \npath_is_reparameterizable_by_child. I compared them, and it looks good.\nOne thing here. You use the assertion trap in the case of inconsistency \nin the behaviour of these two functions. How disastrous would it be if \nsomeone found such inconsistency in production? Maybe better to use \nelog(PANIC, ...) report?\n3. ADJUST_CHILD_ATTRS() macros. Understanding the necessity of this \nchange, I want to say it looks a bit ugly.\n4. SampleScan reparameterization part. It looks ok. It can help us in \nthe future with asymmetric join and something else.\n5. Tests. All of them are related to partitioning and the SampleScan \nissue. Maybe it is enough, but remember, this change now impacts the \nparameterization feature in general.\n6. I can't judge how this switch of overall approach could impact \nsomething in the planner. I am only wary about what, from the first \nreparameterization up to the plan creation, we have some inconsistency \nin expressions (partitions refer to expressions with the parent relid). 
\nWhat if an extension in the middle changes the parent expression?\n\nBy and large, this patch is in a good state and may be committed.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 13 Dec 2023 09:55:02 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Dec 12, 2023 at 9:55 PM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> By and large, this patch is in a good state and may be committed.\n\nOK, so a few people like the current form of this patch but we haven't\nheard from Tom since August. Tom, any thoughts on the current\nincarnation?\n\nRichard, I think it could be useful to put a better commit message\ninto the patch file, describing both what problem is being fixed and\nwhat the design of the fix is. I gather that the problem is that we\ncrash if the query contains a partioningwise join and also $SOMETHING,\nand the solution is to move reparameterization to happen at\ncreateplan() time but with a precheck that runs during path\ngeneration. Presumably, that means this is more than a minimal bug\nfix, because the bug could be fixed without splitting\ncan-it-be-reparameterized to reparameterize-it in this way. Probably\nthat's a performance optimization, so maybe it's worth clarifying\nwhether that's just an independently good idea or whether it's a part\nof making the bug fix not regress performance.\n\nI think the macro names in path_is_reparameterizable_by_child could be\nbetter chosen. CHILD_PATH_IS_REPARAMETERIZABLE doesn't convey that the\nmacro will return from the calling function if not -- it looks like it\njust returns a Boolean. 
Maybe REJECT_IF_PATH_NOT_REPARAMETERIZABLE and\nREJECT_IF_PATH_LIST_NOT_REPARAMETERIZABLE or some such.\n\nAnother question here is whether we really want to back-patch all of\nthis or whether it might be better to, as Tom proposed previously,\nback-patch a more minimal fix and leave the more invasive stuff for\nmaster.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 13:36:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> OK, so a few people like the current form of this patch but we haven't\n> heard from Tom since August. Tom, any thoughts on the current\n> incarnation?\n\nNot yet, but it is on my to-look-at list. In the meantime I concur\nwith your comments here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jan 2024 13:55:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Thanks for the review!\n\nOn Sat, Jan 6, 2024 at 2:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Richard, I think it could be useful to put a better commit message\n> into the patch file, describing both what problem is being fixed and\n> what the design of the fix is. I gather that the problem is that we\n> crash if the query contains a partioningwise join and also $SOMETHING,\n> and the solution is to move reparameterization to happen at\n> createplan() time but with a precheck that runs during path\n> generation. Presumably, that means this is more than a minimal bug\n> fix, because the bug could be fixed without splitting\n> can-it-be-reparameterized to reparameterize-it in this way. 
Probably\n> that's a performance optimization, so maybe it's worth clarifying\n> whether that's just an independently good idea or whether it's a part\n> of making the bug fix not regress performance.\n\n\nThanks for the suggestion. Attached is an updated patch which is added\nwith a commit message that tries to explain the problem and the fix.\n\nI think the macro names in path_is_reparameterizable_by_child could be\n> better chosen. CHILD_PATH_IS_REPARAMETERIZABLE doesn't convey that the\n> macro will return from the calling function if not -- it looks like it\n> just returns a Boolean. Maybe REJECT_IF_PATH_NOT_REPARAMETERIZABLE and\n> REJECT_IF_PATH_LIST_NOT_REPARAMETERIZABLE or some such.\n\n\nAgreed.\n\n\n> Another question here is whether we really want to back-patch all of\n> this or whether it might be better to, as Tom proposed previously,\n> back-patch a more minimal fix and leave the more invasive stuff for\n> master.\n\n\nFair point. I think we can back-patch a more minimal fix, as Tom\nproposed in [1], which disallows the reparameterization if the path\ncontains sampling info that references the outer rel. But I think we\nneed also to disallow the reparameterization of a path if it contains\nrestriction clauses that reference the outer rel, as such paths have\nbeen found to cause incorrect results, or planning errors as in [2].\n\n[1] https://www.postgresql.org/message-id/3163033.1692719009%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs4-CSR4VnZCDep3ReSoHGTA7E%2B3tnjF_LmHcX7yiGrkVfQ%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Mon, 8 Jan 2024 16:32:37 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Mon, Jan 8, 2024 at 3:32 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> Thanks for the suggestion. 
Attached is an updated patch which is added\n> with a commit message that tries to explain the problem and the fix.\n\nThis is great. The references to \"the sampling infos\" are a little bit\nconfusing to me. I'm not sure if this is referring to\nTableSampleClause objects or what.\n\n> Fair point. I think we can back-patch a more minimal fix, as Tom\n> proposed in [1], which disallows the reparameterization if the path\n> contains sampling info that references the outer rel. But I think we\n> need also to disallow the reparameterization of a path if it contains\n> restriction clauses that reference the outer rel, as such paths have\n> been found to cause incorrect results, or planning errors as in [2].\n\nDo you want to produce a patch for that, to go along with this patch for master?\n\nI know this is on Tom's to-do list which makes me a bit reluctant to\nget too involved here, because certainly he knows this code better\nthan I do, maybe better than anyone does, but on the other hand, we\nshouldn't leave server crashes unfixed for too long, so maybe I can do\nsomething to help at least with that part of it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 13:30:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I know this is on Tom's to-do list which makes me a bit reluctant to\n> get too involved here, because certainly he knows this code better\n> than I do, maybe better than anyone does, but on the other hand, we\n> shouldn't leave server crashes unfixed for too long, so maybe I can do\n> something to help at least with that part of it.\n\nThis is indeed on my to-do list, and I have every intention of\ngetting to it before the end of the month. 
But if you want to\nhelp push things along, feel free.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:17:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Tue, Jan 16, 2024 at 2:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 8, 2024 at 3:32 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> > Thanks for the suggestion. Attached is an updated patch which is added\n> > with a commit message that tries to explain the problem and the fix.\n>\n> This is great. The references to \"the sampling infos\" are a little bit\n> confusing to me. I'm not sure if this is referring to\n> TableSampleClause objects or what.\n\n\nYeah, it's referring to TableSampleClause objects. I've updated the\ncommit message to clarify that. In passing I also updated the test\ncases a bit. Please see\nv10-0001-Postpone-reparameterization-of-paths-until-when-creating-plans.patch\n\n\n> > Fair point. I think we can back-patch a more minimal fix, as Tom\n> > proposed in [1], which disallows the reparameterization if the path\n> > contains sampling info that references the outer rel. 
But I think we\n> > need also to disallow the reparameterization of a path if it contains\n> > restriction clauses that reference the outer rel, as such paths have\n> > been found to cause incorrect results, or planning errors as in [2].\n>\n> Do you want to produce a patch for that, to go along with this patch for\n> master?\n\n\nSure, here it is:\nv10-0001-Avoid-reparameterizing-Paths-when-it-s-not-suitable.patch\n\nThanks\nRichard", "msg_date": "Wed, 17 Jan 2024 17:01:36 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Wed, Jan 17, 2024 at 5:01 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Tue, Jan 16, 2024 at 2:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Jan 8, 2024 at 3:32 AM Richard Guo <guofenglinux@gmail.com>\n>> wrote:\n>\n> > Fair point. I think we can back-patch a more minimal fix, as Tom\n>> > proposed in [1], which disallows the reparameterization if the path\n>> > contains sampling info that references the outer rel. But I think we\n>> > need also to disallow the reparameterization of a path if it contains\n>> > restriction clauses that reference the outer rel, as such paths have\n>> > been found to cause incorrect results, or planning errors as in [2].\n>>\n>> Do you want to produce a patch for that, to go along with this patch for\n>> master?\n>\n>\n> Sure, here it is:\n> v10-0001-Avoid-reparameterizing-Paths-when-it-s-not-suitable.patch\n>\n\nI forgot to mention that this patch applies on v16 not master.\n\nThanks\nRichard
", "msg_date": "Wed, 17 Jan 2024 17:50:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Wed, Jan 17, 2024 at 5:01 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> Sure, here it is:\n>> v10-0001-Avoid-reparameterizing-Paths-when-it-s-not-suitable.patch\n\n> I forgot to mention that this patch applies on v16 not master.\n\nI spent some time looking at this patch (which seems more urgent than\nthe patch for master, given that we have back-branch releases coming\nup). There are two things I'm not persuaded about:\n\n* Why is it okay to just use pull_varnos on a tablesample expression,\nwhen contain_references_to() does something different?\n\n* Is contain_references_to() doing the right thing by specifying\nPVC_RECURSE_PLACEHOLDERS? That causes it to totally ignore a\nPlaceHolderVar's ph_eval_at, and I'm not convinced that's correct.\n\nIdeally it seems to me that we want to reject a PlaceHolderVar\nif either its ph_eval_at or ph_lateral overlap the other join\nrelation; if it was coded that way then we'd not need to recurse\ninto the PHV's contents. 
pull_varnos isn't directly amenable\nto this, but I think we could use pull_var_clause with\nPVC_INCLUDE_PLACEHOLDERS and then iterate through the resulting\nlist manually. (If this patch were meant for HEAD, I'd think\nabout extending the var.c code to support this usage more directly.\nBut as things stand, this is a one-off so I think we should just do\nwhat we must in reparameterize_path_by_child.)\n\nBTW, it shouldn't be necessary to write either PVC_RECURSE_AGGREGATES\nor PVC_RECURSE_WINDOWFUNCS, because neither kind of node should ever\nappear in a scan-level expression. I'd leave those out so that we\nget an error if something unexpected happens.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Jan 2024 16:12:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Wed, Jan 31, 2024 at 5:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > On Wed, Jan 17, 2024 at 5:01 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> >> Sure, here it is:\n> >> v10-0001-Avoid-reparameterizing-Paths-when-it-s-not-suitable.patch\n>\n> > I forgot to mention that this patch applies on v16 not master.\n>\n> I spent some time looking at this patch (which seems more urgent than\n> the patch for master, given that we have back-branch releases coming\n> up).\n\n\nThanks for looking at this patch!\n\n\n> There are two things I'm not persuaded about:\n>\n> * Why is it okay to just use pull_varnos on a tablesample expression,\n> when contain_references_to() does something different?\n\n\nHmm, the main reason is that the expression_tree_walker() function does\nnot handle T_RestrictInfo nodes. So we cannot just use pull_varnos or\npull_var_clause on a restrictinfo list. 
So contain_references_to()\ncalls extract_actual_clauses() first before it calls pull_var_clause().\n\n\n> * Is contain_references_to() doing the right thing by specifying\n> PVC_RECURSE_PLACEHOLDERS? That causes it to totally ignore a\n> PlaceHolderVar's ph_eval_at, and I'm not convinced that's correct.\n\n\nI was thinking that we should recurse into the PHV's contents to see if\nthere are any lateral references to the other join relation. But now I\nrealize that checking phinfo->ph_lateral, as you suggested, might be a\nbetter way to do that.\n\nBut I'm not sure about checking phinfo->ph_eval_at. It seems to me that\nthe ph_eval_at could not overlap the other join relation; otherwise the\nclause would not be a restriction clause but rather a join clause.\n\n\n> Ideally it seems to me that we want to reject a PlaceHolderVar\n> if either its ph_eval_at or ph_lateral overlap the other join\n> relation; if it was coded that way then we'd not need to recurse\n> into the PHV's contents. pull_varnos isn't directly amenable\n> to this, but I think we could use pull_var_clause with\n> PVC_INCLUDE_PLACEHOLDERS and then iterate through the resulting\n> list manually. (If this patch were meant for HEAD, I'd think\n> about extending the var.c code to support this usage more directly.\n> But as things stand, this is a one-off so I think we should just do\n> what we must in reparameterize_path_by_child.)\n\n\nThanks for the suggestion. I've coded it this way in the v11 patch, and\nleft a XXX comment about checking phinfo->ph_eval_at.\n\n\n> BTW, it shouldn't be necessary to write either PVC_RECURSE_AGGREGATES\n> or PVC_RECURSE_WINDOWFUNCS, because neither kind of node should ever\n> appear in a scan-level expression. 
I'd leave those out so that we\n> get an error if something unexpected happens.\n\n\nGood point.\n\nThanks\nRichard", "msg_date": "Wed, 31 Jan 2024 14:39:59 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Wed, Jan 31, 2024 at 5:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Why is it okay to just use pull_varnos on a tablesample expression,\n>> when contain_references_to() does something different?\n\n> Hmm, the main reason is that the expression_tree_walker() function does\n> not handle T_RestrictInfo nodes. So we cannot just use pull_varnos or\n> pull_var_clause on a restrictinfo list.\n\nRight, the extract_actual_clauses step is not wanted for the\ntablesample expression. But my point is that surely the handling of\nVars and PlaceHolderVars should be the same, which it's not, and your\nv11 makes it even less so. How can it be OK to ignore Vars in the\nrestrictinfo case?\n\nI think the code structure we need to end up with is a routine that\ntakes a RestrictInfo-free node tree (and is called directly in the\ntablesample case) with a wrapper routine that strips the RestrictInfos\n(for use on restriction lists).\n\n> But I'm not sure about checking phinfo->ph_eval_at. 
It seems to me that\n> the ph_eval_at could not overlap the other join relation; otherwise the\n> clause would not be a restriction clause but rather a join clause.\n\nAt least in the tablesample case, plain Vars as well as PHVs belonging\nto the other relation are definitely possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Jan 2024 10:21:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Wed, Jan 31, 2024 at 11:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > On Wed, Jan 31, 2024 at 5:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> * Why is it okay to just use pull_varnos on a tablesample expression,\n> >> when contain_references_to() does something different?\n>\n> > Hmm, the main reason is that the expression_tree_walker() function does\n> > not handle T_RestrictInfo nodes. So we cannot just use pull_varnos or\n> > pull_var_clause on a restrictinfo list.\n>\n> Right, the extract_actual_clauses step is not wanted for the\n> tablesample expression. But my point is that surely the handling of\n> Vars and PlaceHolderVars should be the same, which it's not, and your\n> v11 makes it even less so. How can it be OK to ignore Vars in the\n> restrictinfo case?\n\n\nHmm, in this specific scenario it seems that it's not possible to have\nVars in the restrictinfo list that laterally reference the outer join\nrelation; otherwise the clause containing such Vars would not be a\nrestriction clause but rather a join clause. 
So in v11 I did not check\nVars in contain_references_to().\n\nBut I agree that we'd better to handle Vars and PlaceHolderVars in the\nsame way.\n\n\n> I think the code structure we need to end up with is a routine that\n> takes a RestrictInfo-free node tree (and is called directly in the\n> tablesample case) with a wrapper routine that strips the RestrictInfos\n> (for use on restriction lists).\n\n\nHow about the attached v12 patch? But I'm a little concerned about\nomitting PVC_RECURSE_AGGREGATES and PVC_RECURSE_WINDOWFUNCS in this\ncase. Is it possible that aggregates or window functions appear in the\ntablesample expression?\n\nThanks\nRichard", "msg_date": "Thu, 1 Feb 2024 10:54:02 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> How about the attached v12 patch? But I'm a little concerned about\n> omitting PVC_RECURSE_AGGREGATES and PVC_RECURSE_WINDOWFUNCS in this\n> case. Is it possible that aggregates or window functions appear in the\n> tablesample expression?\n\nNo, they can't appear there; it'd make no sense to allow such things\nat the relation scan level, and we don't.\n\nregression=# select * from float8_tbl tablesample system(count(*));\nERROR: aggregate functions are not allowed in functions in FROM\nLINE 1: select * from float8_tbl tablesample system(count(*));\n ^\nregression=# select * from float8_tbl tablesample system(count(*) over ());\nERROR: window functions are not allowed in functions in FROM\nLINE 1: select * from float8_tbl tablesample system(count(*) over ()...\n ^\n\nI pushed v12 (with some cosmetic adjustments) into the back branches,\nsince we're getting close to the February release freeze. We still\nneed to deal with the larger fix for HEAD. 
Please re-post that\none so that the cfbot knows which is the patch-of-record.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Feb 2024 12:45:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Fri, Feb 2, 2024 at 1:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I pushed v12 (with some cosmetic adjustments) into the back branches,\n> since we're getting close to the February release freeze. We still\n> need to deal with the larger fix for HEAD. Please re-post that\n> one so that the cfbot knows which is the patch-of-record.\n\n\nThanks for the adjustments and pushing!\n\nHere is the patch for HEAD. I simply re-posted v10. Nothing has\nchanged.\n\nThanks\nRichard", "msg_date": "Fri, 2 Feb 2024 10:10:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Here is the patch for HEAD. I simply re-posted v10. Nothing has\n> changed.\n\nI got back to this finally, and pushed it with some minor cosmetic\nadjustments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 14:57:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Here is the patch for HEAD. I simply re-posted v10. 
Nothing has\n> > changed.\n>\n> I got back to this finally, and pushed it with some minor cosmetic\n> adjustments.\n\n\nThanks for pushing!\n\nThanks\nRichard\n", "msg_date": "Thu, 21 Mar 2024 19:20:30 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Oversight in reparameterize_path_by_child leading to executor\n crash" } ]
[ { "msg_contents": "Hi, I’m trying to develop a new grammar for pg, can you give me a code example to reference?\n\njacktby\njacktby@gmail.com\n", "msg_date": "Tue, 1 Aug 2023 19:36:36 +0800", "msg_from": "jacktby <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "How to build a new grammer for pg?" }, { "msg_contents": "Hi,\n\nOn Tue, Aug 01, 2023 at 07:36:36PM +0800, jacktby wrote:\n> Hi, I’m trying to develop a new grammar for pg, can\n> you give me a code example to reference?\n\nIt's unclear to me whether you want to entirely replace the flex/bison parser\nwith something else or just add some new bison rule.\n\nIf the latter, it really depends on what you change to achieve exactly. I\nguess you could just look at the gram.y git history until you find something\nclose enough to your use case and get inspiration from that.\n\n\n", "msg_date": "Tue, 1 Aug 2023 19:58:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to build a new grammer for pg?" }, { "msg_contents": "On 2023-08-01 07:58, Julien Rouhaud wrote:\n> On Tue, Aug 01, 2023 at 07:36:36PM +0800, jacktby wrote:\n>> Hi, I’m trying to develop a new grammar for pg\n> \n> It's unclear to me whether you want to entirely replace the flex/bison \n> parser\n> with something else or just add some new bison rule.\n\nOr express a grammar matching PG's in some other parser-generator\nlanguage, to enable some other tool to parse PG productions?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 01 Aug 2023 12:50:40 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: How to build a new grammer for pg?" 
}, { "msg_contents": "On 2023-08-01 Tu 12:50, Chapman Flack wrote:\n> On 2023-08-01 07:58, Julien Rouhaud wrote:\n>> On Tue, Aug 01, 2023 at 07:36:36PM +0800, jacktby wrote:\n>>> Hi, I’m trying to develop a new grammar for pg\n>>\n>> It's unclear to me whether you want to entirely replace the \n>> flex/bison parser\n>> with something else or just add some new bison rule.\n>\n> Or express a grammar matching PG's in some other parser-generator\n> language, to enable some other tool to parse PG productions?\n>\n>\n\nOr to enable some language other than SQL (QUEL anyone?)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n", "msg_date": "Tue, 1 Aug 2023 15:45:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: How to build a new grammer for pg?" }, { "msg_contents": "On Tue, Aug 1, 2023 at 3:45 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> Or to enable some language other than SQL (QUEL anyone?)\n>\n\nA few years ago, I got a minimal POSTQUEL working again to release as a\npatch for April Fools' Day, which I never did. I should dig that up\nsomewhere :)\n\nAnyway, as far as OP's original question regarding replacing the grammar,\nthere are a couple of such implementations floating around that have done\nthat. 
But, I actually think the pluggable parser patches were good examples\nof how to integrate a replacement parser that generates the expected parse\ntree nodes for anyone who wants to do their own custom parser. See Julien\nRouhaud's SQLOL in the \"Hook for extensible parsing\" thread and Jim\nMlodgenski's \"Parser Hook\" thread.\n\n-- \nJonah H. Harris\n", "msg_date": "Tue, 1 Aug 2023 16:07:24 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to build a new grammer for pg?" }, { "msg_contents": "I would look at how Babelfish DB did it when adding SQL Server compatibility\n\nhttps://babelfishpg.org/ and https://github.com/babelfish-for-postgresql/\n\nanother source to inspect could be\nhttps://github.com/IvorySQL/IvorySQL for \"oracle compatible\nPostgreSQL\"\n\nOn Tue, Aug 1, 2023 at 10:07 PM Jonah H. Harris <jonah.harris@gmail.com> wrote:\n>\n> On Tue, Aug 1, 2023 at 3:45 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> Or to enable some language other than SQL (QUEL anyone?)\n>\n>\n> A few years ago, I got a minimal POSTQUEL working again to release as a patch for April Fools' Day, which I never did. 
I should dig that up somewhere :)\n>\n> Anyway, as far as OP's original question regarding replacing the grammar, there are a couple of such implementations floating around that have done that. But, I actually think the pluggable parser patches were good examples of how to integrate a replacement parser that generates the expected parse tree nodes for anyone who wants to do their own custom parser. See Julien Rouhaud's SQLOL in the \"Hook for extensible parsing\" thread and Jim Mlodgenski's \"Parser Hook\" thread.\n>\n> --\n> Jonah H. Harris\n>\n\n\n", "msg_date": "Tue, 8 Aug 2023 13:24:24 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: How to build a new grammer for pg?" } ]
[ { "msg_contents": "Hi,\n\nA static analyzer reported a possible pfree(NULL) in \nbe_tls_open_server(). Here is a fix. Also handle an error from \nX509_NAME_print_ex().\n\nAFAICS, the error \"SSL certificate's distinguished name contains \nembedded null\" could not be reached at all, because XN_FLAG_RFC2253 \npassed to X509_NAME_print_ex() ensures that null bytes are escaped.\n\nBest regards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/", "msg_date": "Tue, 1 Aug 2023 17:44:13 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 1 Aug 2023, at 16:44, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n\n> A static analyzer reported a possible pfree(NULL) in be_tls_open_server().\n\nThis has the smell of a theoretical problem, I can't really imagine a\ncertificate where which would produce this. Have you been able to trigger it?\n\nWouldn't a better fix be to error out on len == -1 as in the attached, maybe\nwith a \"Shouldn't happen\" comment?\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 23 Aug 2023 15:23:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On Wed, Aug 23, 2023 at 6:23 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> This has the smell of a theoretical problem, I can't really imagine a\n> certificate where which would produce this. Have you been able to trigger it?\n>\n> Wouldn't a better fix be to error out on len == -1 as in the attached, maybe\n> with a \"Shouldn't happen\" comment?\n\nUsing \"cert clientname=DN\" in the HBA should let you issue Subjects\nwithout Common Names. 
Or, if you're using a certificate to authorize\nthe connection rather than authenticate the user (for example\n\"scram-sha-256 clientcert=verify-ca\" in your HBA), then the certs you\ndistribute could even be SAN-only with a completely empty Subject.\n\n--Jacob\n\n\n", "msg_date": "Wed, 23 Aug 2023 14:47:55 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 23 Aug 2023, at 23:47, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> On Wed, Aug 23, 2023 at 6:23 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> This has the smell of a theoretical problem, I can't really imagine a\n>> certificate where which would produce this. Have you been able to trigger it?\n>> \n>> Wouldn't a better fix be to error out on len == -1 as in the attached, maybe\n>> with a \"Shouldn't happen\" comment?\n> \n> Using \"cert clientname=DN\" in the HBA should let you issue Subjects\n> without Common Names. Or, if you're using a certificate to authorize\n> the connection rather than authenticate the user (for example\n> \"scram-sha-256 clientcert=verify-ca\" in your HBA), then the certs you\n> distribute could even be SAN-only with a completely empty Subject.\n\nThat's a good point, didn't think about those. In those cases the original\npatch by Sergey seems along the right lines.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 23:54:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On 23.08.2023 16:23, Daniel Gustafsson wrote:\n>> On 1 Aug 2023, at 16:44, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n>> A static analyzer reported a possible pfree(NULL) in be_tls_open_server().\n> \n> This has the smell of a theoretical problem, I can't really imagine a\n> certificate where which would produce this. 
Have you been able to trigger it?\n\n\nI triggered a crash by generating a certificate without a CN and forcing \nmalloc to return NULL when called from X509_NAME_print_ex or \nBIO_get_mem_ptr with gdb.\n\nInitially I tried to trigger a crash by generating a certificate without \na CN and with a DN contaning the null byte. But as I said, the error \ncondition \"SSL certificate's distinguished name contains embedded null\" \nisn't really reachable, because XN_FLAG_RFC2253 escapes null bytes.\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Thu, 24 Aug 2023 11:11:49 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 24 Aug 2023, at 10:11, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n> On 23.08.2023 16:23, Daniel Gustafsson wrote:\n>>> On 1 Aug 2023, at 16:44, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n>>> A static analyzer reported a possible pfree(NULL) in be_tls_open_server().\n>> This has the smell of a theoretical problem, I can't really imagine a\n>> certificate where which would produce this. Have you been able to trigger it?\n> \n> I triggered a crash by generating a certificate without a CN and forcing malloc to return NULL when called from X509_NAME_print_ex or BIO_get_mem_ptr with gdb.\n\nCan you extend the patch with that certificate and a test using it? 
The\ncertificates are generated from config files kept in the repo in src/test/ssl\nin order to be reproducible.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 24 Aug 2023 10:38:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On 24.08.2023 11:38, Daniel Gustafsson wrote:\n>> On 24 Aug 2023, at 10:11, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n\n>> I triggered a crash by generating a certificate without a CN and forcing malloc to return NULL when called from X509_NAME_print_ex or BIO_get_mem_ptr with gdb.\n> \n> Can you extend the patch with that certificate and a test using it? The\n> certificates are generated from config files kept in the repo in src/test/ssl\n> in order to be reproducible.\n\nBut I need to force malloc to fail at the right time during the test. I \ndon't think I can make a stable and portable test from this.\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Thu, 24 Aug 2023 12:13:18 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On Thu, Aug 24, 2023 at 12:13:18PM +0300, Sergey Shinderuk wrote:\n> But I need to force malloc to fail at the right time during the test. 
I\n> don't think I can make a stable and portable test from this.\n\nLD_PRELOAD is the only thing I can think about, but that's very fancy.\nEven with that, having a certificate with a NULL peer_cn could prove\nto be useful in the SSL suite to stress more patterns around it?\n--\nMichael", "msg_date": "Fri, 25 Aug 2023 10:25:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On Thu, Aug 24, 2023 at 6:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> LD_PRELOAD is the only thing I can think about, but that's very fancy.\n> Even with that, having a certificate with a NULL peer_cn could prove\n> to be useful in the SSL suite to stress more patterns around it?\n\n+1. Last we tried it, OpenSSL didn't want to create a certificate with\nan embedded null, but maybe things have changed?\n\n--Jacob\n\n\n", "msg_date": "Mon, 28 Aug 2023 08:12:45 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On 28.08.2023 18:12, Jacob Champion wrote:\n> On Thu, Aug 24, 2023 at 6:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> LD_PRELOAD is the only thing I can think about, but that's very fancy.\n>> Even with that, having a certificate with a NULL peer_cn could prove\n>> to be useful in the SSL suite to stress more patterns around it?\n> \n> +1. Last we tried it, OpenSSL didn't want to create a certificate with\n> an embedded null, but maybe things have changed?\n> \n\nTo embed a null byte into the Subject, I first generated a regular \ncertificate request in the DER (binary) format, then manually inserted \nnull into the file and recomputed the checksum. 
Like this:\nhttps://security.stackexchange.com/a/58845\n\nI'll try to add a client certificate lacking a CN to the SSL test suite.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Mon, 28 Aug 2023 20:39:07 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 28 Aug 2023, at 19:39, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n\n> I'll try to add a client certificate lacking a CN to the SSL test suite.\n\nCertificates can be regenerated with the buildsystem, which ideally would apply\nto this cert as well, but if that's not feasible we can perhaps accept a static\none with build information detailed in the README. Having such a cert could\nfor sure be interesting for testing.\n\nAwaiting resolution on this, I propose we go ahead with the original patch from\nthis thread. Any objections to that?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 18 Sep 2023 14:35:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On Mon, Sep 18, 2023 at 02:35:28PM +0200, Daniel Gustafsson wrote:\n> Certificates can be regenerated with the buildsystem, which ideally would apply\n> to this cert as well, but if that's not feasible we can perhaps accept a static\n> one with build information detailed in the README. Having such a cert could\n> for sure be interesting for testing.\n\nWFM, but I'd prefer something that would be generated with the\nmakefile rules. These are so handy when it comes to regenerate all\nthese certs..\n\n> Awaiting resolution on this, I propose we go ahead with the original patch from\n> this thread. 
Any objections to that?\n\nI was wondering for a few seconds if you talked about the one posted\non [1], which would break the case where X509_NAME_get_text_by_NID()\nfails if there's a valid bio, but you mean the one at the top of the\nthread in [2], of course :)\n\nOne doubt that I have is if we shouldn't let X509_NAME_print_ex() be\nas it is now, and not force a failure on the bio if this calls fails.\n\n[1]: https://www.postgresql.org/message-id/E3921399-FAE7-4B1F-B1BF-B3357DDC9F19@yesql.se\n[2]: https://www.postgresql.org/message-id/8db5374d-32e0-6abb-d402-40762511eff2@postgrespro.ru\n--\nMichael", "msg_date": "Tue, 19 Sep 2023 09:54:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On 19.09.2023 03:54, Michael Paquier wrote:\n> One doubt that I have is if we shouldn't let X509_NAME_print_ex() be\n> as it is now, and not force a failure on the bio if this calls fails.\n\n\nIf malloc fails inside X509_NAME_print_ex, then we will be left with \nempty port->peer_dn. 
Here is a gdb session showing this:\n\n(gdb) b X509_NAME_print_ex\nBreakpoint 1 at 0x7f539f6c0cf0\n(gdb) c\nContinuing.\n\nBreakpoint 1, 0x00007f539f6c0cf0 in X509_NAME_print_ex () from \n/lib/x86_64-linux-gnu/libcrypto.so.3\n(gdb) bt\n#0 0x00007f539f6c0cf0 in X509_NAME_print_ex () from \n/lib/x86_64-linux-gnu/libcrypto.so.3\n#1 0x000056026d2fbe8d in be_tls_open_server \n(port=port@entry=0x56026ed5d730) at be-secure-openssl.c:635\n#2 0x000056026d2eefa5 in secure_open_server \n(port=port@entry=0x56026ed5d730) at be-secure.c:118\n#3 0x000056026d3dc412 in ProcessStartupPacket \n(port=port@entry=0x56026ed5d730, ssl_done=ssl_done@entry=false, \ngss_done=gss_done@entry=false) at postmaster.c:2065\n#4 0x000056026d3dcd8e in BackendInitialize \n(port=port@entry=0x56026ed5d730) at postmaster.c:4377\n#5 0x000056026d3def6a in BackendStartup \n(port=port@entry=0x56026ed5d730) at postmaster.c:4155\n#6 0x000056026d3df115 in ServerLoop () at postmaster.c:1781\n#7 0x000056026d3e0645 in PostmasterMain (argc=argc@entry=20, \nargv=argv@entry=0x56026ec5a0d0) at postmaster.c:1465\n#8 0x000056026d2fcd7c in main (argc=20, argv=0x56026ec5a0d0) at main.c:198\n(gdb) b malloc\nBreakpoint 2 at 0x7f539eca5120: file ./malloc/malloc.c, line 3287.\n(gdb) c\nContinuing.\n\nBreakpoint 2, __GI___libc_malloc (bytes=4) at ./malloc/malloc.c:3287\n3287\t./malloc/malloc.c: No such file or directory.\n(gdb) bt\n#0 __GI___libc_malloc (bytes=4) at ./malloc/malloc.c:3287\n#1 0x00007f539f6f6e09 in BUF_MEM_grow_clean () from \n/lib/x86_64-linux-gnu/libcrypto.so.3\n#2 0x00007f539f6e4fb8 in ?? () from /lib/x86_64-linux-gnu/libcrypto.so.3\n#3 0x00007f539f6d22fb in ?? () from /lib/x86_64-linux-gnu/libcrypto.so.3\n#4 0x00007f539f6d5c06 in ?? () from /lib/x86_64-linux-gnu/libcrypto.so.3\n#5 0x00007f539f6d5d37 in BIO_write () from \n/lib/x86_64-linux-gnu/libcrypto.so.3\n#6 0x00007f539f6bdb41 in ?? () from /lib/x86_64-linux-gnu/libcrypto.so.3\n#7 0x00007f539f6c0b7d in ?? 
() from /lib/x86_64-linux-gnu/libcrypto.so.3\n#8 0x000056026d2fbe8d in be_tls_open_server \n(port=port@entry=0x56026ed5d730) at be-secure-openssl.c:635\n#9 0x000056026d2eefa5 in secure_open_server \n(port=port@entry=0x56026ed5d730) at be-secure.c:118\n#10 0x000056026d3dc412 in ProcessStartupPacket \n(port=port@entry=0x56026ed5d730, ssl_done=ssl_done@entry=false, \ngss_done=gss_done@entry=false) at postmaster.c:2065\n#11 0x000056026d3dcd8e in BackendInitialize \n(port=port@entry=0x56026ed5d730) at postmaster.c:4377\n#12 0x000056026d3def6a in BackendStartup \n(port=port@entry=0x56026ed5d730) at postmaster.c:4155\n#13 0x000056026d3df115 in ServerLoop () at postmaster.c:1781\n#14 0x000056026d3e0645 in PostmasterMain (argc=argc@entry=20, \nargv=argv@entry=0x56026ec5a0d0) at postmaster.c:1465\n#15 0x000056026d2fcd7c in main (argc=20, argv=0x56026ec5a0d0) at main.c:198\n(gdb) return 0\nMake __GI___libc_malloc return now? (y or n) y\n#0 0x00007f539f6f6e09 in BUF_MEM_grow_clean () from \n/lib/x86_64-linux-gnu/libcrypto.so.3\n(gdb) delete\nDelete all breakpoints? 
(y or n) y\n(gdb) c\nContinuing.\n\nAnd in the server log we see:\n\nDEBUG: SSL connection from DN:\"\" CN:\"ssltestuser\"\n\nWhile in the normal case we get:\n\nDEBUG: SSL connection from DN:\"CN=ssltestuser\" CN:\"ssltestuser\"\\\n\nProbably we shouldn't ignore the error from X509_NAME_print_ex?\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Tue, 19 Sep 2023 11:06:25 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 19 Sep 2023, at 10:06, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n> On 19.09.2023 03:54, Michael Paquier wrote:\n>> One doubt that I have is if we shouldn't let X509_NAME_print_ex() be\n>> as it is now, and not force a failure on the bio if this calls fails.\n> \n> If malloc fails inside X509_NAME_print_ex, then we will be left with empty port->peer_dn.\n\nLooking at the OpenSSL code, there a other (albeit esoteric) errors that return\n-1 as well. I agree that we should handle this error.\n\nX509_NAME_print_ex is not documented to return -1 in OpenSSL 1.0.2 but reading\nthe code it's clear that it does, so checking for -1 is safe for all supported\nOpenSSL versions (supported by us that is).\n\nAttached is a v2 on top of HEAD with commit message etc, which I propose to\nbackpatch to v15 where it was introduced.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 20 Sep 2023 10:42:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "On 20.09.2023 11:42, Daniel Gustafsson wrote:\n> Attached is a v2 on top of HEAD with commit message etc, which I propose to\n> backpatch to v15 where it was introduced.\n> \nThere is a typo: upon en error. Otherwise, looks good to me. 
Thank you.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n", "msg_date": "Wed, 20 Sep 2023 11:55:54 +0300", "msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 20 Sep 2023, at 10:55, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n> On 20.09.2023 11:42, Daniel Gustafsson wrote:\n>> Attached is a v2 on top of HEAD with commit message etc, which I propose to\n>> backpatch to v15 where it was introduced.\n> There is a typo: upon en error. Otherwise, looks good to me. Thank you.\n\nThanks, will fix before pushing.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 20 Sep 2023 10:58:19 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" }, { "msg_contents": "> On 20 Sep 2023, at 10:58, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 20 Sep 2023, at 10:55, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n\n>> There is a typo: upon en error. Otherwise, looks good to me. Thank you.\n> \n> Thanks, will fix before pushing.\n\nI've applied this down to v15 now and the buildfarm have green builds on all\nthree branches, thanks for the submission!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 22 Sep 2023 11:43:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Fix error handling in be_tls_open_server()" } ]
[ { "msg_contents": "Hi hackers,\n\n\nI am using pg_embedding extension for Postgres which implements HNSW \nindex (some kind of ANN search).\nSearch query looks something like this:\n\n     SELECT _id FROM documents ORDER BY openai <=> ARRAY[0.024466066, \n-0.011110042, -0.0012917554,... , -0.008700027] LIMIT 1;\n\nI do not put all query here because vector contains 1536 elements.\n\nThe query plans is straightforward:\n\n   Limit  (cost=4007500.00..4007502.06 rows=1 width=33)\n    ->  Index Scan using pgembedding_hnsw_idx on documents \n(cost=4007500.00..6064544.00 rows=1000000 width=33)\n          Order By: (openai <=> \n'{0.024466066,-0.011110042,-0.0012917554,-0.025349319,-0.017500998,0.015570462,-0.015835438,-0.0067442562,0.0015977388,-0.034...\n  Planning Time: 1.276 ms\n   JIT:\n    Functions: 5\n    Options: Inlining true, Optimization true, Expressions true, \nDeforming true\n    Timing: Generation 0.289 ms, Inlining 3.752 ms, Optimization 34.502 \nms, Emission 19.078 ms, Total 57.621 ms\n  Execution Time: 70.850 ms\n\nwithout JIT:\n\n  Planning Time: 0.472 ms\n  Execution Time: 13.300 ms\n\nThe problem is that JIT slowdown execution of this query 4x times: with \ndefault setting it is 70 msec, with jit=off - 13 msec.\nI wonder why do we ever try to use JIT for index scan, especially for \nuser defined index where no function can be inlined?\nWhy not to restrict JIT just to seqscan plan? But the way in this \nparticular case enable_seqscan=off...\nMay be it is naive suggestions, but 5x slowdown doesn't seems like right \nbehavior in any case.\n\nProfile of query execution is the following:\n\n-   78.52%     0.00%  postgres  postgres              [.] 
\nstandard_ExecutorRun\n    - 78.52% standard_ExecutorRun\n       - 55.38% 0000561dc27c89dc\n          - 55.38% 0000561dc27c89dc\n               0x7fc3a58dd731\n             - llvm_get_function\n                + 37.66% LLVMRunPassManager\n                + 14.99% LLVMOrcLLJITLookup\n                + 0.91% llvm_inline\n                  0.90% llvm::PassManagerBuilder::populateModulePassManager\n                + 0.79% LLVMRunFunctionPassManager\n       - 22.79% 0000561dc27c89dc\n          - ExecScan\n             - 22.76% 0000561dc27c89dc\n                  index_getnext_tid\n                - hnsw_gettuple\n                   - 22.76% hnsw_search\n                      - 22.72% searchKnn\n\n                         + 22.29% searchBaseLayer\n\n\nVersion of Postgres: Ubuntu 15.3-1.pgdg22.04+1)\n\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 19:00:10 +0300", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "One more problem with JIT" } ]
[ { "msg_contents": "Hi,\n\nOn Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n> Thanks for committing the main patch.\n\nLatest head\nUbuntu 64 bits\ngcc 13 64 bits\n\n./configure --without-icu\nmake clean\nmake\n\nIn file included from ../../src/include/pgstat.h:20,\n from controldata_utils.c:38:\n../../src/include/utils/wait_event.h:56:9: error: redeclaration of\nenumerator ‘WAIT_EVENT_EXTENSION’\n 56 | WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n | ^~~~~~~~~~~~~~~~~~~~\nIn file included from ../../src/include/utils/wait_event.h:29,\n from ../../src/include/pgstat.h:20,\n from controldata_utils.c:38:\n../../src/include/utils/wait_event_types.h:60:9: note: previous definition\nof ‘WAIT_EVENT_EXTENSION’ with type ‘enum <anonymous>’\n 60 | WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION\n | ^~~~~~~~~~~~~~~~~~~~\nIn file included from ../../src/include/pgstat.h:20,\n from controldata_utils.c:38:\n../../src/include/utils/wait_event.h:58:3: error: conflicting types for\n‘WaitEventExtension’; have ‘enum <anonymous>’\n 58 | } WaitEventExtension;\n | ^~~~~~~~~~~~~~~~~~\nIn file included from ../../src/include/utils/wait_event.h:29,\n from ../../src/include/pgstat.h:20,\n from controldata_utils.c:38:\n../../src/include/utils/wait_event_types.h:61:3: note: previous declaration\nof ‘WaitEventExtension’ with type ‘WaitEventExtension’\n 61 | } WaitEventExtension;\n | ^~~~~~~~~~~~~~~~~~\nmake[2]: *** [Makefile:174: controldata_utils_srv.o] Erro 1\nmake[2]: Saindo do diretório '/usr/src/postgres_commits/src/common'\nmake[1]: *** [Makefile:42: all-common-recurse] Erro 2\nmake[1]: Saindo do diretório '/usr/src/postgres_commits/src'\nmake: *** [GNUmakefile:11: all-src-recurse] Erro 2\nranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n\"WAIT_EVENT_EXTESION\" .\nranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n\"WAIT_EVENT_EXTENSION\" .\n\ngrep -r --include '*.h' \"WAIT_EVENT_EXTENSION\" .\n./src/backend/utils/activity/wait_event_types.h: 
WAIT_EVENT_EXTENSION =\nPG_WAIT_EXTENSION\n./src/include/utils/wait_event.h: WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n./src/include/utils/wait_event.h:\nWAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n\nNot sure if this is really a mistake in my environment.\n\nregards,\nRanier Vilela\n", "msg_date": "Tue, 1 Aug 2023 20:38:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support to define custom wait events for extensions" }, { "msg_contents": "On 2023-08-02 08:38, Ranier Vilela wrote:\n> Hi,\n> \n> On Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n>> Thanks for committing the main patch.\n> \n> Latest head\n> Ubuntu 64 bits\n> gcc 13 64 bits\n> \n> ./configure --without-icu\n> make clean\n> \n> make\n> \n> In file included from ../../src/include/pgstat.h:20,\n> from controldata_utils.c:38:\n> ../../src/include/utils/wait_event.h:56:9: error: redeclaration of\n> enumerator ‘WAIT_EVENT_EXTENSION’\n> 56 | WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n> | ^~~~~~~~~~~~~~~~~~~~\n> In file 
included from ../../src/include/utils/wait_event.h:29,\n> from ../../src/include/pgstat.h:20,\n> from controldata_utils.c:38:\n> ../../src/include/utils/wait_event_types.h:61:3: note: previous\n> declaration of ‘WaitEventExtension’ with type\n> ‘WaitEventExtension’\n> 61 | } WaitEventExtension;\n> | ^~~~~~~~~~~~~~~~~~\n> make[2]: *** [Makefile:174: controldata_utils_srv.o] Erro 1\n> make[2]: Saindo do diretório '/usr/src/postgres_commits/src/common'\n> make[1]: *** [Makefile:42: all-common-recurse] Erro 2\n> make[1]: Saindo do diretório '/usr/src/postgres_commits/src'\n> make: *** [GNUmakefile:11: all-src-recurse] Erro 2\n> ranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n> \"WAIT_EVENT_EXTESION\" .\n> ranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n> \"WAIT_EVENT_EXTENSION\" .\n> \n> grep -r --include '*.h' \"WAIT_EVENT_EXTENSION\" .\n> ./src/backend/utils/activity/wait_event_types.h: WAIT_EVENT_EXTENSION\n> = PG_WAIT_EXTENSION\n> ./src/include/utils/wait_event.h: WAIT_EVENT_EXTENSION =\n> PG_WAIT_EXTENSION,\n> ./src/include/utils/wait_event.h:\n> WAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n> \n> Not sure if this is really a mistake in my environment.\n\nI can build without error.\n* latest commit(3845577cb55eab3e06b3724e307ebbcac31f4841)\n* gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)\n\nThe result in my environment is\n$ grep -r --include '*.h' \"WAIT_EVENT_EXTENSION\" .\n./src/include/utils/wait_event.h: WAIT_EVENT_EXTENSION = \nPG_WAIT_EXTENSION,\n./src/include/utils/wait_event.h: \nWAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n\nI think the error occurred because the old wait_event_types.h still \nexists.\nCan you remove it before building and try again?\n\n$ make maintainer-clean # remove wait_event_types.h\n$ ./configure --without-icu\n$ make\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 02 Aug 2023 09:34:55 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", 
"msg_from_op": false, "msg_subject": "Re: Support to define custom wait events for extensions" }, { "msg_contents": "Em ter., 1 de ago. de 2023 às 21:34, Masahiro Ikeda <\nikedamsh@oss.nttdata.com> escreveu:\n\n> On 2023-08-02 08:38, Ranier Vilela wrote:\n> > Hi,\n> >\n> > On Tue, Aug 01, 2023 at 11:51:35AM +0900, Masahiro Ikeda wrote:\n> >> Thanks for committing the main patch.\n> >\n> > Latest head\n> > Ubuntu 64 bits\n> > gcc 13 64 bits\n> >\n> > ./configure --without-icu\n> > make clean\n> >\n> > make\n> >\n> > In file included from ../../src/include/pgstat.h:20,\n> > from controldata_utils.c:38:\n> > ../../src/include/utils/wait_event.h:56:9: error: redeclaration of\n> > enumerator ‘WAIT_EVENT_EXTENSION’\n> > 56 | WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION,\n> > | ^~~~~~~~~~~~~~~~~~~~\n> > In file included from ../../src/include/utils/wait_event.h:29,\n> > from ../../src/include/pgstat.h:20,\n> > from controldata_utils.c:38:\n> > ../../src/include/utils/wait_event_types.h:60:9: note: previous\n> > definition of ‘WAIT_EVENT_EXTENSION’ with type ‘enum\n> > <anonymous>’\n> > 60 | WAIT_EVENT_EXTENSION = PG_WAIT_EXTENSION\n> > | ^~~~~~~~~~~~~~~~~~~~\n> > In file included from ../../src/include/pgstat.h:20,\n> > from controldata_utils.c:38:\n> > ../../src/include/utils/wait_event.h:58:3: error: conflicting types\n> > for ‘WaitEventExtension’; have ‘enum <anonymous>’\n> > 58 | } WaitEventExtension;\n> > | ^~~~~~~~~~~~~~~~~~\n> > In file included from ../../src/include/utils/wait_event.h:29,\n> > from ../../src/include/pgstat.h:20,\n> > from controldata_utils.c:38:\n> > ../../src/include/utils/wait_event_types.h:61:3: note: previous\n> > declaration of ‘WaitEventExtension’ with type\n> > ‘WaitEventExtension’\n> > 61 | } WaitEventExtension;\n> > | ^~~~~~~~~~~~~~~~~~\n> > make[2]: *** [Makefile:174: controldata_utils_srv.o] Erro 1\n> > make[2]: Saindo do diretório '/usr/src/postgres_commits/src/common'\n> > make[1]: *** [Makefile:42: all-common-recurse] Erro 2\n> > 
make[1]: Saindo do diretório '/usr/src/postgres_commits/src'\n> > make: *** [GNUmakefile:11: all-src-recurse] Erro 2\n> > ranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n> > \"WAIT_EVENT_EXTESION\" .\n> > ranier@notebook2:/usr/src/postgres_commits$ grep -r --include '*.c'\n> > \"WAIT_EVENT_EXTENSION\" .\n> >\n> > grep -r --include '*.h' \"WAIT_EVENT_EXTENSION\" .\n> > ./src/backend/utils/activity/wait_event_types.h: WAIT_EVENT_EXTENSION\n> > = PG_WAIT_EXTENSION\n> > ./src/include/utils/wait_event.h: WAIT_EVENT_EXTENSION =\n> > PG_WAIT_EXTENSION,\n> > ./src/include/utils/wait_event.h:\n> > WAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n> >\n> > Not sure if this is really a mistake in my environment.\n>\n> I can build without error.\n> * latest commit(3845577cb55eab3e06b3724e307ebbcac31f4841)\n> * gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)\n>\n> The result in my environment is\n> $ grep -r --include '*.h' \"WAIT_EVENT_EXTENSION\" .\n> ./src/include/utils/wait_event.h: WAIT_EVENT_EXTENSION =\n> PG_WAIT_EXTENSION,\n> ./src/include/utils/wait_event.h:\n> WAIT_EVENT_EXTENSION_FIRST_USER_DEFINED\n>\n> I think the error occurred because the old wait_event_types.h still\n> exists.\n> Can you remove it before building and try again?\n>\n> $ make maintainer-clean # remove wait_event_types.h\n> $ ./configure --without-icu\n> $ make\n>\nYeah, removing wait_event_types.h works.\n\nThanks.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 1 Aug 2023 21:46:24 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support to define custom wait events for extensions" } ]
[ { "msg_contents": "Dear PostgreSQL Development Team,\n\nI am inquiring about the availability of certain functionalities in the\nstandard PostgreSQL database. Could you please confirm if the following\nfunctionalities are currently available in PostgreSQL:\n\n1. Enforcement of Security Attribute Expiry\n2. Restricted Data Audit Management Access\n3. Limit on Concurrent Sessions\n4. Access History for PostgreSQL\n\nIf any of these functionalities are not present in the standard PostgreSQL\ndatabase, could you kindly let me know and provide any relevant insights\nabout their potential implementation in future versions?\n\nThank you for your time and assistance. I look forward to your response.\n\nWith Regards,\n\nSultan Berentayev", "msg_date": "Wed, 2 Aug 2023 09:10:51 +0600", "msg_from": "Sultan Berentaev <s.berentaev@gmail.com>", "msg_from_op": true, "msg_subject": "Inquiry about Functionality Availability in PostgreSQL" }, { "msg_contents": "Hi Sultan,\nThanks for your email and interest in PostgreSQL.\n\nOn Wed, Aug 2, 2023 at 8:41 AM Sultan Berentaev <s.berentaev@gmail.com>\nwrote:\n>\n>\n> Dear PostgreSQL Development Team,\n>\n> I am inquiring about the availability of certain functionalities in the\nstandard PostgreSQL database. 
Could you please confirm if the following\nfunctionalities are currently available in PostgreSQL:\n>\n\nIt might help if you describe each of these functionalities in a sentence\nor two.\n\n> 1. Enforcement of Security Attribute Expiry\n\nThis looks like a broader term. Each attribute (certificates, passwords, )\nmay have a different mechanism of expiry. Some attributes may not have\nexpiry enforcement. Some may have expiry controlled by external agents. Can\nyou please provide more details about what you are looking for?\n\n> 2. Restricted Data Audit Management Access\n\nI have vague understanding of what this could be but I think more details\nwill be helpful.\n\n> 3. Limit on Concurrent Sessions\n\nIf it's just the number of client connections PostgreSQL is supposed to\nhandle at a time, max_connection\n<https://www.postgresql.org/docs/15/runtime-config-connection.html#GUC-MAX-CONNECTIONS>\nis all you need.\n\n> 4. Access History for PostgreSQL\n\nMore details on what kind of access you are looking for will be helpful.\nPostgreSQL has GUCs that enable/disable different kinds of accesses to be\nlogged to server error log.\n\n>\n> If any of these functionalities are not present in the standard\nPostgreSQL database, could you kindly let me know and provide any relevant\ninsights about their potential implementation in future versions?\n>\n\nCore PostgreSQL may not provide some of the functionalities by itself but\nextensions like pg_audit may provide those functionalities. Maybe you want\nto look for such extensions.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 3 Aug 2023 13:51:33 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inquiry about Functionality Availability in PostgreSQL" } ]
[ { "msg_contents": "Now that we have effectively de-supported CentOS 6, by removing support \nfor its OpenSSL version, I think we could also modernize the use of some \nother libraries, such as zlib.\n\nIf we define ZLIB_CONST before including zlib.h, zlib augments some\ninterfaces with const decorations. By doing that we can keep our own\ninterfaces cleaner and can remove some unconstify calls.\n\nZLIB_CONST was introduced in zlib 1.2.5.2 (17 Dec 2011); CentOS 6 has \nzlib-1.2.3-29.el6.x86_64.\n\nNote that if you use this patch and compile on CentOS 6, it still works, \nyou just get a few compiler warnings about discarding qualifiers. Old \nenvironments tend to produce more compiler warnings anyway, so this \ndoesn't seem so bad.", "msg_date": "Wed, 2 Aug 2023 11:50:57 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Improve const use in zlib-using code" }, { "msg_contents": "Peter,\n\nI like the idea. Though the way you have it implemented at the moment \nseems like a trap in that any time zlib.h is included someone also has \nto remember to add this define. I would recommend adding the define to \nthe build systems instead.\n\nSince you put in the work to find the version of zlib that added this, \nYou might also want to add `required: '>= 1.2.5.2'` to the \n`dependency('zlib')` call in the main meson.build. I am wondering if we \ncould remove the `z_streamp` check too. The version that it was added \nisn't obvious.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 02 Aug 2023 09:39:00 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: Improve const use in zlib-using code" }, { "msg_contents": "On 02.08.23 16:39, Tristan Partin wrote:\n> I like the idea. Though the way you have it implemented at the moment \n> seems like a trap in that any time zlib.h is included someone also has \n> to remember to add this define. 
I would recommend adding the define to \n> the build systems instead.\n\nOk, moved to c.h.\n\n> Since you put in the work to find the version of zlib that added this, \n> You might also want to add `required: '>= 1.2.5.2'` to the \n> `dependency('zlib')` call in the main meson.build.\n\nWell, it's not a hard requirement, so I think this is not necessary.\n\n> I am wondering if we \n> could remove the `z_streamp` check too. The version that it was added \n> isn't obvious.\n\nYeah, that appears to be very obsolete. I have made an additional patch \nto remove that.", "msg_date": "Thu, 3 Aug 2023 09:08:13 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Improve const use in zlib-using code" }, { "msg_contents": "Both patches look good to me.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 03 Aug 2023 09:30:10 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: Improve const use in zlib-using code" }, { "msg_contents": "On 03.08.23 16:30, Tristan Partin wrote:\n> Both patches look good to me.\n\ncommitted, thanks\n\n\n\n", "msg_date": "Mon, 7 Aug 2023 09:42:04 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Improve const use in zlib-using code" } ]
[ { "msg_contents": "Hi,\n\nCurrently, we have an option to drop the expression of stored generated\ncolumns\nas:\n\nALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n\nBut don't have support to update that expression. The attached patch\nprovides\nthat as:\n\nALTER [ COLUMN ] column_name SET EXPRESSION expression\n\nNote that this form of ALTER is meant to work for the column which is\nalready\ngenerated. It then changes the generation expression in the catalog and\nrewrite\nthe table, using the existing table rewrite facilities for ALTER TABLE.\nOtherwise, an error will be reported.\n\nTo keep the code flow simple, I have renamed the existing function that was\nin\nuse for DROP EXPRESSION so that it can be used for SET EXPRESSION as well,\nwhich is a similar design as SET/DROP DEFAULT. I kept this renaming code\nchanges in a separate patch to minimize the diff in the main patch.\n\nDemo:\n-- Create table\nCREATE TABLE t1 (x int, y int GENERATED ALWAYS AS (x * 2) STORED);\nINSERT INTO t1 VALUES(generate_series(1,3));\n\n-- Check the generated data\nSELECT * FROM t1;\n x | y\n---+---\n 1 | 2\n 2 | 4\n 3 | 6\n(3 rows)\n\n-- Alter the expression\nALTER TABLE t1 ALTER COLUMN y SET EXPRESSION (x * 4);\n\n-- Check the new data\nSELECT * FROM t1;\n x | y\n---+----\n 1 | 4\n 2 | 8\n 3 | 12\n(3 rows)\n\nThank you.\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Aug 2023 16:05:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "ALTER COLUMN ... SET EXPRESSION to alter stored generated column's\n expression" }, { "msg_contents": "On Wed, Aug 2, 2023 at 6:36 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently, we have an option to drop the expression of stored generated columns\n> as:\n>\n> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n>\n> But don't have support to update that expression. 
The attached patch provides\n> that as:\n>\n> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n>\n> Note that this form of ALTER is meant to work for the column which is already\n> generated. It then changes the generation expression in the catalog and rewrite\n> the table, using the existing table rewrite facilities for ALTER TABLE.\n> Otherwise, an error will be reported.\n>\n> To keep the code flow simple, I have renamed the existing function that was in\n> use for DROP EXPRESSION so that it can be used for SET EXPRESSION as well,\n> which is a similar design as SET/DROP DEFAULT. I kept this renaming code\n> changes in a separate patch to minimize the diff in the main patch.\n>\n> Demo:\n> -- Create table\n> CREATE TABLE t1 (x int, y int GENERATED ALWAYS AS (x * 2) STORED);\n> INSERT INTO t1 VALUES(generate_series(1,3));\n>\n> -- Check the generated data\n> SELECT * FROM t1;\n> x | y\n> ---+---\n> 1 | 2\n> 2 | 4\n> 3 | 6\n> (3 rows)\n>\n> -- Alter the expression\n> ALTER TABLE t1 ALTER COLUMN y SET EXPRESSION (x * 4);\n>\n> -- Check the new data\n> SELECT * FROM t1;\n> x | y\n> ---+----\n> 1 | 4\n> 2 | 8\n> 3 | 12\n> (3 rows)\n>\n> Thank you.\n> --\n> Regards,\n> Amul Sul\n> EDB: http://www.enterprisedb.com\n-------------------------\nsetup.\n\nBEGIN;\nset search_path = test;\nDROP TABLE if exists gtest_parent, gtest_child;\n\nCREATE TABLE gtest_parent (f1 date NOT NULL, f2 bigint, f3 bigint\nGENERATED ALWAYS AS (f2 * 2) STORED) PARTITION BY RANGE (f1);\n\nCREATE TABLE gtest_child PARTITION OF gtest_parent\n FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); -- inherits gen expr\n\nCREATE TABLE gtest_child2 PARTITION OF gtest_parent (\n f3 WITH OPTIONS GENERATED ALWAYS AS (f2 * 22) STORED -- overrides gen expr\n) FOR VALUES FROM ('2016-08-01') TO ('2016-09-01');\n\nCREATE TABLE gtest_child3 (f1 date NOT NULL, f2 bigint, f3 bigint\nGENERATED ALWAYS AS (f2 * 33) STORED);\nALTER TABLE gtest_parent ATTACH PARTITION gtest_child3 FOR VALUES FROM\n('2016-09-01') 
TO ('2016-10-01');\n\nINSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 1);\nINSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 2);\nINSERT INTO gtest_parent (f1, f2) VALUES ('2016-08-15', 3);\nUPDATE gtest_parent SET f1 = f1 + 60 WHERE f2 = 1;\n\nALTER TABLE ONLY gtest_parent ALTER COLUMN f3 SET EXPRESSION (f2 * 4);\nALTER TABLE gtest_child ALTER COLUMN f3 SET EXPRESSION (f2 * 10);\nCOMMIT;\n\nset search_path = test;\nSELECT table_name, column_name, is_generated, generation_expression\nFROM information_schema.columns\nWHERE table_name in ('gtest_child','gtest_child1',\n'gtest_child2','gtest_child3')\norder by 1,2;\nresult:\n table_name | column_name | is_generated | generation_expression\n--------------+-------------+--------------+-----------------------\n gtest_child | f1 | NEVER |\n gtest_child | f1 | NEVER |\n gtest_child | f2 | NEVER |\n gtest_child | f2 | NEVER |\n gtest_child | f3 | ALWAYS | (f2 * 2)\n gtest_child | f3 | ALWAYS | (f2 * 10)\n gtest_child2 | f1 | NEVER |\n gtest_child2 | f1 | NEVER |\n gtest_child2 | f2 | NEVER |\n gtest_child2 | f2 | NEVER |\n gtest_child2 | f3 | ALWAYS | (f2 * 22)\n gtest_child2 | f3 | ALWAYS | (f2 * 2)\n gtest_child3 | f1 | NEVER |\n gtest_child3 | f1 | NEVER |\n gtest_child3 | f2 | NEVER |\n gtest_child3 | f2 | NEVER |\n gtest_child3 | f3 | ALWAYS | (f2 * 2)\n gtest_child3 | f3 | ALWAYS | (f2 * 33)\n(18 rows)\n\none partition, one column 2 generated expression. Is this the expected\nbehavior?\n\nIn the regress, you can replace \\d table_name to sql query (similar\nto above) to get the generated expression meta data.\nsince here you want the meta data to be correct. then one select query\nto valid generated expression behaviored sane or not.\n\n\n", "msg_date": "Wed, 2 Aug 2023 23:46:41 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Wed, Aug 2, 2023 at 9:16 PM jian he <jian.universality@gmail.com> wrote:\n\n> On Wed, Aug 2, 2023 at 6:36 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Currently, we have an option to drop the expression of stored generated\n> columns\n> > as:\n> >\n> > ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n> >\n> > But don't have support to update that expression. The attached patch\n> provides\n> > that as:\n> >\n> > ALTER [ COLUMN ] column_name SET EXPRESSION expression\n> >\n> > Note that this form of ALTER is meant to work for the column which is\n> already\n> > generated. It then changes the generation expression in the catalog and\n> rewrite\n> > the table, using the existing table rewrite facilities for ALTER TABLE.\n> > Otherwise, an error will be reported.\n> >\n> > To keep the code flow simple, I have renamed the existing function that\n> was in\n> > use for DROP EXPRESSION so that it can be used for SET EXPRESSION as\n> well,\n> > which is a similar design as SET/DROP DEFAULT. 
I kept this renaming code\n> > changes in a separate patch to minimize the diff in the main patch.\n> >\n> > Demo:\n> > -- Create table\n> > CREATE TABLE t1 (x int, y int GENERATED ALWAYS AS (x * 2) STORED);\n> > INSERT INTO t1 VALUES(generate_series(1,3));\n> >\n> > -- Check the generated data\n> > SELECT * FROM t1;\n> > x | y\n> > ---+---\n> > 1 | 2\n> > 2 | 4\n> > 3 | 6\n> > (3 rows)\n> >\n> > -- Alter the expression\n> > ALTER TABLE t1 ALTER COLUMN y SET EXPRESSION (x * 4);\n> >\n> > -- Check the new data\n> > SELECT * FROM t1;\n> > x | y\n> > ---+----\n> > 1 | 4\n> > 2 | 8\n> > 3 | 12\n> > (3 rows)\n> >\n> > Thank you.\n> > --\n> > Regards,\n> > Amul Sul\n> > EDB: http://www.enterprisedb.com\n> -------------------------\n> setup.\n>\n> BEGIN;\n> set search_path = test;\n> DROP TABLE if exists gtest_parent, gtest_child;\n>\n> CREATE TABLE gtest_parent (f1 date NOT NULL, f2 bigint, f3 bigint\n> GENERATED ALWAYS AS (f2 * 2) STORED) PARTITION BY RANGE (f1);\n>\n> CREATE TABLE gtest_child PARTITION OF gtest_parent\n> FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); -- inherits gen expr\n>\n> CREATE TABLE gtest_child2 PARTITION OF gtest_parent (\n> f3 WITH OPTIONS GENERATED ALWAYS AS (f2 * 22) STORED -- overrides gen\n> expr\n> ) FOR VALUES FROM ('2016-08-01') TO ('2016-09-01');\n>\n> CREATE TABLE gtest_child3 (f1 date NOT NULL, f2 bigint, f3 bigint\n> GENERATED ALWAYS AS (f2 * 33) STORED);\n> ALTER TABLE gtest_parent ATTACH PARTITION gtest_child3 FOR VALUES FROM\n> ('2016-09-01') TO ('2016-10-01');\n>\n> INSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 1);\n> INSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 2);\n> INSERT INTO gtest_parent (f1, f2) VALUES ('2016-08-15', 3);\n> UPDATE gtest_parent SET f1 = f1 + 60 WHERE f2 = 1;\n>\n> ALTER TABLE ONLY gtest_parent ALTER COLUMN f3 SET EXPRESSION (f2 * 4);\n> ALTER TABLE gtest_child ALTER COLUMN f3 SET EXPRESSION (f2 * 10);\n> COMMIT;\n>\n> set search_path = test;\n> SELECT table_name, column_name, 
is_generated, generation_expression\n> FROM information_schema.columns\n> WHERE table_name in ('gtest_child','gtest_child1',\n> 'gtest_child2','gtest_child3')\n> order by 1,2;\n> result:\n> table_name | column_name | is_generated | generation_expression\n> --------------+-------------+--------------+-----------------------\n> gtest_child | f1 | NEVER |\n> gtest_child | f1 | NEVER |\n> gtest_child | f2 | NEVER |\n> gtest_child | f2 | NEVER |\n> gtest_child | f3 | ALWAYS | (f2 * 2)\n> gtest_child | f3 | ALWAYS | (f2 * 10)\n> gtest_child2 | f1 | NEVER |\n> gtest_child2 | f1 | NEVER |\n> gtest_child2 | f2 | NEVER |\n> gtest_child2 | f2 | NEVER |\n> gtest_child2 | f3 | ALWAYS | (f2 * 22)\n> gtest_child2 | f3 | ALWAYS | (f2 * 2)\n> gtest_child3 | f1 | NEVER |\n> gtest_child3 | f1 | NEVER |\n> gtest_child3 | f2 | NEVER |\n> gtest_child3 | f2 | NEVER |\n> gtest_child3 | f3 | ALWAYS | (f2 * 2)\n> gtest_child3 | f3 | ALWAYS | (f2 * 33)\n> (18 rows)\n>\n> one partition, one column 2 generated expression. Is this the expected\n> behavior?\n\n\nThat is not expected & acceptable. But, somehow, I am not able to reproduce\nthis behavior. Could you please retry this experiment by adding\n\"table_schema\"\nin your output query?\n\nThank you.\n\nRegards,\nAmul\n\nOn Wed, Aug 2, 2023 at 9:16 PM jian he <jian.universality@gmail.com> wrote:On Wed, Aug 2, 2023 at 6:36 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently, we have an option to drop the expression of stored generated columns\n> as:\n>\n> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n>\n> But don't have support to update that expression. The attached patch provides\n> that as:\n>\n> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n>\n> Note that this form of ALTER is meant to work for the column which is already\n> generated. 
It then changes the generation expression in the catalog and rewrite\n> the table, using the existing table rewrite facilities for ALTER TABLE.\n> Otherwise, an error will be reported.\n>\n> To keep the code flow simple, I have renamed the existing function that was in\n> use for DROP EXPRESSION so that it can be used for SET EXPRESSION as well,\n> which is a similar design as SET/DROP DEFAULT. I kept this renaming code\n> changes in a separate patch to minimize the diff in the main patch.\n>\n> Demo:\n> -- Create table\n> CREATE TABLE t1 (x int, y int GENERATED ALWAYS AS (x * 2) STORED);\n> INSERT INTO t1 VALUES(generate_series(1,3));\n>\n> -- Check the generated data\n> SELECT * FROM t1;\n>  x | y\n> ---+---\n>  1 | 2\n>  2 | 4\n>  3 | 6\n> (3 rows)\n>\n> -- Alter the expression\n> ALTER TABLE t1 ALTER COLUMN y SET EXPRESSION (x * 4);\n>\n> -- Check the new data\n> SELECT * FROM t1;\n>  x | y\n> ---+----\n>  1 |  4\n>  2 |  8\n>  3 | 12\n> (3 rows)\n>\n> Thank you.\n> --\n> Regards,\n> Amul Sul\n> EDB: http://www.enterprisedb.com\n-------------------------\nsetup.\n\nBEGIN;\nset search_path = test;\nDROP TABLE if exists gtest_parent, gtest_child;\n\nCREATE TABLE gtest_parent (f1 date NOT NULL, f2 bigint, f3 bigint\nGENERATED ALWAYS AS (f2 * 2) STORED) PARTITION BY RANGE (f1);\n\nCREATE TABLE gtest_child PARTITION OF gtest_parent\n  FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');  -- inherits gen expr\n\nCREATE TABLE gtest_child2 PARTITION OF gtest_parent (\n    f3 WITH OPTIONS GENERATED ALWAYS AS (f2 * 22) STORED  -- overrides gen expr\n) FOR VALUES FROM ('2016-08-01') TO ('2016-09-01');\n\nCREATE TABLE gtest_child3 (f1 date NOT NULL, f2 bigint, f3 bigint\nGENERATED ALWAYS AS (f2 * 33) STORED);\nALTER TABLE gtest_parent ATTACH PARTITION gtest_child3 FOR VALUES FROM\n('2016-09-01') TO ('2016-10-01');\n\nINSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 1);\nINSERT INTO gtest_parent (f1, f2) VALUES ('2016-07-15', 2);\nINSERT INTO gtest_parent (f1, f2) 
VALUES ('2016-08-15', 3);\nUPDATE gtest_parent SET f1 = f1 + 60 WHERE f2 = 1;\n\nALTER TABLE ONLY gtest_parent ALTER COLUMN f3 SET EXPRESSION (f2 * 4);\nALTER TABLE gtest_child ALTER COLUMN f3 SET EXPRESSION (f2 * 10);\nCOMMIT;\n\nset search_path = test;\nSELECT table_name, column_name, is_generated, generation_expression\nFROM information_schema.columns\nWHERE table_name  in ('gtest_child','gtest_child1',\n'gtest_child2','gtest_child3')\norder by 1,2;\nresult:\n  table_name  | column_name | is_generated | generation_expression\n--------------+-------------+--------------+-----------------------\n gtest_child  | f1          | NEVER        |\n gtest_child  | f1          | NEVER        |\n gtest_child  | f2          | NEVER        |\n gtest_child  | f2          | NEVER        |\n gtest_child  | f3          | ALWAYS       | (f2 * 2)\n gtest_child  | f3          | ALWAYS       | (f2 * 10)\n gtest_child2 | f1          | NEVER        |\n gtest_child2 | f1          | NEVER        |\n gtest_child2 | f2          | NEVER        |\n gtest_child2 | f2          | NEVER        |\n gtest_child2 | f3          | ALWAYS       | (f2 * 22)\n gtest_child2 | f3          | ALWAYS       | (f2 * 2)\n gtest_child3 | f1          | NEVER        |\n gtest_child3 | f1          | NEVER        |\n gtest_child3 | f2          | NEVER        |\n gtest_child3 | f2          | NEVER        |\n gtest_child3 | f3          | ALWAYS       | (f2 * 2)\n gtest_child3 | f3          | ALWAYS       | (f2 * 33)\n(18 rows)\n\none partition, one column 2 generated expression. Is this the expected\nbehavior?", "msg_date": "Thu, 3 Aug 2023 10:53:10 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, Aug 3, 2023 at 1:23 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n>\n> That is not expected & acceptable. But, somehow, I am not able to reproduce\n> this behavior. Could you please retry this experiment by adding \"table_schema\"\n> in your output query?\n>\n> Thank you.\n>\n> Regards,\n> Amul\n>\n\nsorry. my mistake.\nI created these partitions in a public schema and test schema. I\nignored table_schema when querying it.\nNow, this patch works as expected.\n\n\n", "msg_date": "Thu, 3 Aug 2023 14:34:36 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi Amul,\r\nPatch changes look fine.\r\nBelow are some of my observations as soon as we alter default expression on column\r\n\r\n1. Materialized view becomes stale and starts giving incorrect results. We need to refresh the materialized view to get correct results.\r\n2. Index on generated column need to be reindexed in order to use new expression.\r\n3. Column Stats become stale and plan may be impacted due to outdated stats.\r\n\r\nThese things also happen as soon as we delete default expression or set default expression on column.\r\n\r\nThanks & Best Regards,\r\nAjit", "msg_date": "Tue, 22 Aug 2023 07:14:54 +0000", "msg_from": "ajitpostgres awekar <ajitpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 8/2/23 12:35, Amul Sul wrote:\n> Hi,\n> \n> Currently, we have an option to drop the expression of stored generated\n> columns\n> as:\n> \n> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n> \n> But don't have support to update that expression. The attached patch\n> provides\n> that as:\n> \n> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n\nI am surprised this is not in the standard already. I will go work on that.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 22 Aug 2023 13:01:57 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "Hi Amul,\n\n\nOn Wed, Aug 2, 2023 at 4:06 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> Hi,\n>\n> Currently, we have an option to drop the expression of stored generated\n> columns\n> as:\n>\n> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n>\n> But don't have support to update that expression. The attached patch\n> provides\n> that as:\n>\n> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n>\n> +1 to the idea.\n\nHere are few review comments:.\n*0001 patch*\n1. Alignment changed for below comment:\n\n> AT_ColumnExpression, /* alter column drop expression */\n\n\n*00002 patch*\n1. EXPRESSION should be added after DEFAULT per alphabetic order?\n\n> + COMPLETE_WITH(\"(\", \"COMPRESSION\", \"EXPRESSION\", \"DEFAULT\", \"GENERATED\",\n> \"NOT NULL\", \"STATISTICS\", \"STORAGE\",\n>\n\n2. The variable *isdrop* can be aligned better:\n\n> + bool isdrop = (cmd->def == NULL);\n>\n\n3. The AlteredTableInfo structure has member Relation, So need to pass\nparameter Relation separately?\n\n> static ObjectAddress ATExecColumnExpression(AlteredTableInfo *tab,\n> Relation rel,\n> const char *colName, Node *newDefault,\n> bool missing_ok, LOCKMODE lockmode);\n>\n\n4. 
Exceeded 80 char limit:\n\n> /*\n> * Mark the column as no longer generated. (The atthasdef flag needs to\n>\n\n5. Update the comment. Use 'set' along with 'drop':\n\n> AT_ColumnExpression, /* alter column drop expression */\n\n\nThanks,\nVaibhav Dalvi", "msg_date": "Thu, 24 Aug 2023 09:36:12 +0530", "msg_from": "Vaibhav Dalvi <vaibhav.dalvi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 8/2/23 12:35, Amul Sul wrote:\n> Hi,\n> \n> Currently, we have an option to drop the expression of stored generated\n> columns\n> as:\n> \n> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n> \n> But don't have support to update that expression. The attached patch\n> provides\n> that as:\n> \n> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n\nI love this idea. It is something that the standard SQL language is \nlacking and I am submitting a paper to correct that based on this. I \nwill know in October what the committee thinks of it. Thanks!\n\n> Note that this form of ALTER is meant to work for the column which is\n> already generated.\n\nWhy? SQL does not have a way to convert a non-generated column into a \ngenerated column, and this seems like as good a way as any.\n\n> To keep the code flow simple, I have renamed the existing function that was\n> in use for DROP EXPRESSION so that it can be used for SET EXPRESSION as well,\n> which is a similar design as SET/DROP DEFAULT. I kept this renaming code\n> changes in a separate patch to minimize the diff in the main patch.\n\nI don't like this part of the patch at all. Not only is the \ndocumentation only half baked, but the entire concept of the two \ncommands is different. 
Especially since I believe the command should \nalso create a generated column from a non-generated one.\n\n\nIs it possible to compare the old and new expressions and no-op if they \nare the same?\n\n\npsql (17devel)\nType \"help\" for help.\n\npostgres=# create table t (c integer generated always as (null) stored);\nCREATE TABLE\npostgres=# select relfilenode from pg_class where oid = 't'::regclass;\n relfilenode\n-------------\n 16384\n(1 row)\n\npostgres=# alter table t alter column c set expression (null);\nALTER TABLE\npostgres=# select relfilenode from pg_class where oid = 't'::regclass;\n relfilenode\n-------------\n 16393\n(1 row)\n\n\nI am not saying we should make every useless case avoid rewriting the \ntable, but if there are simple wins, we should take them. (I don't know \nhow feasible this is.)\n\nI think repeating the STORED keyword should be required here to \nfuture-proof virtual generated columns.\n\nConsider this hypothetical example:\n\nCREATE TABLE t (c INTEGER);\nALTER TABLE t ALTER COLUMN c SET EXPRESSION (42) STORED;\nALTER TABLE t ALTER COLUMN c SET EXPRESSION VIRTUAL;\n\nIf we don't require the STORED keyword on the second command, it becomes \nambiguous. If we then decide that VIRTUAL should be the default, we \nwill break people's scripts.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 25 Aug 2023 02:05:54 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, Aug 24, 2023 at 9:36 AM Vaibhav Dalvi <\nvaibhav.dalvi@enterprisedb.com> wrote:\n\n> Hi Amul,\n>\n>\n> On Wed, Aug 2, 2023 at 4:06 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n>> Hi,\n>>\n>> Currently, we have an option to drop the expression of stored generated\n>> columns\n>> as:\n>>\n>> ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n>>\n>> But don't have support to update that expression. 
The attached patch\n>> provides\n>> that as:\n>>\n>> ALTER [ COLUMN ] column_name SET EXPRESSION expression\n>>\n>> +1 to the idea.\n>\n\nThank you.\n\n\n> 3. The AlteredTableInfo structure has member Relation, So need to pass\n> parameter Relation separately?\n>\n>> static ObjectAddress ATExecColumnExpression(AlteredTableInfo *tab,\n>> Relation rel,\n>> const char *colName, Node *newDefault,\n>> bool missing_ok, LOCKMODE lockmode);\n>\n>\nYeah, but I think, let it be since other AT routines have the same.\n\nThanks for the review comments, I have fixed those in the attached version.\nIn\naddition to that, extended syntax to have the STORE keyword as suggested by\nVik.\n\nRegards,\nAmul", "msg_date": "Mon, 28 Aug 2023 15:24:12 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Fri, Aug 25, 2023 at 5:35 AM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 8/2/23 12:35, Amul Sul wrote:\n> > Hi,\n> >\n> > Currently, we have an option to drop the expression of stored generated\n> > columns\n> > as:\n> >\n> > ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]\n> >\n> > But don't have support to update that expression. The attached patch\n> > provides\n> > that as:\n> >\n> > ALTER [ COLUMN ] column_name SET EXPRESSION expression\n>\n> I love this idea. It is something that the standard SQL language is\n> lacking and I am submitting a paper to correct that based on this. I\n> will know in October what the committee thinks of it. Thanks!\n>\n>\nGreat, thank you so much.\n\n\n> > Note that this form of ALTER is meant to work for the column which is\n> > already generated.\n>\n> Why? 
SQL does not have a way to convert a non-generated column into a\n> generated column, and this seems like as good a way as any.\n>\n\nWell, I had to have the same thought but Peter Eisentraut thinks that we\nshould\nhave that in a separate patch & I am fine with that.\n\n> To keep the code flow simple, I have renamed the existing function that\n> was\n> > in use for DROP EXPRESSION so that it can be used for SET EXPRESSION as\n> well,\n> > which is a similar design as SET/DROP DEFAULT. I kept this renaming code\n> > changes in a separate patch to minimize the diff in the main patch.\n>\n> I don't like this part of the patch at all. Not only is the\n> documentation only half baked, but the entire concept of the two\n> commands is different. Especially since I believe the command should\n> also create a generated column from a non-generated one.\n>\n\nI am not sure I understood this, why would that break the documentation\neven if\nwe allow non-generated columns to be generated. This makes the code flow\nsimple\nand doesn't have any issue for the future extension to allow non-generated\ncolumns too.\n\n\n>\n> Is is possible to compare the old and new expressions and no-op if they\n> are the same?\n>\n>\n> psql (17devel)\n> Type \"help\" for help.\n>\n> postgres=# create table t (c integer generated always as (null) stored);\n> CREATE TABLE\n> postgres=# select relfilenode from pg_class where oid = 't'::regclass;\n> relfilenode\n> -------------\n> 16384\n> (1 row)\n>\n> postgres=# alter table t alter column c set expression (null);\n> ALTER TABLE\n> postgres=# select relfilenode from pg_class where oid = 't'::regclass;\n> relfilenode\n> -------------\n> 16393\n> (1 row)\n>\n>\n> I am not saying we should make every useless case avoid rewriting the\n> table, but if there are simple wins, we should take them. 
(I don't know\n> how feasible this is.)\n>\n\nI think that is feasible, but I am not sure if we want to do that & add an\nextra\ncode for the case, which is not really breaking anything except making the\nsystem do extra work for the user's thoughtless action.\n\n\n>\n> I think repeating the STORED keyword should be required here to\n> future-proof virtual generated columns.\n>\n\nAgree, added in the v2 version posted a few minutes ago.\n\nRegards,\nAmul", "msg_date": "Mon, 28 Aug 2023 15:41:59 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "Hi!\n\nI'm pretty much like the idea of the patch. 
Looks like an overlook in SQL\nstandard for me.\nAnyway, patch apply with no conflicts and implements described\nfunctionality.\n\nOn Fri, 25 Aug 2023 at 03:06, Vik Fearing <vik@postgresfriends.org> wrote:\n\n>\n> I don't like this part of the patch at all. Not only is the\n> documentation only half baked, but the entire concept of the two\n> commands is different. Especially since I believe the command should\n> also create a generated column from a non-generated one.\n\n\nBut I have to agree with Vik Fearing, we can make this patch better, should\nwe?\nI totally understand your intentions to keep the code flow simple and reuse\nexisting code as much\nas possible. But in terms of semantics of these commands, they are quite\ndifferent from each other.\nAnd in terms of reading of the code, this makes it even harder to\nunderstand what is going on here.\nSo, in my view, consider split these commands.\n\nHope, that helps. Again, I'm +1 for this patch.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Wed, 13 Sep 2023 11:57:52 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Wed, Sep 13, 2023 at 2:28 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Hi!\n>\n> I'm pretty much like the idea of the patch. Looks like an overlook in SQL\n> standard for me.\n> Anyway, patch apply with no conflicts and implements described\n> functionality.\n>\n>\nThank you for looking at this.\n\n\n> On Fri, 25 Aug 2023 at 03:06, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n>>\n>> I don't like this part of the patch at all. Not only is the\n>> documentation only half baked, but the entire concept of the two\n>> commands is different. Especially since I believe the command should\n>> also create a generated column from a non-generated one.\n>\n>\n> But I have to agree with Vik Fearing, we can make this patch better,\n> should we?\n> I totally understand your intentions to keep the code flow simple and reuse\n> existing code as much\n> as possible. But in terms of semantics of these commands, they are quite\n> different from each other.\n> And in terms of reading of the code, this makes it even harder to\n> understand what is going on here.\n> So, in my view, consider split these commands.\n>\n\nOk, probably, I would work in that direction. I did the same thing that\nSET/DROP DEFAULT does, despite semantic differences, and also, if I am not\nmissing anything, the code complexity should be the same as that.\n\nRegards,\nAmul", "msg_date": "Thu, 14 Sep 2023 14:34:31 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "Hi Amul,\nI share others opinion that this feature is useful.\n\n>> On Fri, 25 Aug 2023 at 03:06, Vik Fearing <vik@postgresfriends.org> wrote:\n>>>\n>>>\n>>> I don't like this part of the patch at all. Not only is the\n>>> documentation only half baked, but the entire concept of the two\n>>> commands is different. Especially since I believe the command should\n>>> also create a generated column from a non-generated one.\n>>\n>>\n>> But I have to agree with Vik Fearing, we can make this patch better, should we?\n>> I totally understand your intentions to keep the code flow simple and reuse existing code as much\n>> as possible. 
But in terms of semantics of these commands, they are quite different from each other.\n>> And in terms of reading of the code, this makes it even harder to understand what is going on here.\n>> So, in my view, consider split these commands.\n>\n>\n> Ok, probably, I would work in that direction. I did the same thing that\n> SET/DROP DEFAULT does, despite semantic differences, and also, if I am not\n> missing anything, the code complexity should be the same as that.\n\nIf we allow SET EXPRESSION to convert a non-generated column to a\ngenerated one, the current way of handling ONLY would yield mismatch\nbetween parent and child. That's not allowed as per the documentation\n[1]. In that sense not allowing SET to change the GENERATED status is\nbetter. I think that can be added as a V2 feature, if it overly\ncomplicates the patch Or at least till a point that becomes part of\nSQL standard.\n\nI think V1 patch can focus on changing the expression of a column\nwhich is already a generated column.\n\nRegarding code, I think we should place it where it's reasonable -\nfollowing precedence is usually good. But I haven't reviewed the code\nto comment on it.\n\n[1] https://www.postgresql.org/docs/16/ddl-generated-columns.html\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 14 Sep 2023 19:23:19 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, Sep 14, 2023 at 7:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi Amul,\n> I share others opinion that this feature is useful.\n>\n> >> On Fri, 25 Aug 2023 at 03:06, Vik Fearing <vik@postgresfriends.org>\n> wrote:\n> >>>\n> >>>\n> >>> I don't like this part of the patch at all. Not only is the\n> >>> documentation only half baked, but the entire concept of the two\n> >>> commands is different. 
Especially since I believe the command should\n> >>> also create a generated column from a non-generated one.\n> >>\n> >>\n> >> But I have to agree with Vik Fearing, we can make this patch better,\n> should we?\n> >> I totally understand your intentions to keep the code flow simple and\n> reuse existing code as much\n> >> as possible. But in terms of semantics of these commands, they are\n> quite different from each other.\n> >> And in terms of reading of the code, this makes it even harder to\n> understand what is going on here.\n> >> So, in my view, consider split these commands.\n> >\n> >\n> > Ok, probably, I would work in that direction. I did the same thing that\n> > SET/DROP DEFAULT does, despite semantic differences, and also, if I am\n> not\n> > missing anything, the code complexity should be the same as that.\n>\n> If we allow SET EXPRESSION to convert a non-generated column to a\n> generated one, the current way of handling ONLY would yield mismatch\n> between parent and child. That's not allowed as per the documentation\n> [1]. In that sense not allowing SET to change the GENERATED status is\n> better. I think that can be added as a V2 feature, if it overly\n> complicates the patch Or at least till a point that becomes part of\n> SQL standard.\n>\n\nYes, that going to be a bit complicated including the case trying to convert\nthe non-generated column of a child table where need to find all the\nancestors\nand siblings and make the same changes.\n\nRegards,\nAmul", "msg_date": "Mon, 18 Sep 2023 13:59:13 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 28.08.23 11:54, Amul Sul wrote:\n> Thanks for the review comments, I have fixed those in the attached \n> version. 
In\n> addition to that, extended syntax to have the STORE keyword as suggested by\n> Vik.\n\nAn additional comment: When you change the generation expression, you \nneed to run ON UPDATE triggers on all rows, if there are any triggers \ndefined. That is because someone could have triggers defined on the \ncolumn to either check for valid values or propagate values somewhere \nelse, and if the expression changes, that is kind of like an UPDATE.\n\nSimilarly, I think we should consider how logical decoding should handle \nthis operation. I'd imagine it should generate UPDATE events on all \nrows. A test case in test_decoding would be useful.\n\n\n\n", "msg_date": "Fri, 6 Oct 2023 14:33:40 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Fri, Oct 6, 2023 at 6:06 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 28.08.23 11:54, Amul Sul wrote:\n> > Thanks for the review comments, I have fixed those in the attached\n> > version. In\n> > addition to that, extended syntax to have the STORE keyword as suggested by\n> > Vik.\n>\n> An additional comment: When you change the generation expression, you\n> need to run ON UPDATE triggers on all rows, if there are any triggers\n> defined. That is because someone could have triggers defined on the\n> column to either check for valid values or propagate values somewhere\n> else, and if the expression changes, that is kind of like an UPDATE.\n>\n> Similarly, I think we should consider how logical decoding should handle\n> this operation. I'd imagine it should generate UPDATE events on all\n> rows. A test case in test_decoding would be useful.\n>\n\nShould we treat it the same fashion as ALTER COLUMN ... TYPE which\nrewrites the column values? 
Of course that rewrites the whole table,\nbut logically they are comparable.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Oct 2023 18:27:36 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 06.10.23 14:57, Ashutosh Bapat wrote:\n> On Fri, Oct 6, 2023 at 6:06 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> On 28.08.23 11:54, Amul Sul wrote:\n>>> Thanks for the review comments, I have fixed those in the attached\n>>> version. In\n>>> addition to that, extended syntax to have the STORE keyword as suggested by\n>>> Vik.\n>>\n>> An additional comment: When you change the generation expression, you\n>> need to run ON UPDATE triggers on all rows, if there are any triggers\n>> defined. That is because someone could have triggers defined on the\n>> column to either check for valid values or propagate values somewhere\n>> else, and if the expression changes, that is kind of like an UPDATE.\n>>\n>> Similarly, I think we should consider how logical decoding should handle\n>> this operation. I'd imagine it should generate UPDATE events on all\n>> rows. A test case in test_decoding would be useful.\n> \n> Should we treat it the same fashion as ALTER COLUMN ... TYPE which\n> rewrites the column values? Of course that rewrites the whole table,\n> but logically they are comparable.\n\nI don't know. What are the semantics of that command with respect to \ntriggers and logical decoding?\n\n\n\n", "msg_date": "Fri, 6 Oct 2023 15:13:57 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Fri, Oct 6, 2023 at 6:03 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 28.08.23 11:54, Amul Sul wrote:\n> > Thanks for the review comments, I have fixed those in the attached\n> > version. In\n> > addition to that, extended syntax to have the STORE keyword as suggested\n> by\n> > Vik.\n>\n> An additional comment: When you change the generation expression, you\n> need to run ON UPDATE triggers on all rows, if there are any triggers\n> defined. That is because someone could have triggers defined on the\n> column to either check for valid values or propagate values somewhere\n> else, and if the expression changes, that is kind of like an UPDATE.\n>\n> Similarly, I think we should consider how logical decoding should handle\n> this operation. I'd imagine it should generate UPDATE events on all\n> rows. A test case in test_decoding would be useful.\n>\n\nIf I am not mistaken, the existing table rewrite facilities for ALTER TABLE\ndon't have support to run triggers or generate an event for each row,\nright?\n\nDo you expect to write a new code to handle this rewriting?\n\nRegards,\nAmul", "msg_date": "Mon, 9 Oct 2023 18:56:58 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Fri, Oct 6, 2023 at 9:14 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > Should we treat it the same fashion as ALTER COLUMN ... TYPE which\n> > rewrites the column values? Of course that rewrites the whole table,\n> > but logically they are comparable.\n>\n> I don't know. What are the semantics of that command with respect to\n> triggers and logical decoding?\n\nALTER COLUMN ... TYPE doesn't fire triggers, and I don't think logical\ndecoding will do anything with it, either. As Amul also suggested, I\ntend to think that this command should behave like that command\ninstead of inventing some new behavior. Sure, this is kind of like an\nUPDATE, but it's also not actually an UPDATE: it's DDL. 
Consider this\nexample:\n\nrhaas=# create table foo (a int, b text);\nCREATE TABLE\nrhaas=# create function nozero () returns trigger as $$begin if (new.b\n= '0') then raise 'zero is bad'; end if; return new; end$$ language\nplpgsql;\nCREATE FUNCTION\nrhaas=# create trigger fnz before insert or update or delete on foo\nfor each row execute function nozero();\nCREATE TRIGGER\nrhaas=# insert into foo values (1, '0');\nERROR: zero is bad\nCONTEXT: PL/pgSQL function nozero() line 1 at RAISE\nrhaas=# insert into foo values (1, '00');\nINSERT 0 1\nrhaas=# alter table foo alter column b set data type integer using b::integer;\nALTER TABLE\nrhaas=# select * from foo;\n a | b\n---+---\n 1 | 0\n(1 row)\n\nrhaas=# insert into foo values (2, '0');\nERROR: type of parameter 14 (integer) does not match that when\npreparing the plan (text)\nCONTEXT: PL/pgSQL function nozero() line 1 at IF\nrhaas=# \\c\nYou are now connected to database \"rhaas\" as user \"rhaas\".\nrhaas=# insert into foo values (2, '0');\nERROR: zero is bad\nCONTEXT: PL/pgSQL function nozero() line 1 at RAISE\nrhaas=# insert into foo values (2, '00');\nERROR: zero is bad\nCONTEXT: PL/pgSQL function nozero() line 1 at RAISE\n\nThe trigger here is supposed to prevent me from inserting 0 into\ncolumn b, but I've ended up with one anyway, because when the column\nwas of type text, I could insert 00, and when I changed the column to\ntype integer, the value got smashed down to just 0, and the trigger\nwasn't fired to prevent that. You could certainly argue with that\nbehavior, but I think it's pretty reasonable, and it seems like if\nthis command behaved that way too, that would also be pretty\nreasonable. 
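For comparison, a sketch of how the command under discussion would behave under those same no-trigger, rewrite-in-place semantics (a hypothetical session: the SET EXPRESSION AS syntax is the proposed patch's, not in any released PostgreSQL, and the outcome shown is what this reasoning would imply rather than a tested result):

```sql
-- Hypothetical psql session; assumes the proposed
-- ALTER COLUMN ... SET EXPRESSION AS (...) patch is applied and that it
-- adopts the same no-trigger behavior as ALTER COLUMN ... TYPE above.
create table g (a int, b int generated always as (a + 1) stored);
insert into g values (1);

-- Rewrites every stored value of b using the new expression; any
-- row-level UPDATE triggers defined on g would not be fired, just as
-- in the ALTER COLUMN ... TYPE case.
alter table g alter column b set expression as (a * 100);

select * from g;
--  a |  b
-- ---+-----
--  1 | 100
```

Under these semantics a trigger meant to police values of b could be bypassed by the rewrite, exactly as the text-to-integer example shows for a type change.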
In fact, I'm inclined to think it would be preferable,\nboth for consistency and because it would be less work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 14:54:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "Here is the rebase version for the latest master head(673a17e3120).\n\nI haven't done any other changes related to the ON UPDATE trigger since that\nseems non-trivial; need a bit of work to add trigger support in\nATRewriteTable().\nAlso, I am not sure yet, if we were doing these changes, and the correct\ndirection\nfor that.\n\nRegards,\nAmul", "msg_date": "Wed, 25 Oct 2023 11:42:35 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 25.10.23 08:12, Amul Sul wrote:\n> Here is the rebase version for the latest master head(673a17e3120).\n> \n> I haven't done any other changes related to the ON UPDATE trigger since that\n> seems non-trivial; need a bit of work to add trigger support in \n> ATRewriteTable().\n> Also, I am not sure yet, if we were doing these changes, and the correct \n> direction\n> for that.\n\nI did some detailed testing on Db2, Oracle, and MariaDB (the three \nexisting implementations of this feature that I'm tracking), and none of \nthem fire any row or statement triggers when the respective statement to \nalter the generation expression is run. So I'm withdrawing my comment \nthat this should fire triggers. (I suppose event triggers are available \nif anyone really needs the functionality.)\n\n\n\n", "msg_date": "Tue, 7 Nov 2023 15:51:20 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Tue, Nov 7, 2023 at 8:21 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 25.10.23 08:12, Amul Sul wrote:\n> > Here is the rebase version for the latest master head(673a17e3120).\n> >\n> > I haven't done any other changes related to the ON UPDATE trigger since\n> that\n> > seems non-trivial; need a bit of work to add trigger support in\n> > ATRewriteTable().\n> > Also, I am not sure yet, if we were doing these changes, and the correct\n> > direction\n> > for that.\n>\n> I did some detailed testing on Db2, Oracle, and MariaDB (the three\n> existing implementations of this feature that I'm tracking), and none of\n> them fire any row or statement triggers when the respective statement to\n> alter the generation expression is run. So I'm withdrawing my comment\n> that this should fire triggers. (I suppose event triggers are available\n> if anyone really needs the functionality.)\n>\n\nThank you for the confirmation.\n\nHere is the updated version patch. Did minor changes to documents and tests.\n\nRegards,\nAmul", "msg_date": "Thu, 9 Nov 2023 17:30:49 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, 9 Nov 2023 at 15:01, Amul Sul <sulamul@gmail.com> wrote:\n\n>\n> Here is the updated version patch. Did minor changes to documents and\n> tests.\n>\nOverall patch looks good to me. Since Peter did withdraw his comment on\ntriggers and no open problems\nare present, we can make this patch RfC, shall we? It would be nice to\ncorrect this in the next release.\n\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 9 Nov 2023 16:49:00 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 09.11.23 13:00, Amul Sul wrote:\n> On Tue, Nov 7, 2023 at 8:21 PM Peter Eisentraut <peter@eisentraut.org \n> <mailto:peter@eisentraut.org>> wrote:\n> \n>     On 25.10.23 08:12, Amul Sul wrote:\n>      > Here is the rebase version for the latest master head(673a17e3120).\n>      >\n>      > I haven't done any other changes related to the ON UPDATE trigger\n>     since that\n>      > seems non-trivial; need a bit of work to add trigger support in\n>      > ATRewriteTable().\n>      > Also, I am not sure yet, if we were doing these changes, and the\n>     correct\n>      > direction\n>      > for that.\n> \n>     I did some detailed testing on Db2, Oracle, and MariaDB (the three\n>     existing implementations of this feature that I'm tracking), and\n>     none of\n>     them fire any row or statement triggers when the respective\n>     statement to\n>     alter the generation expression is run.  So I'm withdrawing my comment\n>     that this should fire triggers.  (I suppose event triggers are\n>     available\n>     if anyone really needs the functionality.)\n> \n> \n> Thank you for the confirmation.\n> \n> Here is the updated version patch. Did minor changes to documents and tests.\n\nI don't like the renaming in the 0001 patch.  I think it would be better \nto keep the two subcommands (DROP and SET) separate.  There is some \noverlap now, but for example I'm thinking about virtual generated \ncolumns, then there will be even more conditionals in there.  
Let's keep \nit separate for clarity.\n\nAlso, it seems to me that the SET EXPRESSION variant should just do an \nupdate of the catalog table instead of a drop and re-insert.\n\nThe documentation needs some improvements:\n\n+ ALTER [ COLUMN ] <replaceable \nclass=\"parameter\">column_name</replaceable> SET EXPRESSION <replaceable \nclass=\"parameter\">expression</replaceable> STORED\n\nIf we're going to follow the Db2 syntax, there should be an \"AS\" after \nEXPRESSION. And the implemented syntax requires parentheses, so they \nshould appear in the documentation.\n\nAlso, the keyword STORED shouldn't be there. (The same command should \nbe applicable to virtual generated columns in the future.)\n\nThere should be separate <varlistentry>s for SET and DROP in \nalter_table.sgml.\n\nThe functionality looks ok otherwise.\n\n\n\n", "msg_date": "Mon, 13 Nov 2023 09:10:29 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Mon, Nov 13, 2023 at 1:40 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 09.11.23 13:00, Amul Sul wrote:\n> > On Tue, Nov 7, 2023 at 8:21 PM Peter Eisentraut <peter@eisentraut.org\n> > <mailto:peter@eisentraut.org>> wrote:\n> >\n> > On 25.10.23 08:12, Amul Sul wrote:\n> > > Here is the rebase version for the latest master\n> head(673a17e3120).\n> > >\n> > > I haven't done any other changes related to the ON UPDATE trigger\n> > since that\n> > > seems non-trivial; need a bit of work to add trigger support in\n> > > ATRewriteTable().\n> > > Also, I am not sure yet, if we were doing these changes, and the\n> > correct\n> > > direction\n> > > for that.\n> >\n> > I did some detailed testing on Db2, Oracle, and MariaDB (the three\n> > existing implementations of this feature that I'm tracking), and\n> > none of\n> > them fire any row or statement triggers when the respective\n> > statement 
to\n> > alter the generation expression is run. So I'm withdrawing my\n> comment\n> > that this should fire triggers. (I suppose event triggers are\n> > available\n> > if anyone really needs the functionality.)\n> >\n> >\n> > Thank you for the confirmation.\n> >\n> > Here is the updated version patch. Did minor changes to documents and\n> tests.\n>\n> I don't like the renaming in the 0001 patch. I think it would be better\n> to keep the two subcommands (DROP and SET) separate. There is some\n> overlap now, but for example I'm thinking about virtual generated\n> columns, then there will be even more conditionals in there. Let's keep\n> it separate for clarity.\n>\n\nUnderstood. Will do the same.\n\n\n> Also, it seems to me that the SET EXPRESSION variant should just do an\n> update of the catalog table instead of a drop and re-insert.\n>\n\nI am not sure if that is sufficient; we need to get rid of the dependencies\nof\nexisting expressions on other columns and/or objects that need to be\nremoved.\nThe drop and re-insert does that easily.\n\n\n> The documentation needs some improvements:\n>\n> + ALTER [ COLUMN ] <replaceable\n> class=\"parameter\">column_name</replaceable> SET EXPRESSION <replaceable\n> class=\"parameter\">expression</replaceable> STORED\n>\n> If we're going to follow the Db2 syntax, there should be an \"AS\" after\n> EXPRESSION. And the implemented syntax requires parentheses, so they\n> should appear in the documentation.\n>\n> Also, the keyword STORED shouldn't be there. (The same command should\n> be applicable to virtual generated columns in the future.)\n>\n\nI have omitted \"AS\" intentionally, to keep syntax similar to our existing\nALTER COLUMN ... SET DEFAULT <a_expr>. Let me know if you want\nme to add that.\n\nThe STORED suggested by Vik[1]. 
I think we could skip that if there is no\nneed\nto differentiate between stored and virtual columns at ALTER.\n\nRegards,\nAmul\n\n1] postgr.es/m/d15cf691-55d0-e405-44ec-6448986c3276@postgresfriends.org", "msg_date": "Mon, 13 Nov 2023 18:37:23 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 13.11.23 14:07, Amul Sul wrote:\n> Also, it seems to me that the SET EXPRESSION variant should just do an\n> update of the catalog table instead of a drop and re-insert.\n> \n> I am not sure if that is sufficient; we need to get rid of the \n> dependencies of\n> existing expressions on other columns and/or objects that need to be \n> removed.\n> The drop and re-insert does that easily.\n\nOk, good point.\n\n> The documentation needs some improvements:\n> \n> +    ALTER [ COLUMN ] <replaceable\n> class=\"parameter\">column_name</replaceable> SET EXPRESSION <replaceable\n> class=\"parameter\">expression</replaceable> STORED\n> \n> If we're going to follow the Db2 syntax, there should be an \"AS\" after\n> EXPRESSION.  And the implemented syntax requires parentheses, so they\n> should appear in the documentation.\n> \n> Also, the keyword STORED shouldn't be there.  (The same command should\n> be applicable to virtual generated columns in the future.)\n> \n> I have omitted \"AS\" intentionally, to keep syntax similar to our existing\n> ALTERCOLUMN... SET DEFAULT <a_expr>.  Let me know if you want\n> me to add that.\n\nWell, my idea was to follow the Db2 syntax. Otherwise, we are adding \nyet another slightly different syntax to the world. Even if we think \nour idea is slightly better, it doesn't seem worth it.\n\n> The STORED suggested by Vik[1].  I think we could skip that if there is \n> no need\n> to differentiate between stored and virtual columns at ALTER.\n\nI think that suggestion was based on the idea that this would convert \nnon-generated columns to generated columns, but we have dropped that idea.\n\n\n\n", "msg_date": "Mon, 13 Nov 2023 16:39:26 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Mon, Nov 13, 2023 at 9:09 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 13.11.23 14:07, Amul Sul wrote:\n> > Also, it seems to me that the SET EXPRESSION variant should just do\n> an\n> > update of the catalog table instead of a drop and re-insert.\n> >\n> > I am not sure if that is sufficient; we need to get rid of the\n> > dependencies of\n> > existing expressions on other columns and/or objects that need to be\n> > removed.\n> > The drop and re-insert does that easily.\n>\n> Ok, good point.\n>\n> > The documentation needs some improvements:\n> >\n> > + ALTER [ COLUMN ] <replaceable\n> > class=\"parameter\">column_name</replaceable> SET EXPRESSION\n> <replaceable\n> > class=\"parameter\">expression</replaceable> STORED\n> >\n> > If we're going to follow the Db2 syntax, there should be an \"AS\"\n> after\n> > EXPRESSION. And the implemented syntax requires parentheses, so they\n> > should appear in the documentation.\n> >\n> > Also, the keyword STORED shouldn't be there. (The same command\n> should\n> > be applicable to virtual generated columns in the future.)\n> >\n> > I have omitted \"AS\" intentionally, to keep syntax similar to our existing\n> > ALTERCOLUMN... SET DEFAULT <a_expr>. Let me know if you want\n> > me to add that.\n>\n> Well, my idea was to follow the Db2 syntax. Otherwise, we are adding\n> yet another slightly different syntax to the world. Even if we think\n> our idea is slightly better, it doesn't seem worth it.\n>\n\nOk.\n\nPlease have a look at the attached version, updating the syntax to have \"AS\"\nafter EXPRESSION and other changes suggested previously.\n\nRegards,\nAmul", "msg_date": "Tue, 14 Nov 2023 16:10:04 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... 
SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 14.11.23 11:40, Amul Sul wrote:\n> Please have a look at the attached version, updating the syntax to have \"AS\"\n> after EXPRESSION and other changes suggested previously.\n\nThe code structure looks good to me now.\n\nQuestion: Why are you using AT_PASS_ADD_OTHERCONSTR? I don't know if \nit's right or wrong, but if you have a specific reason, it would be good \nto know.\n\nI think ATExecSetExpression() needs to lock pg_attribute? Did you lose \nthat during the refactoring?\n\nTiny comment: The error message in ATExecSetExpression() does not need \nto mention \"stored\", since it would be also applicable to virtual \ngenerated columns in the future.\n\nDocumentation additions in alter_table.sgml should use one-space indent \nconsistently. Also, \"This form replaces expression\" is missing a \"the\"?\n\n\n\n", "msg_date": "Wed, 15 Nov 2023 12:39:30 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Wed, Nov 15, 2023 at 5:09 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 14.11.23 11:40, Amul Sul wrote:\n> > Please have a look at the attached version, updating the syntax to have\n> \"AS\"\n> > after EXPRESSION and other changes suggested previously.\n>\n> The code structure looks good to me now.\n>\n\nThank you for your review.\n\n\n>\n> Question: Why are you using AT_PASS_ADD_OTHERCONSTR? I don't know if\n> it's right or wrong, but if you have a specific reason, it would be good\n> to know.\n>\n\nI referred to ALTER COLUMN DEFAULT and used that.\n\n\n\n> I think ATExecSetExpression() needs to lock pg_attribute? 
Did you lose\n> that during the refactoring?\n>\n\nI have removed that intentionally since we were not updating anything in\npg_attribute like ALTER DROP EXPRESSION.\n\n\n>\n> Tiny comment: The error message in ATExecSetExpression() does not need\n> to mention \"stored\", since it would be also applicable to virtual\n> generated columns in the future.\n>\n\nI had to have the same thought, but later decided when we do that\nvirtual column thing, we could simply change that. I am fine to do that\nchange\nnow as well, let me know your thought.\n\n\n> Documentation additions in alter_table.sgml should use one-space indent\n> consistently. Also, \"This form replaces expression\" is missing a \"the\"?\n>\n\nOk, will fix that.\n\nRegards,\nAmul\n\nOn Wed, Nov 15, 2023 at 5:09 PM Peter Eisentraut <peter@eisentraut.org> wrote:On 14.11.23 11:40, Amul Sul wrote:\n> Please have a look at the attached version, updating the syntax to have \"AS\"\n> after EXPRESSION and other changes suggested previously.\n\nThe code structure looks good to me now.Thank you for your review. \n\nQuestion: Why are you using AT_PASS_ADD_OTHERCONSTR?  I don't know if \nit's right or wrong, but if you have a specific reason, it would be good \nto know. I referred to ALTER COLUMN DEFAULT and used that.  \nI think ATExecSetExpression() needs to lock pg_attribute?  Did you lose \nthat during the refactoring?I have removed that intentionally since we were not updating anything inpg_attribute like ALTER DROP EXPRESSION. \n\nTiny comment: The error message in ATExecSetExpression() does not need \nto mention \"stored\", since it would be also applicable to virtual \ngenerated columns in the future.I had to have the same thought, but later decided when we do thatvirtual column thing, we could simply change that. I am fine to do that changenow as well, let me know your thought. \n\nDocumentation additions in alter_table.sgml should use one-space indent \nconsistently.  
Also, \"This form replaces expression\" is missing a \"the\"?Ok, will fix that.Regards,Amul", "msg_date": "Wed, 15 Nov 2023 17:56:12 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 15.11.23 13:26, Amul Sul wrote:\n> Question: Why are you using AT_PASS_ADD_OTHERCONSTR?  I don't know if\n> it's right or wrong, but if you have a specific reason, it would be\n> good\n> to know.\n> \n> I referred to ALTER COLUMN DEFAULT and used that.\n\nHmm, I'm not sure if that is a good comparison. For ALTER TABLE, SET \nDEFAULT is just a catalog manipulation, it doesn't change any data, so \nit's pretty easy. SET EXPRESSION changes data, which other phases might \nwant to inspect? For example, if you do SET EXPRESSION and add a \nconstraint in the same ALTER TABLE statement, do those run in the \ncorrect order?\n\n> I think ATExecSetExpression() needs to lock pg_attribute?  Did you lose\n> that during the refactoring?\n> \n> I have removed that intentionally since we were not updating anything in\n> pg_attribute like ALTER DROP EXPRESSION.\n\nok\n\n> Tiny comment: The error message in ATExecSetExpression() does not need\n> to mention \"stored\", since it would be also applicable to virtual\n> generated columns in the future.\n> \n> I had to have the same thought, but later decided when we do that\n> virtual column thing, we could simply change that. I am fine to do that \n> change\n> now as well, let me know your thought.\n\nNot a big deal, but I would change it now.\n\nAnother small thing I found: In ATExecColumnDefault(), there is an \nerrhint() that suggests DROP EXPRESSION instead of DROP DEFAULT. 
You \ncould now add another hint that suggests SET EXPRESSION instead of SET \nDEFAULT.\n\n\n\n", "msg_date": "Wed, 15 Nov 2023 22:20:41 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, Nov 16, 2023 at 2:50 AM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 15.11.23 13:26, Amul Sul wrote:\n> > Question: Why are you using AT_PASS_ADD_OTHERCONSTR? I don't know if\n> > it's right or wrong, but if you have a specific reason, it would be\n> > good\n> > to know.\n> >\n> > I referred to ALTER COLUMN DEFAULT and used that.\n>\n> Hmm, I'm not sure if that is a good comparison. For ALTER TABLE, SET\n> DEFAULT is just a catalog manipulation, it doesn't change any data, so\n> it's pretty easy. SET EXPRESSION changes data, which other phases might\n> want to inspect? For example, if you do SET EXPRESSION and add a\n> constraint in the same ALTER TABLE statement, do those run in the\n> correct order?\n>\n\nI think, you are correct, but currently AT_PASS_ADD_OTHERCONSTR is for\nAT_CookedColumnDefault, AT_ColumnDefault, and AT_AddIdentity.\nAT_CookedColumnDefault is only supported for CREATE TABLE. AT_ColumnDefault\nand AT_AddIdentity will be having errors while operating on the generated\ncolumn. So\nthat anomaly does not exist, but could be in future addition. 
I think it is\nbetter to\nuse AT_PASS_MISC to keep this operation at last.\n\nWhile testing this, I found a serious problem with the patch that CHECK and\nFOREIGN KEY constraint check does not happens at rewrite, see this:\n\ncreate table a (y int primary key);\ninsert into a values(1),(2);\ncreate table b (x int, y int generated always as(x) stored, foreign key(y)\nreferences a(y));\ninsert into b values(1),(2);\ninsert into b values(3); <------ an error, expected one\n\nalter table b alter column y set expression as (x*100); <------ no error,\nNOT expected\n\nselect * from b;\n x | y\n---+-----\n 1 | 100\n 2 | 200\n(2 rows)\n\nAlso,\n\ndelete from a; <------ no error, NOT expected.\nselect * from a;\n y\n---\n(0 rows)\n\nShouldn't that have been handled by the ATRewriteTables() facility\nimplicitly\nlike NOT NULL constraints? Or should we prepare a list of CHECK and FK\nconstraints and pass it through tab->constraints?\n\n\n> > Tiny comment: The error message in ATExecSetExpression() does not\n> need\n> > to mention \"stored\", since it would be also applicable to virtual\n> > generated columns in the future.\n> >\n> > I had to have the same thought, but later decided when we do that\n> > virtual column thing, we could simply change that. I am fine to do that\n> > change\n> > now as well, let me know your thought.\n>\n> Not a big deal, but I would change it now.\n>\n> Another small thing I found: In ATExecColumnDefault(), there is an\n> errhint() that suggests DROP EXPRESSION instead of DROP DEFAULT. You\n> could now add another hint that suggests SET EXPRESSION instead of SET\n> DEFAULT.\n>\n\nOk.\n\nRegards,\nAmul Sul\n\nOn Thu, Nov 16, 2023 at 2:50 AM Peter Eisentraut <peter@eisentraut.org> wrote:On 15.11.23 13:26, Amul Sul wrote:\n>     Question: Why are you using AT_PASS_ADD_OTHERCONSTR?  
I don't know if\n>     it's right or wrong, but if you have a specific reason, it would be\n>     good\n>     to know.\n> \n> I referred to ALTER COLUMN DEFAULT and used that.\n\nHmm, I'm not sure if that is a good comparison.  For ALTER TABLE, SET \nDEFAULT is just a catalog manipulation, it doesn't change any data, so \nit's pretty easy.  SET EXPRESSION changes data, which other phases might \nwant to inspect?  For example, if you do SET EXPRESSION and add a \nconstraint in the same ALTER TABLE statement, do those run in the \ncorrect order?I think, you are correct, but currently AT_PASS_ADD_OTHERCONSTR is forAT_CookedColumnDefault, AT_ColumnDefault, and AT_AddIdentity.AT_CookedColumnDefault is only supported for CREATE TABLE.  AT_ColumnDefaultand AT_AddIdentity will be having errors while operating on the generated column. Sothat anomaly does not exist, but could be in future addition. I think it is better touse AT_PASS_MISC to keep this operation at last. While testing this, I found a serious problem with the patch that CHECK andFOREIGN KEY constraint check does not happens at rewrite, see this:create table a (y int primary key);insert into a values(1),(2);create table b (x int, y int generated always as(x) stored, foreign key(y) references a(y));insert into b values(1),(2);insert into b values(3);        <------ an error, expected onealter table b alter column y set expression as (x*100);  <------ no error, NOT expected select * from b; x |  y  ---+----- 1 | 100 2 | 200(2 rows)Also, delete from a;               <------ no error, NOT expected.select * from a; y ---(0 rows)Shouldn't that have been handled by the ATRewriteTables() facility implicitlylike NOT NULL constraints?  
Or should we prepare a list of CHECK and FKconstraints and pass it through tab->constraints?\n\n>     Tiny comment: The error message in ATExecSetExpression() does not need\n>     to mention \"stored\", since it would be also applicable to virtual\n>     generated columns in the future.\n> \n> I had to have the same thought, but later decided when we do that\n> virtual column thing, we could simply change that. I am fine to do that \n> change\n> now as well, let me know your thought.\n\nNot a big deal, but I would change it now.\n\nAnother small thing I found:  In ATExecColumnDefault(), there is an \nerrhint() that suggests DROP EXPRESSION instead of DROP DEFAULT.  You \ncould now add another hint that suggests SET EXPRESSION instead of SET \nDEFAULT. Ok.Regards,Amul Sul", "msg_date": "Thu, 16 Nov 2023 19:05:29 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Thu, Nov 16, 2023 at 7:05 PM Amul Sul <sulamul@gmail.com> wrote:\n\n> On Thu, Nov 16, 2023 at 2:50 AM Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n>\n>> On 15.11.23 13:26, Amul Sul wrote:\n>> > Question: Why are you using AT_PASS_ADD_OTHERCONSTR? I don't know\n>> if\n>> > it's right or wrong, but if you have a specific reason, it would be\n>> > good\n>> > to know.\n>> >\n>> > I referred to ALTER COLUMN DEFAULT and used that.\n>>\n>> Hmm, I'm not sure if that is a good comparison. For ALTER TABLE, SET\n>> DEFAULT is just a catalog manipulation, it doesn't change any data, so\n>> it's pretty easy. SET EXPRESSION changes data, which other phases might\n>> want to inspect? 
For example, if you do SET EXPRESSION and add a\n>> constraint in the same ALTER TABLE statement, do those run in the\n>> correct order?\n>>\n>\n> I think, you are correct, but currently AT_PASS_ADD_OTHERCONSTR is for\n> AT_CookedColumnDefault, AT_ColumnDefault, and AT_AddIdentity.\n> AT_CookedColumnDefault is only supported for CREATE TABLE.\n> AT_ColumnDefault\n> and AT_AddIdentity will be having errors while operating on the generated\n> column. So\n> that anomaly does not exist, but could be in future addition. I think it\n> is better to\n> use AT_PASS_MISC to keep this operation at last.\n>\n> While testing this, I found a serious problem with the patch that CHECK and\n> FOREIGN KEY constraint check does not happens at rewrite, see this:\n>\n> create table a (y int primary key);\n> insert into a values(1),(2);\n> create table b (x int, y int generated always as(x) stored, foreign\n> key(y) references a(y));\n> insert into b values(1),(2);\n> insert into b values(3); <------ an error, expected one\n>\n> alter table b alter column y set expression as (x*100); <------ no\n> error, NOT expected\n>\n> select * from b;\n> x | y\n> ---+-----\n> 1 | 100\n> 2 | 200\n> (2 rows)\n>\n> Also,\n>\n> delete from a; <------ no error, NOT expected.\n> select * from a;\n> y\n> ---\n> (0 rows)\n>\n> Shouldn't that have been handled by the ATRewriteTables() facility\n> implicitly\n> like NOT NULL constraints? Or should we prepare a list of CHECK and FK\n> constraints and pass it through tab->constraints?\n>\n\nTo fix this we should be doing something like ALTER COLUMN TYPE and the pass\nshould be AT_PASS_ALTER_TYPE (rename it or invent a new one near to that) so\nthat in ATRewriteCatalogs(), we would execute ATPostAlterTypeCleanup().\n\nI simply tried that by doing blind copy of code from\nATExecAlterColumnType() in\n0002 patch. 
We don't really need to do all the stuff such as re-adding\nindexes, constraints etc, but I am out of time for today to figure out the\noptimum code and I will be away from work in the first half of the coming\nweek and the week after that. Therefore, I thought of sharing an approach to\nget comments/thoughts on the direction, thanks.\n\nRegards,\nAmul", "msg_date": "Fri, 17 Nov 2023 17:55:15 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 17.11.23 13:25, Amul Sul wrote:\n> To fix this we should be doing something like ALTER COLUMN TYPE and the pass\n> should be AT_PASS_ALTER_TYPE (rename it or invent a new one near to that) so\n> that in ATRewriteCatalogs(), we would execute ATPostAlterTypeCleanup().\n> \n> I simply tried that by doing blind copy of code from \n> ATExecAlterColumnType() in\n> 0002 patch.  We don't really need to do all the stuff such as re-adding\n> indexes, constraints etc, but I am out of time for today to figure out the\n> optimum code and I will be away from work in the first half of the coming\n> week and the week after that. Therefore, I thought of sharing an approach to\n> get comments/thoughts on the direction, thanks.\n\nThe exact sequencing of this seems to be tricky. It's clear that we \nneed to do it earlier than at the end. I also think it should be \nstrictly after AT_PASS_ALTER_TYPE so that the new expression can refer \nto the new type of a column. It should also be after AT_PASS_ADD_COL, \nso that the new expression can refer to any newly added column. But \nthen it's after AT_PASS_OLD_INDEX and AT_PASS_OLD_CONSTR, is that a problem?\n\n(It might be an option for the first version of this feature to not \nsupport altering columns that have constraints on them. But we do need \nto support columns with indexes on them. Does that work ok? 
Does that \ndepend on the relative order of AT_PASS_OLD_INDEX?)\n\n\n\n", "msg_date": "Mon, 20 Nov 2023 08:42:00 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Mon, Nov 20, 2023 at 1:12 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 17.11.23 13:25, Amul Sul wrote:\n> > To fix this we should be doing something like ALTER COLUMN TYPE and the\n> pass\n> > should be AT_PASS_ALTER_TYPE (rename it or invent a new one near to\n> that) so\n> > that in ATRewriteCatalogs(), we would execute ATPostAlterTypeCleanup().\n> >\n> > I simply tried that by doing blind copy of code from\n> > ATExecAlterColumnType() in\n> > 0002 patch. We don't really need to do all the stuff such as re-adding\n> > indexes, constraints etc, but I am out of time for today to figure out\n> the\n> > optimum code and I will be away from work in the first half of the coming\n> > week and the week after that. Therefore, I thought of sharing an\n> approach to\n> > get comments/thoughts on the direction, thanks.\n>\n> The exact sequencing of this seems to be tricky. It's clear that we\n> need to do it earlier than at the end. I also think it should be\n> strictly after AT_PASS_ALTER_TYPE so that the new expression can refer\n> to the new type of a column. It should also be after AT_PASS_ADD_COL,\n> so that the new expression can refer to any newly added column. But\n> then it's after AT_PASS_OLD_INDEX and AT_PASS_OLD_CONSTR, is that a\n> problem?\n>\n\nAT_PASS_ALTER_TYPE and AT_PASS_ADD_COL cannot be together, the ALTER TYPE\ncannot see that column, I think we can adopt the same behaviour.\n\nBut, we need to have ALTER SET EXPRESSION after the ALTER TYPE since if we\nadd\nthe new generated expression for the current type (e.g. int) and we would\nalter the type (e.g. 
text or numeric) then that will be problematic in the\nATRewriteTable() where a new generation expression will generate value for\nthe\nold type but the actual type is something else. Therefore I have added\nAT_PASS_SET_EXPRESSION to execute after AT_PASS_ALTER_TYPE.\n\n(It might be an option for the first version of this feature to not\n> support altering columns that have constraints on them. But we do need\n> to support columns with indexes on them. Does that work ok? Does that\n> depend on the relative order of AT_PASS_OLD_INDEX?)\n>\n\nI tried to reuse the code by borrowing code from ALTER TYPE, see if that\nlooks good to you.\n\nBut I have concerns, with that code reuse where we drop and re-add the\nindexes\nand constraints which seems unnecessary for SET EXPRESSION where column\nattributes will stay the same. I don't know why ATLER TYPE does that for\nindex\nsince finish_heap_swap() anyway does reindexing. We could skip re-adding\nindex for SET EXPRESSION which would be fine but we could not skip the\nre-addition of constraints, since rebuilding constraints for checking might\nneed a good amount of code copy especially for foreign key constraints.\n\nPlease have a look at the attached version, 0001 patch does the code\nrefactoring, and 0002 is the implementation, using the newly refactored\ncode to\nre-add indexes and constraints for the validation. Added tests for the same.\n\nRegards,\nAmul", "msg_date": "Thu, 23 Nov 2023 19:43:40 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 23.11.23 15:13, Amul Sul wrote:\n> The exact sequencing of this seems to be tricky.  It's clear that we\n> need to do it earlier than at the end.  I also think it should be\n> strictly after AT_PASS_ALTER_TYPE so that the new expression can refer\n> to the new type of a column.  
It should also be after AT_PASS_ADD_COL,\n> so that the new expression can refer to any newly added column.  But\n> then it's after AT_PASS_OLD_INDEX and AT_PASS_OLD_CONSTR, is that a\n> problem?\n> \n> AT_PASS_ALTER_TYPE and AT_PASS_ADD_COL cannot be together, the ALTER TYPE\n> cannot see that column, I think we can adopt the same behaviour.\n\nWith your v5 patch, I see the following behavior:\n\ncreate table t1 (a int, b int generated always as (a + 1) stored);\nalter table t1 add column c int, alter column b set expression as (a + c);\nERROR: 42703: column \"c\" does not exist\n\nI think intuitively, this ought to work.  Maybe just moving the new pass \nafter AT_PASS_ADD_COL would do it.\n\n\nWhile looking at this, I figured that converting the AT_PASS_* macros to \nan enum would make this code simpler and easier to follow.  For patches \nlike yours you wouldn't have to renumber the whole list.  We could put \nthis patch before yours if people agree with it.\n\n\n> I tried to reuse the code by borrowing code from ALTER TYPE, see if that\n> looks good to you.\n> \n> But I have concerns, with that code reuse where we drop and re-add the \n> indexes\n> and constraints which seems unnecessary for SET EXPRESSION where column\n> attributes will stay the same. I don't know why ATLER TYPE does that for \n> index\n> since finish_heap_swap() anyway does reindexing. We could skip re-adding\n> index for SET EXPRESSION which would be fine but we could not skip the\n> re-addition of constraints, since rebuilding constraints for checking might\n> need a good amount of code copy especially for foreign key constraints.\n> \n> Please have a look at the attached version, 0001 patch does the code\n> refactoring, and 0002 is the implementation, using the newly refactored \n> code to\n> re-add indexes and constraints for the validation. Added tests for the same.\n\nThis looks reasonable after a first reading. 
(I have found that using \ngit diff --patience makes the 0001 patch look more readable.  Maybe \nthat's helpful.)", "msg_date": "Tue, 28 Nov 2023 13:10:38 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Tue, Nov 28, 2023 at 5:40 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 23.11.23 15:13, Amul Sul wrote:\n> > The exact sequencing of this seems to be tricky. It's clear that we\n> > need to do it earlier than at the end. I also think it should be\n> > strictly after AT_PASS_ALTER_TYPE so that the new expression can\n> refer\n> > to the new type of a column. It should also be after\n> AT_PASS_ADD_COL,\n> > so that the new expression can refer to any newly added column. But\n> > then it's after AT_PASS_OLD_INDEX and AT_PASS_OLD_CONSTR, is that a\n> > problem?\n> >\n> > AT_PASS_ALTER_TYPE and AT_PASS_ADD_COL cannot be together, the ALTER TYPE\n> > cannot see that column, I think we can adopt the same behaviour.\n>\n> With your v5 patch, I see the following behavior:\n>\n> create table t1 (a int, b int generated always as (a + 1) stored);\n> alter table t1 add column c int, alter column b set expression as (a + c);\n> ERROR: 42703: column \"c\" does not exist\n>\n> I think intuitively, this ought to work. Maybe just moving the new pass\n> after AT_PASS_ADD_COL would do it.\n>\n>\nI think we can't support that (like alter type) since we need to place this\nnew\npass before AT_PASS_OLD_INDEX & AT_PASS_OLD_CONSTR to re-add indexes and\nconstraints for the validation.\n\nWhile looking at this, I figured that converting the AT_PASS_* macros to\n> an enum would make this code simpler and easier to follow. For patches\n> like yours you wouldn't have to renumber the whole list. 
We could put\n> this patch before yours if people agree with it.\n>\n\nOk, 0001 patch does that.\n\n\n>\n> > I tried to reuse the code by borrowing code from ALTER TYPE, see if that\n> > looks good to you.\n> >\n> > But I have concerns, with that code reuse where we drop and re-add the\n> > indexes\n> > and constraints which seems unnecessary for SET EXPRESSION where column\n> > attributes will stay the same. I don't know why ATLER TYPE does that for\n> > index\n> > since finish_heap_swap() anyway does reindexing. We could skip re-adding\n> > index for SET EXPRESSION which would be fine but we could not skip the\n> > re-addition of constraints, since rebuilding constraints for checking\n> might\n> > need a good amount of code copy especially for foreign key constraints.\n> >\n> > Please have a look at the attached version, 0001 patch does the code\n> > refactoring, and 0002 is the implementation, using the newly refactored\n> > code to\n> > re-add indexes and constraints for the validation. Added tests for the\n> same.\n>\n> This looks reasonable after a first reading. (I have found that using\n> git diff --patience makes the 0001 patch look more readable. Maybe\n> that's helpful.)\n\n\nYeah, the attached version is generated with a better git-diff algorithm for\nthe readability.\n\nRegards,\nAmul", "msg_date": "Mon, 11 Dec 2023 17:52:26 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 11.12.23 13:22, Amul Sul wrote:\n> \n> create table t1 (a int, b int generated always as (a + 1) stored);\n> alter table t1 add column c int, alter column b set expression as (a\n> + c);\n> ERROR:  42703: column \"c\" does not exist\n> \n> I think intuitively, this ought to work.  
Maybe just moving the new\n> pass\n> after AT_PASS_ADD_COL would do it.\n> \n> \n> I think we can't support that (like alter type) since we need to place \n> this new\n> pass before AT_PASS_OLD_INDEX & AT_PASS_OLD_CONSTR to re-add indexes and\n> constraints for the validation.\n\nCould we have AT_PASS_ADD_COL before AT_PASS_OLD_*? So overall it would be\n\n...\nAT_PASS_ALTER_TYPE,\nAT_PASS_ADD_COL, // moved\nAT_PASS_SET_EXPRESSION, // new\nAT_PASS_OLD_INDEX,\nAT_PASS_OLD_CONSTR,\nAT_PASS_ADD_CONSTR,\n...\n\nThis appears to not break any existing tests.\n\n\n\n", "msg_date": "Mon, 18 Dec 2023 10:31:03 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Mon, Dec 18, 2023 at 3:01 PM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 11.12.23 13:22, Amul Sul wrote:\n> >\n> > create table t1 (a int, b int generated always as (a + 1) stored);\n> > alter table t1 add column c int, alter column b set expression as (a\n> > + c);\n> > ERROR: 42703: column \"c\" does not exist\n> >\n> > I think intuitively, this ought to work. Maybe just moving the new\n> > pass\n> > after AT_PASS_ADD_COL would do it.\n> >\n> >\n> > I think we can't support that (like alter type) since we need to place\n> > this new\n> > pass before AT_PASS_OLD_INDEX & AT_PASS_OLD_CONSTR to re-add indexes and\n> > constraints for the validation.\n>\n> Could we have AT_PASS_ADD_COL before AT_PASS_OLD_*? So overall it would be\n>\n> ...\n> AT_PASS_ALTER_TYPE,\n> AT_PASS_ADD_COL, // moved\n> AT_PASS_SET_EXPRESSION, // new\n> AT_PASS_OLD_INDEX,\n> AT_PASS_OLD_CONSTR,\n> AT_PASS_ADD_CONSTR,\n> ...\n>\n> This appears to not break any existing tests.\n>\n\n(Sorry, for the delay)\n\nAgree. 
I did that change in 0001 patch.\n\nRegards,\nAmul", "msg_date": "Mon, 25 Dec 2023 17:40:09 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On 25.12.23 13:10, Amul Sul wrote:\n> > I think we can't support that (like alter type) since we need to\n> place\n> > this new\n> > pass before AT_PASS_OLD_INDEX & AT_PASS_OLD_CONSTR to re-add\n> indexes and\n> > constraints for the validation.\n> \n> Could we have AT_PASS_ADD_COL before AT_PASS_OLD_*?  So overall it\n> would be\n> \n> ...\n> AT_PASS_ALTER_TYPE,\n> AT_PASS_ADD_COL,         // moved\n> AT_PASS_SET_EXPRESSION,  // new\n> AT_PASS_OLD_INDEX,\n> AT_PASS_OLD_CONSTR,\n> AT_PASS_ADD_CONSTR,\n> ...\n> \n> This appears to not break any existing tests.\n> \n> \n> (Sorry, for the delay)\n> \n> Agree. I did that change in 0001 patch.\n\nI have committed this patch set.\n\nI couple of notes:\n\nYou had included the moving of the AT_PASS_ADD_COL enum in the first \npatch. This is not a good style. Refactoring patches should not \ninclude semantic changes. I have moved that change the final patch that \nintroduced the new feature.\n\nI did not commit the 0002 patch that renamed some functions. I think \nnames like AlterColumn are too general, which makes this renaming \npossibly counterproductive. I don't know a better name, maybe \nAlterTypeOrSimilar, but that's obviously silly. I think leaving it at \nAlterType isn't too bad, since most of the code is indeed for ALTER TYPE \nsupport. We can reconsider this if we have a better idea.\n\nIn RememberAllDependentForRebuilding(), I dropped some of the additional \nerrors that you introduced for the AT_SetExpression cases. These didn't \nseem useful. For example, it is not possible for a generated column to \ndepend on another generated column, so there is no point checking for \nit. 
Also, there were no test cases that covered any of these \nsituations. If we do want any of these, we should have tests and \ndocumentation for them.\n\nFor the tests that examine the EXPLAIN plans, I had to add an ANALYZE \nafter the SET EXPRESSION. Otherwise, I got unstable test results, \npresumably because in some cases the analyze happened in the background.\n\n\n\n", "msg_date": "Thu, 4 Jan 2024 19:58:27 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" }, { "msg_contents": "On Fri, Jan 5, 2024 at 12:28 AM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> On 25.12.23 13:10, Amul Sul wrote:\n> >\n> I have committed this patch set.\n>\n> I couple of notes:\n>\n> You had included the moving of the AT_PASS_ADD_COL enum in the first\n> patch. This is not a good style. Refactoring patches should not\n> include semantic changes. I have moved that change the final patch that\n> introduced the new feature.\n>\n> I did not commit the 0002 patch that renamed some functions. I think\n> names like AlterColumn are too general, which makes this renaming\n> possibly counterproductive. I don't know a better name, maybe\n> AlterTypeOrSimilar, but that's obviously silly. I think leaving it at\n> AlterType isn't too bad, since most of the code is indeed for ALTER TYPE\n> support. We can reconsider this if we have a better idea.\n>\n> In RememberAllDependentForRebuilding(), I dropped some of the additional\n> errors that you introduced for the AT_SetExpression cases. These didn't\n> seem useful. For example, it is not possible for a generated column to\n> depend on another generated column, so there is no point checking for\n> it. Also, there were no test cases that covered any of these\n> situations. 
If we do want any of these, we should have tests and\n> documentation for them.\n>\n> For the tests that examine the EXPLAIN plans, I had to add an ANALYZE\n> after the SET EXPRESSION.  Otherwise, I got unstable test results,\n> presumably because in some cases the analyze happened in the background.\n>\n>\nUnderstood.\n\nThank you for your guidance and the commit.\n\nRegards,\nAmul", "msg_date": "Mon, 8 Jan 2024 09:08:31 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ALTER COLUMN ... SET EXPRESSION to alter stored generated\n column's expression" } ]
[ { "msg_contents": "... at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=c6344d7686f3e3c8243c2c6771996cfc63e71eae\n\nThis is a bit earlier than usual because I will be tied up with\n$LIFE for the next couple of days. I will catch up with any\nsubsequent back-branch commits over the weekend.\n\nAs usual, please send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Aug 2023 17:48:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "First draft of back-branch release notes is done" } ]
[ { "msg_contents": "As stated in [1], all paths arriving here are parameterized by top\nparents, so we should check against top_parent_relids if it exists in\nthe two Asserts.\n\nAttached is a patch fixing that.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4_UoVcCwkVMfi9TjSC%3Do5U6BRHUNZiVhrvSbDfU2HaeDA%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Thu, 3 Aug 2023 10:24:48 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "Hi Richard,\n\n> As stated in [1], all paths arriving here are parameterized by top\n> parents, so we should check against top_parent_relids if it exists in\n> the two Asserts.\n>\n> Attached is a patch fixing that.\n\nProbably it's just because of my limited experience with the optimizer\nbut I don't find the proposed change particularly straightforward. I\nwould suggest adding a comment before the Assert's and/or a detailed\ncommit message would be helpful.\n\nOther than that I can confirm that both branches in the Assert's are\nexecuted and the tests pass in different test environments.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 7 Sep 2023 18:09:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "On Thu, Sep 7, 2023 at 11:09 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Probably it's just because of my limited experience with the optimizer\n> but I don't find the proposed change particularly straightforward. I\n> would suggest adding a comment before the Assert's and/or a detailed\n> commit message would be helpful.\n\n\nComment is now added above the Asserts. 
Thanks for taking an interest\nin this.\n\nThanks\nRichard", "msg_date": "Fri, 3 Nov 2023 14:46:56 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "I have reviewed the changes and it looks fine.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 01 Dec 2023 19:18:29 +0000", "msg_from": "shihao zhong <zhong950419@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "On Fri, Nov 3, 2023 at 2:47 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> Comment is now added above the Asserts. Thanks for taking an interest\n> in this.\n\nI'd like to suggest rewording this comment a little more. Here's my proposal:\n\nBoth of the paths passed to this function are still parameterized by\nthe topmost parent, because reparameterize_path_by_child() hasn't been\ncalled yet. So, these sanity checks use the parent relids.\n\nWhat do you think?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 14:14:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Nov 3, 2023 at 2:47 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>> Comment is now added above the Asserts. Thanks for taking an interest\n>> in this.\n\n> I'd like to suggest rewording this comment a little more. Here's my proposal:\n\n> Both of the paths passed to this function are still parameterized by\n> the topmost parent, because reparameterize_path_by_child() hasn't been\n> called yet. So, these sanity checks use the parent relids.\n\n> What do you think?\n\nI don't think I believe this code change, let alone any of the\nexplanations for it. 
The point of these Asserts is to be sure that\nwe don't form an alleged parameterization set that includes any rels\nthat are included in the new join, because that would be obviously\nwrong. They do check that as they stand. I'm not sure what invariant\nthe proposed revision would be enforcing.\n\nThere might be an argument for leaving the existing Asserts in\nplace and *also* checking\n\n Assert(!bms_overlap(outer_paramrels,\n inner_path->parent->top_parent_relids));\n Assert(!bms_overlap(inner_paramrels,\n outer_path->parent->top_parent_relids));\n\nbut I'm not quite convinced what the point of that is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Jan 2024 16:36:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "On Fri, Jan 5, 2024 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think I believe this code change, let alone any of the\n> explanations for it. The point of these Asserts is to be sure that\n> we don't form an alleged parameterization set that includes any rels\n> that are included in the new join, because that would be obviously\n> wrong. They do check that as they stand. I'm not sure what invariant\n> the proposed revision would be enforcing.\n\nWell, this explanation made sense to me:\n\nhttps://www.postgresql.org/message-id/CAMbWs4-%2BGs0HJ9ouBUb%3DqwHsGCXxG%2B92eJzLOpCkedvgtOWQ%3DQ%40mail.gmail.com\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 19:03:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jan 5, 2024 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think I believe this code change, let alone any of the\n>> explanations for it. 
The point of these Asserts is to be sure that\n>> we don't form an alleged parameterization set that includes any rels\n>> that are included in the new join, because that would be obviously\n>> wrong. They do check that as they stand. I'm not sure what invariant\n>> the proposed revision would be enforcing.\n\n> Well, this explanation made sense to me:\n> https://www.postgresql.org/message-id/CAMbWs4-%2BGs0HJ9ouBUb%3DqwHsGCXxG%2B92eJzLOpCkedvgtOWQ%3DQ%40mail.gmail.com\n\nThe argument for the patch as proposed is that we should make the\nmergejoin and hashjoin code paths do what the nestloop path is doing.\nHowever, as I replied further down in that other thread, I'm not\nexactly convinced that the nestloop path is doing the right thing.\nI've not had time to look closer at that, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jan 2024 16:08:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "On Sat, Jan 6, 2024 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Well, this explanation made sense to me:\n> > https://www.postgresql.org/message-id/CAMbWs4-%2BGs0HJ9ouBUb%3DqwHsGCXxG%2B92eJzLOpCkedvgtOWQ%3DQ%40mail.gmail.com\n>\n> The argument for the patch as proposed is that we should make the\n> mergejoin and hashjoin code paths do what the nestloop path is doing.\n> However, as I replied further down in that other thread, I'm not\n> exactly convinced that the nestloop path is doing the right thing.\n> I've not had time to look closer at that, though.\n\nI don't really understand what you were saying in your response there,\nor what you're saying here. It seems to me that if the path is\nparameterized by top relids, and the assertion is verifying that a\ncertain set of non-toprelids i.e. otherrels isn't present, then\nobviously the assertion is going to pass, but it's not checking what\nit purports to be checking. 
The ostensible purpose of the assertion is\nto check that neither side is parameterized by the other side, because\na non-nestloop can't satisfy a parameterization. But with the\nassertion as currently formulated, it would pass even if one side\n*were* parameterized by the other side, because outer_paramrels and\ninner_paramrels would contain child relations, and the set that it was\nbeing compared to would contain only top-parents, and so they wouldn't\noverlap.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 12:25:03 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Jan 6, 2024 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The argument for the patch as proposed is that we should make the\n>> mergejoin and hashjoin code paths do what the nestloop path is doing.\n>> However, as I replied further down in that other thread, I'm not\n>> exactly convinced that the nestloop path is doing the right thing.\n>> I've not had time to look closer at that, though.\n\n> I don't really understand what you were saying in your response there,\n> or what you're saying here. It seems to me that if the path is\n> parameterized by top relids, and the assertion is verifying that a\n> certain set of non-toprelids i.e. otherrels isn't present, then\n> obviously the assertion is going to pass, but it's not checking what\n> it purports to be checking.\n\nI think we're talking at cross-purposes. What I was wondering about\n(at least further down in the thread) was whether we shouldn't be\nchecking *both* the \"real\" and the \"parent\" relids to make sure they\ndon't overlap the parameterization sets. 
But it's probably overkill.\nAfter reading the code again I agree that the parent relids are more\nuseful to check.\n\nHowever, I still don't like Richard's patch too much as-is, because\nthe Asserts are difficult to read/understand and even more difficult\nto compare to the other code path. I think we want to keep the\nnestloop and not-nestloop paths as visually similar as possible,\nso I propose we do it more like the attached (where I also borrowed\nsome of your wording for the comments).\n\nA variant of this could be to ifdef out all the added code in\nnon-Assert builds:\n\n Relids outer_paramrels = PATH_REQ_OUTER(outer_path);\n Relids inner_paramrels = PATH_REQ_OUTER(inner_path);\n Relids required_outer;\n#ifdef USE_ASSERT_CHECKING\n Relids innerrelids;\n Relids outerrelids;\n\n /*\n * Any parameterization of the input paths refers to topmost parents of\n * the relevant relations, because reparameterize_path_by_child() hasn't\n * been called yet. So we must consider topmost parents of the relations\n * being joined, too, while checking for disallowed parameterization\n * cases.\n */\n if (inner_path->parent->top_parent_relids)\n innerrelids = inner_path->parent->top_parent_relids;\n else\n innerrelids = inner_path->parent->relids;\n\n if (outer_path->parent->top_parent_relids)\n outerrelids = outer_path->parent->top_parent_relids;\n else\n outerrelids = outer_path->parent->relids;\n\n /* neither path can require rels from the other */\n Assert(!bms_overlap(outer_paramrels, innerrelids));\n Assert(!bms_overlap(inner_paramrels, outerrelids));\n#endif\n /* form the union ... */\n\nbut I'm not sure that's better. 
Probably any reasonable compiler\nwill throw away the whole calculation upon noticing the outputs\nare unused.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 08 Jan 2024 17:39:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "On Mon, Jan 8, 2024 at 5:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think we're talking at cross-purposes. What I was wondering about\n> (at least further down in the thread) was whether we shouldn't be\n> checking *both* the \"real\" and the \"parent\" relids to make sure they\n> don't overlap the parameterization sets. But it's probably overkill.\n> After reading the code again I agree that the parent relids are more\n> useful to check.\n\nHmm, OK.\n\nBut we could also find some way to assert that the parameterization\nsets contain only top-most rels. That would be strictly stronger than\nasserting that they don't contain any non-topmost rels that exist on\nthe other side.\n\nTrying to say that more clearly, I think the rules here are:\n\n1. outer_paramrels can't contain ANY non-top rels\n2. outer_paramrels can't contain top rels that appear on the other\nside of the join\n\nWhat you're proposing to assert here is (2) plus a weaker version of\n(1), namely \"no non-top rels that appear on the other side of the\njoin\". That's not wrong, but I don't see why it's better than checking\nonly (2) or checking exactly (1) and (2).\n\n> However, I still don't like Richard's patch too much as-is, because\n> the Asserts are difficult to read/understand and even more difficult\n> to compare to the other code path. 
I think we want to keep the\n> nestloop and not-nestloop paths as visually similar as possible,\n> so I propose we do it more like the attached (where I also borrowed\n> some of your wording for the comments).\n\nI don't understand what this is parallel to;\ncalc_nestloop_required_outer does no similar dance.\n\nIf we don't set top_relids when it's the same as relids, then I agree\nthat doing this dance is warranted, but otherwise it's just clutter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 11:44:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 8, 2024 at 5:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think we're talking at cross-purposes. What I was wondering about\n>> (at least further down in the thread) was whether we shouldn't be\n>> checking *both* the \"real\" and the \"parent\" relids to make sure they\n>> don't overlap the parameterization sets. But it's probably overkill.\n\n> But we could also find some way to assert that the parameterization\n> sets contain only top-most rels.\n\nHmm ... perhaps worth doing. I think bms_is_subset against\nall_baserels would work.\n\n>> However, I still don't like Richard's patch too much as-is, because\n>> the Asserts are difficult to read/understand and even more difficult\n>> to compare to the other code path. I think we want to keep the\n>> nestloop and not-nestloop paths as visually similar as possible,\n>> so I propose we do it more like the attached (where I also borrowed\n>> some of your wording for the comments).\n\n> I don't understand what this is parallel to;\n> calc_nestloop_required_outer does no similar dance.\n\nNo, because the dance is done in its caller. 
The fact that these\ntwo functions don't have more-similar argument lists is a bit of\na wart, perhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 11:58:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" }, { "msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> But we could also find some way to assert that the parameterization\n>> sets contain only top-most rels.\n\n> Hmm ... perhaps worth doing. I think bms_is_subset against\n> all_baserels would work.\n\n[ tries that ... ] Nope, it needs to be all_query_rels. I plead\nENOCAFFEINE. However, while we could easily add\n\n+ Assert(bms_is_subset(inner_paramrels, root->all_query_rels));\n+ Assert(bms_is_subset(outer_paramrels, root->all_query_rels));\n\nto try_nestloop_path, that doesn't work as-is in\ncalc_non_nestloop_required_outer because we don't pass \"root\" to it.\nWe could add that, or perhaps contemplate some more-extensive\nrearrangement to make the division of labor between\ncalc_nestloop_required_outer and its caller more like that between\ncalc_non_nestloop_required_outer and its callers. But I'm feeling\nkind of unenthused about that, because\n\n(1) I don't see how to rearrange things without duplicating code:\ntry_nestloop_path needs access to some of these values, while moving\nany work out of calc_non_nestloop_required_outer would mean two\ncopies in its two callers. So there are reasons why that's not\nperfectly symmetric. 
(But I still maintain that we should try to\nmake the code look similar between these two code paths, when\nconsidering both the calc_xxx functions and their callers.)\n\n(2) joinpath.c already knows that parameterization sets mention\nonly top-level relids, specifically at the definition and uses of\nPATH_PARAM_BY_PARENT, and I bet there are more dependencies elsewhere.\nSo I'm not sure about the value of asserting it only here.\n\nIn short I'm inclined to apply v3 as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 23:22:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix bogus Asserts in calc_non_nestloop_required_outer" } ]
[ { "msg_contents": "Dear Hackers,\n\nOne of my customers suggested creating a function that could return the\nserver's hostname.\n\nAfter a quick search, we found [this Wiki page](\nhttps://wiki.postgresql.org/wiki/Pg_gethostname) referring to [that\nextension](https://github.com/theory/pg-hostname/) from David E. Wheeler.\n\nI used shamelessly the idea and created a working proof of concept:\n\n- the function takes no argument and returns the hostname or a null value\nif any error occurs\n- the function was added to the network.c file because it makes sense to me\nto have that near the inet_server_addr() and inet_server_port() functions.\n- I didn't add any test as the inet functions are not tested either.\n\nIf you think my design is good enough, I'll go ahead and port/test that\nfunction for Windows.\n\nHave a nice day,\n\nLætitia", "msg_date": "Thu, 3 Aug 2023 10:36:50 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Adding a pg_servername() function" }, { "msg_contents": "On Thu, 3 Aug 2023 at 10:37, Laetitia Avrot <laetitia.avrot@gmail.com> wrote:\n>\n> Dear Hackers,\n>\n> One of my customers suggested creating a function that could return the server's hostname.\n\nMostly for my curiosity: What would be their use case?\nI only see limited usability, considering that the local user's\nhostname can be very different from the hostname used in the url that\nconnects to that PostgreSQL instance.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 3 Aug 2023 11:31:01 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Le jeu. 
3 août 2023, 11:31, Matthias van de Meent <\nboekewurm+postgres@gmail.com> a écrit :\n\n> On Thu, 3 Aug 2023 at 10:37, Laetitia Avrot <laetitia.avrot@gmail.com>\n> wrote:\n> >\n> > Dear Hackers,\n> >\n> > One of my customers suggested creating a function that could return the\n> server's hostname.\n>\n> Mostly for my curiosity: What would be their use case?\n>\n\nThank you for showing interest in that patch.\n\n>\nFor my customer, their use case is to be able from an SQL client to double\ncheck they're on the right host before doing things that could become a\nproduction disaster.\n\nI see also another use case: being able to identify postgres metrics on a\nmonitoring tool. Look at the hack pg_statviz uses here:\nhttps://github.com/vyruss/pg_statviz/blob/7cd0c694cea40f780fb8b76275c6097b5d210de6/src/pg_statviz/libs/info.py#L30\n\nThose are the use cases I can think of.\n\nI only see limited usability, considering that the local user's\n> hostname can be very different from the hostname used in the url that\n> connects to that PostgreSQL instance.\n>\n\nAgreed, depending on how hosts and dns are set, it can be useless. But\nnormally, companies have host making standards to avoid that.\n\nHave a nice day,\n\nLætitia\n\n\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n", "msg_date": "Thu, 3 Aug 2023 12:06:11 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "\n\n> Agreed, depending on how hosts and dns are set, it can be useless.\n> But normally, companies have host making standards to avoid that.\n\n\nBTW you already have it : \n\nselect setting from pg_settings where name = 'listen_addresses' \n\nNo ?\n\n\n", "msg_date": "Thu, 3 Aug 2023 12:47:39 +0200 (CEST)", "msg_from": "066ce286@free.fr", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Le jeu. 3 août 2023 à 14:20, <066ce286@free.fr> a écrit :\n\n>\n>\n> > Agreed, depending on how hosts and dns are set, it can be useless.\n> > But normally, companies have host making standards to avoid that.\n>\n>\n> BTW you already have it :\n>\n> select setting from pg_settings where name = 'listen_addresses'\n>\n> No ?\n>\n>\nHello,\n\nNo, this is not the same thing. Most of the time, we will find either the\nwildcard '*' or an IP address or a list of IP addresses, which is not what\nwe want in this particular use case. 
But I agree that there are some use\ncases where we will find the hostname there, but from my experience, that's\nnot the majority of the use cases.\n\nFor example, you could use a VIP in listen_addresses, so that you ensure\nthat distant connection will go through that VIP and you'll always end up\non the right server even though it might be another physical instance\n(after switch or failover).\n\nLætitia\n", "msg_date": "Thu, 3 Aug 2023 14:25:07 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "## Laetitia Avrot (laetitia.avrot@gmail.com):\n\n> For my customer, their use case is to be able from an SQL client to double\n> check they're on the right host before doing things that could become a\n> production disaster.\n\nWhy not use cluster_name for that? 
Even if it may not be relevant\nfor their setup, there might be multiple clusters on the same host\n(multiple clusters with the same hostname) or your database could be\nhiding behind some failover/loadbalancer mechanism (multiple hostnames\nin one cluster), thus using the hostname to identify the cluster is\ntotally not universal (the same goes for the monitoring/metrics\nstuff). And there's of course inet_server_addr(), if you really\nneed to double-check if you're connected to the right machine.\n\nRegards,\nChristoph\n\n-- \nSpare Space.\n\n\n", "msg_date": "Thu, 3 Aug 2023 15:17:14 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Le jeu. 3 août 2023 à 15:17, Christoph Moench-Tegeder <cmt@burggraben.net>\na écrit :\n\n> ## Laetitia Avrot (laetitia.avrot@gmail.com):\n>\n\n> > For my customer, their use case is to be able from an SQL client to\n> double\n> > check they're on the right host before doing things that could become a\n> > production disaster.\n>\n> Why not use cluster_name for that? Even if it may not be relevant\n> for their setup, there might be multiple clusters on the same host\n> (multiple clusters with the same hostname) or your database could be\n> hiding behind some failover/loadbalancer mechanism (multiple hostnames\n> in one cluster), thus using the hostname to identify the cluster is\n> totally not universal (the same goes for the monitoring/metrics\n> stuff). And there's of course inet_server_addr(), if you really\n> need to double-check if you're connected to the right machine.\n>\n\nHello Christoph,\n\nI understand your point and sure enough, my customer could set and use the\ncluster_name for that purpose. I totally disagree with using\ninet_server_addr() for that purpose as there are so many different network\nsettings with VIPs and so on that it won't help. 
Also, a socket connection\nwill give a NULL value, not that it's *that* bad because if it's a socket,\nyou're running locally and could still use `\\! ifconfig`, but it does not\nwork on some configurations (docker for example). Also, most humans will\nfind it easier to memorize a name than a number and that's one of the\nreasons why we remember websites' URLs instead of their IP.\n\nI still disagree with the monitoring part. A good monitoring tool will have\nto reconcile data from the OS with data from the Postgres cluster. So that,\nwe kind of need a way for the monitoring tool to know on which host this\nparticular cluster is running and I think it's smarter to get this\ninformation from the Postgres cluster.\n\nOf course, I do love the idea that Postgres is agnostic from where it's\nrunning, but I don't think this new function removes that.\n\nHave a nice day,\n\nLætitia\n", "msg_date": "Fri, 4 Aug 2023 10:26:46 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Hi,\n\n## Laetitia Avrot (laetitia.avrot@gmail.com):\n\n> I understand your point and sure enough, my customer could set and use the\n> cluster_name for that purpose. I totally disagree with using\n> inet_server_addr() for that purpose as there are so many different network\n> settings with VIPs and so on that it won't help.\n\nThat depends a bit on what the exact question is: \"where did I connect\nto\" vs \"where is the thing I connected to running\".\n\n> Also, a socket connection\n> will give a NULL value, not that it's *that* bad because if it's a socket,\n> you're running locally and could still use `\\! 
ifconfig`, but it does not\nwork on some configurations (docker for example).\n\nBut with a local socket, you already know where you are.\n\n> Also, most humans will\n> find it easier to memorize a name than a number and that's one of the\n> reasons why we remember websites' URLs instead of their IP.\n\nThat's why we have DNS, and it works both ways (especially when\nworking programmatically).\n\n> I still disagree with the monitoring part. A good monitoring tool will have\n> to reconcile data from the OS with data from the Postgres cluster. So that,\n> we kind of need a way for the monitoring tool to know on which host this\n> particular cluster is running and I think it's smarter to get this\n> information from the Postgres cluster.\n\nBut that's again at least two questions in here: \"what is the\ninstance on this host doing\" and \"what is the primary/read standby/...\nof service X doing\", and depending on that ask the base host's\nprimary address or the service's address. Yes, that's easy to say\nwhen you can define the whole environment, and many setups discover\nthis only later in the life cycle.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Sun, 6 Aug 2023 23:56:07 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Dear all,\n\nFirst, thank you so much for your interest in this patch. I didn't think I\nwould have so many answers.\n\nI agree that the feature I'm suggesting could be done with a few tricks. I\nmeant to simplify the life of the user by providing a simple new feature.\n(Also, I might have trust issues with DNS due to several past production\ndisasters.)\n\nMy question is very simple: Do you oppose having this feature in Postgres?\n\nIn other words, should I stop working on this?\n\nHave a nice day,\n\nLætitia\n", "msg_date": "Wed, 9 Aug 2023 08:42:42 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "On 09.08.23 08:42, Laetitia Avrot wrote:\n> I agree that the feature I'm suggesting could be done with a few tricks. \n> I meant to simplify the life of the user by providing a simple new \n> feature. (Also, I might have trust issues with DNS due to several past \n> production disasters.)\n> \n> My question is very simple: Do you oppose having this feature in Postgres?\n\nI think this is pretty harmless(*) and can be useful, so it seems \nreasonable to pursue.\n\n(*) But we should think about access control for this. If you're in a \nDBaaS environment, providers might not like that you can read out their \ninternal host names. I'm not sure if there is an existing permission \nrole that this could be attached to or if we need a new one.\n\n\n\n\n", "msg_date": "Wed, 9 Aug 2023 10:26:38 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Hi everybody,\n\nOn 09.08.23 08:42, Laetitia Avrot wrote:\n> I agree that the feature I'm suggesting could be done with a few tricks.\n> I meant to simplify the life of the user by providing a simple new\n> feature. 
(Also, I might have trust issues with DNS due to several past\n> production disasters.)\n\nJust my contribution on why this function could be useful, since we had a\nuse case that fits exactly this.\nIn the past I needed such a pg_servername() function because in a project I\nwas engaged on they needed to distribute \"requests\" directed to an in-house\nmanagement service running on network servers, and each one had a DB, so we\nwent with pglogical to be used as a (transactional) messaging middleware.\nThe requests were targeted to specific hostnames, thus on the replication\ndownstream we wanted to filter with a before-insert trigger on the hostname.\nAt that point, we had to do some contortions to get the hostname the DB was\nrunning on, sometimes really nasty like: on linux servers we could go with\na python function; but on windows ones (where there was no python\navailable) we had to resort to a function that read the OS \"hostname\"\ncommand into a temporary table via the pg \"copy\", which is both tricky and\ninefficient.\n\nSo, from me +1 to Laetitia's proposal.\n\nOn Wed, 9 Aug 2023 at 10:26, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> (*) But we should think about access control for this.\n\n\nAnd +1 also to Peter's note on enforcing the access control. BTW that's\nexactly what we needed with plpythonu, and also with \"copy from program\".\n\nBest,\nGiovanni\n", "msg_date": "Wed, 9 Aug 2023 12:50:17 +0200", "msg_from": "GF <phabriz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 09.08.23 08:42, Laetitia Avrot wrote:\n>> My question is very simple: Do you oppose having this feature in Postgres?\n\n> I think this is pretty harmless(*) and can be useful, so it seems \n> reasonable to pursue.\n\nI actually do object to this, because I think the concept of \"server\nname\" is extremely ill-defined and if we try to support it, we will\nforever be chasing requests for alternative behaviors. Just to start\nwith, is a prospective user expecting a fully-qualified domain name\nor just the base name? If the machine has several names (perhaps\nassociated with different IP addresses), what do you do about that?\nI wouldn't be too surprised if users would expect to get the name\nassociated with the IP address that the current connection came\nthrough. Or at least they might tell you they want that, until\nthey discover they're getting \"localhost.localdomain\" on loopback\nconnections and come right back to bitch about that.\n\nWindows likely adds a whole 'nother set of issues to \"what's the\nmachine name\", but I don't know enough about it to say what.\n\nI think the upthread suggestion to use cluster_name is going to\nbe a superior solution for most people, not least because they\ncan use it today and it will work the same regardless of platform.\n\n> (*) But we should think about access control for this. 
If you're in a \n> DBaaS environment, providers might not like that you can read out their \n> internal host names.\n\nThere's that, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Aug 2023 10:04:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "On Wed, 9 Aug 2023 at 16:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > On 09.08.23 08:42, Laetitia Avrot wrote:\n> >> My question is very simple: Do you oppose having this feature in\n> Postgres?\n>\n> > I think this is pretty harmless(*) and can be useful, so it seems\n> > reasonable to pursue.\n>\n> I actually do object to this, because I think the concept of \"server\n> name\" is extremely ill-defined\n>\n\nTom,\nBut the gethostname() function is well defined, both in Linux and in\nWindows.\nMaybe the proposed name pg_servername() is unfortunate in that it may\nsuggest that DNS or something else is involved, but in Laetitia's patch the\ncall is to gethostname().\nWould it be less problematic if the function were called pg_gethostname()?\n@Laetitia: why did you propose that name? maybe to avoid clashes with some\nextension out there?\n\nBest,\nGiovanni\n", "msg_date": "Wed, 9 Aug 2023 17:32:22 +0200", "msg_from": "GF <phabriz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "GF <phabriz@gmail.com> writes:\n> On Wed, 9 Aug 2023 at 16:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I actually do object to this, because I think the concept of \"server\n>> name\" is extremely ill-defined\n\n> But the gethostname() function is well defined, both in Linux and in\n> Windows.\n\nSure, its call convention is standardized. But I see nothing in POSIX\nsaying whether it returns a FQDN or just some random name. In any\ncase, the bigger issue is that I don't really want us to expose a\nfunction defined as \"whatever gethostname() says\". I think there will\nbe portability issues on some platforms, and I am dubious that that\ndefinition is what people would want.\n\nOne concrete reason why I am doubtful about this is the case of\nmultiple PG servers running on the same machine. 
gethostname()\nwill be unable to distinguish them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Aug 2023 12:31:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "## GF (phabriz@gmail.com):\n\n> In the past I needed such a pg_servername() function because in a project I\n> was engaged on they needed to distribute \"requests\" directed to a in-house\n> management service running on network servers, and each one had a DB, so we\n> went with pglogical to be used as a (transactional) messaging middleware.\n\nThat sounds like an \"if all you have is a hammer\" kind of architecture.\nOr \"solutioning by fandom\". Short of using an actual addressable\nmessage bus, you could set up the node name as a configuration\nparameter, or use cluster_name, or if you really must do some\nCOPY FROM PROGRAM magic (there's your access control) and just store\nthe value somewhere.\nThe more examples of use cases I see, the more I think that actually\nbeneficial use cases are really rare.\n\nAnd now that I checked it: I do have systems with gethostname()\nreturning an FQDN, and other systems return the (short) hostname\nonly. And it gets worse when you're talking \"container\" and\n\"automatic image deployment\". So I believe it's a good thing when\na database does not expose too much of the OS below it...\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Wed, 9 Aug 2023 22:05:21 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Le mer. 9 août 2023 à 17:32, GF <phabriz@gmail.com> a écrit :\n\n>\n> Would it be less problematic if the function were called pg_gethostname()?\n> @Laetitia: why did you propose that name? 
maybe to avoid clashes with some\n> extension out there?\n>\n>\nI used that name to be kind of coherent with the inet_server_addr(), but I\nsee now how it is a bad decision and can add confusion and wrong\nexpectations. I think pg_gethostname() is better.\n", "msg_date": "Thu, 10 Aug 2023 09:06:31 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Dear Tom,\n\nThank you for your interest in that patch and for taking the time to point\nout several things that need to be better. Please find below my answers.\n\nLe mer. 9 août 2023 à 16:04, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> I actually do object to this, because I think the concept of \"server\n> name\" is extremely ill-defined and if we try to support it, we will\n> forever be chasing requests for alternative behaviors.\n\n\nYes, that's on me with choosing a poor name. I will go with\npg_gethostname().\n\n\n> Just to start\n> with, is a prospective user expecting a fully-qualified domain name\n> or just the base name? If the machine has several names (perhaps\n> associated with different IP addresses), what do you do about that?\n> I wouldn't be too surprised if users would expect to get the name\n> associated with the IP address that the current connection came\n> through. 
Or at least they might tell you they want that, until\n> they discover they're getting \"localhost.localdomain\" on loopback\n> connections and come right back to bitch about that.\n>\n\nIf there is a gap between what the function does and the user expectations,\nit is my job to write the documentation in a more clear way to set\nexpectations to the right level and explain precisely what this function is\ndoing. Again, using a better name as pg_gethostname() will also help to\nremove the confusion.\n\n\n>\n> Windows likely adds a whole 'nother set of issues to \"what's the\n> machine name\", but I don't know enough about it to say what.\n>\n\nWindows does have a similar gethostname() function (see here:\nhttps://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-gethostname\n).\n\n\n>\n> I think the upthread suggestion to use cluster_name is going to\n> be a superior solution for most people, not least because they\n> can use it today and it will work the same regardless of platform.\n>\n\nI don't think cluster_name is the same as hostname. True, people could use\nthat parameter for that usage, but it does not feel right to entertain\nconfusion between the cluster_name (which, in my humble opinion, should be\ndifferent depending on the Postgres cluster) and the answer to \"on which\nhost is this Postgres cluster running?\".\n\n\n> > (*) But we should think about access control for this. If you're in a\n> > DBaaS environment, providers might not like that you can read out their\n> > internal host names.\n>\n> There's that, too.\n\n\nOf course, this function will need special access and DBaaS providers will\nbe able to not allow their users to use that function, as they already do\nfor other features. As I said, the patch is only at the stage of POC, at\nthe moment.\n\nLe mer. 
9 août 2023 à 18:31, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n>\n>\n> One concrete reason why I am doubtful about this is the case of\n> multiple PG servers running on the same machine. gethostname()\n> will be unable to distinguish them.\n>\n>\nAnd that's where my bad name for this function brings confusion. If this\nfunction returns the hostname, then it does seem totally legit and normal\nto get the same if 3 Postgres clusters are running on the same host. If\npeople want to identify their cluster, they should use cluster_name. I\ntotally agree with that.\n\nWhy do I think this is useful?\n\n1- In most companies I've worked for, people responsible for the OS\nsettings, Network settings, and database settings are different persons.\nAlso, for most companies I've worked with, maintaining their inventory and\nfinding out which Postgres cluster is running on which host is still a\ndifficult thing and error-prone to do. I thought that it could be nice and\nuseful to display easily for the DBAs on which host the cluster they are\nconnected to is running so that when they are called at 2 AM, their life\ncould be a little easier.\n\n2- In addition, as already pointed out, I know that pg_statviz (a monitoring\ntool) needs that information and uses this very dirty hack to get it (see\nhttps://github.com/vyruss/pg_statviz/blob/7cd0c694cea40f780fb8b76275c6097b5d210de6/src/pg_statviz/libs/info.py#L30\n)\n\nCREATE TEMP TABLE _info(hostname text);\nCOPY _info FROM PROGRAM 'hostname';\n\n3- Last but not least, as David E. Wheeler had created an extension to do\nso for Postgres 9.0+ and I found a customer who asked me for this\nfeature, I thought there might be out there more Postgres users who could\nfind this feature helpful.\n\nI'm sorry if I'm not making myself clear enough about the use cases of that\nfeature. 
If you still object to my points, I will simply drop this patch.\n\nHave a nice day,\n\nLætitia", "msg_date": "Thu, 10 Aug 2023 09:44:28 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Dear Christoph,\n\nPlease find my answers below.\n\nLe mer. 9 août 2023 à 22:05, Christoph Moench-Tegeder <cmt@burggraben.net>\na écrit :\n\n> ## GF (phabriz@gmail.com):\n>\n> And now that I checked it: I do have systems with gethostname()\n> returning an FQDN, and other systems return the (short) hostname\n> only.\n\n\nThe return of gethostname() depends on what has been configured. So, yes\nsome people will prefer a FQDN while others will prefer a short hostname.\nAlso, it's a POSIX standard function (see\nhttps://pubs.opengroup.org/onlinepubs/9699919799/), so I don't get why\ngetting a FQDN or a short name depending on what people set would be a\nproblem for Postgres while it's not for Linux.\n\n\n> And it gets worse when you're talking \"container\" and\n> \"automatic image deployment\". 
So I believe it's a good thing when\n> a database does not expose too much of the OS below it...\n>\n>\nI respectfully disagree. Containers still have a hostname: for example, by\ndefault, docker uses their container id. (see\nhttps://docs.docker.com/network/). In fact, the Open Container Initiative\ndefines the hostname as a runtime configuration parameter (see\nhttps://github.com/opencontainers/runtime-spec/blob/main/config.md):\n\n> Hostname\n>\n> - *hostname* (string, OPTIONAL) specifies the container's hostname as\n> seen by processes running inside the container. On Linux, for example, this\n> will change the hostname in the container\n> <https://github.com/opencontainers/runtime-spec/blob/main/glossary.md#container-namespace> UTS\n> namespace <http://man7.org/linux/man-pages/man7/namespaces.7.html>.\n> Depending on your namespace configuration\n> <https://github.com/opencontainers/runtime-spec/blob/main/config-linux.md#namespaces>,\n> the container UTS namespace may be the runtime\n> <https://github.com/opencontainers/runtime-spec/blob/main/glossary.md#runtime-namespace> UTS\n> namespace <http://man7.org/linux/man-pages/man7/namespaces.7.html>.\n>\n> Example\n>\n> \"hostname\": \"mrsdalloway\"\n>\n>\nHave a nice day,\n\nLætitia", "msg_date": "Thu, 10 Aug 2023 10:14:46 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Adding a pg_servername() function" }, { "msg_contents": "Hi all,\n\nFWIW, I too believe this is a feature that has been sorely missing for\nyears, leading us to use awful hacks like running \"hostname\" as PROGRAM and\nretrieving its output.\nIdeally that's what this function (or GUC) should return, what the system\nbelieves its hostname to be. It doesn't even have to return any additional\nhostnames it may have.\n\n2 more points:\n1. I too don't see it as relevant to the cluster's name\n2. I agree that this may not be desirable for non-monitoring roles but\nwould not go so far as to require superuser. If a cloud provider doesn't\nwant this hostname exposed for whatever reason, they can hack Postgres to\nblock this. 
They don't seem to have trouble hacking it for other reasons.\n\nBest regards,\nJimmy\n\nJimmy Angelakos\nSenior Solutions Architect\nEDB - https://www.enterprisedb.com/\n\n\nOn Thu, 10 Aug 2023 at 08:44, Laetitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n> Dear Tom,\n>\n> Thank you for your interest in that patch and for taking the time to point\n> out several things that need to be better. Please find below my answers.\n>\n> Le mer. 9 août 2023 à 16:04, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>\n>> I actually do object to this, because I think the concept of \"server\n>> name\" is extremely ill-defined and if we try to support it, we will\n>> forever be chasing requests for alternative behaviors.\n>\n>\n> Yes, that's on me with choosing a poor name. I will go with\n> pg_gethostname().\n>\n>\n>> Just to start\n>> with, is a prospective user expecting a fully-qualified domain name\n>> or just the base name? If the machine has several names (perhaps\n>> associated with different IP addresses), what do you do about that?\n>> I wouldn't be too surprised if users would expect to get the name\n>> associated with the IP address that the current connection came\n>> through. Or at least they might tell you they want that, until\n>> they discover they're getting \"localhost.localdomain\" on loopback\n>> connections and come right back to bitch about that.\n>>\n>\n> If there is a gap between what the function does and the user\n> expectations, it is my job to write the documentation in a more clear way\n> to set expectations to the right level and explain precisely what this\n> function is doing. 
Again, using a better name as pg_gethostname() will also\n> help to remove the confusion.\n>\n>\n>>\n>> Windows likely adds a whole 'nother set of issues to \"what's the\n>> machine name\", but I don't know enough about it to say what.\n>>\n>\n> Windows does have a similar gethostname() function (see here:\n> https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-gethostname\n> ).\n>\n>\n>>\n>> I think the upthread suggestion to use cluster_name is going to\n>> be a superior solution for most people, not least because they\n>> can use it today and it will work the same regardless of platform.\n>>\n>\n> I don't think cluster_name is the same as hostname. True, people could use\n> that parameter for that usage, but it does not feel right to entertain\n> confusion between the cluster_name (which, in my humble opinion, should be\n> different depending on the Postgres cluster) and the answer to \"on which\n> host is this Postgres cluster running?\".\n>\n>\n>> > (*) But we should think about access control for this. If you're in a\n>> > DBaaS environment, providers might not like that you can read out their\n>> > internal host names.\n>>\n>> There's that, too.\n>\n>\n> Of course, this function will need special access and DBaaS providers will\n> be able to not allow their users to use that function, as they already do\n> for other features. As I said, the patch is only at the stage of POC, at\n> the moment.\n>\n> Le mer. 9 août 2023 à 18:31, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>\n>>\n>>\n>> One concrete reason why I am doubtful about this is the case of\n>> multiple PG servers running on the same machine. gethostname()\n>> will be unable to distinguish them.\n>>\n>>\n> And that's where my bad name for this function brings confusion. If this\n> function returns the hostname, then it does seem totally legit and normal\n> to get the same if 3 Postgres clusters are running on the same host. 
If\n> people want to identify their cluster, they should use cluster_name. I\n> totally agree with that.\n>\n> Why do I think this is useful?\n>\n> 1- In most companies I've worked for, people responsible for the OS\n> settings, Network settings, and database settings are different persons.\n> Also, for most companies I've worked with, maintaining their inventory and\n> finding out which Postgres cluster is running on which host is still a\n> difficult thing and error-prone to do. I thought that it could be nice and\n> useful to display easily for the DBAs on which host the cluster they are\n> connected to is running so that when they are called at 2 AM, their life\n> could be a little easier.\n>\n> 2- In addition, as already pointed out, I know that pg_staviz (a\n> monitoring tool) needs that information and uses this very dirty hack to\n> get it (see\n> https://github.com/vyruss/pg_statviz/blob/7cd0c694cea40f780fb8b76275c6097b5d210de6/src/pg_statviz/libs/info.py#L30\n> )\n>\n> CREATE TEMP TABLE _info(hostname text);\n> COPY _info FROM PROGRAM 'hostname';\n>\n> 3- Last but not least, as David E. Wheeler had created an extension to do\n> so for Postgres 9.0+ and I found out a customer who asked me for this\n> feature, I thought there might be out there more Postgres users who could\n> find this feature helpful.\n>\n> I'm sorry if I'm not making myself clear enough about the use cases of\n> that feature. If you still object to my points, I will simply drop this\n> patch.\n>\n> Have a nice day,\n>\n> Lætitia\n>\n", "msg_date": "Thu, 10 Aug 2023 13:47:09 +0100", "msg_from": "Jimmy Angelakos <jimmy.angelakos@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Adding a pg_servername() function" } ]
[ { "msg_contents": "Hi,\n\nIn the commit [1] one side change was to fix mixed up usage of\nBlockNumber and Buffer variables. This was a one-off manual effort, and\nindeed it hardly seems possible to get compilation warnings for such\ncode, as long as the underlying types could be converted in a standard\nconforming way. As far as I see such conversions are acceptable and\nused every now and then in the project, but they may hide some subtle\nissues.\n\nOne interesting way I found to address this was to benefit from the clang\nplugin system [2]. A clang plugin allows you to run some frontend\nactions when compiling code with clang, and this approach is used in\nsome large open source projects (e.g. I've got my inspiration largely\nfrom libreoffice [3]). After a little experimenting I could teach such a\nplugin to warn about situations similar to the one described above. What\nit does is:\n\n* Visits all the function call expressions\n* Verifies whether the function arguments are defined via a typedef\n* Compares the actual argument with the function declaration\n* Consults the suppression rules to minimize false positives\n\nIn the end the plugin catches the error mentioned above, and we get\nsomething like this:\n\n ginget.c:143:41: warning: Typedef check: Expected 'BlockNumber' (aka 'unsigned int'),\n got 'Buffer' (aka 'int') in PredicateLockPage PredicateLockPage(btree->index, stack->buffer, snapshot);\n\nOf course the plugin produces more warnings, and I haven't checked all of\nthem yet -- some probably would have to be ignored as well. 
But in the\nlong run I'm pretty confident it's possible to fine tune this logic and\nsuppression rules to get minimum noise.\n\nAs I see it, there are advantages of using plugins in such way:\n\n* Helps automatically detect some issues\n* Other type of plugins could be useful to support large undertakings,\n where a lot of code transformation is involved.\n\nAnd of course disadvantages as well:\n\n* It has to be fine-tuned to be useful\n* It's compiler dependent, not clear how to always exercise it\n\nI would like to get your opinion, folks. Does it sound interesting\nenough for the community, is it worth it to pursue the idea?\n\nSome final notes about infrastructure bits. Never mind cmake in there --\nit was just for a quick start, I'm going to convert it to something\nelse. There are some changes needed to tell the compiler to actually\nload the plugin, those of course could be done much better as well. I\ndidn't do anything with meson here, because it turns out meson tends to\nenclose the plugin file with '-Wl,--start-group ... -Wl,--end-group' and\nit breaks the plugin discovery.\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=126552c85c1cfb6ce6445159b8024cfa5631f33e\n[2]: https://clang.llvm.org/docs/ClangPlugins.html\n[3]: https://git.libreoffice.org/core/+/refs/heads/master/compilerplugins/clang/", "msg_date": "Thu, 3 Aug 2023 18:56:38 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": true, "msg_subject": "[RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "This is the first I am learning about clang plugins. Interesting \nconcept. Did you give any thought to using libclang instead of \na compiler plugin? 
I am kind of doing similar work, but I went with \nlibclang instead of a plugin.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 03 Aug 2023 12:23:52 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "> On Thu, Aug 03, 2023 at 12:23:52PM -0500, Tristan Partin wrote:\n>\n> This is the first I am learning about clang plugins. Interesting concept.\n> Did you give any thought to using libclang instead of a compiler plugin? I\n> am kind of doing similar work, but I went with libclang instead of a plugin.\n\nNope, never thought about trying libclang. From the clang documentation\nit seems a plugin is a suitable interface if one wants to:\n\n special lint-style warnings or errors for your project\n\nWhich is what I was trying to achieve. Are there any advantages of\nlibclang that you see?\n\n\n", "msg_date": "Fri, 4 Aug 2023 12:47:38 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "On 03.08.23 18:56, Dmitry Dolgov wrote:\n> I would like to get your opinion, folks. Does it sound interesting\n> enough for the community, is it worth it to pursue the idea?\n\nI think it's interesting, and doesn't look too invasive.\n\nMaybe we can come up with three or four interesting use cases and try to \nimplement them. BlockNumber vs. Buffer type checking is obviously a bit \nmarginal to get anyone super-excited, but it's a reasonable demo.\n\nAlso, how stable is the plugin API? 
Would we need to keep updating \nthese plugins for each new clang version?\n\n\n", "msg_date": "Wed, 9 Aug 2023 15:23:32 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "> On Wed, Aug 09, 2023 at 03:23:32PM +0200, Peter Eisentraut wrote:\n> On 03.08.23 18:56, Dmitry Dolgov wrote:\n> > I would like to get your opinion, folks. Does it sound interesting\n> > enough for the community, is it worth it to pursue the idea?\n>\n> I think it's interesting, and doesn't look too invasive.\n>\n> Maybe we can come up with three or four interesting use cases and try to\n> implement them. BlockNumber vs. Buffer type checking is obviously a bit\n> marginal to get anyone super-excited, but it's a reasonable demo.\n>\n> Also, how stable is the plugin API? Would we need to keep updating these\n> plugins for each new clang version?\n\nAt first glance the API itself seems to be stable -- I didn't find\nmany breaking changes for plugins in the release notes over the last\nfive releases. The git history for such definitions as ASTConsumer or\nRecursiveASTVisitor also doesn't seem to be very convoluted.\n\nBut the libreoffice example shows that some updating would be necessary --\nthey've got about a dozen places in the code that depend on the clang\nversion. From what I see it's usually not directly related to the plugin\nAPI, and looks more like our opaque pointers issue.\n\n\n", "msg_date": "Wed, 9 Aug 2023 18:32:24 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "Hi,\n\nOn 8/10/23, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> On Wed, Aug 09, 2023 at 03:23:32PM +0200, Peter Eisentraut wrote:\n>> On 03.08.23 18:56, Dmitry Dolgov wrote:\n>> > I would like to get your opinion, folks. 
Does it sound interesting\n>> > enough for the community, is it worth it to pursue the idea?\n>>\n>> I think it's interesting, and doesn't look too invasive.\n\n+1.\n\n>> Maybe we can come up with three or four interesting use cases and try to\n>> implement them. BlockNumber vs. Buffer type checking is obviously a bit\n>> marginal to get anyone super-excited, but it's a reasonable demo.\n\nSeveral months ago, I implemented a clang plugin[1] for checking\nsuspicious return/goto/break/continue in PG_TRY/PG_CATCH blocks and it\nfound unsafe codes in Postgres[2]. It's implemented using clang's AST\nmatcher API.\n\nI would like to contribute it well if this RFC can be accepted.\n\n[1] https://github.com/higuoxing/clang-plugins/blob/main/lib/ReturnInPgTryBlockChecker.cpp\n[2] https://www.postgresql.org/message-id/flat/CACpMh%2BCMsGMRKFzFMm3bYTzQmMU5nfEEoEDU2apJcc4hid36AQ%40mail.gmail.com\n\n-- \nBest Regards,\nXing\n\n\n", "msg_date": "Tue, 15 Aug 2023 10:21:44 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "On Fri Aug 4, 2023 at 5:47 AM CDT, Dmitry Dolgov wrote:\n> > On Thu, Aug 03, 2023 at 12:23:52PM -0500, Tristan Partin wrote:\n> >\n> > This is the first I am learning about clang plugins. Interesting concept.\n> > Did you give any thought to using libclang instead of a compiler plugin? I\n> > am kind of doing similar work, but I went with libclang instead of a plugin.\n>\n> Nope, never thought about trying libclang. From the clang documentation\n> it seems a plugin is a suitable interface if one wants to:\n>\n> special lint-style warnings or errors for your project\n>\n> Which is what I was trying to achieve. Are there any advantages of\n> libclang that you see?\n\nOnly advantage I see to using libclang is that you can run programs \nbuilt with libclang regardless of what your C compiler is. 
I typically \nuse GCC.\n\nI think your idea of automating this kind of thing is great no matter \nhow it is implemented.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 01 Dec 2023 16:01:05 -0600", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "> On Fri, Dec 01, 2023 at 04:01:05PM -0600, Tristan Partin wrote:\n> On Fri Aug 4, 2023 at 5:47 AM CDT, Dmitry Dolgov wrote:\n> > > On Thu, Aug 03, 2023 at 12:23:52PM -0500, Tristan Partin wrote:\n> > >\n> > > This is the first I am learning about clang plugins. Interesting concept.\n> > > Did you give any thought to using libclang instead of a compiler plugin? I\n> > > am kind of doing similar work, but I went with libclang instead of a plugin.\n> >\n> > Nope, never thought about trying libclang. From the clang documentation\n> > it seems a plugin is a suitable interface if one wants to:\n> >\n> > special lint-style warnings or errors for your project\n> >\n> > Which is what I was trying to achieve. Are there any advantages of\n> > libclang that you see?\n>\n> Only advantage I see to using libclang is that you can run programs built\n> with libclang regardless of what your C compiler is. I typically use GCC.\n>\n> I think your idea of automating this kind of thing is great no matter how it\n> is implemented.\n\nYeah, absolutely.\n\nSorry, haven't had a chance to follow up on the patch, but I'm planing\nto do so. I guess the important part, as Peter mentioned above in the\nthread, is to figure out more use cases which could be usefully\naugmented with such compile time checks. At the moment I don't have any\nother except BlockNumber vs Buffer, so I'm going to search through the\nhackers looking for more potential targets. 
If anyone got a great idea\nright away about where compile-time checks could be useful, feel free to\nshare!\n\n\n", "msg_date": "Sun, 3 Dec 2023 15:27:47 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "On Sun, Dec 3, 2023 at 6:31 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > Only advantage I see to using libclang is that you can run programs built\n> > with libclang regardless of what your C compiler is. I typically use GCC.\n> >\n> > I think your idea of automating this kind of thing is great no matter how it\n> > is implemented.\n>\n> Yeah, absolutely.\n\nWhat is the difference between a clang plugin (or whatever this is),\nand a custom clang-tidy check? Are they the same thing?\n\nI myself use clang-tidy through clangd. It has a huge number of\nchecks, plus some custom checks that are only used by certain open\nsource projects.\n\nAn example of a check that seems like it would be interesting to\nPostgres hackers:\n\nhttps://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/bugprone/signal-handler.html\n\nAn example of a custom clang-tidy check, used for the Linux Kernel:\n\nhttps://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/linuxkernel/must-use-errs.html\n\nYour idea of starting with a check that identifies when BlockNumber\nand Buffer variables were confused seems like the right general idea\nto me. It's just about impossible for this to be a false positive in practice,\nwhich is important. But, I wonder, is there a way of accomplishing the\nsame thing without any custom code? 
Seems like the general idea of not\nconfusing two types like BlockNumber and Buffer might be a generic\none?\n\nSome random ideas in this area:\n\n* A custom check that enforces coding standards within signal handlers\n-- the generic one that I linked to might need to be customized, in\nwhatever way (don't use it myself).\n\n* A custom check that enforces a coding standard that applies within\ncritical sections.\n\nWe already have an assertion that catches memory allocations made\nwithin a critical section. It might make sense to have tooling that\ncan detect other kinds of definitely-unsafe things in critical\nsections. For example, maybe it would be possible to detect when\nXLogRegisterBufData() has been passed a pointer to something on the\nstack that will go out of scope, in a way that leaves open the\npossibility of reading random stuff from the stack once inside\nXLogInsert(). AddressSanitizer's use-after-scope can detect that sort\nof thing, though not reliably.\n\nNot at all sure about how feasible any of this is...\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 3 Dec 2023 19:02:55 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" }, { "msg_contents": "> On Sun, Dec 03, 2023 at 07:02:55PM -0800, Peter Geoghegan wrote:\n> On Sun, Dec 3, 2023 at 6:31 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > > Only advantage I see to using libclang is that you can run programs built\n> > > with libclang regardless of what your C compiler is. I typically use GCC.\n> > >\n> > > I think your idea of automating this kind of thing is great no matter how it\n> > > is implemented.\n> >\n> > Yeah, absolutely.\n>\n> What is the difference between a clang plugin (or whatever this is),\n> and a custom clang-tidy check? Are they the same thing?\n\nGood question. 
Obviously one difference is that a clang plugin is more\ntightly integrated into the build process, whereas a custom clang-tidy\ncheck could be used independently. IIUC they also use different API, AST\nConsumer vs AST Matchers, but so far I have no idea what consequences\nit has.\n\nClang tooling page has this to say about choosing the right interface\n[1], where clang-tidy seems to fall into the LibTooling category:\n\n Clang Plugins allow you to run additional actions on the AST as part\n of a compilation. Plugins are dynamic libraries that are loaded at\n runtime by the compiler, and they’re easy to integrate into your\n build environment.\n\n Canonical examples of when to use Clang Plugins:\n * special lint-style warnings or errors for your project\n * creating additional build artifacts from a single compile step\n\n [...]\n\n LibTooling is a C++ interface aimed at writing standalone tools, as\n well as integrating into services that run clang tools. Canonical\n examples of when to use LibTooling:\n\n * a simple syntax checker\n * refactoring tools\n\n> I myself use clang-tidy through clangd. It has a huge number of\n> checks, plus some custom checks that are only used by certain open\n> source projects.\n>\n> An example of a check that seems like it would be interesting to\n> Postgres hackers:\n>\n> https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/bugprone/signal-handler.html\n>\n> An example of a custom clang-tidy check, used for the Linux Kernel:\n>\n> https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/linuxkernel/must-use-errs.html\n\nYeah, those look interesting, and might even be worth including\nindependently of the topic in this thread.\n\n> Your idea of starting with a check that identifies when BlockNumber\n> and Buffer variables were confused seems like the right general idea\n> to me. It's just about impossible for this to be a false positive in practice,\n> which is important. 
But, I wonder, is there a way of accomplishing the\n> same thing without any custom code? Seems like the general idea of not\n> confusing two types like BlockNumber and Buffer might be a generic\n> one?\n\nAgree, the example in this patch could be generalized.\n\n> * A custom check that enforces coding standards within signal handlers\n> -- the generic one that I linked to might need to be customized, in\n> whatever way (don't use it myself).\n>\n> * A custom check that enforces a coding standard that applies within\n> critical sections.\n>\n> We already have an assertion that catches memory allocations made\n> within a critical section. It might make sense to have tooling that\n> can detect other kinds of definitely-unsafe things in critical\n> sections. For example, maybe it would be possible to detect when\n> XLogRegisterBufData() has been passed a pointer to something on the\n> stack that will go out of scope, in a way that leaves open the\n> possibility of reading random stuff from the stack once inside\n> XLogInsert(). AddressSanitizer's use-after-scope can detect that sort\n> of thing, though not reliably.\n\nInteresting ideas, thanks for sharing. I'm going to try implementing\nsome of those.\n\n[1]: https://clang.llvm.org/docs/Tooling.html\n\n\n", "msg_date": "Wed, 6 Dec 2023 16:07:22 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [RFC] Clang plugin for catching suspicious typedef casting" } ]
[ { "msg_contents": "Greetings,\n\nAttached is a patch which introduces a file protocol.h. Instead of using\nthe actual characters everywhere in the code this patch names the\ncharacters and removes the comments beside each usage.\n\n\nDave Cramer", "msg_date": "Thu, 3 Aug 2023 11:40:12 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-03, Dave Cramer wrote:\n\n> Greetings,\n> \n> Attached is a patch which introduces a file protocol.h. Instead of using\n> the actual characters everywhere in the code this patch names the\n> characters and removes the comments beside each usage.\n\nI saw this one last week. I think it's a very good idea (and in fact I had\nthought of doing it when I last messed with libpq code).\n\nI don't really like the name pattern you've chosen though; I think we\nneed to have a common prefix in the defines. Maybe prepending PQMSG_ to\neach name would be enough. And maybe turn the _RESPONSE and _REQUEST\nsuffixes you added into prefixes as well, so instead of PARSE_REQUEST\nyou could make it PQMSG_REQ_PARSE, PQMSG_RESP_BIND_COMPLETE and so\non.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Thu, 3 Aug 2023 19:59:40 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, 3 Aug 2023 at 11:59, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Aug-03, Dave Cramer wrote:\n>\n> > Greetings,\n> >\n> > Attached is a patch which introduces a file protocol.h. Instead of using\n> > the actual characters everywhere in the code this patch names the\n> > characters and removes the comments beside each usage.\n>\n> I saw this one last week. 
I think it's a very idea (and I fact I had\n> thought of doing it when I last messed with libpq code).\n>\n> I don't really like the name pattern you've chosen though; I think we\n> need to have a common prefix in the defines. Maybe prepending PQMSG_ to\n> each name would be enough. And maybe turn the _RESPONSE and _REQUEST\n> suffixes you added into prefixes as well, so instead of PARSE_REQUEST\n> you could make it PQMSG_REQ_PARSE, PQMSG_RESP_BIND_COMPLETE and so\n> on.\n>\nThat becomes trivial to do now that the names are defined. I presumed\nsomeone would object to the names.\nI'm fine with the names you propose, but I suggest we wait to see if anyone\nobjects.\n\nDave\n\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> <Schwern> It does it in a really, really complicated way\n> <crab> why does it need to be complicated?\n> <Schwern> Because it's MakeMaker.\n>\n
\n", "msg_date": "Thu, 3 Aug 2023 12:07:21 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 03, 2023 at 12:07:21PM -0600, Dave Cramer wrote:\n> On Thu, 3 Aug 2023 at 11:59, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> I don't really like the name pattern you've chosen though; I think we\n>> need to have a common prefix in the defines. Maybe prepending PQMSG_ to\n>> each name would be enough. And maybe turn the _RESPONSE and _REQUEST\n>> suffixes you added into prefixes as well, so instead of PARSE_REQUEST\n>> you could make it PQMSG_REQ_PARSE, PQMSG_RESP_BIND_COMPLETE and so\n>> on.\n>>\n> That becomes trivial to do now that the names are defined. I presumed\n> someone would object to the names.\n> I'm fine with the names you propose, but I suggest we wait to see if anyone\n> objects.\n\nI'm okay with the proposed names as well.\n\n> + * src/include/protocol.h\n\nCould we put these definitions in an existing header such as\nsrc/include/libpq/pqcomm.h? I see that's where the authentication request\ncodes live today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 3 Aug 2023 11:53:56 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> Greetings,\n> \n> Attached is a patch which introduces a file protocol.h. 
Instead of using\n> the actual characters everywhere in the code this patch names the\n> characters and removes the comments beside each usage.\n\n> +#define DESCRIBE_PREPARED 'S'\n> +#define DESCRIBE_PORTAL 'P'\n\nYou use these for Close message as well. I don't like the idea because\nClose is different message from Describe message.\n\nWhat about adding following for Close too use them instead?\n\n#define CLOSE_PREPARED 'S'\n#define CLOSE_PORTAL 'P'\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 04 Aug 2023 04:25:52 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Greetings,\n> >\n> > Attached is a patch which introduces a file protocol.h. Instead of using\n> > the actual characters everywhere in the code this patch names the\n> > characters and removes the comments beside each usage.\n>\n> > +#define DESCRIBE_PREPARED 'S'\n> > +#define DESCRIBE_PORTAL 'P'\n>\n> You use these for Close message as well. I don't like the idea because\n> Close is different message from Describe message.\n>\n> What about adding following for Close too use them instead?\n>\n> #define CLOSE_PREPARED 'S'\n> #define CLOSE_PORTAL 'P'\n>\n\nGood catch.\nI recall when writing this it was a bit hacky.\nWhat do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND instead\nof duplicating them ?\n\nDave\n
", "msg_date": "Thu, 3 Aug 2023 15:22:51 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, 3 Aug 2023 at 15:22, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n>> > Greetings,\n>> >\n>> > Attached is a patch which introduces a file protocol.h. Instead of using\n>> > the actual characters everywhere in the code this patch names the\n>> > characters and removes the comments beside each usage.\n>>\n>> > +#define DESCRIBE_PREPARED 'S'\n>> > +#define DESCRIBE_PORTAL 'P'\n>>\n>> You use these for Close message as well. 
I don't like the idea because\n>> Close is different message from Describe message.\n>>\n>> What about adding following for Close too use them instead?\n>>\n>> #define CLOSE_PREPARED 'S'\n>> #define CLOSE_PORTAL 'P'\n>>\n>\n> Good catch.\n> I recall when writing this it was a bit hacky.\n> What do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND instead\n> of duplicating them ?\n>\n\nWhile reviewing this I found a number of mistakes where I used\nDATA_ROW_RESPONSE instead of DESCRIBE_REQUEST.\n\nI can provide a patch now or wait until we resolve the above\n\n\n>\n> Dave\n>\n
\n", "msg_date": "Thu, 3 Aug 2023 15:42:00 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Thu, 3 Aug 2023 at 16:59, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> > On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> >\n>> >> > Greetings,\n>> >> >\n>> >> > Attached is a patch which introduces a file protocol.h. 
Instead of using\n>> > the actual characters everywhere in the code this patch names the\n>> > characters and removes the comments beside each usage.\n>>\n>> > +#define DESCRIBE_PREPARED 'S'\n>> > +#define DESCRIBE_PORTAL 'P'\n>>\n>> You use these for Close message as well. I don't like the idea because\n>> Close is different message from Describe message.\n>>\n>> What about adding following for Close too use them instead?\n>>\n>> #define CLOSE_PREPARED 'S'\n>> #define CLOSE_PORTAL 'P'\n>>\n> \n> Good catch.\n> I recall when writing this it was a bit hacky.\n> What do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND instead\n> of duplicating them ?\n\nNice. Looks good to me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 04 Aug 2023 07:59:28 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, 3 Aug 2023 at 16:59, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> >\n> >> > Greetings,\n> >> >\n> >> > Attached is a patch which introduces a file protocol.h. Instead of\n> using\n> >> > the actual characters everywhere in the code this patch names the\n> >> > characters and removes the comments beside each usage.\n> >>\n> >> > +#define DESCRIBE_PREPARED 'S'\n> >> > +#define DESCRIBE_PORTAL 'P'\n> >>\n> >> You use these for Close message as well. I don't like the idea because\n> >> Close is different message from Describe message.\n> >>\n> >> What about adding following for Close too use them instead?\n> >>\n> >> #define CLOSE_PREPARED 'S'\n> >> #define CLOSE_PORTAL 'P'\n> >>\n> >\n> > Good catch.\n> > I recall when writing this it was a bit hacky.\n> > What do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND\n> instead\n> > of duplicating them ?\n>\n> Nice. 
Looks good to me.\n>\nNew patch attached which uses PREPARED_SUB_COMMAND and\nPORTAL_SUB_COMMAND instead\nand uses\nPQMSG_REQ_* and PQMSG_RESP_* as per Alvaro's suggestion\n\nDave Cramer", "msg_date": "Thu, 3 Aug 2023 20:09:58 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Thu, 3 Aug 2023 at 16:59, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> > On Thu, 3 Aug 2023 at 13:25, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> >\n>> >> > Greetings,\n>> >> >\n>> >> > Attached is a patch which introduces a file protocol.h. Instead of\n>> using\n>> >> > the actual characters everywhere in the code this patch names the\n>> >> > characters and removes the comments beside each usage.\n>> >>\n>> >> > +#define DESCRIBE_PREPARED 'S'\n>> >> > +#define DESCRIBE_PORTAL 'P'\n>> >>\n>> >> You use these for Close message as well. I don't like the idea because\n>> >> Close is different message from Describe message.\n>> >>\n>> >> What about adding following for Close too use them instead?\n>> >>\n>> >> #define CLOSE_PREPARED 'S'\n>> >> #define CLOSE_PORTAL 'P'\n>> >>\n>> >\n>> > Good catch.\n>> > I recall when writing this it was a bit hacky.\n>> > What do you think of PREPARED_SUB_COMMAND and PORTAL_SUB_COMMAND\n>> instead\n>> > of duplicating them ?\n>>\n>> Nice. 
Looks good to me.\n>>\n> New patch attached which uses PREPARED_SUB_COMMAND and\n> PORTAL_SUB_COMMAND instead\n> and uses\n> PQMSG_REQ_* and PQMSG_RESP_* as per Alvaro's suggestion\n\nLooks good to me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 04 Aug 2023 15:42:13 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-03, Dave Cramer wrote:\n\n> New patch attached which uses PREPARED_SUB_COMMAND and\n> PORTAL_SUB_COMMAND instead\n\nHmm, I would keep the prefix in this case and make the message type a\nsecond prefix, with the subtype last -- PQMSG_CLOSE_PREPARED,\nPQMSG_DESCRIBE_PORTAL and so on.\n\nYou define PASSWORD and GSS both with 'p', which I think is bogus; in\nsome place that isn't doing GSS, you've replaced 'p' with the GSS one\n(CheckSASLAuth). I think it'd be better to give 'p' a single name, and\nnot try to distinguish which is PASSWORD and which is GSS, because\nultimately it's not important.\n\nThere are some unpatched places, such as basebackup_copy.c -- there are\nseveral matches for /'.'/ that correspond to protocol chars in that file.\nAlso CopySendEndOfRow has one 'd', and desc.c has two 'C'.\n\nI think fe-trace.c will require further adjustment of the comments for\neach function. We could change them to be the symbol for each char, like so:\n\n/* PQMSG_RESP_READY_FOR_QUERY */\nstatic void\npqTraceOutputZ(FILE *f, const char *message, int *cursor)\n\n\nThanks\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. 
I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n", "msg_date": "Fri, 4 Aug 2023 11:44:53 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Fri, 4 Aug 2023 at 03:44, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Aug-03, Dave Cramer wrote:\n>\n> > New patch attached which uses PREPARED_SUB_COMMAND and\n> > PORTAL_SUB_COMMAND instead\n>\n> Hmm, I would keep the prefix in this case and make the message type a\n> second prefix, with the subtype last -- PQMSG_CLOSE_PREPARED,\n> PQMSG_DESCRIBE_PORTAL and so on.\n>\nI used\n\n#define PQMSG_REQ_PREPARED 'S'\n#define PQMSG_REQ_PORTAL 'P'\n\nDuplicating them for CLOSE PREPARED|PORTAL seems awkward\n\n>\n> You define PASSWORD and GSS both with 'p', which I think is bogus; in\n> some place that isn't doing GSS, you've replaced 'p' with the GSS one\n> (CheckSASLAuth). I think it'd be better to give 'p' a single name, and\n> not try to distinguish which is PASSWORD and which is GSS, because\n> ultimately it's not important.\n>\nDone\n\n>\n> There are some unpatched places, such as basebackup_copy.c -- there are\n> several matches for /'.'/ that correspond to protocol chars in that file.\n> Also CopySendEndOfRow has one 'd', and desc.c has two 'C'.\n>\n> I think fe-trace.c will require further adjustment of the comments for\n> each function. 
We could change them to be the symbol for each char, like\n> so:\n>\n> /* PQMSG_RESP_READY_FOR_QUERY */\n> static void\n> pqTraceOutputZ(FILE *f, const char *message, int *cursor)\n>\n>\nThanks for reviewing see attached for fixes\n\nDave", "msg_date": "Fri, 4 Aug 2023 07:03:52 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "Hi, I wondered if any consideration was given to using an enum instead\nof all the #defines.\n\nI guess, your patch would not be much different; you can still have\nall the nice names and assign the appropriate values to the enum\nvalues same as now, but using an enum you might also gain\ntype-checking in the code and also get warnings for the \"switch\"\nstatements if there are any cases accidentally omitted.\n\nFor example, see typedef enum LogicalRepMsgType [1].\n\n------\n[1] https://github.com/postgres/postgres/blob/eeb4eeea2c525c51767ffeafda0070b946f26ae8/src/include/replication/logicalproto.h#L57C31-L57C31\n\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Mon, 7 Aug 2023 18:41:52 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-07, Peter Smith wrote:\n\n> I guess, your patch would not be much different; you can still have\n> all the nice names and assign the appropriate values to the enum\n> values same as now, but using an enum you might also gain\n> type-checking in the code and also get warnings for the \"switch\"\n> statements if there are any cases accidentally omitted.\n\nHmm, I think omitting a 'default' clause (which is needed when you want\nwarnings for missing clauses) in a switch that handles protocol traffic\nis not great, because the switch would misbehave when the network\ncounterpart sends a broken message. 
I'm not sure we want to do that.\nIt could become a serious security problem if confronted with a\nmalicious libpq.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 7 Aug 2023 11:10:44 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, 7 Aug 2023 at 03:10, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Aug-07, Peter Smith wrote:\n>\n> > I guess, your patch would not be much different; you can still have\n> > all the nice names and assign the appropriate values to the enum\n> > values same as now, but using an enum you might also gain\n> > type-checking in the code and also get warnings for the \"switch\"\n> > statements if there are any cases accidentally omitted.\n>\n> Hmm, I think omitting a 'default' clause (which is needed when you want\n> warnings for missing clauses) in a switch that handles protocol traffic\n> is not great, because the switch would misbehave when the network\n> counterpart sends a broken message. 
I'm not sure we want to do that.\n> It could become a serious security problem if confronted with a\n> malicious libpq.\n>\n>\nAny other changes required ?\n\nDave\n\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>\n
\n", "msg_date": "Mon, 7 Aug 2023 11:19:25 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, Aug 7, 2023 at 1:19 PM Dave Cramer <davecramer@gmail.com> wrote:\n> Any other changes required ?\n\nIMHO, the correspondence between the names in the patch and the\ntraditional names in the documentation could be stronger. For example,\nthe documentation mentions EmptyQueryResponse and\nNegotiateProtocolVersion, but in this patch those become\nPQMSG_RESP_NEGOTIATE_PROTOCOL and PQMSG_RESP_EMPTY_QUERY. 
I think we\ncould consider directly using the names from the documentation, right\ndown to capitalization, perhaps with a prefix, but I'm also totally\nfine with this use of uppercase letters and underscores. But why not\ndo a strict translation, like EmptyQueryResponse ->\nPQMSG_EMPTY_QUERY_RESPONSE, NegotiateProtocolVersion ->\nPQMSG_NEGOTIATE_PROTOCOL_VERSION? To me at least, the current patch is\ninventing new and slightly different names for things that already\nhave names...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 13:36:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> IMHO, the correspondence between the names in the patch and the\n> traditional names in the documentation could be stronger. For example,\n> the documentation mentions EmptyQueryResponse and\n> NegotiateProtocolVersion, but in this patch those become\n> PQMSG_RESP_NEGOTIATE_PROTOCOL and PQMSG_RESP_EMPTY_QUERY. I think we\n> could consider directly using the names from the documentation, right\n> down to capitalization, perhaps with a prefix, but I'm also totally\n> fine with this use of uppercase letters and underscores. But why not\n> do a strict translation, like EmptyQueryResponse ->\n> PQMSG_EMPTY_QUERY_RESPONSE, NegotiateProtocolVersion ->\n> PQMSG_NEGOTIATE_PROTOCOL_VERSION? To me at least, the current patch is\n> inventing new and slightly different names for things that already\n> have names...\n\n+1. For ease of greppability, maybe even PQMSG_EmptyQueryResponse\nand so on? Then one grep would find both uses of the constants and\ncode/docs references. 
Not sure if the prefix should be all-caps or\nnot if we go this way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Aug 2023 14:24:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, Aug 07, 2023 at 02:24:58PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> IMHO, the correspondence between the names in the patch and the\n>> traditional names in the documentation could be stronger. For example,\n>> the documentation mentions EmptyQueryResponse and\n>> NegotiateProtocolVersion, but in this patch those become\n>> PQMSG_RESP_NEGOTIATE_PROTOCOL and PQMSG_RESP_EMPTY_QUERY. I think we\n>> could consider directly using the names from the documentation, right\n>> down to capitalization, perhaps with a prefix, but I'm also totally\n>> fine with this use of uppercase letters and underscores. But why not\n>> do a strict translation, like EmptyQueryResponse ->\n>> PQMSG_EMPTY_QUERY_RESPONSE, NegotiateProtocolVersion ->\n>> PQMSG_NEGOTIATE_PROTOCOL_VERSION? To me at least, the current patch is\n>> inventing new and slightly different names for things that already\n>> have names...\n> \n> +1. For ease of greppability, maybe even PQMSG_EmptyQueryResponse\n> and so on? Then one grep would find both uses of the constants and\n> code/docs references. Not sure if the prefix should be all-caps or\n> not if we go this way.\n\n+1 for Tom's suggestion. 
I have no strong opinion about the prefix, but I\ndo think easing greppability is worthwhile.\n\nAs mentioned earlier [0], I think we should also consider putting the\ndefinitions in pqcomm.h.\n\n[0] https://postgr.es/m/20230803185356.GA1144430%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 11:38:49 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, Aug 7, 2023 at 2:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1. For ease of greppability, maybe even PQMSG_EmptyQueryResponse\n> and so on? Then one grep would find both uses of the constants and\n> code/docs references. Not sure if the prefix should be all-caps or\n> not if we go this way.\n\nPqMsgEmptyQueryResponse or something like that seems better, if we\nwant to keep the current capitalization. I'm not a huge fan of the way\nwe vary our capitalization conventions so much all over the code base,\nbut I think we would at least do well to keep it consistent from one\nend of a certain identifier to the other.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 14:59:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Aug 7, 2023 at 2:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > +1. For ease of greppability, maybe even PQMSG_EmptyQueryResponse\n> > and so on? Then one grep would find both uses of the constants and\n> > code/docs references. Not sure if the prefix should be all-caps or\n> > not if we go this way.\n>\n> PqMsgEmptyQueryResponse or something like that seems better, if we\n> want to keep the current capitalization. 
I'm not a huge fan of the way\n> we vary our capitalization conventions so much all over the code base,\n> but I think we would at least do well to keep it consistent from one\n> end of a certain identifier to the other.\n>\n\nI don't have a strong preference, but before I make the changes I'd like to\nget consensus.\n\nCan we vote or whatever it takes to decide on a naming pattern that is\nacceptable ?\n\nDave\n", "msg_date": "Mon, 7 Aug 2023 14:00:25 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>> PqMsgEmptyQueryResponse or something like that seems better, if we\n>> want to keep the current capitalization. 
I'm not a huge fan of the way\n>> we vary our capitalization conventions so much all over the code base,\n>> but I think we would at least do well to keep it consistent from one\n>> end of a certain identifier to the other.\n\n> I don't have a strong preference, but before I make the changes I'd like to\n> get consensus.\n> Can we vote or whatever it takes to decide on a naming pattern that is\n> acceptable ?\n\nI'm good with Robert's proposal above.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Aug 2023 16:02:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:\n> Dave Cramer <davecramer@gmail.com> writes:\n>> On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> PqMsgEmptyQueryResponse or something like that seems better, if we\n>>> want to keep the current capitalization. I'm not a huge fan of the way\n>>> we vary our capitalization conventions so much all over the code base,\n>>> but I think we would at least do well to keep it consistent from one\n>>> end of a certain identifier to the other.\n> \n>> I don't have a strong preference, but before I make the changes I'd like to\n>> get consensus.\n>> Can we vote or whatever it takes to decide on a naming pattern that is\n>> acceptable ?\n> \n> I'm good with Robert's proposal above.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 14:47:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "+#ifndef _PROTOCOL_H\n+#define _PROTOCOL_H\n+\n+#define PQMSG_REQ_BIND 'B'\n+#define PQMSG_REQ_CLOSE 'C'\n+#define PQMSG_REQ_DESCRIBE 'D'\n+#define PQMSG_REQ_EXECUTE 'E'\n+#define PQMSG_REQ_FUNCTION_CALL 'F'\n+#define PQMSG_REQ_FLUSH_DATA 'H'\n+#define 
PQMSG_REQ_BACKEND_KEY_DATA 'K'\n+#define PQMSG_REQ_PARSE 'P'\n+#define PQMSG_REQ_AUTHENTICATION 'R'\n+#define PQMSG_REQ_SYNC_DATA 'S'\n+#define PQMSG_REQ_SIMPLE_QUERY 'Q'\n+#define PQMSG_REQ_TERMINATE 'X'\n+#define PQMSG_REQ_COPY_FAIL 'f'\n+#define PQMSG_REQ_COPY_DONE 'c'\n+#define PQMSG_REQ_COPY_DATA 'd'\n+#define PQMSG_REQ_COPY_PROGRESS 'p'\n+#define PQMSG_REQ_PREPARED 'S'\n+#define PQMSG_REQ_PORTAL 'P'\n+\n+\n+/*\n+Responses\n+*/\n+#define PQMSG_RESP_PARSE_COMPLETE '1'\n+#define PQMSG_RESP_BIND_COMPLETE '2'\n+#define PQMSG_RESP_CLOSE_COMPLETE '3'\n+#define PQMSG_RESP_NOTIFY 'A'\n+#define PQMSG_RESP_COMMAND_COMPLETE 'C'\n+#define PQMSG_RESP_DATA_ROW 'D'\n+#define PQMSG_RESP_ERROR 'E'\n+#define PQMSG_RESP_COPY_IN 'G'\n+#define PQMSG_RESP_COPY_OUT 'H'\n+#define PQMSG_RESP_EMPTY_QUERY 'I'\n+#define PQMSG_RESP_NOTICE 'N'\n+#define PQMSG_RESP_PARALLEL_PROGRESS 'P'\n+#define PQMSG_RESP_FUNCTION_CALL 'V'\n+#define PQMSG_RESP_PARAMETER_STATUS 'S'\n+#define PQMSG_RESP_ROW_DESCRIPTION 'T'\n+#define PQMSG_RESP_COPY_BOTH 'W'\n+#define PQMSG_RESP_READY_FOR_QUERY 'Z'\n+#define PQMSG_RESP_NO_DATA 'n'\n+#define PQMSG_RESP_PASSWORD 'p'\n+#define PQMSG_RESP_PORTAL_SUSPENDED 's'\n+#define PQMSG_RESP_PARAMETER_DESCRIPTION 't'\n+#define PQMSG_RESP_NEGOTIATE_PROTOCOL 'v'\n+#endif\n\nWas ordering-by-value intended here? If yes, then FYI both of those\ngroups of #defines are very nearly, but not quite, in that order.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 8 Aug 2023 08:48:38 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:\n>> Dave Cramer <davecramer@gmail.com> writes:\n>>> On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>>>> PqMsgEmptyQueryResponse or something like that seems better, if we\n>>>> want to keep the current capitalization. 
I'm not a huge fan of the way\n>>>> we vary our capitalization conventions so much all over the code base,\n>>>> but I think we would at least do well to keep it consistent from one\n>>>> end of a certain identifier to the other.\n>> \n>>> I don't have a strong preference, but before I make the changes I'd like to\n>>> get consensus.\n>>> Can we vote or whatever it takes to decide on a naming pattern that is\n>>> acceptable ?\n>> \n>> I'm good with Robert's proposal above.\n> \n> +1\n\n+1.\n\nAlso we need to decide what to do with them:\n\n> #define PQMSG_REQ_PREPARED 'S'\n> #define PQMSG_REQ_PORTAL 'P'\n\nIf we go \"PqMsgEmptyQueryResponse\", probably we should go something\nlike for these?\n\n#define PqMsgReqPrepared 'S'\n#define PqMsgReqPortal 'P'\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 08 Aug 2023 07:49:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Mon, 7 Aug 2023 at 16:50, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:\n> >> Dave Cramer <davecramer@gmail.com> writes:\n> >>> On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >>>> PqMsgEmptyQueryResponse or something like that seems better, if we\n> >>>> want to keep the current capitalization. 
I'm not a huge fan of the way\n> >>>> we vary our capitalization conventions so much all over the code base,\n> >>>> but I think we would at least do well to keep it consistent from one\n> >>>> end of a certain identifier to the other.\n> >>\n> >>> I don't have a strong preference, but before I make the changes I'd\n> like to\n> >>> get consensus.\n> >>> Can we vote or whatever it takes to decide on a naming pattern that is\n> >>> acceptable ?\n> >>\n> >> I'm good with Robert's proposal above.\n> >\n> > +1\n>\n> +1.\n>\n> Also we need to decide what to do with them:\n>\n> > #define PQMSG_REQ_PREPARED 'S'\n> > #define PQMSG_REQ_PORTAL 'P'\n>\n> If we go \"PqMsgEmptyQueryResponse\", probably we should go something\n> like for these?\n>\n> #define PqMsgReqPrepared 'S'\n> #define PqMsgReqPortal 'P'\n>\n\nI went with PqMsgPortalSubCommand and PqMsgPreparedSubCommand\n\nSee attached patch\n\nDave", "msg_date": "Mon, 7 Aug 2023 17:55:49 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Mon, 7 Aug 2023 at 16:50, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> > On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:\n>> >> Dave Cramer <davecramer@gmail.com> writes:\n>> >>> On Mon, 7 Aug 2023 at 12:59, Robert Haas <robertmhaas@gmail.com>\n>> wrote:\n>> >>>> PqMsgEmptyQueryResponse or something like that seems better, if we\n>> >>>> want to keep the current capitalization. 
I'm not a huge fan of the way\n>> >>>> we vary our capitalization conventions so much all over the code base,\n>> >>>> but I think we would at least do well to keep it consistent from one\n>> >>>> end of a certain identifier to the other.\n>> >>\n>> >>> I don't have a strong preference, but before I make the changes I'd\n>> like to\n>> >>> get consensus.\n>> >>> Can we vote or whatever it takes to decide on a naming pattern that is\n>> >>> acceptable ?\n>> >>\n>> >> I'm good with Robert's proposal above.\n>> >\n>> > +1\n>>\n>> +1.\n>>\n>> Also we need to decide what to do with them:\n>>\n>> > #define PQMSG_REQ_PREPARED 'S'\n>> > #define PQMSG_REQ_PORTAL 'P'\n>>\n>> If we go \"PqMsgEmptyQueryResponse\", probably we should go something\n>> like for these?\n>>\n>> #define PqMsgReqPrepared 'S'\n>> #define PqMsgReqPortal 'P'\n>>\n> \n> I went with PqMsgPortalSubCommand and PqMsgPreparedSubCommand\n> \n> See attached patch\n\nLooks good to me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 08 Aug 2023 09:37:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-07, Nathan Bossart wrote:\n\n> On Mon, Aug 07, 2023 at 04:02:08PM -0400, Tom Lane wrote:\n> > Dave Cramer <davecramer@gmail.com> writes:\n\n> >> Can we vote or whatever it takes to decide on a naming pattern that is\n> >> acceptable ?\n> > \n> > I'm good with Robert's proposal above.\n> \n> +1\n\nWFM as well.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n", "msg_date": "Tue, 8 Aug 2023 08:27:40 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for 
protocol characters" }, { "msg_contents": "1. As was discussed, these definitions should go into \nsrc/include/libpq/pqcomm.h, not a new file.\n\n2. I would prefer an underscore after PgMsg, like PqMsg_DescribeRequest, \nso it's easier to visually locate the start of the actual message name.\n\n3. IMO, the names of the protocol messages in protocol.sgml are \ncanonical. Your patch appends \"Request\" and \"Response\" in cases where \nthat is not part of the actual name. Also, some messages are documented \nto go both ways, so this separation doesn't make sense strictly \nspeaking. Please use the names as in protocol.sgml without augmenting them.\n\n\n\n", "msg_date": "Wed, 9 Aug 2023 17:19:20 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> 1. As was discussed, these definitions should go into\n> src/include/libpq/pqcomm.h, not a new file.\n>\n\nI'm ambivalent, this is very easy to do.\n\n>\n> 2. I would prefer an underscore after PgMsg, like PqMsg_DescribeRequest,\n> so it's easier to visually locate the start of the actual message name.\n>\n> 3. IMO, the names of the protocol messages in protocol.sgml are\n> canonical. Your patch appends \"Request\" and \"Response\" in cases where\n> that is not part of the actual name. Also, some messages are documented\n> to go both ways, so this separation doesn't make sense strictly\n> speaking. Please use the names as in protocol.sgml without augmenting\n> them.\n>\n>\nI've changed this a number of times. I do not mind changing it again, but\ncan we reach a consensus ?\n\nDave\n", "msg_date": "Wed, 9 Aug 2023 10:27:37 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut <peter@eisentraut.org> wrote:\n>> 3. IMO, the names of the protocol messages in protocol.sgml are\n>> canonical.  Your patch appends \"Request\" and \"Response\" in cases where\n>> that is not part of the actual name.  Also, some messages are documented\n>> to go both ways, so this separation doesn't make sense strictly\n>> speaking.  Please use the names as in protocol.sgml without augmenting\n>> them.\n\n> I've changed this a number of times. 
I've got mixed feelings about whether that prefix\nshould have an underscore, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Aug 2023 12:34:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Wed, 9 Aug 2023 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n> >> 3. IMO, the names of the protocol messages in protocol.sgml are\n> >> canonical. Your patch appends \"Request\" and \"Response\" in cases where\n> >> that is not part of the actual name. Also, some messages are documented\n> >> to go both ways, so this separation doesn't make sense strictly\n> >> speaking. Please use the names as in protocol.sgml without augmenting\n> >> them.\n>\n> > I've changed this a number of times. I do not mind changing it again, but\n> > can we reach a consensus ?\n>\n> I agree with Peter: let's use the names in the protocol document\n> with a single prefix. I've got mixed feelings about whether that prefix\n> should have an underscore, though.\n>\n\nWell, we're getting closer :)\n\nDave\n\nOn Wed, 9 Aug 2023 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> On Wed, 9 Aug 2023 at 09:19, Peter Eisentraut <peter@eisentraut.org> wrote:\n>> 3. IMO, the names of the protocol messages in protocol.sgml are\n>> canonical.  Your patch appends \"Request\" and \"Response\" in cases where\n>> that is not part of the actual name.  Also, some messages are documented\n>> to go both ways, so this separation doesn't make sense strictly\n>> speaking.  Please use the names as in protocol.sgml without augmenting\n>> them.\n\n> I've changed this a number of times. 
I do not mind changing it again, but\n> > can we reach a consensus ?\n>\n> I agree with Peter: let's use the names in the protocol document\n> with a single prefix.  I've got mixed feelings about whether that prefix\n> should have an underscore, though.\n>\n\nWell, we're getting closer :)\n\nDave\n", "msg_date": "Wed, 9 Aug 2023 10:44:42 -0600", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Wed, Aug 09, 2023 at 10:44:42AM -0600, Dave Cramer wrote:\n> On Wed, 9 Aug 2023 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I agree with Peter: let's use the names in the protocol document\n>> with a single prefix. I've got mixed feelings about whether that prefix\n>> should have an underscore, though.\n> \n> Well, we're getting closer :)\n\nI'm +0.5 for the underscore.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Aug 2023 09:51:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-09, Nathan Bossart wrote:\n\n> On Wed, Aug 09, 2023 at 10:44:42AM -0600, Dave Cramer wrote:\n> > On Wed, 9 Aug 2023 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I agree with Peter: let's use the names in the protocol document\n> >> with a single prefix. I've got mixed feelings about whether that prefix\n> >> should have an underscore, though.\n> > \n> > Well, we're getting closer :)\n> \n> I'm +0.5 for the underscore.\n\nWe use CamelCase_With_UnderScores in other places (PgStat_Counter,\nAtEOXact_PgStat_Database). 
It's not pretty, and at times it's not\ncomfortable to type, but I think it will be less ugly here than in those\nother places (particularly because there'll be a single underscore), and\nI also agree that readability is better with the underscore than\nwithout.\n\nSo, I'm also +0.5 on having them, much as it displeases Robert.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n", "msg_date": "Wed, 9 Aug 2023 19:08:05 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "I tried to address all the feedback in v5 of the patch, which is attached.\nI limited the patch to only the characters that have names in the \"Message\nFormats\" section of protocol.sgml instead of trying to invent names for\nunnamed characters.\n\nI'm aware of one inconsistency. While I grouped all the authentication\nrequest messages ('R') into PqMsg_AuthenticationRequest, I added separate\nmacros for all the different meanings of 'p', i.e., GSSResponse,\nPasswordMessage, SASLInitialResponse, and SASLResponse. I wanted to list\neverything in protocol.sgml (even the duplicates) to ease greppability, but\nthe code is structured such that adding macros for all the different\nauthentication messages would be kind of pointless. 
Plus, there is a\nseparate set of authentication request codes just below the added macros.\nSo, this is where I landed...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Aug 2023 21:07:27 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> I tried to address all the feedback in v5 of the patch, which is attached.\n> I limited the patch to only the characters that have names in the \"Message\n> Formats\" section of protocol.sgml instead of trying to invent names for\n> unnamed characters.\n> \n> I'm aware of one inconsistency. While I grouped all the authentication\n> request messages ('R') into PqMsg_AuthenticationRequest, I added separate\n> macros for all the different meanings of 'p', i.e., GSSResponse,\n> PasswordMessage, SASLInitialResponse, and SASLResponse. I wanted to list\n> everything in protocol.sgml (even the duplicates) to ease greppability, but\n> the code is structured such that adding macros for all the different\n> authentication messages would be kind of pointless. Plus, there is a\n> separate set of authentication request codes just below the added macros.\n> So, this is where I landed...\n\nIs it possible to put the new define stuff into protocol.h then let\npqcomm.h include protocol.h? 
This makes Pgpool-II and other middle\nware/drivers written in C easier to use the defines so that they only\ninclude protocol.h to use the defines.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 15 Aug 2023 14:46:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Tue, Aug 15, 2023 at 02:46:24PM +0900, Tatsuo Ishii wrote:\n> Is it possible to put the new define stuff into protocol.h then let\n> pqcomm.h include protocol.h? This makes Pgpool-II and other middle\n> ware/drivers written in C easier to use the defines so that they only\n> include protocol.h to use the defines.\n\nIt is possible, of course, but are there any reasons such programs couldn't\ninclude pqcomm.h? It looks relatively inexpensive to me. That being said,\nI'm fine with the approach you describe if the folks in this thread agree.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Aug 2023 08:58:02 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Tue, Aug 15, 2023 at 02:46:24PM +0900, Tatsuo Ishii wrote:\n>> Is it possible to put the new define stuff into protocol.h then let\n>> pqcomm.h include protocol.h? This makes Pgpool-II and other middle\n>> ware/drivers written in C easier to use the defines so that they only\n>> include protocol.h to use the defines.\n> \n> It is possible, of course, but are there any reasons such programs couldn't\n> include pqcomm.h? It looks relatively inexpensive to me. That being said,\n> I'm fine with the approach you describe if the folks in this thread agree.\n\nCurrently pqcomm.h needs c.h which is not problem for Pgpool-II. 
But\nwhat about other middleware?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 16 Aug 2023 06:25:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Wed, Aug 16, 2023 at 06:25:09AM +0900, Tatsuo Ishii wrote:\n> Currently pqcomm.h needs c.h which is not problem for Pgpool-II. But\n> what about other middleware?\n\nWhy do you need to include directly c.h? There are definitions in\nthere that are not intended to be exposed.\n--\nMichael", "msg_date": "Wed, 16 Aug 2023 06:37:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On 2023-Aug-16, Michael Paquier wrote:\n\n> On Wed, Aug 16, 2023 at 06:25:09AM +0900, Tatsuo Ishii wrote:\n> > Currently pqcomm.h needs c.h which is not problem for Pgpool-II. But\n> > what about other middleware?\n> \n> Why do you need to include directly c.h? There are definitions in\n> there that are not intended to be exposed.\n\nWhat this argument says is that these new defines should be in a\nseparate file, not in pqcomm.h. IMO that makes sense, precisely because\nthese defines should be usable by third parties.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 15 Aug 2023 23:40:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "> On Wed, Aug 16, 2023 at 06:25:09AM +0900, Tatsuo Ishii wrote:\n>> Currently pqcomm.h needs c.h which is not problem for Pgpool-II. But\n>> what about other middleware?\n> \n> Why do you need to include directly c.h? 
There are definitions in\n> there that are not intended to be exposed.\n\npqcomm.h has this:\n\n#define UNIXSOCK_PATH(path, port, sockdir) \\\n\t (AssertMacro(sockdir), \\\n\t\tAssertMacro(*(sockdir) != '\\0'), \\\n\t\tsnprintf(path, sizeof(path), \"%s/.s.PGSQL.%d\", \\\n\t\t\t\t (sockdir), (port)))\n\nAssertMacro is defined in c.h.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 16 Aug 2023 06:44:20 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Tue, Aug 15, 2023 at 11:40:07PM +0200, Alvaro Herrera wrote:\n> On 2023-Aug-16, Michael Paquier wrote:\n> \n>> On Wed, Aug 16, 2023 at 06:25:09AM +0900, Tatsuo Ishii wrote:\n>> > Currently pqcomm.h needs c.h which is not problem for Pgpool-II. But\n>> > what about other middleware?\n>> \n>> Why do you need to include directly c.h? There are definitions in\n>> there that are not intended to be exposed.\n> \n> What this argument says is that these new defines should be in a\n> separate file, not in pqcomm.h. IMO that makes sense, precisely because\n> these defines should be usable by third parties.\n\nI moved the definitions out to a separate file in v6.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Aug 2023 12:29:56 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Wed, Aug 16, 2023 at 12:29:56PM -0700, Nathan Bossart wrote:\n> I moved the definitions out to a separate file in v6.\n\nLooks sensible seen from here.\n\nThis patch is missing the installation of protocol.h in\nsrc/tools/msvc/Install.pm for MSVC. For pqcomm.h, we are doing that:\nlcopy('src/include/libpq/pqcomm.h', $target . 
'/include/internal/libpq/')\n || croak 'Could not copy pqcomm.h';\n\nSo adding two similar lines for protocol.h should be enough (I assume,\ndid not test).\n\nIn fe-exec.c, we still have a few things for the type of objects to\nwork on:\n- 'S' for statement.\n- 'P' for portal.\nShould these be added to protocol.h? They are part of the extended\nprotocol.\n\nThe comment at the top of PQsendTypedCommand() mentions 'C' and 'D',\nbut perhaps these should be updated to the object names instead?\n\npqFunctionCall3(), for PQfn(), has a few more hardcoded characters for\nits status codes. I'm OK to do things incrementally so it's fine by\nme to not add them now, just noticing on the way what could be added\nto this new header.\n--\nMichael", "msg_date": "Thu, 17 Aug 2023 09:31:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 17, 2023 at 09:31:55AM +0900, Michael Paquier wrote:\n> Looks sensible seen from here.\n\nThanks for taking a look.\n\n> This patch is missing the installation of protocol.h in\n> src/tools/msvc/Install.pm for MSVC. For pqcomm.h, we are doing that:\n> lcopy('src/include/libpq/pqcomm.h', $target . '/include/internal/libpq/')\n> || croak 'Could not copy pqcomm.h';\n> \n> So adding two similar lines for protocol.h should be enough (I assume,\n> did not test).\n\nI added those lines in v7.\n\n> In fe-exec.c, we still have a few things for the type of objects to\n> work on:\n> - 'S' for statement.\n> - 'P' for portal.\n> Should these be added to protocol.h? They are part of the extended\n> protocol.\n\nIMHO they should be added, but I've intentionally restricted this first\npatch to only codes with existing names in protocol.sgml. 
I figured we\ncould work on naming other things in a follow-up discussion.\n\n> The comment at the top of PQsendTypedCommand() mentions 'C' and 'D',\n> but perhaps these should be updated to the object names instead?\n\nDone.\n\n> pqFunctionCall3(), for PQfn(), has a few more hardcoded characters for\n> its status codes. I'm OK to do things incrementally so it's fine by\n> me to not add them now, just noticing on the way what could be added\n> to this new header.\n\nCool, thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 17 Aug 2023 09:13:34 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 17, 2023 at 12:13 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Thu, Aug 17, 2023 at 09:31:55AM +0900, Michael Paquier wrote:\n> > Looks sensible seen from here.\n>\n> Thanks for taking a look.\n\nI think this is going to be a major improvement in terms of readability!\n\nI'm pretty happy with this version, too. 
We may be running out of\nthings to argue about -- and no, I don't want to argue about\nunderscores more!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Aug 2023 12:22:24 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 17, 2023 at 9:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Aug 17, 2023 at 12:13 PM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > On Thu, Aug 17, 2023 at 09:31:55AM +0900, Michael Paquier wrote:\n> > > Looks sensible seen from here.\n> >\n> > Thanks for taking a look.\n>\n> I think this is going to be a major improvement in terms of readability!\n\n\nThat was the primary goal\n\n>\nDave\n\n>\n> --\nDave Cramer", "msg_date": "Thu, 17 Aug 2023 08:34:36 -0800", "msg_from": "Dave Cramer <davecramer@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 17, 2023 at 12:22:24PM -0400, Robert Haas wrote:\n> I think this is going to be a major improvement in terms of readability!\n> \n> I'm pretty happy with this version, too. We may be running out of\n> things to argue about -- and no, I don't want to argue about\n> underscores more!\n\nAwesome. 
If we are indeed out of things to argue about, I'll plan on\ncommitting this sometime next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Aug 2023 09:52:03 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" }, { "msg_contents": "On Thu, Aug 17, 2023 at 09:52:03AM -0700, Nathan Bossart wrote:\n> On Thu, Aug 17, 2023 at 12:22:24PM -0400, Robert Haas wrote:\n>> I think this is going to be a major improvement in terms of readability!\n>> \n>> I'm pretty happy with this version, too. We may be running out of\n>> things to argue about -- and no, I don't want to argue about\n>> underscores more!\n> \n> Awesome. If we are indeed out of things to argue about, I'll plan on\n> committing this sometime next week.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 22 Aug 2023 19:24:11 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Using defines for protocol characters" } ]
[ { "msg_contents": "Hi,\n\nI would like to propose to add an option to pgbench so that benchmark\ncan quit immediately when any client is aborted. Currently, when a\nclient is aborted due to some error, for example, network trouble, \nother clients continue their run until a certain number of transactions\nspecified -t is reached or the time specified by -T is expired. At the\nend, the results are printed, but they are not useful, as the message\n\"Run was aborted; the above results are incomplete\" shows.\n\nFor precise benchmark purpose, we would not want to wait to get such\nincomplete results, rather we would like to know some trouble happened\nto allow a quick retry. Therefore, it would be nice to add an option to\nmake pgbench exit instead of continuing run in other clients when any\nclient is aborted. I think adding the option is better than a whole\nbehavioural change because some users that use pgbench just in order\nto stress on backends for testing purpose rather than benchmark might\nnot want to stop pgbench even if a client is aborted. \n\nAttached is the patch to add the option --exit-on-abort.\nIf this option is specified, when any client is aborted, pgbench\nimmediately quits by calling exit(2).\n\nWhat do you think about it?\n\nRegards,\nYugo Nagata\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 4 Aug 2023 13:03:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "> Hi,\n> \n> I would like to propose to add an option to pgbench so that benchmark\n> can quit immediately when any client is aborted. Currently, when a\n> client is aborted due to some error, for example, network trouble, \n> other clients continue their run until a certain number of transactions\n> specified -t is reached or the time specified by -T is expired. 
At the\n> end, the results are printed, but they are not useful, as the message\n> \"Run was aborted; the above results are incomplete\" shows.\n\nSounds like a good idea. It's a waste of resources waiting for\nunusable benchmark results until t/T expired. If we gaze at the\nscreen, then it's easy to cancel the pgbench run. But I frequently let\npgbench run without sitting in front of the screen especially when t/T\nis large (it's recommended to run pgbench with large enough t/T\nto get usable results).\n\n> For precise benchmark purpose, we would not want to wait to get such\n> incomplete results, rather we would like to know some trouble happened\n> to allow a quick retry. Therefore, it would be nice to add an option to\n> make pgbench exit instead of continuing run in other clients when any\n> client is aborted. I think adding the optional is better than whole\n> behavioural change because some users that use pgbench just in order\n> to stress on backends for testing purpose rather than benchmark might\n> not want to stop pgbench even a client is aborted. \n> \n> Attached is the patch to add the option --exit-on-abort.\n> If this option is specified, when any client is aborted, pgbench\n> immediately quit by calling exit(2).\n> \n> What do you think about it?\n\nI think aborting pgbench by calling exit(2) is enough. 
It's not worth\nthe trouble to add more code for this purpose.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 05 Aug 2023 12:16:11 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Sat, 05 Aug 2023 12:16:11 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Hi,\n> > \n> > I would like to propose to add an option to pgbench so that benchmark\n> > can quit immediately when any client is aborted. 
I think adding the optional is better than whole\n> > behavioural change because some users that use pgbench just in order\n> > to stress on backends for testing purpose rather than benchmark might\n> > not want to stop pgbench even a client is aborted. \n> > \n> > Attached is the patch to add the option --exit-on-abort.\n> > If this option is specified, when any client is aborted, pgbench\n> > immediately quit by calling exit(2).\n> > \n> > What do you think about it?\n> \n> I think aborting pgbench by calling exit(2) is enough. It's not worth\n> the trouble to add more codes for this purpose.\n\nIn order to stop pgbench more gracefully, it might be better to make\neach thread exit at more proper timing after some cleaning-up like\nconnection close. However, pgbench code doesn't provide such functions\nfor inter-threads communication. If we would try to make this, both\npthread and Windows versions would be needed. I don't think it is necessary\nto make such effort for --exit-on-abort option, as you said.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 7 Aug 2023 11:02:48 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Mon, 7 Aug 2023 11:02:48 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Sat, 05 Aug 2023 12:16:11 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n> > > Hi,\n> > > \n> > > I would like to propose to add an option to pgbench so that benchmark\n> > > can quit immediately when any client is aborted. 
Currently, when a\n> > > client is aborted due to some error, for example, network trouble, \n> > > other clients continue their run until a certain number of transactions\n> > > specified -t is reached or the time specified by -T is expired. At the\n> > > end, the results are printed, but they are not useful, as the message\n> > > \"Run was aborted; the above results are incomplete\" shows.\n> > \n> > Sounds like a good idea. It's a waste of resources waiting for\n> > unusable benchmark results until t/T expired. If we graze on the\n> > screen, then it's easy to cancel the pgbench run. But I frequently let\n> > pgbench run without sitting in front of the screen especially when t/T\n> > is large (it's recommended that running pgbench with large enough t/T\n> > to get usable results).\n> \n> Thank you for your agreement.\n> \n> > > For precise benchmark purpose, we would not want to wait to get such\n> > > incomplete results, rather we would like to know some trouble happened\n> > > to allow a quick retry. Therefore, it would be nice to add an option to\n> > > make pgbench exit instead of continuing run in other clients when any\n> > > client is aborted. I think adding the optional is better than whole\n> > > behavioural change because some users that use pgbench just in order\n> > > to stress on backends for testing purpose rather than benchmark might\n> > > not want to stop pgbench even a client is aborted. \n> > > \n> > > Attached is the patch to add the option --exit-on-abort.\n> > > If this option is specified, when any client is aborted, pgbench\n> > > immediately quit by calling exit(2).\n\nI attached v2 patch including the documentation and some comments\nin the code.\n\nRegards,\nYugo Nagata\n\n> > > \n> > > What do you think about it?\n> > \n> > I think aborting pgbench by calling exit(2) is enough. 
It's not worth\n> > the trouble to add more codes for this purpose.\n> \n> In order to stop pgbench more gracefully, it might be better to make\n> each thread exit at more proper timing after some cleaning-up like\n> connection close. However, pgbench code doesn't provide such functions\n> for inter-threads communication. If we would try to make this, both\n> pthread and Windows versions would be needed. I don't think it is necessary\n> to make such effort for --exit-on-abort option, as you said.\n> \n> Regards,\n> Yugo Nagata\n> \n> > \n> > Best reagards,\n> > --\n> > Tatsuo Ishii\n> > SRA OSS LLC\n> > English: http://www.sraoss.co.jp/index_en/\n> > Japanese:http://www.sraoss.co.jp\n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 7 Aug 2023 11:26:00 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "\nHello Yugo-san,\n\n> I attached v2 patch including the documentation and some comments\n> in the code.\n\nI've looked at this patch.\n\nI'm unclear whether it does what it says: \"exit immediately on abort\", I \nwould expect a cold call to \"exit\" (with a clear message obviously) when \nthe abort occurs.\n\nCurrently it skips to \"done\" which starts by closing this particular \nthread client connections, then it will call \"exit\" later, so it is not \n\"immediate\".\n\nI do not think that this cleanup is necessary, because anyway all other \nthreads will be brutally killed by the exit called by the aborted thread, \nso why bothering to disconnect only some connections?\n\n /* If any client is aborted, exit immediately. */\n if (state[i].state != CSTATE_FINISHED)\n\nFor this comment, I would prefer \"if ( ... 
== CSTATE_ABORTED)\" rather than \nimplying that not finished means aborted, and if you follow my previous \nremark then this code can be removed.\n\nTypo: \"going to exist\" -> \"going to exit\". Note that it is not \"the whole \nthread\" but \"the program\" which is exiting.\n\nThere is no test.\n\n-- \nFabien.\n\n\n", "msg_date": "Mon, 7 Aug 2023 12:17:38 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "Hello Fabien,\n\nOn Mon, 7 Aug 2023 12:17:38 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> > I attached v2 patch including the documentation and some comments\n> > in the code.\n> \n> I've looked at this patch.\n\nThank you for your review!\n\n> \n> I'm unclear whether it does what it says: \"exit immediately on abort\", I \n> would expect a cold call to \"exit\" (with a clear message obviously) when \n> the abort occurs.\n> \n> Currently it skips to \"done\" which starts by closing this particular \n> thread client connections, then it will call \"exit\" later, so it is not \n> \"immediate\".\n\nThere are cases where \"goto done\" is used where some error like\n\"invalid socket: ...\" happens. I would like to make pgbench exit in\nsuch cases, too, so I chose to handle errors below the \"done:\" label.\nHowever, we can change it to call \"exit\" instead of \"goto done\" at each\nplace. 
== CSTATE_ABORTED)\" rather that \n> implying that not finished means aborted, and if you follow my previous \n> remark then this code can be removed.\n\nOk. If we handle errors like \"invalid socket:\" (mentioned above) after\nskipping to \"done\", we should set the status to CSTATE_ABORTED before the jump. \nOtherwise, if we choose to call \"exit\" immediately at each error instead of\nskipping to \"done\", we can remove this as you says.\n\n> Typo: \"going to exist\" -> \"going to exit\". Note that it is not \"the whole \n> thread\" but \"the program\" which is exiting.\n\nI'll fix.\n \n> There is no test.\n\nI'll add an test.\n\nRegards,\nYugo Nagata\n\n \n> -- \n> Fabien.\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 8 Aug 2023 09:47:47 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "\nHello Yugo-san,\n\n> There are cases where \"goto done\" is used where some error like\n> \"invalid socket: ...\" happens. I would like to make pgbench exit in\n> such cases, too, so I chose to handle errors below the \"done:\" label.\n> However, we can change it to call \"exit\" instead of \"goo done\" at each\n> place. 
Do you think this is better?\n\nGood point.\n\nNow I understand the \"!= FINISHED\", because indeed in these cases the done \nis reached with unfinished but not necessarily ABORTED clients, and the \ncomment was somehow misleading.\n\nOn reflection, there should be only one exit() call, thus I'd say to keep \nthe \"goto done\" as you did, but to move the checking loop *before* the \ndisconnect_all, and the overall section comment could be something like \n\"possibly abort if any client is not finished, meaning some error \noccured\", which is consistent with the \"!= FINISHED\" condition.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 9 Aug 2023 02:15:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Wed, 9 Aug 2023 02:15:01 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> > There are cases where \"goto done\" is used where some error like\n> > \"invalid socket: ...\" happens. I would like to make pgbench exit in\n> > such cases, too, so I chose to handle errors below the \"done:\" label.\n> > However, we can change it to call \"exit\" instead of \"goo done\" at each\n> > place. 
Do you think this is better?\n\nGood point.\n\nNow I understand the \"!= FINISHED\", because indeed in these cases the done \nis reached with unfinished but not necessarily ABORTED clients, and the \ncomment was somehow misleading.\n\nOn reflection, there should be only one exit() call, thus I'd say to keep \nthe \"goto done\" as you did, but to move the checking loop *before* the \ndisconnect_all, and the overall section comment could be something like \n\"possibly abort if any client is not finished, meaning some error \noccurred\", which is consistent with the \"!= FINISHED\" condition.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 9 Aug 2023 02:15:01 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Wed, 9 Aug 2023 02:15:01 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> > There are cases where \"goto done\" is used where some error like\n> > \"invalid socket: ...\" happens. I would like to make pgbench exit in\n> > such cases, too, so I chose to handle errors below the \"done:\" label.\n> > However, we can change it to call \"exit\" instead of \"goo done\" at each\n> > place. 
Do you think this is better?\n> \n> Good point.\n> \n> Now I understand the \"!= FINISHED\", because indeed in these cases the done \n> is reached with unfinished but not necessarily ABORTED clients, and the \n> comment was somehow misleading.\n> \n> On reflection, there should be only one exit() call, thus I'd say to keep \n> the \"goto done\" as you did, but to move the checking loop *before* the \n> disconnect_all, and the overall section comment could be something like \n> \"possibly abort if any client is not finished, meaning some error \n> occured\", which is consistent with the \"!= FINISHED\" condition.\n\nThank you for your suggestion.\nI'll fix as above and submit an updated patch soon.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 9 Aug 2023 10:46:38 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Wed, 9 Aug 2023 10:46:38 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Wed, 9 Aug 2023 02:15:01 +0200 (CEST)\n> Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> \n> > \n> > Hello Yugo-san,\n> > \n> > > There are cases where \"goto done\" is used where some error like\n> > > \"invalid socket: ...\" happens. I would like to make pgbench exit in\n> > > such cases, too, so I chose to handle errors below the \"done:\" label.\n> > > However, we can change it to call \"exit\" instead of \"goo done\" at each\n> > > place. 
Do you think this is better?\n> > > \n> > > Good point.\n> > > \n> > > Now I understand the \"!= FINISHED\", because indeed in these cases the done \n> > > is reached with unfinished but not necessarily ABORTED clients, and the \n> > > comment was somehow misleading.\n> > > \n> > > On reflection, there should be only one exit() call, thus I'd say to keep \n> > > the \"goto done\" as you did, but to move the checking loop *before* the \n> > > disconnect_all, and the overall section comment could be something like \n> > > \"possibly abort if any client is not finished, meaning some error \n> > > occured\", which is consistent with the \"!= FINISHED\" condition.\n> > \n> > Thank you for your suggestion.\n> > I'll fix as above and submit a updated patch soon.\n> \n> I attached the updated patch v3 including changes above, a test,\n> and fix of the typo you pointed out.\n\nI'm sorry but the test in the previous patch was incorrect. \nI attached the correct one.\n\nRegards,\nYugo Nagata\n\n> Regards,\n> Yugo Nagata\n> \n> > \n> > Regards,\n> > Yugo Nagata\n> > \n> > -- \n> > Yugo NAGATA <nagata@sraoss.co.jp>\n> > \n> > \n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 9 Aug 2023 14:07:55 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "\nHello Yugo-san,\n\n>> I attached the updated patch v3 including changes above, a test,\n>> and fix of the typo you pointed out.\n>\n> I'm sorry but the test in the previous patch was incorrect.\n> I attached the correct one.\n\nAbout pgbench exit on abort v3:\n\nPatch does not \"git apply\", but is ok with \"patch\" although there are some\nminor warnings.\n\nCompiles. Code is ok. Tests are ok.\n\nAbout Test:\n\nThe code is simple to get an error quickly but after a few transactions, \ngood. 
I'll do a minimal \"-c 2 -j 2 -t 10\" instead of \"-c 4 -j 4 -T 10\".\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 13 Aug 2023 11:27:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Sun, 13 Aug 2023 11:27:55 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> >> I attached the updated patch v3 including changes above, a test,\n> >> and fix of the typo you pointed out.\n> >\n> > I'm sorry but the test in the previous patch was incorrect.\n> > I attached the correct one.\n> \n> About pgbench exit on abort v3:\n> \n> Patch does not \"git apply\", but is ok with \"patch\" although there are some\n> minor warnings.\n\nIn my environment, the patch can be applied to the master branch without\nany warnings...\n\n> \n> Compiles. Code is ok. Tests are ok.\n> \n> About Test:\n> \n> The code is simple to get an error quickly but after a few transactions, \n> good. 
I'll do a minimal \"-c 2 -j 2 -t 10\" instead of \"-c 4 -j 4 -T 10\".\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 13 Aug 2023 11:27:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Sun, 13 Aug 2023 11:27:55 +0200 (CEST)\nFabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> \n> Hello Yugo-san,\n> \n> >> I attached the updated patch v3 including changes above, a test,\n> >> and fix of the typo you pointed out.\n> >\n> > I'm sorry but the test in the previous patch was incorrect.\n> > I attached the correct one.\n> \n> About pgbench exit on abort v3:\n> \n> Patch does not \"git apply\", but is ok with \"patch\" although there are some\n> minor warnings.\n\nIn my environment, the patch can be applied to the master branch without\nany warnings...\n\n> \n> Compiles. Code is ok. Tests are ok.\n> \n> About Test:\n> \n> The code is simple to get an error quickly but after a few transactions, \n> good. I'll do a minimal \"-c 2 -j 2 -t 10\" instead of \"-c 4 -j 4 -T 10\".\n\nI fixed the test and attached the updated patch, v4.\n\nRegards,\nYugo Nagata\n\n> \n> -- \n> Fabien.\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 14 Aug 2023 23:31:53 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "\nAbout pgbench exit on abort v4:\n\nPatch applies cleanly, compiles, local make check ok, doc looks ok.\n\nThis looks ok to me.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 15 Aug 2023 11:40:29 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "> About pgbench exit on abort v4:\n> \n> Patch applies cleanly, compiles, local make check ok, doc looks ok.\n> \n> This looks ok to me.\n\nI have tested the v4 patch with default_transaction_isolation =\n'repeatable read'.\n\npgbench --exit-on-abort -N -p 11002 -c 10 -T 30 test\npgbench (17devel, server 15.3)\nstarting vacuum...end.\ntransaction type: <builtin: simple update>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nmaximum number of tries: 1\nduration: 30 s\nnumber of transactions actually processed: 64854\nnumber of failed transactions: 4 (0.006%)\nlatency average = 4.628 ms (including failures)\ninitial connection time = 49.526 ms\ntps = 2160.827556 (without initial connection time)\n\nDespite the 4 failed transactions (could not serialize access due to\nconcurrent update) pgbench ran normally, rather than aborting the\ntest. 
Is this an expected behavior?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 17 Aug 2023 10:37:58 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "> I have tested the v4 patch with default_transaction_isolation =\n> 'repeatable read'.\n> \n> pgbench --exit-on-abort -N -p 11002 -c 10 -T 30 test\n> pgbench (17devel, server 15.3)\n> starting vacuum...end.\n> transaction type: <builtin: simple update>\n> scaling factor: 1\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> maximum number of tries: 1\n> duration: 30 s\n> number of transactions actually processed: 64854\n> number of failed transactions: 4 (0.006%)\n> latency average = 4.628 ms (including failures)\n> initial connection time = 49.526 ms\n> tps = 2160.827556 (without initial connection time)\n> \n> Depite the 4 failed transactions (could not serialize access due to\n> concurrent update) pgbench ran normally, rather than aborting the\n> test. Is this an expected behavior?\n\nI start to think this behavior is ok and consistent with previous\nbehavior of pgbench because serialization (and dealock) errors have\nbeen treated specially from other types of errors, such as accessing\nnon existing tables. However, I suggest to add more sentences to the\nexplanation of this option so that users are not confused by this.\n\n+ <varlistentry id=\"pgbench-option-exit-on-abort\">\n+ <term><option>--exit-on-abort</option></term>\n+ <listitem>\n+ <para>\n+ Exit immediately when any client is aborted due to some error. 
Without\n+ this option, even when a client is aborted, other clients could continue\n+ their run as specified by <option>-t</option> or <option>-T</option> option,\n+ and <application>pgbench</application> will print an incomplete results\n+ in this case.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n\nWhat about inserting \"Note that serialization failures or deadlock\nfailures will not abort client. See <xref\nlinkend=\"failures-and-retries\"/> for more information.\" into the end\nof this paragraph?\n\nBTW, I think:\n Exit immediately when any client is aborted due to some error. Without\n\nshould be:\n Exit immediately when any client is aborted due to some errors. Without\n\n(error vs. erros)\n\nAlso:\n+ <option>--exit-on-abort</option> is specified . Otherwise in the worst\n\nThere is an extra space between \"specified\" and \".\".\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 19 Aug 2023 19:51:56 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Sat, 19 Aug 2023 19:51:56 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > I have tested the v4 patch with default_transaction_isolation =\n> > 'repeatable read'.\n> > \n> > pgbench --exit-on-abort -N -p 11002 -c 10 -T 30 test\n> > pgbench (17devel, server 15.3)\n> > starting vacuum...end.\n> > transaction type: <builtin: simple update>\n> > scaling factor: 1\n> > query mode: simple\n> > number of clients: 10\n> > number of threads: 1\n> > maximum number of tries: 1\n> > duration: 30 s\n> > number of transactions actually processed: 64854\n> > number of failed transactions: 4 (0.006%)\n> > latency average = 4.628 ms (including failures)\n> > initial connection time = 49.526 ms\n> > tps = 2160.827556 (without initial connection time)\n> > \n> > Depite the 4 
failed transactions (could not serialize access due to\n> > concurrent update) pgbench ran normally, rather than aborting the\n> > test. Is this an expected behavior?\n\nYes. --exit-on-abort changes a behaviour when a client is **aborted**\ndue to an error, and serialization errors do not cause abort, so\nit is not affected.\n\n> I start to think this behavior is ok and consistent with previous\n> behavior of pgbench because serialization (and dealock) errors have\n> been treated specially from other types of errors, such as accessing\n> non existing tables. However, I suggest to add more sentences to the\n> explanation of this option so that users are not confused by this.\n> \n> + <varlistentry id=\"pgbench-option-exit-on-abort\">\n> + <term><option>--exit-on-abort</option></term>\n> + <listitem>\n> + <para>\n> + Exit immediately when any client is aborted due to some error. Without\n> + this option, even when a client is aborted, other clients could continue\n> + their run as specified by <option>-t</option> or <option>-T</option> option,\n> + and <application>pgbench</application> will print an incomplete results\n> + in this case.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> \n> What about inserting \"Note that serialization failures or deadlock\n> failures will not abort client. See <xref\n> linkend=\"failures-and-retries\"/> for more information.\" into the end\n> of this paragraph?\n\n--exit-on-abort is related to \"abort\" of a client instead of error or\nfailure itself, so rather I wonder a bit that mentioning serialization/deadlock\nfailures might be confusing. 
However, if users may think of such failures from\n\"abort\", it could be beneficial to add the sentences with some modification as\nbelow.\n\n \"Note that serialization failures or deadlock failures does not abort the\n client, so they are not affected by this option.\n See <xref linkend=\"failures-and-retries\"/> for more information.\"\n\n> BTW, I think:\n> Exit immediately when any client is aborted due to some error. Without\n> \n> should be:\n> Exit immediately when any client is aborted due to some errors. Without\n> \n> (error vs. erros)\n\nWell, I chose \"some\" to mean \"unknown or unspecified\", not \"an unspecified amount\nor number of\", so singular form \"error\" is used. \nInstead, should we use \"due to a error\"?\n\n> Also:\n> + <option>--exit-on-abort</option> is specified . Otherwise in the worst\n> \n> There is an extra space between \"specified\" and \".\".\n\nFixed.\n\nAlso, I fixed the place of the description in the documentation\nto alphabetical order\n\nAttached is the updated patch. \n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 24 Aug 2023 00:16:23 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": ">> I start to think this behavior is ok and consistent with previous\n>> behavior of pgbench because serialization (and dealock) errors have\n>> been treated specially from other types of errors, such as accessing\n>> non existing tables. 
However, I suggest to add more sentences to the\n>> explanation of this option so that users are not confused by this.\n>> \n>> + <varlistentry id=\"pgbench-option-exit-on-abort\">\n>> + <term><option>--exit-on-abort</option></term>\n>> + <listitem>\n>> + <para>\n>> + Exit immediately when any client is aborted due to some error. Without\n>> + this option, even when a client is aborted, other clients could continue\n>> + their run as specified by <option>-t</option> or <option>-T</option> option,\n>> + and <application>pgbench</application> will print an incomplete results\n>> + in this case.\n>> + </para>\n>> + </listitem>\n>> + </varlistentry>\n>> +\n>> \n>> What about inserting \"Note that serialization failures or deadlock\n>> failures will not abort client. See <xref\n>> linkend=\"failures-and-retries\"/> for more information.\" into the end\n>> of this paragraph?\n> \n> --exit-on-abort is related to \"abort\" of a client instead of error or\n> failure itself, so rather I wonder a bit that mentioning serialization/deadlock\n> failures might be confusing. However, if users may think of such failures from\n> \"abort\", it could be beneficial to add the sentences with some modification as\n> below.\n\nI myself confused by this and believe that adding extra paragraph is\nbeneficial to users.\n\n> \"Note that serialization failures or deadlock failures does not abort the\n> client, so they are not affected by this option.\n> See <xref linkend=\"failures-and-retries\"/> for more information.\"\n\n\"does not\" --> \"do not\".\n\n>> BTW, I think:\n>> Exit immediately when any client is aborted due to some error. Without\n>> \n>> should be:\n>> Exit immediately when any client is aborted due to some errors. Without\n>> \n>> (error vs. erros)\n> \n> Well, I chose \"some\" to mean \"unknown or unspecified\", not \"an unspecified amount\n> or number of\", so singular form \"error\" is used. 
\n\nOk.\n\n> Instead, should we use \"due to a error\"?\n\nI don't think so.\n\n>> Also:\n>> + <option>--exit-on-abort</option> is specified . Otherwise in the worst\n>> \n>> There is an extra space between \"specified\" and \".\".\n> \n> Fixed.\n> \n> Also, I fixed the place of the description in the documentation\n> to alphabetical order\n> \n> Attached is the updated patch. \n\nLooks good to me. If there's no objection, I will commit this next week.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 24 Aug 2023 09:15:51 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Thu, 24 Aug 2023 09:15:51 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> I start to think this behavior is ok and consistent with previous\n> >> behavior of pgbench because serialization (and dealock) errors have\n> >> been treated specially from other types of errors, such as accessing\n> >> non existing tables. However, I suggest to add more sentences to the\n> >> explanation of this option so that users are not confused by this.\n> >> \n> >> + <varlistentry id=\"pgbench-option-exit-on-abort\">\n> >> + <term><option>--exit-on-abort</option></term>\n> >> + <listitem>\n> >> + <para>\n> >> + Exit immediately when any client is aborted due to some error. Without\n> >> + this option, even when a client is aborted, other clients could continue\n> >> + their run as specified by <option>-t</option> or <option>-T</option> option,\n> >> + and <application>pgbench</application> will print an incomplete results\n> >> + in this case.\n> >> + </para>\n> >> + </listitem>\n> >> + </varlistentry>\n> >> +\n> >> \n> >> What about inserting \"Note that serialization failures or deadlock\n> >> failures will not abort client. 
See <xref\n> >> linkend=\"failures-and-retries\"/> for more information.\" into the end\n> >> of this paragraph?\n> > \n> > --exit-on-abort is related to \"abort\" of a client instead of error or\n> > failure itself, so rather I wonder a bit that mentioning serialization/deadlock\n> > failures might be confusing. However, if users may think of such failures from\n> > \"abort\", it could be beneficial to add the sentences with some modification as\n> > below.\n> \n> I myself confused by this and believe that adding extra paragraph is\n> beneficial to users.\n\nOk.\n\n> > \"Note that serialization failures or deadlock failures does not abort the\n> > client, so they are not affected by this option.\n> > See <xref linkend=\"failures-and-retries\"/> for more information.\"\n> \n> \"does not\" --> \"do not\".\n\nOops. I attached the updated patch.\n\n> >> BTW, I think:\n> >> Exit immediately when any client is aborted due to some error. Without\n> >> \n> >> should be:\n> >> Exit immediately when any client is aborted due to some errors. Without\n> >> \n> >> (error vs. erros)\n> > \n> > Well, I chose \"some\" to mean \"unknown or unspecified\", not \"an unspecified amount\n> > or number of\", so singular form \"error\" is used. \n> \n> Ok.\n> \n> > Instead, should we use \"due to a error\"?\n> \n> I don't think so.\n> \n> >> Also:\n> >> + <option>--exit-on-abort</option> is specified . Otherwise in the worst\n> >> \n> >> There is an extra space between \"specified\" and \".\".\n> > \n> > Fixed.\n> > \n> > Also, I fixed the place of the description in the documentation\n> > to alphabetical order\n> > \n> > Attached is the updated patch. \n> \n> Looks good to me. 
If there's no objection, I will commit this next week.\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 24 Aug 2023 11:24:59 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "Yugo,\nFabien,\n\n>>> I start to think this behavior is ok and consistent with previous\n>>> behavior of pgbench because serialization (and dealock) errors have\n>>> been treated specially from other types of errors, such as accessing\n>>> non existing tables. However, I suggest to add more sentences to the\n>>> explanation of this option so that users are not confused by this.\n>>> \n>>> + <varlistentry id=\"pgbench-option-exit-on-abort\">\n>>> + <term><option>--exit-on-abort</option></term>\n>>> + <listitem>\n>>> + <para>\n>>> + Exit immediately when any client is aborted due to some error. Without\n>>> + this option, even when a client is aborted, other clients could continue\n>>> + their run as specified by <option>-t</option> or <option>-T</option> option,\n>>> + and <application>pgbench</application> will print an incomplete results\n>>> + in this case.\n>>> + </para>\n>>> + </listitem>\n>>> + </varlistentry>\n>>> +\n>>> \n>>> What about inserting \"Note that serialization failures or deadlock\n>>> failures will not abort client. See <xref\n>>> linkend=\"failures-and-retries\"/> for more information.\" into the end\n>>> of this paragraph?\n>> \n>> --exit-on-abort is related to \"abort\" of a client instead of error or\n>> failure itself, so rather I wonder a bit that mentioning serialization/deadlock\n>> failures might be confusing. 
However, if users may think of such failures from\n>> \"abort\", it could be beneficial to add the sentences with some modification as\n>> below.\n> \n> I myself confused by this and believe that adding extra paragraph is\n> beneficial to users.\n> \n>> \"Note that serialization failures or deadlock failures does not abort the\n>> client, so they are not affected by this option.\n>> See <xref linkend=\"failures-and-retries\"/> for more information.\"\n> \n> \"does not\" --> \"do not\".\n> \n>>> BTW, I think:\n>>> Exit immediately when any client is aborted due to some error. Without\n>>> \n>>> should be:\n>>> Exit immediately when any client is aborted due to some errors. Without\n>>> \n>>> (error vs. erros)\n>> \n>> Well, I chose \"some\" to mean \"unknown or unspecified\", not \"an unspecified amount\n>> or number of\", so singular form \"error\" is used. \n> \n> Ok.\n> \n>> Instead, should we use \"due to a error\"?\n> \n> I don't think so.\n> \n>>> Also:\n>>> + <option>--exit-on-abort</option> is specified . Otherwise in the worst\n>>> \n>>> There is an extra space between \"specified\" and \".\".\n>> \n>> Fixed.\n>> \n>> Also, I fixed the place of the description in the documentation\n>> to alphabetical order\n>> \n>> Attached is the updated patch. \n> \n> Looks good to me. If there's no objection, I will commit this next week.\n\nI have pushed the patch. 
Thank you for the conributions!\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 30 Aug 2023 10:11:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" }, { "msg_contents": "On Wed, 30 Aug 2023 10:11:10 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> Yugo,\n> Fabien,\n> \n> >>> I start to think this behavior is ok and consistent with previous\n> >>> behavior of pgbench because serialization (and dealock) errors have\n> >>> been treated specially from other types of errors, such as accessing\n> >>> non existing tables. However, I suggest to add more sentences to the\n> >>> explanation of this option so that users are not confused by this.\n> >>> \n> >>> + <varlistentry id=\"pgbench-option-exit-on-abort\">\n> >>> + <term><option>--exit-on-abort</option></term>\n> >>> + <listitem>\n> >>> + <para>\n> >>> + Exit immediately when any client is aborted due to some error. Without\n> >>> + this option, even when a client is aborted, other clients could continue\n> >>> + their run as specified by <option>-t</option> or <option>-T</option> option,\n> >>> + and <application>pgbench</application> will print an incomplete results\n> >>> + in this case.\n> >>> + </para>\n> >>> + </listitem>\n> >>> + </varlistentry>\n> >>> +\n> >>> \n> >>> What about inserting \"Note that serialization failures or deadlock\n> >>> failures will not abort client. See <xref\n> >>> linkend=\"failures-and-retries\"/> for more information.\" into the end\n> >>> of this paragraph?\n> >> \n> >> --exit-on-abort is related to \"abort\" of a client instead of error or\n> >> failure itself, so rather I wonder a bit that mentioning serialization/deadlock\n> >> failures might be confusing. 
However, if users may think of such failures from\n> >> \"abort\", it could be beneficial to add the sentences with some modification as\n> >> below.\n> > \n> > I myself confused by this and believe that adding extra paragraph is\n> > beneficial to users.\n> > \n> >> \"Note that serialization failures or deadlock failures does not abort the\n> >> client, so they are not affected by this option.\n> >> See <xref linkend=\"failures-and-retries\"/> for more information.\"\n> > \n> > \"does not\" --> \"do not\".\n> > \n> >>> BTW, I think:\n> >>> Exit immediately when any client is aborted due to some error. Without\n> >>> \n> >>> should be:\n> >>> Exit immediately when any client is aborted due to some errors. Without\n> >>> \n> >>> (error vs. erros)\n> >> \n> >> Well, I chose \"some\" to mean \"unknown or unspecified\", not \"an unspecified amount\n> >> or number of\", so singular form \"error\" is used. \n> > \n> > Ok.\n> > \n> >> Instead, should we use \"due to a error\"?\n> > \n> > I don't think so.\n> > \n> >>> Also:\n> >>> + <option>--exit-on-abort</option> is specified . Otherwise in the worst\n> >>> \n> >>> There is an extra space between \"specified\" and \".\".\n> >> \n> >> Fixed.\n> >> \n> >> Also, I fixed the place of the description in the documentation\n> >> to alphabetical order\n> >> \n> >> Attached is the updated patch. \n> > \n> > Looks good to me. If there's no objection, I will commit this next week.\n> \n> I have pushed the patch. Thank you for the conributions!\n\nThank you!\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 30 Aug 2023 10:55:56 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgbench: allow to exit immediately when any client is aborted" } ]
[ { "msg_contents": "As stated in [1], there is a typo in the comment in\nparaminfo_get_equal_hashops.\n\n foreach(lc, innerrel->lateral_vars)\n {\n Node *expr = (Node *) lfirst(lc);\n TypeCacheEntry *typentry;\n\n /* Reject if there are any volatile functions in PHVs */\n if (contain_volatile_functions(expr))\n {\n list_free(*operators);\n list_free(*param_exprs);\n return false;\n }\n\nThe expressions in RelOptInfo.lateral_vars are not necessarily from\nPHVs. For instance\n\nexplain (costs off)\nselect * from t t1 join\n lateral (select * from t t2 where t1.a = t2.a offset 0) on true;\n QUERY PLAN\n----------------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Memoize\n Cache Key: t1.a\n Cache Mode: binary\n -> Seq Scan on t t2\n Filter: (t1.a = a)\n(7 rows)\n\nThe lateral Var 't1.a' comes from the lateral subquery.\n\nSo attach a trivial patch to fix that.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs49dEHRPe8poM_K39r2uOsaOZcg+Y0B5a8tF7vW3uVR3mw@mail.gmail.com\n\nThanks\nRichard", "msg_date": "Fri, 4 Aug 2023 14:48:33 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Fix a comment in paraminfo_get_equal_hashops" }, { "msg_contents": "On Fri, 4 Aug 2023 at 18:48, Richard Guo <guofenglinux@gmail.com> wrote:\n> As stated in [1], there is a typo in the comment in\n> paraminfo_get_equal_hashops.\n\n> [1] https://www.postgresql.org/message-id/CAMbWs49dEHRPe8poM_K39r2uOsaOZcg+Y0B5a8tF7vW3uVR3mw@mail.gmail.com\n\nThanks. Pushed.\n\nDavid\n\n\n", "msg_date": "Mon, 7 Aug 2023 18:18:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a comment in paraminfo_get_equal_hashops" } ]
[ { "msg_contents": "In get_memoize_path() we have a theory about avoiding memoize node if\nthere are volatile functions in the inner rel's target/restrict list.\n\n /*\n * We can't use a memoize node if there are volatile functions in the\n * inner rel's target list or restrict list. A cache hit could reduce the\n * number of calls to these functions.\n */\n if (contain_volatile_functions((Node *) innerrel->reltarget))\n return NULL;\n\n foreach(lc, innerrel->baserestrictinfo)\n {\n RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);\n\n if (contain_volatile_functions((Node *) rinfo))\n return NULL;\n }\n\nIt seems that the check for restrict list is not thorough, because for a\nparameterized scan we are supposed to also consider all the join clauses\navailable from the outer rels, ie, ppi_clauses. For instance,\n\ncreate table t (a float);\ninsert into t values (1.0), (1.0), (1.0), (1.0);\nanalyze t;\n\nexplain (costs off)\nselect * from t t1 left join lateral\n (select t1.a as t1a, t2.a as t2a from t t2) s\non t1.a = s.t2a + random();\n QUERY PLAN\n-----------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Memoize\n Cache Key: t1.a\n Cache Mode: binary\n -> Seq Scan on t t2\n Filter: (t1.a = (a + random()))\n(7 rows)\n\nAccording to the theory we should not use memoize node for this query\nbecause of the volatile function in the inner side. 
So propose a patch\nto fix that.\n\nThanks\nRichard", "msg_date": "Fri, 4 Aug 2023 18:26:21 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Check volatile functions in ppi_clauses for memoize node" }, { "msg_contents": "On Fri, 4 Aug 2023 at 22:26, Richard Guo <guofenglinux@gmail.com> wrote:\n> explain (costs off)\n> select * from t t1 left join lateral\n> (select t1.a as t1a, t2.a as t2a from t t2) s\n> on t1.a = s.t2a + random();\n> QUERY PLAN\n> -----------------------------------------------\n> Nested Loop Left Join\n> -> Seq Scan on t t1\n> -> Memoize\n> Cache Key: t1.a\n> Cache Mode: binary\n> -> Seq Scan on t t2\n> Filter: (t1.a = (a + random()))\n> (7 rows)\n>\n> According to the theory we should not use memoize node for this query\n> because of the volatile function in the inner side. So propose a patch\n> to fix that.\n\nThanks for the patch. I've pushed a variation of it.\n\nI didn't really see the need to make a single list with all the\nRestrictInfos. 
It seems fine just to write code and loop over the\nppi_clauses checking for volatility.\n\nDavid\n\n\n", "msg_date": "Mon, 7 Aug 2023 22:19:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Check volatile functions in ppi_clauses for memoize node" }, { "msg_contents": "On Mon, Aug 7, 2023 at 6:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 4 Aug 2023 at 22:26, Richard Guo <guofenglinux@gmail.com> wrote:\n> > explain (costs off)\n> > select * from t t1 left join lateral\n> > (select t1.a as t1a, t2.a as t2a from t t2) s\n> > on t1.a = s.t2a + random();\n> > QUERY PLAN\n> > -----------------------------------------------\n> > Nested Loop Left Join\n> > -> Seq Scan on t t1\n> > -> Memoize\n> > Cache Key: t1.a\n> > Cache Mode: binary\n> > -> Seq Scan on t t2\n> > Filter: (t1.a = (a + random()))\n> > (7 rows)\n> >\n> > According to the theory we should not use memoize node for this query\n> > because of the volatile function in the inner side. So propose a patch\n> > to fix that.\n>\n> Thanks for the patch. I've pushed a variation of it.\n>\n> I didn't really see the need to make a single list with all the\n> RestrictInfos. It seems fine just to write code and loop over the\n> ppi_clauses checking for volatility.\n\n\nThat looks good. 
Thanks for pushing it!\n\nThanks\nRichard", "msg_date": "Tue, 8 Aug 2023 08:54:10 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Check volatile functions in ppi_clauses for memoize node" } ]
[ { "msg_contents": "Hi hackers,\n\nNow that fa88928470 generates automatically code and documentation\nrelated to wait events, why not exposing the wait events description\nthrough a system catalog relation? (the idea has been proposed on twitter\nby Yves Colin [1]).\n\nI think that could be useful to:\n\n- join this new relation with pg_stat_activity and have a quick\nunderstanding of what the sessions are waiting for (if any).\n- quickly compare the wait events across versions (what are the new\nones if any,..)\n\nPlease find attached a POC patch creating this new system catalog\npg_wait_event.\n\nThe patch:\n\n- updates the documentation\n- adds a new option to generate-wait_event_types.pl to generate the pg_wait_event.dat\n- creates the pg_wait_event.h\n- works with autoconf\n\nIt currently does not:\n\n- works with meson (has to be done)\n- add tests (not sure it's needed)\n- create an index on the new system catalog (not sure it's needed as the data fits\nin 4 pages (8kB default size)).\n\nOutcome example:\n\npostgres=# select a.pid, a.application_name, a.wait_event,d.description from pg_stat_activity a, pg_wait_event d where a.wait_event = d.wait_event_name and state='active';\n pid | application_name | wait_event | description\n---------+------------------+-------------+-------------------------------------------------------------------\n 2937546 | pgbench | WALInitSync | Waiting for a newly initialized WAL file to reach durable storage\n(1 row)\n\nThere is still some work to be done to generate the pg_wait_event.dat file, specially when the\nsame wait event name can be found in multiple places (like for example \"WALWrite\" in IO and LWLock),\nleading to:\n\npostgres=# select * from pg_wait_event where wait_event_name = 'WALWrite';\n wait_event_name | description\n-----------------+----------------------------------------------------------------------------------\n WALWrite | Waiting for a write to a WAL file. 
Waiting for WAL buffers to be written to disk\n WALWrite | Waiting for WAL buffers to be written to disk\n(2 rows)\n\nwhich is obviously not right (we'll probably have to add the wait class name to the game).\n\nI'm sharing it now (even if it's still WIP) so that you can start sharing your thoughts\nabout it.\n\n[1]: https://twitter.com/Ycolin/status/1676598065048743948\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 4 Aug 2023 16:22:54 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "WIP: new system catalog pg_wait_event" }, { "msg_contents": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> writes:\n> Now that fa88928470 generates automatically code and documentation\n> related to wait events, why not exposing the wait events description\n> through a system catalog relation?\n\nI think you'd be better off making this a view over a set-returning\nfunction. The nearby work to allow run-time extensibility of the\nset of wait events is not going to be happy with a static catalog.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Aug 2023 11:08:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/4/23 5:08 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> writes:\n>> Now that fa88928470 generates automatically code and documentation\n>> related to wait events, why not exposing the wait events description\n>> through a system catalog relation?\n> \n> I think you'd be better off making this a view over a set-returning\n> function. 
The nearby work to allow run-time extensibility of the\n> set of wait events is not going to be happy with a static catalog.\n\nOh right, good point, thanks!: I'll come back with a new patch version to make\nuse of SRF instead.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 10:23:27 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/7/23 10:23 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 8/4/23 5:08 PM, Tom Lane wrote:\n>> \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> writes:\n>>> Now that fa88928470 generates automatically code and documentation\n>>> related to wait events, why not exposing the wait events description\n>>> through a system catalog relation?\n>>\n>> I think you'd be better off making this a view over a set-returning\n>> function.  
The nearby work to allow run-time extensibility of the\n>> set of wait events is not going to be happy with a static catalog.\n> \n> Oh right, good point, thanks!: I'll come back with a new patch version to make\n> use of SRF instead.\n\nPlease find attached v2 making use of SRF.\n\nv2:\n\n- adds a new pg_wait_event.c that acts as the \"template\" for the SRF\n- generates pg_wait_event_insert.c (through generate-wait_event_types.pl) that\nis included into pg_wait_event.c\n\nThat way I think it's flexible enough to add more code if needed in the SRF.\n\nThe patch also:\n\n- updates the doc\n- works with autoconf and meson\n- adds a simple test\n\nI'm adding a new CF entry for it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 7 Aug 2023 17:11:50 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "At Mon, 7 Aug 2023 17:11:50 +0200, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n> That way I think it's flexible enough to add more code if needed in\n> the SRF.\n> \n> The patch also:\n> \n> - updates the doc\n> - works with autoconf and meson\n> - adds a simple test\n> \n> I'm adding a new CF entry for it.\n\nAs I mentioned in another thread, I'm uncertain about our stance on\nthe class id of the wait event. If a class acts as a namespace, we\nshould include it in the view. 
Otherwise, if the class id is just an\nattribute of the wait event, we should make the event name unique.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 08 Aug 2023 11:53:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Tue, Aug 08, 2023 at 11:53:32AM +0900, Kyotaro Horiguchi wrote:\n> As I mentioned in another thread, I'm uncertain about our stance on\n> the class id of the wait event. If a class acts as a namespace, we\n> should include it in the view. Otherwise, if the class id is just an\n> attribute of the wait event, we should make the event name unique.\n\nIncluding the class name in the view makes the most sense to me, FWIW,\nas it could be also possible that one reuses an event name in the\nexisting in-core list, but for extensions. That's of course not\nsomething I would recommend.\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 12:05:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/8/23 5:05 AM, Michael Paquier wrote:\n> On Tue, Aug 08, 2023 at 11:53:32AM +0900, Kyotaro Horiguchi wrote:\n>> As I mentioned in another thread, I'm uncertain about our stance on\n>> the class id of the wait event. If a class acts as a namespace, we\n>> should include it in the view. Otherwise, if the class id is just an\n>> attribute of the wait event, we should make the event name unique.\n> \n> Including the class name in the view makes the most sense to me, FWIW,\n> as it could be also possible that one reuses an event name in the\n> existing in-core list, but for extensions. That's of course not\n> something I would recommend.\n\nThanks Kyotaro-san and Michael for the feedback. 
I do agree and will\nadd the class name.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Aug 2023 07:37:42 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/8/23 7:37 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 8/8/23 5:05 AM, Michael Paquier wrote:\n>> On Tue, Aug 08, 2023 at 11:53:32AM +0900, Kyotaro Horiguchi wrote:\n>>> As I mentioned in another thread, I'm uncertain about our stance on\n>>> the class id of the wait event. If a class acts as a namespace, we\n>>> should include it in the view. Otherwise, if the class id is just an\n>>> attribute of the wait event, we should make the event name unique.\n>>\n>> Including the class name in the view makes the most sense to me, FWIW,\n>> as it could be also possible that one reuses an event name in the\n>> existing in-core list, but for extensions.  That's of course not\n>> something I would recommend.\n> \n> Thanks Kyotaro-san and Michael for the feedback. 
I do agree and will\n> add the class name.\n> \n\nPlease find attached v3 adding the wait event types.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 8 Aug 2023 10:16:37 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Tue, Aug 08, 2023 at 10:16:37AM +0200, Drouvot, Bertrand wrote:\n> Please find attached v3 adding the wait event types.\n\n+-- There will surely be at least 9 wait event types, 240 wait events and at\n+-- least 27 related to WAL\n+select count(distinct(wait_event_type)) > 8 as ok_type,\n+ count(*) > 239 as ok,\n+ count(*) FILTER (WHERE description like '%WAL%') > 26 AS ok_wal_desc\n+from pg_wait_event;\nThe point is to check the execution of this function, so this could be\nsimpler, like that or a GROUP BY clause with the event type:\nSELECT count(*) > 0 FROM pg_wait_event;\nSELECT wait_event_type, count(*) > 0 AS has_data FROM pg_wait_event\n GROUP BY wait_event_type ORDER BY wait_event_type;\n\n+ printf $ic \"\\tmemset(values, 0, sizeof(values));\\n\";\n+ printf $ic \"\\tmemset(nulls, 0, sizeof(nulls));\\n\\n\";\n+ printf $ic \"\\tvalues[0] = CStringGetTextDatum(\\\"%s\\\");\\n\", $last;\n+ printf $ic \"\\tvalues[1] = CStringGetTextDatum(\\\"%s\\\");\\n\", $wev->[1];\n+ printf $ic \"\\tvalues[2] = CStringGetTextDatum(\\\"%s\\\");\\n\\n\", $new_desc;\n\nThat's overcomplicated for some code generated. 
Wouldn't it be\nsimpler to generate a list of elements, with the code inserting the\ntuples materialized looping over it?\n\n+ my $new_desc = substr $wev->[2], 1, -2;\n+ $new_desc =~ s/'/\\\\'/g;\n+ $new_desc =~ s/<.*>(.*?)<.*>/$1/g;\n+ $new_desc =~ s/<xref linkend=\"guc-(.*?)\"\\/>/$1/g;\n+ $new_desc =~ s/; see.*$//;\nBetter to document what this does, the contents produced look good.\n\n+ rename($ictmp, \"$output_path/pg_wait_event_insert.c\")\n+ || die \"rename: $ictmp to $output_path/pg_wait_event_insert.c: $!\";\n\n # seems nicer to not add that as an include path for the whole backend.\n waitevent_sources = files(\n 'wait_event.c',\n+ 'pg_wait_event.c',\n )\n\nThis could use a name referring to SQL functions, say\nwait_event_funcs.c, with a wait_event_data.c or a\nwait_event_funcs_data.c?\n\n+ # Don't generate .c (except pg_wait_event_insert.c) and .h files for\n+ # Extension, LWLock and Lock, these are handled independently.\n+ my $is_exception = $waitclass eq 'WaitEventExtension' ||\n+ $waitclass eq 'WaitEventLWLock' ||\n+ $waitclass eq 'WaitEventLock';\nPerhaps it would be cleaner to use a separate loop?\n--\nMichael", "msg_date": "Wed, 9 Aug 2023 16:56:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/9/23 9:56 AM, Michael Paquier wrote:\n> On Tue, Aug 08, 2023 at 10:16:37AM +0200, Drouvot, Bertrand wrote:\n>> Please find attached v3 adding the wait event types.\n> \n> +-- There will surely be at least 9 wait event types, 240 wait events and at\n> +-- least 27 related to WAL\n> +select count(distinct(wait_event_type)) > 8 as ok_type,\n> + count(*) > 239 as ok,\n> + count(*) FILTER (WHERE description like '%WAL%') > 26 AS ok_wal_desc\n> +from pg_wait_event;\n> The point is to check the execution of this function, so this could be\n> simpler, like that or a GROUP BY clause with the event type:\n> SELECT count(*) > 0 FROM 
pg_wait_event;\n> SELECT wait_event_type, count(*) > 0 AS has_data FROM pg_wait_event\n> GROUP BY wait_event_type ORDER BY wait_event_type;\n> \n\nThanks for looking at it!\nRight, so v4 attached is just testing \"SELECT count(*) > 0 FROM pg_wait_event;\",\nthat does look enough to test.\n\n> + printf $ic \"\\tmemset(values, 0, sizeof(values));\\n\";\n> + printf $ic \"\\tmemset(nulls, 0, sizeof(nulls));\\n\\n\";\n> + printf $ic \"\\tvalues[0] = CStringGetTextDatum(\\\"%s\\\");\\n\", $last;\n> + printf $ic \"\\tvalues[1] = CStringGetTextDatum(\\\"%s\\\");\\n\", $wev->[1];\n> + printf $ic \"\\tvalues[2] = CStringGetTextDatum(\\\"%s\\\");\\n\\n\", $new_desc;\n> \n> That's overcomplicated for some code generated. Wouldn't it be\n> simpler to generate a list of elements, with the code inserting the\n> tuples materialized looping over it?\n> \n\nYeah, agree thanks!\nIn v4, the perl script now appends the wait events in a List that way:\n\n\"\n printf $ic \"\\telement = (wait_event_element *) palloc(sizeof(wait_event_element));\\n\";\n\n printf $ic \"\\telement->wait_event_type = \\\"%s\\\";\\n\", $last;\n printf $ic \"\\telement->wait_event_name = \\\"%s\\\";\\n\", $wev->[1];\n printf $ic \"\\telement->wait_event_description = \\\"%s\\\";\\n\\n\", $new_desc;\n\n printf $ic \"\\twait_event = lappend(wait_event, element);\\n\\n\";\n\"\n\nAnd the C function pg_get_wait_events() now iterates over this List.\n\n> + my $new_desc = substr $wev->[2], 1, -2;\n> + $new_desc =~ s/'/\\\\'/g;\n> + $new_desc =~ s/<.*>(.*?)<.*>/$1/g;\n> + $new_desc =~ s/<xref linkend=\"guc-(.*?)\"\\/>/$1/g;\n> + $new_desc =~ s/; see.*$//;\n> Better to document what this does,\n\ngood idea...\n\nI had to turn them \"on\" one by one to recall why they are there...;-)\nDone in v4.\n\n> the contents produced look good.\n\nyeap\n\n> \n> + rename($ictmp, \"$output_path/pg_wait_event_insert.c\")\n> + || die \"rename: $ictmp to $output_path/pg_wait_event_insert.c: $!\";\n> \n> # seems nicer to not add that as 
an include path for the whole backend.\n> waitevent_sources = files(\n> 'wait_event.c',\n> + 'pg_wait_event.c',\n> )\n> \n> This could use a name referring to SQL functions, say\n> wait_event_funcs.c, with a wait_event_data.c or a\n> wait_event_funcs_data.c?\n\nThat sounds better indeed, thanks! v4 is using wait_event_funcs.c and\nwait_event_funcs_data.c.\n\n> \n> + # Don't generate .c (except pg_wait_event_insert.c) and .h files for\n> + # Extension, LWLock and Lock, these are handled independently.\n> + my $is_exception = $waitclass eq 'WaitEventExtension' ||\n> + $waitclass eq 'WaitEventLWLock' ||\n> + $waitclass eq 'WaitEventLock';\n> Perhaps it would be cleaner to use a separate loop?\n\nAgree that's worth it given the fact that iterating one more time is not that\ncostly here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 10 Aug 2023 20:09:34 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Thu, Aug 10, 2023 at 08:09:34PM +0200, Drouvot, Bertrand wrote:\n> Agree that's worth it given the fact that iterating one more time is not that\n> costly here.\n\nI have reviewed v4, and finished by putting my hands on it to see what\nI could do.\n\n+ printf $wc \"\\telement = (wait_event_element *) palloc(sizeof(wait_event_element));\\n\";\n+\n+ printf $wc \"\\telement->wait_event_type = \\\"%s\\\";\\n\", $last;\n+ printf $wc \"\\telement->wait_event_name = \\\"%s\\\";\\n\", $wev->[1];\n+ printf $wc \"\\telement->wait_event_description = \\\"%s\\\";\\n\\n\", $new_desc;\n+\n+ printf $wc \"\\twait_event = lappend(wait_event, element);\\n\\n\";\n+ }\n\nThis is simpler than the previous versions, still I am not much a fan\nof implying the use of a list and these pallocs. 
There are two things\nthat we could do:\n- Hide that behind a macro defined in wait_event_funcs.c.\n- Feed the data generated here into a static structure, like:\n+static const struct\n+{\n+ const char *type;\n+ const char *name;\n+ const char *description;\n+}\n\nAfter experimenting with both, I've found the latter a tad cleaner, so\nthe attached version does that.\n\n+ <structfield>description</structfield> <type>texte</type>\nThis one was difficult to see..\n\nI am not sure that \"pg_wait_event\" is a good idea for the name if the\nnew view. How about \"pg_wait_events\" instead, in plural form? There\nis more than one wait event listed.\n\nOne log entry in Solution.pm has missed the addition of a reference to\nwait_event_funcs_data.c.\n\nAnd.. src/backend/Makefile missed two updates for maintainer-clean & co,\nno?\n\nOne thing that the patch is still missing is the handling of custom\nwait events for extensions. So this still requires more code. I was\nthinking about listed these events as:\n- Type: \"Extension\"\n- Name: What a module has registered.\n- Description: \"Custom wait event \\\"%Name%\\\" defined by extension\".\n\nFor now I am attaching a v5.\n--\nMichael", "msg_date": "Mon, 14 Aug 2023 13:37:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/14/23 6:37 AM, Michael Paquier wrote:\n> On Thu, Aug 10, 2023 at 08:09:34PM +0200, Drouvot, Bertrand wrote:\n>> Agree that's worth it given the fact that iterating one more time is not that\n>> costly here.\n> \n> I have reviewed v4, and finished by putting my hands on it to see what\n> I could do.\n\nThanks!\n\n\n> There are two things\n> that we could do:\n> - Hide that behind a macro defined in wait_event_funcs.c.\n> - Feed the data generated here into a static structure, like:\n> +static const struct\n> +{\n> + const char *type;\n> + const char *name;\n> + const char 
*description;\n> +}\n> \n> After experimenting with both, I've found the latter a tad cleaner, so\n> the attached version does that.\n\nYeah, looking at what you've done in v5, I also agree that's better\nthat what has been done in v4 (I also think it will be easier to maintain).\n\n> I am not sure that \"pg_wait_event\" is a good idea for the name if the\n> new view. How about \"pg_wait_events\" instead, in plural form? There\n> is more than one wait event listed.\n> \n\nI'd prefer the singular form. There is a lot of places where it's already used\n(pg_database, pg_user, pg_namespace...to name a few) and it looks like that using\nthe plural form are exceptions.\n\n> One log entry in Solution.pm has missed the addition of a reference to\n> wait_event_funcs_data.c.\n> \n\nOh right, thanks for fixing it in v5.\n\n> And.. src/backend/Makefile missed two updates for maintainer-clean & co,\n> no?\n> \nOh right, thanks for fixing it in v5.\n\n> One thing that the patch is still missing is the handling of custom\n> wait events for extensions. \n\nYeah, now that af720b4c50 is done, I'll add the custom wait events handling\nin v6.\n\n> So this still requires more code. I was\n> thinking about listed these events as:\n> - Type: \"Extension\"\n> - Name: What a module has registered.\n> - Description: \"Custom wait event \\\"%Name%\\\" defined by extension\".\n\nThat sounds good to me, I'll do that.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 16 Aug 2023 07:04:53 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Wed, Aug 16, 2023 at 07:04:53AM +0200, Drouvot, Bertrand wrote:\n> I'd prefer the singular form. 
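As a side note on the generator question settled earlier in this sub-thread (emit one static table of wait events rather than per-row palloc'd list cells): the emission side becomes tiny. Here is a Python sketch of the idea — not the tree's Perl script, and with invented event data, so treat it purely as an illustration:

```python
# Sketch of the "feed the generated data into a static structure" idea:
# render a C array initializer from (type, name, description) tuples.
# Event data below is invented for the example.

def emit_wait_event_table(events):
    """Render a C static-table initializer from (type, name, description) tuples."""
    lines = [
        "static const struct",
        "{",
        "\tconst char *type;",
        "\tconst char *name;",
        "\tconst char *description;",
        "} wait_event_data[] = {",
    ]
    for ev_type, ev_name, ev_desc in events:
        # One row per wait event; the C side just loops over the array,
        # so the generator never emits per-row palloc/memset statements.
        lines.append('\t{"%s", "%s", "%s"},' % (ev_type, ev_name, ev_desc))
    lines.append("};")
    return "\n".join(lines)

if __name__ == "__main__":
    demo = [
        ("Activity", "CheckpointerMain",
         "Waiting in main loop of checkpointer process"),
        ("IO", "WALWrite", "Waiting for a write to a WAL file"),
    ]
    print(emit_wait_event_table(demo))
```

With the data in a const array, the SQL-callable C function only needs a for loop over the table, building one tuple per element (e.g. with tuplestore_putvalues), instead of carrying the insertion logic in the generated file.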
There is a lot of places where it's already used\n> (pg_database, pg_user, pg_namespace...to name a few) and it looks like that using\n> the plural form are exceptions.\n\nOkay, fine by me. Also, I would remove the \"wait_event_\" prefixes to\nthe field names for the attribute names.\n\n> Yeah, now that af720b4c50 is done, I'll add the custom wait events handling\n> in v6.\n\nThanks. I guess that the hash tables had better remain local to\nwait_event.c.\n--\nMichael", "msg_date": "Wed, 16 Aug 2023 15:22:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/16/23 8:22 AM, Michael Paquier wrote:\n> On Wed, Aug 16, 2023 at 07:04:53AM +0200, Drouvot, Bertrand wrote:\n>> I'd prefer the singular form. There is a lot of places where it's already used\n>> (pg_database, pg_user, pg_namespace...to name a few) and it looks like that using\n>> the plural form are exceptions.\n> \n> Okay, fine by me. \n\nGreat, singular form is being used in v6 attached.\n\n> Also, I would remove the \"wait_event_\" prefixes to\n> the field names for the attribute names.\n> \n\nIt looks like it has already been done in v5.\n\n>> Yeah, now that af720b4c50 is done, I'll add the custom wait events handling\n>> in v6.\n> \n> Thanks. 
I guess that the hash tables had better remain local to\n> wait_event.c.\n\nYeah, agree, done that way in v6 (also added a test in 001_worker_spi.pl\nto ensure that \"worker_spi_main\" is reported in pg_wait_event).\n\nI added a \"COLLATE \"C\"\" in the \"order by\" clause's test that we added in\nsrc/test/regress/sql/sysviews.sql (the check was failing on my environment).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 16 Aug 2023 13:43:35 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Wed, Aug 16, 2023 at 01:43:35PM +0200, Drouvot, Bertrand wrote:\n> Yeah, agree, done that way in v6 (also added a test in 001_worker_spi.pl\n> to ensure that \"worker_spi_main\" is reported in pg_wait_event).\n\n-typedef struct WaitEventExtensionEntryByName\n-{\n- char wait_event_name[NAMEDATALEN]; /* hash key */\n- uint16 event_id; /* wait event ID */\n-} WaitEventExtensionEntryByName;\n\nI'd rather keep all these structures local to wait_event.c, as these\ncover the internals of the hash tables.\n\nCould you switch GetWaitEventExtensionEntries() so as it returns a\nlist of strings or an array of char*? We probably can live with the\nlist for that.\n\n+ char buf[NAMEDATALEN + 44]; //\"Custom wait event \\\"%Name%\\\" defined by extension\" + \nIncorrect comment. 
This would be simpler as a StringInfo.\n\nThanks for the extra test in worker_spi looking at the contents of the\ncatalog.\n--\nMichael", "msg_date": "Wed, 16 Aug 2023 21:08:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nThank you for creating the patch!\nI think it is a very useful view as a user.\n\nI will share some thoughts about the v6 patch.\n\n1)\n\nThe regular expression needs to be changed in \ngenerate-wait_event_types.pl.\nI have compared the documentation with the output of the pg_wait_event\nview and found the following differences.\n\n* For parameter names, the substitution for underscore is missing.\n-ArchiveCleanupCommand Waiting for archive_cleanup_command to complete\n-ArchiveCommand Waiting for archive_command to complete\n+ArchiveCleanupCommand Waiting for archive-cleanup-command to complete\n+ArchiveCommand Waiting for archive-command to complete\n-RecoveryEndCommand Waiting for recovery_end_command to complete\n+RecoveryEndCommand Waiting for recovery-end-command to complete\n-RestoreCommand Waiting for restore_command to complete\n+RestoreCommand Waiting for restore-command to complete\n\n* The HTML tag match is not shortest match.\n-frozenid Waiting to update pg_database.datfrozenxid and \npg_database.datminmxid\n+frozenid Waiting to update datminmxid\n\n* There are two blanks before \"about\". Also \" for heavy weight is\n removed (is it intended?)\n-LockManager Waiting to read or update information about \"heavyweight\" \nlocks\n+LockManager Waiting to read or update information about heavyweight \nlocks\n\n* Do we need \"worker_spi_main\" in the description? 
The name column\n shows the same one, so it could be omitted.\n> pg_wait_event\n> worker_spi_main Custom wait event \"worker_spi_main\" defined by \n> extension\n\n2)\n\nWould it be better to add \"extension\" meaning unassigned?\n\n> Extensions can add Extension and LWLock types to the list shown in \n> Table 28.8 and Table 28.12. In some cases, the name of LWLock assigned \n> by an extension will not be available in all server processes; It might \n> be reported as just “extension” rather than the extension-assigned \n> name.\n\n3)\n\nWould index == els be better for the following Assert?\n+\tAssert(index <= els);\n\n4)\n\nThere is a typo. alll -> all\n+\t/* Allocate enough space for alll entries */\n\n5)\n\nBTW, although I think this is outside the scope of this patch,\nit might be a good idea to be able to add a description to the\nAPI for custom wait events.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 17 Aug 2023 10:53:02 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Thu, Aug 17, 2023 at 10:53:02AM +0900, Masahiro Ikeda wrote:\n> BTW, although I think this is outside the scope of this patch,\n> it might be a good idea to be able to add a description to the\n> API for custom wait events.\n\nSomebody on twitter has raised this point. 
I am not sure that we need\nto go down to that for the sake of this view, but I'm OK to..\nDisagree and Commit to any consensus reached on this matter.\n--\nMichael", "msg_date": "Thu, 17 Aug 2023 10:57:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On 2023-08-17 10:57, Michael Paquier wrote:\n> On Thu, Aug 17, 2023 at 10:53:02AM +0900, Masahiro Ikeda wrote:\n>> BTW, although I think this is outside the scope of this patch,\n>> it might be a good idea to be able to add a description to the\n>> API for custom wait events.\n> \n> Somebody on twitter has raised this point. I am not sure that we need\n> to go down to that for the sake of this view, but I'm OK to..\n> Disagree and Commit to any consensus reached on this matter.\n\nOh, okay. Thanks for sharing the information.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 17 Aug 2023 11:35:28 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/16/23 2:08 PM, Michael Paquier wrote:\n> On Wed, Aug 16, 2023 at 01:43:35PM +0200, Drouvot, Bertrand wrote:\n>> Yeah, agree, done that way in v6 (also added a test in 001_worker_spi.pl\n>> to ensure that \"worker_spi_main\" is reported in pg_wait_event).\n> \n> -typedef struct WaitEventExtensionEntryByName\n> -{\n> - char wait_event_name[NAMEDATALEN]; /* hash key */\n> - uint16 event_id; /* wait event ID */\n> -} WaitEventExtensionEntryByName;\n> \n> I'd rather keep all these structures local to wait_event.c, as these\n> cover the internals of the hash tables.\n> \n> Could you switch GetWaitEventExtensionEntries() so as it returns a\n> list of strings or an array of char*? 
We probably can live with the\n> list for that.\n> \n\nYeah, I was not sure about this (returning a list of WaitEventExtensionEntryByName\nor a list of wait event names) while working on v6.\n\nThat's true that the only need here is to get the names of the custom wait events.\nReturning only the names would allow us to move the WaitEventExtensionEntryByName definition back\nto the wait_event.c file.\n\nIt makes sense to me, done in v7 attached and renamed the function to GetWaitEventExtensionNames().\n\n> + char buf[NAMEDATALEN + 44]; //\"Custom wait event \\\"%Name%\\\" defined by extension\" +\n> Incorrect comment. This would be simpler as a StringInfo.\n\nYeah and probably less error prone: done in v7.\n\nWhile at it, v7 is deliberately not calling \"pfree(waiteventnames)\" and \"resetStringInfo(&buf)\" in\npg_get_wait_events(): reason is that they are palloc in a short-lived memory context while executing\npg_get_wait_events().\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 17 Aug 2023 07:51:55 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/17/23 3:53 AM, Masahiro Ikeda wrote:\n> Hi,\n> \n> Thank you for creating the patch!\n> I think it is a very useful view as a user.\n> \n> I will share some thoughts about the v6 patch.\n> \n\nThanks for looking at it!\n\n> 1)\n> \n> The regular expression needs to be changed in generate-wait_event_types.pl.\n> I have compared the documentation with the output of the pg_wait_event\n> view and found the following differences.\n> \n> * For parameter names, the substitution for underscore is missing.\n> -ArchiveCleanupCommand Waiting for archive_cleanup_command to complete\n> -ArchiveCommand Waiting for archive_command to complete\n> +ArchiveCleanupCommand Waiting for 
archive-cleanup-command to complete\n> +ArchiveCommand Waiting for archive-command to complete\n> -RecoveryEndCommand Waiting for recovery_end_command to complete\n> +RecoveryEndCommand Waiting for recovery-end-command to complete\n> -RestoreCommand Waiting for restore_command to complete\n> +RestoreCommand Waiting for restore-command to complete\n> \n\nYeah, nice catch. v7 just shared up-thread replace \"-\" by \"_\"\nfor such a case. But I'm not sure the pg_wait_event description field needs\nto be 100% aligned with the documentation: wouldn't be better to replace\n\"-\" by \" \" in such cases in pg_wait_event?\n\n> * The HTML tag match is not shortest match.\n> -frozenid Waiting to update pg_database.datfrozenxid and pg_database.datminmxid\n> +frozenid Waiting to update datminmxid\n> \n\nNice catch, fixed in v7.\n\n> * There are two blanks before \"about\".\n\nThis is coming from wait_event_names.txt, thanks for pointing out.\nI just submitted a patch [1] to fix that.\n\n> Also \" for heavy weight is\n>   removed (is it intended?)\n> -LockManager Waiting to read or update information about \"heavyweight\" locks\n> +LockManager Waiting to read or update information  about heavyweight locks\n> \n\nNot intended, fixed in v7.\n\n> * Do we need \"worker_spi_main\" in the description? The name column\n>   shows the same one, so it could be omitted.\n>> pg_wait_event\n>> worker_spi_main Custom wait event \"worker_spi_main\" defined by extension\n> \n\nDo you mean remove the wait event name from the description in case of custom\nextension wait events? I'd prefer to keep it, if not the descriptions would be\nall the same for custom wait events.\n\n> 2)\n> \n> Would it be better to add \"extension\" meaning unassigned?\n> \n>> Extensions can add Extension and LWLock types to the list shown in Table 28.8 and Table 28.12. 
In some cases, the name of LWLock assigned by an extension will not be available in all server processes; It might be reported as just “extension” rather than the extension-assigned name.\n> \n\nYeah, could make sense but I think that should be a dedicated patch for the documentation.\n\n> 3)\n> \n> Would index == els be better for the following Assert?\n> +    Assert(index <= els);\n> \n\nAgree, done in v7.\n\n> 4)\n> \n> There is a typo. alll -> all\n> +    /* Allocate enough space for alll entries */\n> \n\nThanks! Fixed in v7.\n\n[1]: https://www.postgresql.org/message-id/dd836027-2e9e-4df9-9fd9-7527cd1757e1%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Aug 2023 07:53:02 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/17/23 3:57 AM, Michael Paquier wrote:\n> On Thu, Aug 17, 2023 at 10:53:02AM +0900, Masahiro Ikeda wrote:\n>> BTW, although I think this is outside the scope of this patch,\n>> it might be a good idea to be able to add a description to the\n>> API for custom wait events.\n> \n> Somebody on twitter has raised this point.\n\nI think I know him ;-)\n\n> I am not sure that we need\n> to go down to that for the sake of this view, but I'm OK to..\n> Disagree and Commit to any consensus reached on this matter.\n\nYeah, I changed my mind of this. 
I think it's better to keep the\n\"control\" about what is displayed in the description field for\nthis new system view.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Aug 2023 07:57:31 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 2023-08-17 14:53, Drouvot, Bertrand wrote:\n> On 8/17/23 3:53 AM, Masahiro Ikeda wrote:\n>> 1)\n>> \n>> The regular expression needs to be changed in \n>> generate-wait_event_types.pl.\n>> I have compared the documentation with the output of the pg_wait_event\n>> view and found the following differences.\n>> \n>> * For parameter names, the substitution for underscore is missing.\n>> -ArchiveCleanupCommand Waiting for archive_cleanup_command to complete\n>> -ArchiveCommand Waiting for archive_command to complete\n>> +ArchiveCleanupCommand Waiting for archive-cleanup-command to complete\n>> +ArchiveCommand Waiting for archive-command to complete\n>> -RecoveryEndCommand Waiting for recovery_end_command to complete\n>> +RecoveryEndCommand Waiting for recovery-end-command to complete\n>> -RestoreCommand Waiting for restore_command to complete\n>> +RestoreCommand Waiting for restore-command to complete\n>> \n> \n> Yeah, nice catch. v7 just shared up-thread replace \"-\" by \"_\"\n> for such a case. 
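Put together, the description cleanup being discussed boils down to a handful of substitutions. The following Python rendering approximates the Perl s/// rules quoted in this sub-thread (shortest-match tag stripping, guc xref expansion with "-" mapped back to "_", trailing "; see ..." dropped); it is a reconstruction for illustration, not the committed behavior:

```python
import re

def clean_description(desc):
    """Approximate the wait-event description cleanup discussed in this
    thread; the rule set is reconstructed from the quoted patch, so
    treat it as a sketch rather than the exact generator code."""
    # Expand <xref linkend="guc-..."/> into the GUC name, mapping the
    # hyphens used in doc anchors back to the parameter's underscores.
    desc = re.sub(r'<xref linkend="guc-(.*?)"/>',
                  lambda m: m.group(1).replace("-", "_"), desc)
    # Strip remaining SGML tags with a shortest match, so that text
    # between two tags (e.g. pg_database.datfrozenxid) survives.
    desc = re.sub(r"<[^>]*>", "", desc)
    # Drop trailing cross references of the form "; see ...".
    desc = re.sub(r"; see.*$", "", desc)
    return desc

if __name__ == "__main__":
    # -> Waiting for archive_cleanup_command to complete
    print(clean_description(
        'Waiting for <xref linkend="guc-archive-cleanup-command"/> to complete'))
    # -> Waiting to update pg_database.datfrozenxid and pg_database.datminmxid
    print(clean_description(
        "Waiting to update <structname>pg_database</structname>.<structfield>"
        "datfrozenxid</structfield> and <structname>pg_database</structname>"
        ".<structfield>datminmxid</structfield>"))
```

The shortest-match tag pattern is what keeps "pg_database.datfrozenxid and pg_database.datminmxid" intact, the case that a greedy `<.*>` match was eating earlier in the thread.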
But I'm not sure the pg_wait_event description field \n> needs\n> to be 100% aligned with the documentation: wouldn't be better to \n> replace\n> \"-\" by \" \" in such cases in pg_wait_event?\n> \n>> * The HTML tag match is not shortest match.\n>> -frozenid Waiting to update pg_database.datfrozenxid and \n>> pg_database.datminmxid\n>> +frozenid Waiting to update datminmxid\n>> \n> \n> Nice catch, fixed in v7.\n\nThanks, I confirmed.\n\n>> * There are two blanks before \"about\".\n> \n> This is coming from wait_event_names.txt, thanks for pointing out.\n> I just submitted a patch [1] to fix that.\n\nThanks. I agree it's better to be treated as a separated patch.\n\n>> Also \" for heavy weight is\n>>   removed (is it intended?)\n>> -LockManager Waiting to read or update information about \"heavyweight\" \n>> locks\n>> +LockManager Waiting to read or update information  about heavyweight \n>> locks\n>> \n> \n> Not intended, fixed in v7.\n\nOK, I confirmed it's fixed.\n\n>> * Do we need \"worker_spi_main\" in the description? The name column\n>>   shows the same one, so it could be omitted.\n>>> pg_wait_event\n>>> worker_spi_main Custom wait event \"worker_spi_main\" defined by \n>>> extension\n>> \n> \n> Do you mean remove the wait event name from the description in case of \n> custom\n> extension wait events? I'd prefer to keep it, if not the descriptions \n> would be\n> all the same for custom wait events.\n\nOK, I thought it's redundant.\n\nBTW, is it better to start with \"Waiting\" like any other lines?\nFor example, \"Waiting for custom event \\\"worker_spi_main\\\" defined by an \nextension\".\n\n>> 2)\n>> \n>> Would it be better to add \"extension\" meaning unassigned?\n>> \n>>> Extensions can add Extension and LWLock types to the list shown in \n>>> Table 28.8 and Table 28.12. 
In some cases, the name of LWLock \n>>> assigned by an extension will not be available in all server \n>>> processes; It might be reported as just “extension” rather than the \n>>> extension-assigned name.\n>> \n> \n> Yeah, could make sense but I think that should be a dedicated patch\n> for the documentation.\n\nOK, make sense.\n\n>> 3)\n>> \n>> Would index == els be better for the following Assert?\n>> +    Assert(index <= els);\n>> \n> \n> Agree, done in v7.\n> \n>> 4)\n>> \n>> There is a typo. alll -> all\n>> +    /* Allocate enough space for alll entries */\n\nThanks, I confirmed it's fixed.\n\n\nThe followings are additional comments for v7.\n\n1)\n\n>> I am not sure that \"pg_wait_event\" is a good idea for the name if the\n>> new view. How about \"pg_wait_events\" instead, in plural form? There\n>> is more than one wait event listed.\n> I'd prefer the singular form. There is a lot of places where it's \n> already used\n> (pg_database, pg_user, pg_namespace...to name a few) and it looks like \n> that using\n> the plural form are exceptions.\n\nSince I got the same feeling as Michael-san that \"pg_wait_events\" would \nbe better,\nI check the names of all system views. 
I think the singular form seems \nto be\nexceptions for views though the singular form is used usually for system \ncatalogs.\n\nhttps://www.postgresql.org/docs/devel/views.html\nhttps://www.postgresql.org/docs/devel/catalogs.html\n\n2)\n\n$ctmp must be $wctmp?\n\n+\trename($wctmp, \"$output_path/wait_event_funcs_data.c\")\n+\t || die \"rename: $ctmp to $output_path/wait_event_funcs_data.c: $!\";\n\n\n3)\n\n\"List all the wait events type, name and descriptions\" is enough?\n\n+ * List all the wait events (type, name) and their descriptions.\n\n4)\n\nWhy don't you add the usage of the view in the monitoring.sgml?\n\n```\n <para>\n Here is an example of how wait events can be viewed:\n\n<programlisting>\nSELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE \nwait_event is NOT NULL;\n pid | wait_event_type | wait_event\n------+-----------------+------------\n 2540 | Lock | relation\n 6644 | LWLock | ProcArray\n(2 rows)\n</programlisting>\n </para>\n```\n\nex.\npostgres(3789417)=# SELECT a.pid, a.wait_event_type, a.wait_event, \nw.description FROM pg_stat_activity a JOIN pg_wait_event w ON \n(a.wait_event_type = w.type AND a.wait_event = w.name) WHERE wait_event \nis NOT NULL;\n pid | wait_event_type | wait_event | \ndescription\n---------+-----------------+-------------------+--------------------------------------------------------\n 3783759 | Activity | AutoVacuumMain | Waiting in main loop of \nautovacuum launcher process\n 3812866 | Extension | WorkerSpiMain | Custom wait event \n\"WorkerSpiMain\" defined by extension\n 3783756 | Activity | BgWriterHibernate | Waiting in background \nwriter process, hibernating\n 3783760 | Activity | ArchiverMain | Waiting in main loop of \narchiver process\n 3783755 | Activity | CheckpointerMain | Waiting in main loop of \ncheckpointer process\n 3783758 | Activity | WalWriterMain | Waiting in main loop of \nWAL writer process\n(6 rows)\n\n5)\n\nThough I'm not familiar the value, do we need procost?\n\n+{ oid => 
'8403', descr => 'describe wait events',\n+ proname => 'pg_get_wait_events', procost => '10', prorows => '100',\n+ proretset => 't', provolatile => 's', prorettype => 'record',\n+ proargtypes => '', proallargtypes => '{text,text,text}',\n+ proargmodes => '{o,o,o}', proargnames => '{type,name,description}',\n+ prosrc => 'pg_get_wait_events' },\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 17 Aug 2023 16:37:22 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Thu, Aug 17, 2023 at 04:37:22PM +0900, Masahiro Ikeda wrote:\n> On 2023-08-17 14:53, Drouvot, Bertrand wrote:\n> BTW, is it better to start with \"Waiting\" like any other lines?\n> For example, \"Waiting for custom event \\\"worker_spi_main\\\" defined by an\n> extension\".\n\nUsing \"Waiting for custom event\" is a good term here. I am wondering\nif this should be s/extension/module/, as an event could be set by a\nloaded library, which may not be set with CREATE EXTENSION.\n\n> ex.\n> postgres(3789417)=# SELECT a.pid, a.wait_event_type, a.wait_event,\n> w.description FROM pg_stat_activity a JOIN pg_wait_event w ON\n> (a.wait_event_type = w.type AND a.wait_event = w.name) WHERE wait_event is\n> NOT NULL;\n> pid | wait_event_type | wait_event |\n> description\n> ---------+-----------------+-------------------+--------------------------------------------------------\n> 3783759 | Activity | AutoVacuumMain | Waiting in main loop of\n> autovacuum launcher process\n> 3812866 | Extension | WorkerSpiMain | Custom wait event\n> \"WorkerSpiMain\" defined by extension\n> 3783756 | Activity | BgWriterHibernate | Waiting in background\n> writer process, hibernating\n> 3783760 | Activity | ArchiverMain | Waiting in main loop of\n> archiver process\n> 3783755 | Activity | CheckpointerMain | Waiting in main loop of\n> checkpointer process\n> 3783758 | Activity | 
WalWriterMain | Waiting in main loop of WAL\n> writer process\n> (6 rows)\n\nYou need to think about the PDF being generated from the SGML docs, as\nwell, so this should not be too large. I don't think we need to care\nabout wait_event_type for this example, so it can be removed. If the\ntuple generated is still too large, showing one tuple under \\x with a\nfilter on backend_type would be enough. \n--\nMichael", "msg_date": "Fri, 18 Aug 2023 07:37:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/17/23 9:37 AM, Masahiro Ikeda wrote:\n> Hi,\n> \n> On 2023-08-17 14:53, Drouvot, Bertrand wrote:\n> \n> The followings are additional comments for v7.\n> \n> 1)\n> \n>>> I am not sure that \"pg_wait_event\" is a good idea for the name if the\n>>> new view.  How about \"pg_wait_events\" instead, in plural form?  There\n>>> is more than one wait event listed.\n>> I'd prefer the singular form. There is a lot of places where it's already used\n>> (pg_database, pg_user, pg_namespace...to name a few) and it looks like that using\n>> the plural form are exceptions.\n> \n> Since I got the same feeling as Michael-san that \"pg_wait_events\" would be better,\n> I check the names of all system views. 
I think the singular form seems to be\n> exceptions for views though the singular form is used usually for system catalogs.\n> \n> https://www.postgresql.org/docs/devel/views.html\n> https://www.postgresql.org/docs/devel/catalogs.html\n\nOkay, using the plural form in v8 attached.\n\n> \n> 2)\n> \n> $ctmp must be $wctmp?\n> \n> +    rename($wctmp, \"$output_path/wait_event_funcs_data.c\")\n> +      || die \"rename: $ctmp to $output_path/wait_event_funcs_data.c: $!\";\n> \n\nNice catch, thanks, fixed in v8.\n\n> \n> 3)\n> \n> \"List all the wait events type, name and descriptions\" is enough?\n> \n> + * List all the wait events (type, name) and their descriptions.\n> \n\nAgree, used \"List all the wait events type, name and description\" in v8\n(using description in singular as compared to your proposal)\n\n> 4)\n> \n> Why don't you add the usage of the view in the monitoring.sgml?\n> \n> ```\n>    <para>\n>      Here is an example of how wait events can be viewed:\n> \n> <programlisting>\n> SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event is NOT NULL;\n>  pid  | wait_event_type | wait_event\n> ------+-----------------+------------\n>  2540 | Lock            | relation\n>  6644 | LWLock          | ProcArray\n> (2 rows)\n> </programlisting>\n>    </para>\n> ```\n> \n> ex.\n> postgres(3789417)=# SELECT a.pid, a.wait_event_type, a.wait_event, w.description FROM pg_stat_activity a JOIN pg_wait_event w ON (a.wait_event_type = w.type AND a.wait_event = w.name) WHERE wait_event is NOT NULL;\n\nAgree, I think that's a good idea.\n\nI'd prefer to add also a filtering on \"state='active'\" for this example (I think that would provide\na \"real\" use case as the sessions are not waiting if they are not active).\n\nDone in v8.\n\n> 5)\n> \n> Though I'm not familiar the value, do we need procost?\n> \n\nI think it makes sense to add a \"small\" one as it's done.\nBTW, while looking at it, I changed prorows to a more 
\"accurate\"\nvalue.\n\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 18 Aug 2023 10:56:55 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/18/23 12:37 AM, Michael Paquier wrote:\n> On Thu, Aug 17, 2023 at 04:37:22PM +0900, Masahiro Ikeda wrote:\n>> On 2023-08-17 14:53, Drouvot, Bertrand wrote:\n>> BTW, is it better to start with \"Waiting\" like any other lines?\n>> For example, \"Waiting for custom event \\\"worker_spi_main\\\" defined by an\n>> extension\".\n> \n> Using \"Waiting for custom event\" is a good term here. \n\nYeah, done in v8 just shared up-thread.\n\n> I am wondering\n> if this should be s/extension/module/, as an event could be set by a\n> loaded library, which may not be set with CREATE EXTENSION.\n> \n\nI think that removing \"extension\" sounds weird given the fact that the\nwait event type is \"Extension\" (that could lead to confusion).\n\nI did use \"defined by an extension module\" in v8 (also that's aligned with\nthe wording used in wait_event.h).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 18 Aug 2023 10:57:28 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Fri, Aug 18, 2023 at 10:57:28AM +0200, Drouvot, Bertrand wrote:\n> I did use \"defined by an extension module\" in v8 (also that's aligned with\n> the wording used in wait_event.h).\n\nWFM.\n--\nMichael", "msg_date": "Fri, 18 Aug 2023 19:28:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { 
"msg_contents": "On Fri, Aug 18, 2023 at 10:56:55AM +0200, Drouvot, Bertrand wrote:\n> Okay, using the plural form in v8 attached.\n\nNoting in passing:\n- Here is an example of how wait events can be viewed:\n+ Here is are examples of how wait events can be viewed:\ns/is are/are/.\n--\nMichael", "msg_date": "Fri, 18 Aug 2023 19:31:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/18/23 12:31 PM, Michael Paquier wrote:\n> On Fri, Aug 18, 2023 at 10:56:55AM +0200, Drouvot, Bertrand wrote:\n>> Okay, using the plural form in v8 attached.\n> \n> Noting in passing:\n> - Here is an example of how wait events can be viewed:\n> + Here is are examples of how wait events can be viewed:\n> s/is are/are/.\n\nThanks! Please find attached v9 fixing this typo.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 19 Aug 2023 11:35:01 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Sat, Aug 19, 2023 at 11:35:01AM +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 8/18/23 12:31 PM, Michael Paquier wrote:\n> Thanks! Please find attached v9 fixing this typo.\n\nI was looking at v8, and this looks pretty good to me. I have\nspotted a few minor things.\n\n+ proretset => 't', provolatile => 's', prorettype => 'record',\nThis function should be volatile. For example, a backend could add a\nnew wait event while a different backend queries twice this view in\nthe same transaction, resulting in a different set of rows sent back.\n\nBy the way, shouldn't the results of GetWaitEventExtensionNames() in\nthe SQL function be freed? 
I mean that:\n for (int idx = 0; idx < nbextwaitevents; idx++)\n pfree(waiteventnames[idx]);\n pfree(waiteventnames);\n\n+ /* Handling extension custom wait events */\nNit warning: s/Handling/Handle/, or \"Handling of\".\n--\nMichael", "msg_date": "Sat, 19 Aug 2023 19:00:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/19/23 12:00 PM, Michael Paquier wrote:\n> On Sat, Aug 19, 2023 at 11:35:01AM +0200, Drouvot, Bertrand wrote:\n>> Hi,\n>>\n>> On 8/18/23 12:31 PM, Michael Paquier wrote:\n>> Thanks! Please find attached v9 fixing this typo.\n> \n> I was looking at v8, and this looks pretty good to me. \n\nGreat!\n\n> I have\n> spotted a few minor things.\n> \n> + proretset => 't', provolatile => 's', prorettype => 'record',\n> This function should be volatile. \n\nOh right, I copied/pasted this and should have paid more attention to\nthat part. Fixed in v10 attached.\n\n> By the way, shouldn't the results of GetWaitEventExtensionNames() in\n> the SQL function be freed? I mean that:\n> for (int idx = 0; idx < nbextwaitevents; idx++)\n> pfree(waiteventnames[idx]);\n> pfree(waiteventnames);\n> \n\nI don't think it's needed. The reason is that they are palloc in a\nshort-lived memory context while executing pg_get_wait_events().\n\n> + /* Handling extension custom wait events */\n> Nit warning: s/Handling/Handle/, or \"Handling of\".\n\nThanks, fixed in v10.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 19 Aug 2023 18:30:12 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Sat, Aug 19, 2023 at 06:30:12PM +0200, Drouvot, Bertrand wrote:\n> Thanks, fixed in v10.\n\nOkay. 
I have done an extra review of it, simplifying a few things in\nthe function, the comments and the formatting, and applied the patch.\nThanks!\n\nI have somewhat managed to miss the catalog version in the initial\ncommit, but fixed that quickly.\n--\nMichael", "msg_date": "Sun, 20 Aug 2023 17:07:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Hi,\n\nOn 8/20/23 10:07 AM, Michael Paquier wrote:\n> On Sat, Aug 19, 2023 at 06:30:12PM +0200, Drouvot, Bertrand wrote:\n>> Thanks, fixed in v10.\n> \n> Okay. I have done an extra review of it, simplifying a few things in\n> the function, the comments and the formatting, and applied the patch.\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 21 Aug 2023 09:33:36 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "Please, consider this small fix for the query example in the docs[1]:\n\nSELECT a.pid, a.wait_event, w.description\n FROM pg_stat_activity a JOIN\n pg_wait_events w ON (a.wait_event_type = w.type AND\n a.wait_event = w.name)\n WHERE wait_event is NOT NULL and a.state = 'active';\n\nI propose to add a table alias for the wait_event column in the WHERE clause for consistency.\n\n1.https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Mon, 23 Oct 2023 13:17:42 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" }, { "msg_contents": "On Mon, Oct 23, 2023 at 01:17:42PM +0300, Pavel Luzanov wrote:\n> I propose to add a table alias for the 
wait_event column in the\n> WHERE clause for consistency.\n\nOkay by me that it looks like an improvement to understand where this\nattribute is from, so applied on HEAD.\n--\nMichael", "msg_date": "Tue, 24 Oct 2023 08:41:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: WIP: new system catalog pg_wait_event" } ]
[ { "msg_contents": "Hi hackers,\n\nwhile working on the new system catalog pg_wait_event (see [1]) I noticed that some wait\nevents are currently \"duplicated\":\n\npostgres=# select wait_event_name,count(*) from pg_wait_event group by wait_event_name having count(*) > 1;\n wait_event_name | count\n-----------------+-------\n SyncRep | 2\n WALWrite | 2\n(2 rows)\n\nIndeed:\n\nSynRep currently appears in \"IPC\" and \"LWLock\" (see [2])\nWALWrite currently appears in \"IO\" and \"LWLock\" (see [2])\n\nI think that can lead to confusion and it would be better to avoid duplicate wait event\nname across Wait Class (and so fix those 2 ones above), what do you think?\n\n[1]: https://www.postgresql.org/message-id/0e2ae164-dc89-03c3-cf7f-de86378053ac%40gmail.com\n[2]: https://www.postgresql.org/docs/current/monitoring-stats.html\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 4 Aug 2023 17:07:51 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "\"duplicated\" wait events" }, { "msg_contents": "At Fri, 4 Aug 2023 17:07:51 +0200, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in \n> SynRep currently appears in \"IPC\" and \"LWLock\" (see [2])\n> WALWrite currently appears in \"IO\" and \"LWLock\" (see [2])\n> \n> I think that can lead to confusion and it would be better to avoid\n> duplicate wait event\n> name across Wait Class (and so fix those 2 ones above), what do you\n> think?\n\nI think it would be handy to be able to summirize wait events by event\nnames, instead of classes. In other words, grouping wait events\nthrough all classes according to their purpose. From that perspective,\nI'm rather fan of consolidating event names (that is, the opposite of\nyour proposal).\n\nHowever, it seems the event weren't named with this consideration. 
So\nI'm on the fence about this change.\n\n\nBy the way, I couldn't figure out how to tag event names without\nmessing up the source tree. This causes meson to refuse a build..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Aug 2023 14:46:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"duplicated\" wait events" }, { "msg_contents": "Hi,\n\nOn 8/7/23 7:46 AM, Kyotaro Horiguchi wrote:\n> At Fri, 4 Aug 2023 17:07:51 +0200, \"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com> wrote in\n>> SynRep currently appears in \"IPC\" and \"LWLock\" (see [2])\n>> WALWrite currently appears in \"IO\" and \"LWLock\" (see [2])\n>>\n>> I think that can lead to confusion and it would be better to avoid\n>> duplicate wait event\n>> name across Wait Class (and so fix those 2 ones above), what do you\n>> think?\n> \n> I think it would be handy to be able to summirize wait events by event\n> names, instead of classes. In other words, grouping wait events\n> through all classes according to their purpose. 
From that perspective,\n> I'm rather fan of consolidating event names (that is, the opposite of\n> your proposal).\n> \n\nI could agree if they had the same descriptions and behaviors.\n\nFor example with WALWrite:\n\nWALWrite in \"IO\" is described as \"Waiting for a write to a WAL file\" and\nis described in \"LWLock\" as \"Waiting for WAL buffers to be written to disk\".\nHaving two distinct descriptions seems to suggest the wait events are not the same.\n\nAlso I think it makes sense to distinguish both as the LWLock one could\nbe reported as waiting in LWLockReportWaitStart() while trying to grab the\nlock in LW_SHARED mode (as done in GetLastSegSwitchData()).\n\nI think there is 2 things to address:\n\n1) fix the current duplicates\n2) put safeguard in place\n\nAs far 1), it could be easily done by updating wait_event_names.txt (renaming duplicates)\nin the master branch. As the duplication has been introduced in 14a9101091\n(so starting as of PG 13) we would need to update monitoring.sgml for <= 16 should we want to\nback patch.\n\n2) could be handled in generate-wait_event_types.pl on master.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 10:21:50 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: \"duplicated\" wait events" } ]
[ { "msg_contents": "I have pushed Release 17 of the PostgreSQL Buildfarm client.\n\nRelease 17 has two main features:\n\n * Modernize the way we do cross-version upgrade tests. Most of the\n logic for modifying instances to make them suitable for cross\n version upgrade testing has now been migrated to the Postgres core\n code in |src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm|. The new\n code simply imports this module and leverages its knowledge.\n * Support of building with |meson|. This is only supported on version\n 16 or later of Postgres, older branches will continue to use the\n older toolsets. To enable building with |meson|, there are several\n new settings, illustrated in the sample configuration file:\n o |using_meson| this must be set to a true value\n o |meson_jobs| this controls the degree of parallelism that\n |meson| will use\n o |meson_test_timeout| this is used to multiply the meson test\n timeout. The default is 3, 0 turns off timeout\n o |meson_config| This is an array of settings for passing to\n |meson setup|. Note that all options need to be explicitly given\n here - the client disables all |auto| options. This includes use\n of |zlib| and |readline|, which do not default to on, unlike\n |autoconf| setups.\n\nThere are also a number of relatively small bug fixes and tweaks (e.g. \nsome improvements in processing typedefs).\n\nThe release is available at \n<https://github.com/PGBuildFarm/client-code/releases> or \n<https://buildfarm.postgresql.org/downloads/latest-client.tgz>\n\nEnjoy!\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Aug 2023 11:23:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Release 17 of the PostgreSQL Buildfarm Client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I have pushed Release 17 of the PostgreSQL Buildfarm client.\n\nI ran a test of this using\n\nrun_branches.pl --run-all --nosend --force\n\nand noticed that it created \"animal.force-one-run\" files in each\nof the per-branch directories, and never removed them. 
This\nmeans (I assume) that the next run will also behave like --force\n... and maybe all later ones too, until I manually remove those\nfiles?\n\nI'm not sure if this behavior is new in v17, or I just never\ntried it before. I usually create force-one-run files manually.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Aug 2023 19:14:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Release 17 of the PostgreSQL Buildfarm Client" }, { "msg_contents": "I wrote:\n> I ran a test of this using\n> run_branches.pl --run-all --nosend --force\n> and noticed that it created \"animal.force-one-run\" files in each\n> of the per-branch directories, and never removed them.\n\nFurther testing shows that a pre-existing force-one-run file does\nget removed, so use-cases involving manual creation of the file\nare still OK. Maybe this \"force twice\" from --force has been\nthere all along, and nobody noticed? Even if it's a new bug,\nit's not a show-stopper.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Aug 2023 21:25:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Release 17 of the PostgreSQL Buildfarm Client" }, { "msg_contents": "On 2023-08-04 Fr 21:25, Tom Lane wrote:\n> I wrote:\n>> I ran a test of this using\n>> run_branches.pl --run-all --nosend --force\n>> and noticed that it created \"animal.force-one-run\" files in each\n>> of the per-branch directories, and never removed them.\n> Further testing shows that a pre-existing force-one-run file does\n> get removed, so use-cases involving manual creation of the file\n> are still OK. Maybe this \"force twice\" from --force has been\n> there all along, and nobody noticed? Even if it's a new bug,\n> it's not a show-stopper.\n>\n> \t\t\n\n\nThis isn't a product of the --force flag. 
It's done so that --nosend and \n--nostatus don't defeat the up-to-date checks in run_branches.pl by \nwriting a githead.log with a gitref we haven't reported on. We could \npossibly do that another way, e.g. by removing or renaming the \ngithead.log file in such cases, since the checks in run_branches.pl rely \non that name. (thinks) In fact that's probably better, because instead \nof forcing a run it would just make the code do a slow up-to-date check \n(by doing a git pull) next time around. Will fix.\n\nIf you had used --test instead of --nosend this wouldn't have happened.\n\nIn any case, the next regular run (i.e. one without --nosend or \n--nostatus) will remove the files.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-08-04 Fr 21:25, Tom Lane wrote:\n\n\nI wrote:\n\n\nI ran a test of this using\nrun_branches.pl --run-all --nosend --force\nand noticed that it created \"animal.force-one-run\" files in each\nof the per-branch directories, and never removed them.\n\n\n\nFurther testing shows that a pre-existing force-one-run file does\nget removed, so use-cases involving manual creation of the file\nare still OK. Maybe this \"force twice\" from --force has been\nthere all along, and nobody noticed? Even if it's a new bug,\nit's not a show-stopper.\n\n\t\t\n\n\n\nThis isn't a product of the --force flag. It's done so that\n --nosend and --nostatus don't defeat the up-to-date checks in\n run_branches.pl by writing a githead.log with a gitref we haven't\n reported on. We could possibly do that another way, e.g. by\n removing or renaming the githead.log file in such cases, since the\n checks in run_branches.pl rely on that name. (thinks) In fact\n that's probably better, because instead of forcing a run it would\n just make the code do a slow up-to-date check (by doing a git\n pull) next time around. 
Will fix.\n\nIf you had used --test instead of --nosend this wouldn't have\n happened.\nIn any case, the next regular run (i.e. one without --nosend or\n --nostatus) will remove the files.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 5 Aug 2023 06:40:27 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Release 17 of the PostgreSQL Buildfarm Client" }, { "msg_contents": "On 2023-08-05 Sa 06:40, Andrew Dunstan wrote:\n>\n>\n> On 2023-08-04 Fr 21:25, Tom Lane wrote:\n>> I wrote:\n>>> I ran a test of this using\n>>> run_branches.pl --run-all --nosend --force\n>>> and noticed that it created \"animal.force-one-run\" files in each\n>>> of the per-branch directories, and never removed them.\n>> Further testing shows that a pre-existing force-one-run file does\n>> get removed, so use-cases involving manual creation of the file\n>> are still OK. Maybe this \"force twice\" from --force has been\n>> there all along, and nobody noticed? Even if it's a new bug,\n>> it's not a show-stopper.\n>>\n>> \t\t\n>\n>\n> This isn't a product of the --force flag. It's done so that --nosend \n> and --nostatus don't defeat the up-to-date checks in run_branches.pl \n> by writing a githead.log with a gitref we haven't reported on. We \n> could possibly do that another way, e.g. by removing or renaming the \n> githead.log file in such cases, since the checks in run_branches.pl \n> rely on that name. (thinks) In fact that's probably better, because \n> instead of forcing a run it would just make the code do a slow \n> up-to-date check (by doing a git pull) next time around. Will fix.\n>\n> If you had used --test instead of --nosend this wouldn't have happened.\n>\n> In any case, the next regular run (i.e. 
one without --nosend or \n> --nostatus) will remove the files.\n>\n\nSee \n<https://github.com/PGBuildFarm/client-code/commit/ec4cf43613a74cb88f228efcde09931cf9fd57e7>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 5 Aug 2023 07:50:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Release 17 of the PostgreSQL Buildfarm Client" } ]
But I didn't want to figure out\nthe hosting situation for such files, so I thought this was the best\nnear-medium term path.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 5 Aug 2023 13:25:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ci: Improve macos startup using a cached macports installation" }, { "msg_contents": "Hi,\n\nOn 2023-08-05 13:25:39 -0700, Andres Freund wrote:\n> We have some issues with CI on macos and windows being too expensive (more on\n> that soon in a separate email). For macos most of the obviously wasted time is\n> spent installing packages with homebrew. Even with the package downloads being\n> cached, it takes about 1m20s to install them. We can't just cache the whole\n> homebrew installation, because it contains a lot of pre-installed packages.\n> \n> After a bunch of experimenting, I found a way to do this a lot faster: The\n> attached patch installs macports and installs our dependencies from\n> that. Because there aren't pre-existing macports packages, we can just cache\n> the whole thing. Doing so naively wouldn't yield that much of a speedup,\n> because it takes a while to unpack a tarball (or whatnot) with as many files\n> as our dependencies have - that's a lot of filesystem metadata\n> operations. Instead the script creates a HFS+ filesystem in a file and caches\n> that - that's mounted within a few seconds. To further keep the size in check,\n> that file is compressed with zstd in the cache.\n> \n> As macports has a package for IPC::Run and IO::Pty, we can use those instead\n> of the separate cache we had for the perl installation.\n> \n> After the patch, the cached case takes ~5s to \"install\" and the cache is half\n> the size than the one for homebrew.\n> \n> The comparison isn't entirely fair, because I used the occasion to not install\n> 'make' (since meson is used for building) and llvm (we didn't enable it for\n> the build anyway). 
That gain is a bit smaller without that, but still\n> significant.\n> \n> \n> An alternative implementation would be to create the \"base\" .dmg file outside\n> of CI and download it onto the CI instances. But I didn't want to figure out\n> the hosting situation for such files, so I thought this was the best\n> near-medium term path.\n\nGiven how significant an improvement this is in test time, and the limited\nblast radius, I am planning to push this soon, unless somebody opposes that\nsoon.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Aug 2023 11:47:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ci: Improve macos startup using a cached macports installation" }, { "msg_contents": "On 2023-08-19 11:47:33 -0700, Andres Freund wrote:\n> Given how significant an improvement this is in test time, and the limited\n> blast radius, I am planning to push this soon, unless somebody opposes that\n> soon.\n\nAnd done.\n\n\n", "msg_date": "Sat, 19 Aug 2023 15:22:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ci: Improve macos startup using a cached macports installation" } ]
[ { "msg_contents": "I’m trying to add a new operator for my pg application like greater_equals called “<~>\", how many files I need to \nmodify and how to do it? Can you give me an example?\n\n", "msg_date": "Sun, 6 Aug 2023 12:34:18 +0800", "msg_from": "jacktby jacktby <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "How to add a new operator for parser?" }, { "msg_contents": "On Sun, 6 Aug 2023, 12:34 jacktby jacktby, <jacktby@gmail.com> wrote:\n\n> I’m trying to add a new operator for my pg application like greater_equals\n> called “<~>\", how many files I need to\n> modify and how to do it? Can you give me an example?\n>\n\nyou can look at some contrib for some examples of custom operator (and\ncustom datatype) implementation, like citext or btree_gin/gist\n\n>\n\nOn Sun, 6 Aug 2023, 12:34 jacktby jacktby, <jacktby@gmail.com> wrote:I’m trying to add a new operator for my pg application like greater_equals called “<~>\", how many files I need to \nmodify and how to do it? Can you give me an example?you can look at some contrib for some examples of custom operator (and custom datatype) implementation, like citext or btree_gin/gist", "msg_date": "Sun, 6 Aug 2023 13:18:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to add a new operator for parser?" }, { "msg_contents": "> 2023年8月6日 13:18,Julien Rouhaud <rjuju123@gmail.com> 写道:\n> \n> On Sun, 6 Aug 2023, 12:34 jacktby jacktby, <jacktby@gmail.com <mailto:jacktby@gmail.com>> wrote:\n>> I’m trying to add a new operator for my pg application like greater_equals called “<~>\", how many files I need to \n>> modify and how to do it? Can you give me an example?\n> \n> \n> you can look at some contrib for some examples of custom operator (and custom datatype) implementation, like citext or btree_gin/gist\nI need to build a new datatype. 
It can contains different datatypes, like ‘(1,’a’,2.0)’,it’s a (interger,string,float) tuple type, and Then I need to add operator for it. How should I do?\n2023年8月6日 13:18,Julien Rouhaud <rjuju123@gmail.com> 写道:On Sun, 6 Aug 2023, 12:34 jacktby jacktby, <jacktby@gmail.com> wrote:I’m trying to add a new operator for my pg application like greater_equals called “<~>\", how many files I need to \nmodify and how to do it? Can you give me an example?you can look at some contrib for some examples of custom operator (and custom datatype) implementation, like citext or btree_gin/gist\n\nI need to build a new datatype. It can contains different datatypes, like ‘(1,’a’,2.0)’,it’s a (interger,string,float) tuple type, and Then I need to add operator for it. How should I do?", "msg_date": "Sun, 6 Aug 2023 13:37:42 +0800", "msg_from": "jacktby jacktby <jacktby@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to add a new operator for parser?" }, { "msg_contents": "On Sun, Aug 06, 2023 at 01:37:42PM +0800, jacktby jacktby wrote:\n>\n> I need to build a new datatype. It can contains different datatypes, like\n> ‘(1,’a’,2.0)’,it’s a (interger,string,float) tuple type\n\nIs there any reason why you can't simply rely on the record datatype?\n\n> and Then I need to\n> add operator for it. How should I do?\n\nIf using record datatype you would only need to add a new operator.\n\n\n", "msg_date": "Sun, 6 Aug 2023 15:25:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to add a new operator for parser?" } ]
[ { "msg_contents": "Hi all,\n\nA colleague Drew Callahan (in CC) has discovered that the logical\ndecoding doesn't handle syscache invalidation messages properly that\nare generated by other transactions. Here is example (I've attached a\npatch for isolation test),\n\n-- Setup\nCREATE TYPE comp AS (f1 int, f2 text);\nCREATE TABLE tbl3(val1 int, val2 comp);\nSELECT pg_create_logical_replication_slot('s', 'test_decoding');\n\n-- Session 1\nBEGIN;\nINSERT INTO tbl3 (val1, val2) VALUES (1, (1, '2'));\n\n -- Session 2\n ALTER TYPE comp ADD ATTRIBUTE f3 int;\n\nINSERT INTO tbl3 (val1, val2) VALUES (1, (1, '2', 3));\nCOMMIT;\n\npg_logical_slot_get_changes() returns:\n\nBEGIN\ntable public.tbl3: INSERT: val1[integer]:1 val2[comp]:'(1,2)'\ntable public.tbl3: INSERT: val1[integer]:1 val2[comp]:'(1,2)'\nCOMMIT\n\nHowever, the logical decoding should reflect the result of ALTER TYPE\nand the val2 of the second INSERT output should be '(1,2,3)'.\n\nThe root cause of this behavior is that while ALTER TYPE can be run\nconcurrently to INSERT, the logical decoding doesn't handle cache\ninvalidation properly, and it got a cache hit of stale data (of\ntypecache in this case). Unlike snapshots that are stored in the\ntransaction’s reorder buffer changes, the invalidation messages of\nother transactions are not distributed. As a result, the snapshot\nbecomes moot when we get a cache hit of stale data due to not\nprocessing the invalidation messages again. This is not an issue for\nALTER TABLE and the like due to 2 phase locking and taking an\nAccessExclusive lock. The issue with DMLs and ALTER TYPE has been\ndiscovered but there might be other similar cases.\n\nAs far as I tested, this issue has existed since v9.4, where the\nlogical decoding was introduced, so it seems to be a long-standing\nissue.\n\nThe simplest fix would be to invalidate all caches when setting a new\nhistorical snapshot (i.e. 
applying CHANGE_INTERNAL_SNAPSHOT) but we\nend up invalidating other caches unnecessarily too.\n\nA better fix would be that when decoding the commit of a catalog\nchanging transaction, we distribute invalidation messages to other\nconcurrent transactions, like we do for snapshots. But we might not\nneed to distribute all types of invalidation messages, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 6 Aug 2023 14:35:06 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "logical decoding issue with concurrent ALTER TYPE" }, { "msg_contents": "On Sun, Aug 6, 2023 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> A colleague Drew Callahan (in CC) has discovered that the logical\n> decoding doesn't handle syscache invalidation messages properly that\n> are generated by other transactions. Here is example (I've attached a\n> patch for isolation test),\n>\n> -- Setup\n> CREATE TYPE comp AS (f1 int, f2 text);\n> CREATE TABLE tbl3(val1 int, val2 comp);\n> SELECT pg_create_logical_replication_slot('s', 'test_decoding');\n>\n> -- Session 1\n> BEGIN;\n> INSERT INTO tbl3 (val1, val2) VALUES (1, (1, '2'));\n>\n> -- Session 2\n> ALTER TYPE comp ADD ATTRIBUTE f3 int;\n>\n> INSERT INTO tbl3 (val1, val2) VALUES (1, (1, '2', 3));\n> COMMIT;\n>\n> pg_logical_slot_get_changes() returns:\n>\n> BEGIN\n> table public.tbl3: INSERT: val1[integer]:1 val2[comp]:'(1,2)'\n> table public.tbl3: INSERT: val1[integer]:1 val2[comp]:'(1,2)'\n> COMMIT\n>\n> However, the logical decoding should reflect the result of ALTER TYPE\n> and the val2 of the second INSERT output should be '(1,2,3)'.\n>\n> The root cause of this behavior is that while ALTER TYPE can be run\n> concurrently to INSERT, the logical decoding doesn't handle cache\n> invalidation properly, and it got a cache hit of stale data (of\n> typecache in this case). 
Unlike snapshots that are stored in the\n> transaction’s reorder buffer changes, the invalidation messages of\n> other transactions are not distributed. As a result, the snapshot\n> becomes moot when we get a cache hit of stale data due to not\n> processing the invalidation messages again. This is not an issue for\n> ALTER TABLE and the like due to 2 phase locking and taking an\n> AccessExclusive lock. The issue with DMLs and ALTER TYPE has been\n> discovered but there might be other similar cases.\n>\n> As far as I tested, this issue has existed since v9.4, where the\n> logical decoding was introduced, so it seems to be a long-standing\n> issue.\n>\n> The simplest fix would be to invalidate all caches when setting a new\n> historical snapshot (i.e. applying CHANGE_INTERNAL_SNAPSHOT) but we\n> end up invalidating other caches unnecessarily too.\n>\n\nI think this could be quite onerous for workloads having a mix of DDL/DMLs.\n\n> A better fix would be that when decoding the commit of a catalog\n> changing transaction, we distribute invalidation messages to other\n> concurrent transactions, like we do for snapshots. But we might not\n> need to distribute all types of invalidation messages, though.\n>\n\nI would prefer the solution on these lines. It would be good if we\ndistribute only the required type of invalidations.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Aug 2023 14:51:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical decoding issue with concurrent ALTER TYPE" } ]
[ { "msg_contents": "Hi hackers,\n\nIs it is ok, that regression tests do not pass with small value of \nshared buffers (for example 1Mb)?\n\nTwo tests are failed because of sync scan - this tests cluster.sql and \nportals.sql perform seqscan without explicit order by and expect that \ndata will be returned in particular order. But because of sync scan it \ndoesn't happen. Small shared buffers are needed to satisfy seqscan \ncriteria in heapam.c: `scan->rs_nblocks > NBuffers / 4` for tenk1 table.\n\nMore general question - is it really good that in situation when there \nis actually no concurrent queries, seqscan is started not from the first \npage?\n\n\n\n\n", "msg_date": "Sun, 6 Aug 2023 22:20:57 +0300", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "Sync scan & regression tests" }, { "msg_contents": "Konstantin Knizhnik <knizhnik@garret.ru> writes:\n> Is it is ok, that regression tests do not pass with small value of \n> shared buffers (for example 1Mb)?\n\nThere are quite a few GUC settings with which you can break the\nregression tests. I'm not especially bothered by this one.\n\n> More general question - is it really good that in situation when there \n> is actually no concurrent queries, seqscan is started not from the first \n> page?\n\nYou're going to have race-condition issues if you try to make that\n(not) happen, I should think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Aug 2023 18:30:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On Mon, Aug 7, 2023 at 7:21 AM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> Two tests are failed because of sync scan - this tests cluster.sql and\n> portals.sql perform seqscan without explicit order by and expect that\n> data will be returned in particular order. But because of sync scan it\n> doesn't happen. 
Small shared buffers are needed to satisfy seqscan\n> criteria in heapam.c: `scan->rs_nblocks > NBuffers / 4` for tenk1 table.\n\nI wondered the same thing while working on the tests in commit\n8ab0ebb9a84, which explicitly care about physical order, so they *say\nso* with ORDER BY ctid. But the problem seems quite widespread, so I\ndidn't volunteer to try to do something like that everywhere, when Tom\ncommitted cbf4177f for 027_stream_regress.pl.\n\nFWIW here's another discussion of that cluster test, in which I was\nstill figuring out some surprising ways this feature can introduce\nnon-determinism even without concurrent access to the same table.\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKGLTK6ZuEkpeJ05-MEmvmgZveCh%2B_w013m7%2ByKWFSmRcDA%40mail.gmail.com\n\n\n", "msg_date": "Mon, 7 Aug 2023 11:29:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Aug 7, 2023 at 7:21 AM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n>> Two tests are failed because of sync scan - this tests cluster.sql and\n>> portals.sql perform seqscan without explicit order by and expect that\n>> data will be returned in particular order. But because of sync scan it\n>> doesn't happen. Small shared buffers are needed to satisfy seqscan\n>> criteria in heapam.c: `scan->rs_nblocks > NBuffers / 4` for tenk1 table.\n\n> I wondered the same thing while working on the tests in commit\n> 8ab0ebb9a84, which explicitly care about physical order, so they *say\n> so* with ORDER BY ctid. 
But the problem seems quite widespread, so I\n> didn't volunteer to try to do something like that everywhere, when Tom\n> committed cbf4177f for 027_stream_regress.pl.\n\nOur customary theory about that is explained in regress.sgml:\n\n You might wonder why we don't order all the regression test queries explicitly\n to get rid of this issue once and for all. The reason is that that would\n make the regression tests less useful, not more, since they'd tend\n to exercise query plan types that produce ordered results to the\n exclusion of those that don't.\n\nHaving said that ... I just noticed that chipmunk, which I'd been\nignoring because it had been having configuration-related failures\never since it came back to life about three months ago, has gotten\npast those problems and is now failing with what looks suspiciously\nlike syncscan problems:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-08-06%2008%3A20%3A21\n\nThis is possibly explained by the fact that it uses (per its\nextra_config)\n 'shared_buffers = 10MB',\nalthough it's done that for a long time and portals.out hasn't changed\nsince before chipmunk's latest success. Perhaps some change in an\nearlier test script affected this?\n\nI think it'd be worth running to ground exactly what is causing these\nfailures and when they started. But assuming that they are triggered\nby syncscan, my first thought about dealing with it is to disable\nsyncscan within the affected tests. However ... do we exercise the\nsyncscan logic at all within the regression tests, normally? Is there\na coverage issue to be dealt with?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Aug 2023 20:55:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "(noticed this thread just now)\n\nOn 07/08/2023 03:55, Tom Lane wrote:\n> Having said that ... 
I just noticed that chipmunk, which I'd been\n> ignoring because it had been having configuration-related failures\n> ever since it came back to life about three months ago, has gotten\n> past those problems\n\nYes, I finally got around to fix the configuration...\n\n> and is now failing with what looks suspiciously like syncscan\n> problems:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-08-06%2008%3A20%3A21\n> \n> This is possibly explained by the fact that it uses (per its\n> extra_config)\n> 'shared_buffers = 10MB',\n> although it's done that for a long time and portals.out hasn't changed\n> since before chipmunk's latest success. Perhaps some change in an\n> earlier test script affected this?\n\nI changed the config yesterday to 'shared_buffers = 20MB', before seeing \nthis thread. If we expect the regression tests to work with \n'shared_buffers=10MB' - and I think that would be nice - I'll change it \nback. But let's see what happens with 'shared_buffers=20MB'. 
It will \ntake a few days for the tests to complete.\n\n> I think it'd be worth running to ground exactly what is causing these\n> failures and when they started.\n\nI bisected it to this commit:\n\ncommit 7ae0ab0ad9704b10400a611a9af56c63304c27a5\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Fri Feb 3 16:20:43 2023 +1300\n\n Reduce code duplication between heapgettup and heapgettup_pagemode\n\n The code to get the next block number was exactly the same between \nthese\n two functions, so let's just put it into a helper function and call \nthat\n from both locations.\n\n Author: Melanie Plageman\n Reviewed-by: Andres Freund, David Rowley\n Discussion: \nhttps://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com\n\n From that description, that was supposed to be just code refactoring, \nwith no change in behavior.\n\nLooking the new heapgettup_advance_block() function and the code that it \nreplaced, it's now skipping this ss_report_location() on the last call, \nwhen it has reached the end of the scan:\n\n> \n> \t/*\n> \t * Report our new scan position for synchronization purposes. We\n> \t * don't do that when moving backwards, however. That would just\n> \t * mess up any other forward-moving scanners.\n> \t *\n> \t * Note: we do this before checking for end of scan so that the\n> \t * final state of the position hint is back at the start of the\n> \t * rel. That's not strictly necessary, but otherwise when you run\n> \t * the same query multiple times the starting position would shift\n> \t * a little bit backwards on every invocation, which is confusing.\n> \t * We don't guarantee any specific ordering in general, though.\n> \t */\n> \tif (scan->rs_base.rs_flags & SO_ALLOW_SYNC)\n> \t\tss_report_location(scan->rs_base.rs_rd, block);\n\nThe comment explicitly says that we do that before checking for \nend-of-scan, but that commit moved it to after. 
I propose the attached \nto move it back to before the end-of-scan checks.\n\nMelanie, David, any thoughts?\n\n> But assuming that they are triggered by syncscan, my first thought\n> about dealing with it is to disable syncscan within the affected\n> tests. However ... do we exercise the syncscan logic at all within\n> the regression tests, normally? Is there a coverage issue to be\n> dealt with?\nGood question..\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 29 Aug 2023 13:35:44 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On Tue, 29 Aug 2023 at 22:35, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Looking the new heapgettup_advance_block() function and the code that it\n> replaced, it's now skipping this ss_report_location() on the last call,\n> when it has reached the end of the scan:\n>\n> >\n> > /*\n> > * Report our new scan position for synchronization purposes. We\n> > * don't do that when moving backwards, however. That would just\n> > * mess up any other forward-moving scanners.\n> > *\n> > * Note: we do this before checking for end of scan so that the\n> > * final state of the position hint is back at the start of the\n> > * rel. That's not strictly necessary, but otherwise when you run\n> > * the same query multiple times the starting position would shift\n> > * a little bit backwards on every invocation, which is confusing.\n> > * We don't guarantee any specific ordering in general, though.\n> > */\n> > if (scan->rs_base.rs_flags & SO_ALLOW_SYNC)\n> > ss_report_location(scan->rs_base.rs_rd, block);\n>\n> The comment explicitly says that we do that before checking for\n> end-of-scan, but that commit moved it to after. 
I propose the attached\n> to move it back to before the end-of-scan checks.\n>\n> Melanie, David, any thoughts?\n\nI just looked at v15's code and I agree that the ss_report_location()\nwould be called even when the scan is finished. It wasn't intentional\nthat that was changed in v16, so I'm happy for your patch to be\napplied and backpatched to 16. Thanks for digging into that.\n\nDavid\n\n\n", "msg_date": "Thu, 31 Aug 2023 09:15:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On Wed, Aug 30, 2023 at 5:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 29 Aug 2023 at 22:35, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Looking the new heapgettup_advance_block() function and the code that it\n> > replaced, it's now skipping this ss_report_location() on the last call,\n> > when it has reached the end of the scan:\n> >\n> > >\n> > > /*\n> > > * Report our new scan position for synchronization purposes. We\n> > > * don't do that when moving backwards, however. That would just\n> > > * mess up any other forward-moving scanners.\n> > > *\n> > > * Note: we do this before checking for end of scan so that the\n> > > * final state of the position hint is back at the start of the\n> > > * rel. That's not strictly necessary, but otherwise when you run\n> > > * the same query multiple times the starting position would shift\n> > > * a little bit backwards on every invocation, which is confusing.\n> > > * We don't guarantee any specific ordering in general, though.\n> > > */\n> > > if (scan->rs_base.rs_flags & SO_ALLOW_SYNC)\n> > > ss_report_location(scan->rs_base.rs_rd, block);\n> >\n> > The comment explicitly says that we do that before checking for\n> > end-of-scan, but that commit moved it to after. 
I propose the attached\n> > to move it back to before the end-of-scan checks.\n> >\n> > Melanie, David, any thoughts?\n>\n> I just looked at v15's code and I agree that the ss_report_location()\n> would be called even when the scan is finished. It wasn't intentional\n> that that was changed in v16, so I'm happy for your patch to be\n> applied and backpatched to 16. Thanks for digging into that.\n\nYes, thanks for catching my mistake.\nLGTM.\n\n\n", "msg_date": "Wed, 30 Aug 2023 19:37:18 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On 29/08/2023 13:35, Heikki Linnakangas wrote:\n> On 07/08/2023 03:55, Tom Lane wrote:\n>> This is possibly explained by the fact that it uses (per its\n>> extra_config)\n>> 'shared_buffers = 10MB',\n>> although it's done that for a long time and portals.out hasn't changed\n>> since before chipmunk's latest success. Perhaps some change in an\n>> earlier test script affected this?\n> \n> I changed the config yesterday to 'shared_buffers = 20MB', before seeing\n> this thread. If we expect the regression tests to work with\n> 'shared_buffers=10MB' - and I think that would be nice - I'll change it\n> back. But let's see what happens with 'shared_buffers=20MB'. It will\n> take a few days for the tests to complete.\n\nWith shared_buffers='20MB', the tests passed. 
I'm going to change it \nback to 10MB now, so that we continue to cover that case.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 31 Aug 2023 12:45:34 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On 31/08/2023 02:37, Melanie Plageman wrote:\n> On Wed, Aug 30, 2023 at 5:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> I just looked at v15's code and I agree that the ss_report_location()\n>> would be called even when the scan is finished. It wasn't intentional\n>> that that was changed in v16, so I'm happy for your patch to be\n>> applied and backpatched to 16. Thanks for digging into that.\n> \n> Yes, thanks for catching my mistake.\n> LGTM.\n\nPushed, thanks for the reviews!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 31 Aug 2023 13:12:42 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> With shared_buffers='20MB', the tests passed. 
I'm going to change it \n> back to 10MB now, so that we continue to cover that case.\n\nSo chipmunk is getting through the core tests now, but instead it\nis failing in contrib/pg_visibility [1]:\n\ndiff -U3 /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\n--- /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out\t2022-10-08 19:00:15.905074105 +0300\n+++ /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\t2023-09-02 00:25:51.814148116 +0300\n@@ -218,7 +218,8 @@\n 0 | t | t\n 1 | t | t\n 2 | t | t\n-(3 rows)\n+ 3 | f | f\n+(4 rows)\n \n select * from pg_check_frozen('copyfreeze');\n t_ctid \n\nI find this easily reproducible by setting shared_buffers=10MB.\nBut I'm confused about why, because the affected test case\ndates to Tomas' commit 7db0cd214 of 2021-01-17, and chipmunk\npassed many times after that. Might be worth bisecting in\nthe interval where chipmunk wasn't reporting?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-09-01%2015%3A15%3A56\n\n\n", "msg_date": "Mon, 04 Sep 2023 23:16:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On 05/09/2023 06:16, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> With shared_buffers='20MB', the tests passed. 
I'm going to change it\n>> back to 10MB now, so that we continue to cover that case.\n> \n> So chipmunk is getting through the core tests now, but instead it\n> is failing in contrib/pg_visibility [1]:\n> \n> diff -U3 /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\n> --- /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out\t2022-10-08 19:00:15.905074105 +0300\n> +++ /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\t2023-09-02 00:25:51.814148116 +0300\n> @@ -218,7 +218,8 @@\n> 0 | t | t\n> 1 | t | t\n> 2 | t | t\n> -(3 rows)\n> + 3 | f | f\n> +(4 rows)\n> \n> select * from pg_check_frozen('copyfreeze');\n> t_ctid\n> \n> I find this easily reproducible by setting shared_buffers=10MB.\n> But I'm confused about why, because the affected test case\n> dates to Tomas' commit 7db0cd214 of 2021-01-17, and chipmunk\n> passed many times after that. Might be worth bisecting in\n> the interval where chipmunk wasn't reporting?\n\nI bisected it to this:\n\ncommit 82a4edabd272f70d044faec8cf7fd1eab92d9991 (HEAD)\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Mon Aug 14 09:54:03 2023 -0700\n\n hio: Take number of prior relation extensions into account\n\n The new relation extension logic, introduced in 00d1e02be24, could \nlead to\n slowdowns in some scenarios. E.g., when loading narrow rows into a \ntable using\n COPY, the caller of RelationGetBufferForTuple() will only request a \nsmall\n number of pages. Without concurrency, we just extended using \npwritev() in that\n case. However, if there is *some* concurrency, we switched between \nextending\n by a small number of pages and a larger number of pages, depending \non the\n number of waiters for the relation extension logic. 
However, some\n filesystems, XFS in particular, do not perform well when switching \nbetween\n extending files using fallocate() and pwritev().\n\n To avoid that issue, remember the number of prior relation \nextensions in\n BulkInsertState and extend more aggressively if there were prior \nrelation\n extensions. That not just avoids the aforementioned slowdown, but \nalso leads\n to noticeable performance gains in other situations, primarily due to\n extending more aggressively when there is no concurrency. I should \nhave done\n it this way from the get go.\n\n Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>\n Author: Andres Freund <andres@anarazel.de>\n Discussion: \nhttps://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940=Kp6mszNGK3bq9yRN6g@mail.gmail.com\n Backpatch: 16-, where the new relation extension code was added\n\n\nBefore this patch, the test table was 3 pages long, now it is 4 pages \nwith a small shared_buffers setting.\n\nIn this test, the relation needs to be at least 3 pages long to hold all \nthe COPYed rows. With a larger shared_buffers, the table is extended to \nthree pages in a single call to heap_multi_insert(). With \nshared_buffers='10 MB', the table is extended twice, because \nLimitAdditionalPins() restricts how many pages are extended in one go to \ntwo pages. With the logic that that commit added, we first extend the \ntable with 2 pages, then with 2 pages again.\n\nI think the behavior is fine. The reasons given in the commit message \nmake sense. But it would be nice to silence the test failure. Some \nalternatives:\n\na) Add an alternative expected output file\n\nb) Change the pg_visibilitymap query so that it passes even if the table \nhas four pages. \"select * from pg_visibility_map('copyfreeze') where \nblkno <= 3\";\n\nc) Change the extension logic so that we don't extend so much when the \ntable is small. 
The efficiency of bulk extension doesn't matter when the \ntable is tiny, so arguably we should rather try to minimize the table \nsize. If you have millions of tiny tables, allocating one extra block on \neach adds up.\n\nd) Copy fewer rows to the table in the test. If we copy only 6 rows, for \nexample, the table will have only two pages, regardless of shared_buffers.\n\nI'm leaning towards d). The whole test is a little fragile, it will also \nfail with a non-default block size, for example. But c) seems like a \nsimple fix and wouldn't look too out of place in the test.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 18 Sep 2023 13:49:24 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 05/09/2023 06:16, Tom Lane wrote:\n>> So chipmunk is getting through the core tests now, but instead it\n>> is failing in contrib/pg_visibility [1]:\n\n> I bisected it to this:\n> commit 82a4edabd272f70d044faec8cf7fd1eab92d9991 (HEAD)\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Mon Aug 14 09:54:03 2023 -0700\n> hio: Take number of prior relation extensions into account\n\nYeah, I came to the same conclusion after discovering that I could\nreproduce it locally with small shared_buffers.\n\n> I think the behavior is fine. The reasons given in the commit message \n> make sense. But it would be nice to silence the test failure. Some \n> alternatives:\n> ...\n> c) Change the extension logic so that we don't extend so much when the \n> table is small. The efficiency of bulk extension doesn't matter when the \n> table is tiny, so arguably we should rather try to minimize the table \n> size. If you have millions of tiny tables, allocating one extra block on \n> each adds up.\n\nI think your alternative (c) is plenty attractive. 
IIUC, the current\nchange has at one stroke doubled the amount of disk space eaten by\na table that formerly needed two pages. I don't think we should be\nadding more than one page at a time until the table size reaches\nperhaps 10 pages.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Sep 2023 09:45:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Hi,\n\nOn 2023-09-18 13:49:24 +0300, Heikki Linnakangas wrote:\n> On 05/09/2023 06:16, Tom Lane wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > With shared_buffers='20MB', the tests passed. I'm going to change it\n> > > back to 10MB now, so that we continue to cover that case.\n> > \n> > So chipmunk is getting through the core tests now, but instead it\n> > is failing in contrib/pg_visibility [1]:\n> > \n> > diff -U3 /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\n> > --- /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/expected/pg_visibility.out\t2022-10-08 19:00:15.905074105 +0300\n> > +++ /home/pgbfarm/buildroot/HEAD/pgsql.build/contrib/pg_visibility/results/pg_visibility.out\t2023-09-02 00:25:51.814148116 +0300\n> > @@ -218,7 +218,8 @@\n> > 0 | t | t\n> > 1 | t | t\n> > 2 | t | t\n> > -(3 rows)\n> > + 3 | f | f\n> > +(4 rows)\n> > select * from pg_check_frozen('copyfreeze');\n> > t_ctid\n> > \n> > I find this easily reproducible by setting shared_buffers=10MB.\n> > But I'm confused about why, because the affected test case\n> > dates to Tomas' commit 7db0cd214 of 2021-01-17, and chipmunk\n> > passed many times after that. 
Might be worth bisecting in\n> > the interval where chipmunk wasn't reporting?\n> \n> I bisected it to this:\n> \n> commit 82a4edabd272f70d044faec8cf7fd1eab92d9991 (HEAD)\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Mon Aug 14 09:54:03 2023 -0700\n> \n> hio: Take number of prior relation extensions into account\n> \n> The new relation extension logic, introduced in 00d1e02be24, could lead\n> to\n> slowdowns in some scenarios. E.g., when loading narrow rows into a table\n> using\n> COPY, the caller of RelationGetBufferForTuple() will only request a\n> small\n> number of pages. Without concurrency, we just extended using pwritev()\n> in that\n> case. However, if there is *some* concurrency, we switched between\n> extending\n> by a small number of pages and a larger number of pages, depending on\n> the\n> number of waiters for the relation extension logic. However, some\n> filesystems, XFS in particular, do not perform well when switching\n> between\n> extending files using fallocate() and pwritev().\n> \n> To avoid that issue, remember the number of prior relation extensions in\n> BulkInsertState and extend more aggressively if there were prior\n> relation\n> extensions. That not just avoids the aforementioned slowdown, but also\n> leads\n> to noticeable performance gains in other situations, primarily due to\n> extending more aggressively when there is no concurrency. 
I should have\n> done\n> it this way from the get go.\n> \n> Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>\n> Author: Andres Freund <andres@anarazel.de>\n> Discussion: https://postgr.es/m/CAD21AoDvDmUQeJtZrau1ovnT_smN940=Kp6mszNGK3bq9yRN6g@mail.gmail.com\n> Backpatch: 16-, where the new relation extension code was added\n\nThis issue is also discussed at:\n\nhttps://www.postgresql.org/message-id/20230916000011.2ugpkkkp7bpp4cfh%40awork3.anarazel.de\n\n\n> Before this patch, the test table was 3 pages long, now it is 4 pages with a\n> small shared_buffers setting.\n> \n> In this test, the relation needs to be at least 3 pages long to hold all the\n> COPYed rows. With a larger shared_buffers, the table is extended to three\n> pages in a single call to heap_multi_insert(). With shared_buffers='10 MB',\n> the table is extended twice, because LimitAdditionalPins() restricts how\n> many pages are extended in one go to two pages. With the logic that that\n> commit added, we first extend the table with 2 pages, then with 2 pages\n> again.\n> \n> I think the behavior is fine. The reasons given in the commit message make\n> sense. But it would be nice to silence the test failure. Some alternatives:\n> \n> a) Add an alternative expected output file\n> \n> b) Change the pg_visibilitymap query so that it passes even if the table has\n> four pages. \"select * from pg_visibility_map('copyfreeze') where blkno <=\n> 3\";\n\nI think the test already encodes the tuple and page size too much, this'd go\nfurther down that road.\n\nIt's too bad we don't have an easy way for testing if a page is empty - if we\ndid, it'd be simple to write this in a robust way. 
Instead of the query I came\nup with in the other thread:\nselect *\nfrom pg_visibility_map('copyfreeze')\nwhere\n (not all_visible or not all_frozen)\n -- deal with trailing empty pages due to potentially bulk-extending too aggressively\n and exists(SELECT * FROM copyfreeze WHERE ctid >= ('('||blkno||', 0)')::tid)\n;\n\n\n> c) Change the extension logic so that we don't extend so much when the table\n> is small. The efficiency of bulk extension doesn't matter when the table is\n> tiny, so arguably we should rather try to minimize the table size. If you\n> have millions of tiny tables, allocating one extra block on each adds up.\n\nWe could also adjust LimitAdditionalPins() to be a bit more aggressive, by\nactually counting instead of using REFCOUNT_ARRAY_ENTRIES (only when it might\nmatter, for efficiency reasons).\n\n\n\n> d) Copy fewer rows to the table in the test. If we copy only 6 rows, for\n> example, the table will have only two pages, regardless of shared_buffers.\n> \n> I'm leaning towards d). The whole test is a little fragile, it will also\n> fail with a non-default block size, for example. But c) seems like a simple\n> fix and wouldn't look too out of place in the test.\n\nHm, what do you mean with the last sentence? Oh, is the test you're\nreferencing the relation-extension logic?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 18 Sep 2023 15:57:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "On 19/09/2023 01:57, Andres Freund wrote:\n> On 2023-09-18 13:49:24 +0300, Heikki Linnakangas wrote:\n>> d) Copy fewer rows to the table in the test. If we copy only 6 rows, for\n>> example, the table will have only two pages, regardless of shared_buffers.\n>>\n>> I'm leaning towards d). The whole test is a little fragile, it will also\n>> fail with a non-default block size, for example. 
But c) seems like a simple\n>> fix and wouldn't look too out of place in the test.\n> \n> Hm, what do you mean with the last sentence? Oh, is the test you're\n> referencing the relation-extension logic?\n\nSorry, I said \"c) seems like a simple fix ...\", but I meant \"d) seems \nlike a simple fix ...\"\n\nI meant the attached.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 19 Sep 2023 12:06:57 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 19/09/2023 01:57, Andres Freund wrote:\n>> On 2023-09-18 13:49:24 +0300, Heikki Linnakangas wrote:\n>>> d) Copy fewer rows to the table in the test. If we copy only 6 rows, for\n>>> example, the table will have only two pages, regardless of shared_buffers.\n>>> \n>>> I'm leaning towards d). The whole test is a little fragile, it will also\n>>> fail with a non-default block size, for example. But c) seems like a simple\n>>> fix and wouldn't look too out of place in the test.\n\n>> Hm, what do you mean with the last sentence? Oh, is the test you're\n>> referencing the relation-extension logic?\n\n> Sorry, I said \"c) seems like a simple fix ...\", but I meant \"d) seems \n> like a simple fix ...\"\n> I meant the attached.\n\nThis thread stalled out months ago, but chipmunk is still failing in\nHEAD and v16. Can we please have a fix? 
I'm good with Heikki's\nadjustment to the pg_visibility test case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 24 Mar 2024 11:28:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" }, { "msg_contents": "Hi,\n\nOn 2024-03-24 11:28:12 -0400, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > On 19/09/2023 01:57, Andres Freund wrote:\n> >> On 2023-09-18 13:49:24 +0300, Heikki Linnakangas wrote:\n> >>> d) Copy fewer rows to the table in the test. If we copy only 6 rows, for\n> >>> example, the table will have only two pages, regardless of shared_buffers.\n> >>>\n> >>> I'm leaning towards d). The whole test is a little fragile, it will also\n> >>> fail with a non-default block size, for example. But c) seems like a simple\n> >>> fix and wouldn't look too out of place in the test.\n>\n> >> Hm, what do you mean with the last sentence? Oh, is the test you're\n> >> referencing the relation-extension logic?\n>\n> > Sorry, I said \"c) seems like a simple fix ...\", but I meant \"d) seems\n> > like a simple fix ...\"\n> > I meant the attached.\n>\n> This thread stalled out months ago, but chipmunk is still failing in\n> HEAD and v16. Can we please have a fix? I'm good with Heikki's\n> adjustment to the pg_visibility test case.\n\nI pushed Heikki's adjustment. Thanks for the \"fix\" and the reminder.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 25 Mar 2024 20:28:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Sync scan & regression tests" } ]
[ { "msg_contents": "Hi hackers,\n\nfa88928470 introduced src/backend/utils/activity/wait_event_names.txt that has\nbeen auto-generated. The auto-generation had parsing errors leading to bad\nentries in wait_event_names.txt (when the same \"WAIT_EVENT\" name can be found as\na substring of another one, then descriptions were merged in one of them).\n\nFixing those bad entries in the attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 7 Aug 2023 10:34:43 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Fix badly generated entries in wait_event_names.txt" }, { "msg_contents": "On Mon, Aug 07, 2023 at 10:34:43AM +0200, Drouvot, Bertrand wrote:\n> fa88928470 introduced src/backend/utils/activity/wait_event_names.txt that has\n> been auto-generated. The auto-generation had parsing errors leading to bad\n> entries in wait_event_names.txt (when the same \"WAIT_EVENT\" name can be found as\n> a substring of another one, then descriptions were merged in one of them).\n\nThanks for the patch, applied. I have double-checked the contents of\nthe tables and the five entries you have pointed out are the only ones\nwith duplicated descriptions.\n--\nMichael", "msg_date": "Tue, 8 Aug 2023 08:25:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix badly generated entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn 8/8/23 1:25 AM, Michael Paquier wrote:\n> On Mon, Aug 07, 2023 at 10:34:43AM +0200, Drouvot, Bertrand wrote:\n>> fa88928470 introduced src/backend/utils/activity/wait_event_names.txt that has\n>> been auto-generated. 
The auto-generation had parsing errors leading to bad\n>> entries in wait_event_names.txt (when the same \"WAIT_EVENT\" name can be found as\n>> a substring of another one, then descriptions were merged in one of them).\n> \n> Thanks for the patch, applied. \n\nThanks!\n\n> I have double-checked the contents of\n> the tables and the five entries you have pointed out are the only ones\n> with duplicated descriptions.\n\nYeah, did the same on my side before submitting the patch, and all looks\ngood now.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Aug 2023 07:31:41 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix badly generated entries in wait_event_names.txt" } ]
[ { "msg_contents": "0001: There is no point in searching -lrt and -lposix4 for fdatasync()\nif it's supposed to help Solaris only. They moved all the realtime\nstuff into the main C library at least as far back as Solaris\n10/OpenSolaris.\n\n0002: There is no point in probing -lposix4 for anything. That was\nsuperseded by -lrt even longer ago.\n\nWe could go further: I suspect the only remaining -lrt search we\nstill need in practice is for shm_open() on glibc < 2.34, but here I\njust wanted to clean out some old Sun stuff that was called out by\nname.", "msg_date": "Mon, 7 Aug 2023 21:18:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Minor configure/meson cleanup" }, { "msg_contents": "On 07.08.23 11:18, Thomas Munro wrote:\n> 0001: There is no point in searching -lrt and -lposix4 for fdatasync()\n> if it's supposed to help Solaris only. They moved all the realtime\n> stuff into the main C library at least as far back as Solaris\n> 10/OpenSolaris.\n> \n> 0002: There is no point in probing -lposix4 for anything. That was\n> superseded by -lrt even longer ago.\n> \n> We could go further: I suspect the only remaining -lrt search we\n> still need in practice is for shm_open() on glibc < 2.34, but here I\n> just wanted to clean out some old Sun stuff that was called out by\n> name.\n\nLooks sensible to me.\n\n\n", "msg_date": "Mon, 7 Aug 2023 12:01:25 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Minor configure/meson cleanup" }, { "msg_contents": "I agree that the change look good.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 07 Aug 2023 07:27:05 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: Minor configure/meson cleanup" }, { "msg_contents": "On Tue, Aug 8, 2023 at 12:27 AM Tristan Partin <tristan@neon.tech> wrote:\n> I agree that the change look good.\n\nThanks both. 
Pushed.\n\n\n", "msg_date": "Thu, 17 Aug 2023 16:23:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Minor configure/meson cleanup" } ]
[ { "msg_contents": "Hi.\n\npostgres_fdw currently doesn't handle RowCompareExpr, which doesn't \nallow keyset pagination queries to be efficiently executed over sharded \ntable.\nAttached patch adds handling of RowCompareExpr in deparse.c, so that we \ncould push down conditions like\nWHERE (created, id) > ('2023-01-01 00:00:00'::timestamp, 12345) to the \nforeign server.\n\nI'm not sure about conditions when it's possible for RowCompareExpr to \nhave opnos with different names or namespaces, but comment in \nruleutils.c suggests that this is possible, so I've added check for this \nin foreign_expr_walker().\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Mon, 07 Aug 2023 17:02:39 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "postgres_fdw could support row comparison pushdown" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is the release announcement draft for the 2023-08-10 update \r\nrelease, which also includes the release of PostgreSQL 16 Beta 3.\r\n\r\nPlease provide your feedback no later than August 10, 2023 0:00 AoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Mon, 7 Aug 2023 21:15:32 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2023-08-10 release announcement draft" }, { "msg_contents": "On Tue, 8 Aug 2023 at 13:15, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is the release announcement draft for the 2023-08-10 update\n> release, which also includes the release of PostgreSQL 16 Beta 3.\n\nThanks for drafting this.\n\n> * Fix a performance regression when running concurrent\n> [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) statements on a\n> single table.\n\nI think this is still outstanding. A bit of work has been done for the\nint parsing regression but it seems there's still a performance\nregression when running multiple COPYs on the same table, per [1].\n\n> or an previous major version of PostgreSQL, you will need to use a strategy\n\n\"a previous\".\n\nDavid\n\n[1] https://postgr.es/m/CAD21AoC0GvCEZbDreoAcj=i0LjNCQePQbh_cxCuBKezYgYwmTA@mail.gmail.com\n\n\n", "msg_date": "Tue, 8 Aug 2023 13:45:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On 8/7/23 9:45 PM, David Rowley wrote:\r\n\r\n>> * Fix a performance regression when running concurrent\r\n>> [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) statements on a\r\n>> single table.\r\n> \r\n> I think this is still outstanding. 
A bit of work has been done for the\r\n> int parsing regression but it seems there's still a performance\r\n> regression when running multiple COPYs on the same table, per [1].\r\n\r\nHm, the open item was closed[1] -- was that premature, or is this a new \r\nissue (have not yet read the thread you referenced)?\r\n\r\n>> or an previous major version of PostgreSQL, you will need to use a strategy\r\n> \r\n> \"a previous\".\r\n\r\nThanks for the catch -- fixed locally.\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_16_Open_Items#resolved_before_16beta3", "msg_date": "Mon, 7 Aug 2023 21:49:26 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On Tue, 8 Aug 2023 at 13:49, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 8/7/23 9:45 PM, David Rowley wrote:\n>\n> >> * Fix a performance regression when running concurrent\n> >> [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) statements on a\n> >> single table.\n> >\n> > I think this is still outstanding. A bit of work has been done for the\n> > int parsing regression but it seems there's still a performance\n> > regression when running multiple COPYs on the same table, per [1].\n>\n> Hm, the open item was closed[1] -- was that premature, or is this a new\n> issue (have not yet read the thread you referenced)?\n\nI closed it thinking that enough had been done to resolve the\nperformance regression. In the linked thread, Sawadasan shows that\nthat's not the case. So, yes, premature. I've reverted the change to\nthe open items list now.\n\nDavid\n\n\n", "msg_date": "Tue, 8 Aug 2023 13:53:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On 8/7/23 9:53 PM, David Rowley wrote:\r\n> On Tue, 8 Aug 2023 at 13:49, Jonathan S. 
Katz <jkatz@postgresql.org> wrote:\r\n>>\r\n>> On 8/7/23 9:45 PM, David Rowley wrote:\r\n>>\r\n>>>> * Fix a performance regression when running concurrent\r\n>>>> [`COPY`](https://www.postgresql.org/docs/16/sql-copy.html) statements on a\r\n>>>> single table.\r\n>>>\r\n>>> I think this is still outstanding. A bit of work has been done for the\r\n>>> int parsing regression but it seems there's still a performance\r\n>>> regression when running multiple COPYs on the same table, per [1].\r\n>>\r\n>> Hm, the open item was closed[1] -- was that premature, or is this a new\r\n>> issue (have not yet read the thread you referenced)?\r\n> \r\n> I closed it thinking that enough had been done to resolve the\r\n> performance regression. In the linked thread, Sawadasan shows that\r\n> that's not the case. So, yes, premature. I've reverted the change to\r\n> the open items list now.\r\n\r\nGot it. I reverted it as well from the release announcement. Reattaching \r\nwith the clean copy.\r\n\r\n(Aside: I'm super excited for this PG16 improvement + fixed regression, \r\nas lately I've had to do some bulk imports on a single table that could \r\nreally benefit from this :)\r\n\r\nJonathan", "msg_date": "Mon, 7 Aug 2023 22:03:44 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "Op 8/8/23 om 03:15 schreef Jonathan S. Katz:\n> \n> Please provide your feedback no later than August 10, 2023 0:00 AoE[1].\n\n'You us' should be\n'You use'\n (2x)\n\n\nErik\n\n\n", "msg_date": "Tue, 8 Aug 2023 07:30:57 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On 8/8/23 1:30 AM, Erik Rijkers wrote:\r\n> Op 8/8/23 om 03:15 schreef Jonathan S. 
Katz:\r\n>>\r\n>> Please provide your feedback no later than August 10, 2023 0:00 AoE[1].\r\n> \r\n> 'You us'  should be\r\n> 'You use'\r\n>    (2x)\r\n\r\nIt should actually be just \"Use\" -- but I've fixed both instances. Thanks!\r\n\r\nJonathan", "msg_date": "Tue, 8 Aug 2023 09:39:42 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On Mon, Aug 7, 2023 at 9:15 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> Hi,\n>\n> Attached is the release announcement draft for the 2023-08-10 update\n> release, which also includes the release of PostgreSQL 16 Beta 3.\n>\n> Please provide your feedback no later than August 10, 2023 0:00 AoE[1].\n>\n> Thanks,\n>\n> Jonathan\n>\n> [1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\n\"Users who have skipped one or more update releases may need to run\nadditional, post-update steps; \"\n\nThe comma should be removed.\n\n\"please see the release notes for earlier versions for details.\"\n\nUse of 'for' twice is grammatically incorrect; I am partial to \"please\nsee the release notes from earlier versions for details.\" but could\nalso see \"please see the release notes for earlier versions to get\ndetails.\"\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Tue, 8 Aug 2023 23:13:54 -0400", "msg_from": "Robert Treat <rob@xzilla.net>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On Mon, Aug 07, 2023 at 10:03:44PM -0400, Jonathan S. 
Katz wrote:\n> Fixes in PostgreSQL 16 Beta 3\n> -----------------------------\n> \n> The following includes fixes included in PostgreSQL 16 Beta 3:\n\nWith both \"includes\" and \"included\" there, this line reads awkwardly to me.\nI'd just delete the line, since the heading has the same information.\n\n> * Add the `\\drg` command to `psql` to display information about role grants.\n> * Add timeline ID to filenames generated with `pg_waldump --save-fullpage`.\n> * Fix crash after a deadlock occurs in a parallel `VACUUM` worker.\n> \n> Please see the [release notes](https://www.postgresql.org/docs/16/release-16.html)\n> for a complete list of new and changed features:\n> \n> [https://www.postgresql.org/docs/16/release-16.html](https://www.postgresql.org/docs/16/release-16.html)\n\n\n", "msg_date": "Tue, 8 Aug 2023 22:04:36 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On 8/9/23 1:04 AM, Noah Misch wrote:\r\n> On Mon, Aug 07, 2023 at 10:03:44PM -0400, Jonathan S. Katz wrote:\r\n>> Fixes in PostgreSQL 16 Beta 3\r\n>> -----------------------------\r\n>>\r\n>> The following includes fixes included in PostgreSQL 16 Beta 3:\r\n> \r\n> With both \"includes\" and \"included\" there, this line reads awkwardly to me.\r\n> I'd just delete the line, since the heading has the same information.\r\n\r\nI took your suggestion. Thanks!\r\n\r\nJonathan", "msg_date": "Wed, 9 Aug 2023 22:09:10 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "On 8/8/23 11:13 PM, Robert Treat wrote:\r\n\r\n> \"Users who have skipped one or more update releases may need to run\r\n> additional, post-update steps; \"\r\n> \r\n> The comma should be removed.\r\n> \r\n> \"please see the release notes for earlier versions for details.\"\r\n> \r\n> Use of 'for' twice is grammatically incorrect; I am partial to \"please\r\n> see the release notes from earlier versions for details.\" but could\r\n> also see \"please see the release notes for earlier versions to get\r\n> details.\"\r\n\r\nInterestingly, I think this language has been unmodified for awhile. \r\nUpon reading it, I agree, and took your suggestions.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 9 Aug 2023 22:10:44 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2023-08-10 release announcement draft" }, { "msg_contents": "Hi,\n\nSorry, join late after the release note is out.\n\n> * An out-of-memory error from JIT will now cause a PostgreSQL `FATAL`\nerror instead of a C++ exception.\n\nI assume this statement is related to this fix[1].\nI think an OOM error from JIT causing a PostgreSQL `FATAL` error is the\nactual behavior before the fix.\nWhat the fix does is to bring exceptions back to the c++ world when OOM\nhappens outside of LLVM.\n\nThanks,\nDonghang Lin\n\n[1]\nhttps://github.com/postgres/postgres/blame/accf4f84887eb8b53978a0dbf9cb5656e1779fcb/doc/src/sgml/release-15.sgml#L488-L509\n\n\n\nOn Wed, Aug 9, 2023 at 7:10 PM Jonathan S. 
Katz <jkatz@postgresql.org>\nwrote:\n\n> On 8/8/23 11:13 PM, Robert Treat wrote:\n>\n> > \"Users who have skipped one or more update releases may need to run\n> > additional, post-update steps; \"\n> >\n> > The comma should be removed.\n> >\n> > \"please see the release notes for earlier versions for details.\"\n> >\n> > Use of 'for' twice is grammatically incorrect; I am partial to \"please\n> > see the release notes from earlier versions for details.\" but could\n> > also see \"please see the release notes for earlier versions to get\n> > details.\"\n>\n> Interestingly, I think this language has been unmodified for awhile.\n> Upon reading it, I agree, and took your suggestions.\n>\n> Thanks,\n>\n> Jonathan\n>\n>\n", "msg_date": "Thu, 10 Aug 2023 19:07:02 -0700", "msg_from": "Donghang Lin <donghanglin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2023-08-10 release announcement draft" } ]
[ { "msg_contents": "PG version 15.4\r\nI read the code of pg_dump & pg_restore; there is an option -j/--jobs related to parallelism, and I have some questions about parallel dump & restore.\r\n1) In pg_dump, a parallel dump can ensure dump data consistency using a synchronized snapshot. Why does it start a transaction in every parallel job, also setting the isolation level and read only, with no need to commit the transaction?\r\n2) In pg_restore there are no actions like those in pg_dump under parallel mode; how is restore data consistency ensured? There is no transaction and no commit action.\r\n3) When we do a dump or restore under parallel mode, should we set the whole database read-only (write behavior forbidden)?\r\n\r\nSent from my WeCom", "msg_date": "Tue, 8 Aug 2023 09:54:38 +0800", "msg_from": "\"熊艳辉\" <yanhui.xiong@enmotech.com>", "msg_from_op": true, "msg_subject": "how to ensure parallel restore consistency?" } ]
[ { "msg_contents": "Sure, that's correct. There are indeed some code bugs in my logic; thank you very much for your response.\r\n\r\nSent from my WeCom\r\n\r\n---------- Replied message ----------\r\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote on Thu, 2023-07-20 18:16:\r\nWell, by only looking at the code you're assuming two things:\r\n\r\n1) the code is correct", "msg_date": "Tue, 8 Aug 2023 10:00:22 +0800", "msg_from": "\"熊艳辉\" <yanhui.xiong@enmotech.com>", "msg_from_op": true, "msg_subject": "Re: inconsistency between the VM page visibility status and the visibility status of the page" } ]
[ { "msg_contents": "Hi,\n\nAs some of you might have seen when running CI, cirrus-ci is restricting how\nmuch CI cycles everyone can use for free (announcement at [1]). This takes\neffect September 1st.\n\nThis obviously has consequences both for individual users of CI as well as\ncfbot.\n\n\nThe first thing I think we should do is to lower the cost of CI. One thing I\nhad not entirely realized previously, is that macos CI is by far the most\nexpensive CI to provide. That's not just the case with cirrus-ci, but also\nwith other providers. See the series of patches described later in the email.\n\n\nTo me, the situation for cfbot is different than the one for individual\nusers.\n\nIMO, for the individual user case it's important to use CI for \"free\", without\na whole lot of complexity. Which imo rules approaches like providing\n$cloud_provider compute accounts, that's too much setup work. With the\nimprovements detailed below, cirrus' free CI would last about ~65 runs /\nmonth.\n\nFor cfbot I hope we can find funding to pay for compute to use for CI. The, by\nfar, most expensive bit is macos. To a significant degree due to macos\nlicensing terms not allowing more than 2 VMs on a physical host :(.\n\n\nThe reason we chose cirrus-ci were\n\na) Ability to use full VMs, rather than a pre-selected set of VMs, which\n allows us to test a larger number\n\nb) Ability to link to log files, without requiring an account. E.g. github\n actions doesn't allow to view logs unless logged in.\n\nc) Amount of compute available.\n\n\nThe set of free CI providers has shrunk since we chose cirrus, as have the\n\"free\" resources provided. 
I started, quite incomplete as of now, wiki page at\n[4].\n\n\nPotential paths forward for individual CI:\n\n- migrate wholesale to another CI provider\n\n- split CI tasks across different CI providers, rely on github et al\n displaying the CI status for different platforms\n\n- give up\n\n\nPotential paths forward for cfbot, in addition to the above:\n\n- Pay for compute / ask the various cloud providers to grant us compute\n credits. At least some of the cloud providers can be used via cirrus-ci.\n\n- Host (some) CI runners ourselves. Particularly with macos and windows, that\n could provide significant savings.\n\n- Build our own system, using buildbot, jenkins or whatnot.\n\n\nOpinions as to what to do?\n\n\n\nThe attached series of patches:\n\n1) Makes startup of macos instances faster, using more efficient caching of\n the required packages. Also submitted as [2].\n\n2) Introduces a template initdb that's reused during the tests. Also submitted\n as [3]\n\n3) Remove use of -DRANDOMIZE_ALLOCATED_MEMORY from macos tasks. It's\n expensive. And CI also uses asan on linux, so I don't think it's really\n needed.\n\n4) Switch tasks to use debugoptimized builds. Previously many tasks used -Og,\n to get decent backtraces etc. But the amount of CPU burned that way is too\n large. One issue with that is that use of ccache becomes much more crucial,\n uncached build times do significantly increase.\n\n5) Move use of -Dsegsize_blocks=6 from macos to linux\n\n Macos is expensive, -Dsegsize_blocks=6 slows things down. Alternatively we\n could stop covering both meson and autoconf segsize_blocks. It does affect\n runtime on linux as well.\n\n6) Disable write cache flushes on windows\n\n It's a bit ugly to do this without using the UI... Shaves off about 30s\n from the tests.\n\n7) pg_regress only checked once a second whether postgres started up, but it's\n usually much faster. Use pg_ctl's logic. 
It might be worth replacing the\n use psql with directly using libpq in pg_regress instead, looks like the\n overhead of repeatedly starting psql is noticeable.\n\n\nFWIW: with the patches applied, the \"credit costs\" in cirrus CI are roughly\nlike the following (depends on caching etc):\n\ntask costs in credits\n\tlinux-sanity: 0.01\n\tlinux-compiler-warnings: 0.05\n\tlinux-meson: 0.07\n\tfreebsd : 0.08\n\tlinux-autoconf: 0.09\n\twindows : 0.18\n\tmacos : 0.28\ntotal task runtime is 40.8\ncost in credits is 0.76, monthly credits of 50 allow approx 66.10 runs/month\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci/\n[2] https://www.postgresql.org/message-id/20230805202539.r3umyamsnctysdc7%40awork3.anarazel.de\n[3] https://postgr.es/m/20220120021859.3zpsfqn4z7ob7afz@alap3.anarazel.de", "msg_date": "Mon, 7 Aug 2023 19:15:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Cirrus-ci is lowering free CI cycles - what to do with cfbot, etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-07 19:15:41 -0700, Andres Freund wrote:\n> The set of free CI providers has shrunk since we chose cirrus, as have the\n> \"free\" resources provided. I started, quite incomplete as of now, wiki page at\n> [4].\n\nOops, as Thomas just noticed, I left off that link:\n\n[4] https://wiki.postgresql.org/wiki/CI_Providers\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 7 Aug 2023 19:21:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On 08/08/2023 05:15, Andres Freund wrote:\n> IMO, for the individual user case it's important to use CI for \"free\", without\n> a whole lot of complexity. 
Which imo rules out approaches like providing\n> $cloud_provider compute accounts, that's too much setup work.\n\n+1\n\n> With the improvements detailed below, cirrus' free CI would last\n> about ~65 runs / month.\nI think that's plenty.\n\n> For cfbot I hope we can find funding to pay for compute to use for CI.\n\n+1\n\n> Potential paths forward for cfbot, in addition to the above:\n> \n> - Pay for compute / ask the various cloud providers to grant us compute\n> credits. At least some of the cloud providers can be used via cirrus-ci.\n> \n> - Host (some) CI runners ourselves. Particularly with macos and windows, that\n> could provide significant savings.\n> \n> - Build our own system, using buildbot, jenkins or whatnot.\n> \n> \n> Opinions as to what to do?\n\nThe resources for running our own system aren't free either. I'm sure we \ncan get sponsors for the cirrus-ci credits, or use donations.\n\nI have been quite happy with Cirrus CI overall.\n\n> The attached series of patches:\n\nAll of this makes sense to me, although I don't use macos myself.\n\n> 5) Move use of -Dsegsize_blocks=6 from macos to linux\n> \n> Macos is expensive, -Dsegsize_blocks=6 slows things down. Alternatively we\n> could stop covering both meson and autoconf segsize_blocks. It does affect\n> runtime on linux as well.\n\nCould we have a comment somewhere on why we use -Dsegsize_blocks on \nthese particular CI runs? It seems pretty random. I guess the idea is to \nhave one autoconf task and one meson task with that option, to check \nthat the autoconf/meson option works?\n\n> 6) Disable write cache flushes on windows\n> \n> It's a bit ugly to do this without using the UI... 
Shaves off about 30s\n> from the tests.\n\nA brief comment would be nice: \"We don't care about persistence over \nhard crashes in the CI, so disable write cache flushes to speed it up.\"\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 8 Aug 2023 11:58:25 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On 08.08.23 04:15, Andres Freund wrote:\n> Potential paths forward for individual CI:\n> \n> - migrate wholesale to another CI provider\n> \n> - split CI tasks across different CI providers, rely on github et al\n> displaying the CI status for different platforms\n> \n> - give up\n\nWith the proposed optimizations, it seems you can still do a fair amount \nof work under the free plan.\n\n> Potential paths forward for cfbot, in addition to the above:\n> \n> - Pay for compute / ask the various cloud providers to grant us compute\n> credits. At least some of the cloud providers can be used via cirrus-ci.\n> \n> - Host (some) CI runners ourselves. Particularly with macos and windows, that\n> could provide significant savings.\n> \n> - Build our own system, using buildbot, jenkins or whatnot.\n\nI think we should use the \"compute credits\" plan from Cirrus CI. It \nshould be possible to estimate the costs for that. Money is available, \nI think.\n\n\n\n", "msg_date": "Tue, 8 Aug 2023 16:28:49 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-08 16:28:49 +0200, Peter Eisentraut wrote:\n> On 08.08.23 04:15, Andres Freund wrote:\n> > Potential paths forward for cfbot, in addition to the above:\n> >\n> > - Pay for compute / ask the various cloud providers to grant us compute\n> > credits. 
At least some of the cloud providers can be used via cirrus-ci.\n> >\n> > - Host (some) CI runners ourselves. Particularly with macos and windows, that\n> > could provide significant savings.\n> >\n> > - Build our own system, using buildbot, jenkins or whatnot.\n>\n> I think we should use the \"compute credits\" plan from Cirrus CI. It should\n> be possible to estimate the costs for that. Money is available, I think.\n\nUnfortunately just doing that seems like it would end up considerably on the too\nexpensive side. Here are the stats for last month's cfbot runtimes (provided\nby Thomas):\n\n                   task_name                    |    sum\n------------------------------------------------+------------\n FreeBSD - 13 - Meson                           | 1017:56:09\n Windows - Server 2019, MinGW64 - Meson         |   00:00:00\n SanityCheck                                    |   76:48:41\n macOS - Ventura - Meson                        |  873:12:43\n Windows - Server 2019, VS 2019 - Meson & ninja | 1251:08:06\n Linux - Debian Bullseye - Autoconf             |  830:17:26\n Linux - Debian Bullseye - Meson                |  860:37:21\n CompilerWarnings                               |  935:30:35\n(8 rows)\n\nIf I did the math right, that's about 7000 credits (and 1 credit costs 1 USD).\n\ntask costs in credits\n\tlinux-sanity: 55.30\n\tlinux-autoconf: 598.04\n\tlinux-meson: 619.40\n\tlinux-compiler-warnings: 674.28\n\tfreebsd : 732.24\n\twindows : 1201.09\n\tmacos : 3143.52\n\n\nNow, those times are before optimizing test runtime. And besides optimizing\nthe tasks, we can also optimize not running tests for docs patches etc. And\noptimize cfbot to schedule a bit better.\n\nBut still, the costs don't look realistic to me.\n\nIf instead we were to use our own GCP account, it's a lot less. t2d-standard-4\ninstances, which are faster than what we use right now, cost $0.168984 / hour\nas \"normal\" instances and $0.026764 as \"spot\" instances right now [1]. 
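As a quick sanity check of those per-hour rates, here is a rough sketch (an editor's back-of-the-envelope illustration, not an authoritative quote) of what the non-macOS, non-Windows task hours above would cost, under the assumption that each task runs on a t2d-standard-4 at the prices just quoted:

```python
# Back-of-the-envelope: monthly cost of the Linux/FreeBSD cfbot task hours
# above on GCP t2d-standard-4 instances, at the prices quoted in this
# message. Prices drift, and spot terminations / disks add some overhead.

SPOT_USD_PER_HOUR = 0.026764
ON_DEMAND_USD_PER_HOUR = 0.168984

# Task hours per month, rounded from the cfbot runtime stats above.
task_hours = {
    "freebsd": 1018,
    "sanity-check": 77,
    "linux-autoconf": 830,
    "linux-meson": 861,
    "compiler-warnings": 936,
}

hours = sum(task_hours.values())
print(f"{hours} instance hours/month")
print(f"spot: ~${hours * SPOT_USD_PER_HOUR:.0f}/month")
print(f"on-demand: ~${hours * ON_DEMAND_USD_PER_HOUR:.0f}/month")
```

With these rounded inputs the spot figure comes out a shade under 100 USD per month, in line with the linux+freebsd estimate given next.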
Windows\nVMs are considerably more expensive due to licensing - 0.184$/h in addition.\n\nAssuming spot instances, linux+freebsd tasks would cost ~100 USD / month (maybe\n10-20% more in reality, due to a) spot instances getting terminated requiring\nretries and b) disks).\n\nWindows would be ~255 USD / month (same retries caveats).\n\nGiven the cost of macos, it seems like it'd be by far the most affordable\nto just buy 1-2 mac minis (2x ~660USD) and stick them in a shelf somewhere, as\npersistent runners. Cirrus has builtin macos virtualization support - but can\nonly host two VMs on each mac, due to macos licensing restrictions. A single\nmac mini would suffice to keep up with our unoptimized monthly runtime\n(although there likely would be some overhead).\n\nGreetings,\n\nAndres Freund\n\n[1] https://cloud.google.com/compute/all-pricing\n\n\n", "msg_date": "Tue, 8 Aug 2023 08:13:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On Mon Aug 7, 2023 at 9:15 PM CDT, Andres Freund wrote:\n> FWIW: with the patches applied, the \"credit costs\" in cirrus CI are roughly\n> like the following (depends on caching etc):\n>\n> task costs in credits\n> \tlinux-sanity: 0.01\n> \tlinux-compiler-warnings: 0.05\n> \tlinux-meson: 0.07\n> \tfreebsd : 0.08\n> \tlinux-autoconf: 0.09\n> \twindows : 0.18\n> \tmacos : 0.28\n> total task runtime is 40.8\n> cost in credits is 0.76, monthly credits of 50 allow approx 66.10 runs/month\n\nI am not in the loop on the autotools vs meson stuff. How much longer do \nwe anticipate keeping autotools around? 
Seems like it could be a good \nopportunity to reduce some CI usage if autotools were finally dropped, \nbut I know there are still outstanding tasks to complete.\n\nBack of the napkin math says autotools is about 12% of the credit cost, \nthough I haven't looked to see if linux-meson and linux-autotools are \n1:1.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 08 Aug 2023 10:25:52 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-08 10:25:52 -0500, Tristan Partin wrote:\n> On Mon Aug 7, 2023 at 9:15 PM CDT, Andres Freund wrote:\n> > FWIW: with the patches applied, the \"credit costs\" in cirrus CI are roughly\n> > like the following (depends on caching etc):\n> > \n> > task costs in credits\n> > \tlinux-sanity: 0.01\n> > \tlinux-compiler-warnings: 0.05\n> > \tlinux-meson: 0.07\n> > \tfreebsd : 0.08\n> > \tlinux-autoconf: 0.09\n> > \twindows : 0.18\n> > \tmacos : 0.28\n> > total task runtime is 40.8\n> > cost in credits is 0.76, monthly credits of 50 allow approx 66.10 runs/month\n> \n> I am not in the loop on the autotools vs meson stuff. How much longer do we\n> anticipate keeping autotools around?\n\nI think it depends in what fashion. We've been talking about supporting\nbuilding out-of-tree modules with \"pgxs\" for at least a 5 year support\nwindow. 
But the replacement isn't yet finished [1], so that clock hasn't yet\nstarted ticking.\n\n\n> Seems like it could be a good opportunity to reduce some CI usage if\n> autotools were finally dropped, but I know there are still outstanding tasks\n> to complete.\n> \n> Back of the napkin math says autotools is about 12% of the credit cost,\n> though I haven't looked to see if linux-meson and linux-autotools are 1:1.\n\nThe autoconf task is actually doing quite useful stuff right now, leaving the\nuse of configure aside, as it builds with address sanitizer. Without that it'd\nbe a lot faster. But we'd lose, imo quite important, coverage. The tests\nwould run a bit faster with meson, but it'd be overall a difference on the\nmargins.\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/tree/meson-pkgconfig\n\n\n", "msg_date": "Tue, 8 Aug 2023 08:38:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On Tue Aug 8, 2023 at 10:38 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2023-08-08 10:25:52 -0500, Tristan Partin wrote:\n> > On Mon Aug 7, 2023 at 9:15 PM CDT, Andres Freund wrote:\n> > > FWIW: with the patches applied, the \"credit costs\" in cirrus CI are roughly\n> > > like the following (depends on caching etc):\n> > > \n> > > task costs in credits\n> > > \tlinux-sanity: 0.01\n> > > \tlinux-compiler-warnings: 0.05\n> > > \tlinux-meson: 0.07\n> > > \tfreebsd : 0.08\n> > > \tlinux-autoconf: 0.09\n> > > \twindows : 0.18\n> > > \tmacos : 0.28\n> > > total task runtime is 40.8\n> > > cost in credits is 0.76, monthly credits of 50 allow approx 66.10 runs/month\n> > \n> > I am not in the loop on the autotools vs meson stuff. How much longer do we\n> > anticipate keeping autotools around?\n>\n> I think it depends in what fashion. 
We've been talking about supporting\n> building out-of-tree modules with \"pgxs\" for at least a 5 year support\n> window. But the replacement isn't yet finished [1], so that clock hasn't yet\n> started ticking.\n>\n>\n> > Seems like it could be a good opportunity to reduce some CI usage if\n> > autotools were finally dropped, but I know there are still outstanding tasks\n> > to complete.\n> > \n> > Back of the napkin math says autotools is about 12% of the credit cost,\n> > though I haven't looked to see if linux-meson and linux-autotools are 1:1.\n>\n> The autoconf task is actually doing quite useful stuff right now, leaving the\n> use of configure aside, as it builds with address sanitizer. Without that it'd\n> be a lot faster. But we'd lose, imo quite important, coverage. The tests\n> would run a bit faster with meson, but it'd be overall a difference on the\n> margins.\n> \n> [1] https://github.com/anarazel/postgres/tree/meson-pkgconfig\n\nMakes sense. Please let me know if I can help you out in any way for the \nv17 development cycle besides what we have already talked about.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 08 Aug 2023 10:45:46 -0500", "msg_from": "\"Tristan Partin\" <tristan@neon.tech>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On 2023-Aug-08, Andres Freund wrote:\n\n> Given the cost of macos, it seems like it'd be by far the most affordable\n> to just buy 1-2 mac minis (2x ~660USD) and stick them in a shelf somewhere, as\n> persistent runners. 
A single\n> mac mini would suffice to keep up with our unoptimized monthly runtime\n> (although there likely would be some overhead).\n\nIf using persistent workers is an option, maybe we should explore that.\nI think we could move all or some of the Linux - Debian builds to\nhardware that we already have in shelves (depending on how much compute\npower is really needed.)\n\nI think using other OSes is more difficult, mostly because I doubt we\nwant to deal with licenses; but even FreeBSD might not be a realistic\noption, at least not in the short term.\n\nStill,\n\n> task_name | sum\n> ------------------------------------------------+------------\n> FreeBSD - 13 - Meson | 1017:56:09\n> Windows - Server 2019, MinGW64 - Meson | 00:00:00\n> SanityCheck | 76:48:41\n> macOS - Ventura - Meson | 873:12:43\n> Windows - Server 2019, VS 2019 - Meson & ninja | 1251:08:06\n> Linux - Debian Bullseye - Autoconf | 830:17:26\n> Linux - Debian Bullseye - Meson | 860:37:21\n> CompilerWarnings | 935:30:35\n> (8 rows)\n>\n\nmoving just Debian, that might alleviate 76+830+860+935 hours from the\nCirrus infra, which is ~46%. Not bad.\n\n\n(How come Windows - Meson reports allballs?)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n", "msg_date": "Tue, 8 Aug 2023 18:34:58 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-08 18:34:58 +0200, Alvaro Herrera wrote:\n> On 2023-Aug-08, Andres Freund wrote:\n> \n> > Given the cost of macos, it seems like it'd be by far the most of affordable\n> > to just buy 1-2 mac minis (2x ~660USD) and stick them in a shelf somewhere, as\n> > persistent runners. 
Cirrus has builtin macos virtualization support - but can\n> > only host two VMs on each mac, due to macos licensing restrictions. A single\n> > mac mini would suffice to keep up with our unoptimized monthly runtime\n> > (although there likely would be some overhead).\n> \n> If using persistent workers is an option, maybe we should explore that.\n> I think we could move all or some of the Linux - Debian builds to\n> hardware that we already have in shelves (depending on how much compute\n> power is really needed.)\n\n(76+830+860+935)/((365/12)*24) = 3.7\n\n3.7 instances with 4 \"vcores\" are busy 100% of the time. So we'd need at least\n~16 cpu threads - I think cirrus sometimes uses instances that disable HT, so\nit'd perhaps be 16 cores actually.\n\n\n> I think using other OSes is more difficult, mostly because I doubt we\n> want to deal with licenses; but even FreeBSD might not be a realistic\n> option, at least not in the short term.\n\nThey can be VMs, so that shouldn't be a big issue.\n\n> > task_name | sum\n> > ------------------------------------------------+------------\n> > FreeBSD - 13 - Meson | 1017:56:09\n> > Windows - Server 2019, MinGW64 - Meson | 00:00:00\n> > SanityCheck | 76:48:41\n> > macOS - Ventura - Meson | 873:12:43\n> > Windows - Server 2019, VS 2019 - Meson & ninja | 1251:08:06\n> > Linux - Debian Bullseye - Autoconf | 830:17:26\n> > Linux - Debian Bullseye - Meson | 860:37:21\n> > CompilerWarnings | 935:30:35\n> > (8 rows)\n> >\n> \n> moving just Debian, that might alleviate 76+830+860+935 hours from the\n> Cirrus infra, which is ~46%. Not bad.\n> \n> \n> (How come Windows - Meson reports allballs?)\n\nIt's mingw64, which we've marked as \"manual\", because we didn't have the cpu\ncycles to run it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Aug 2023 13:44:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" 
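To make the sizing arithmetic in the message above easy to reproduce, here is a small sketch (an editor's illustration; the 4-vCPU instance size and the choice of the four Debian-based tasks are taken from the discussion above):

```python
# Sizing sketch: how many always-busy workers would the Debian-based
# cfbot tasks (SanityCheck, autoconf, meson, CompilerWarnings) need?

HOURS_PER_MONTH = (365 / 12) * 24  # ~730 hours in an average month

debian_hours = 76 + 830 + 860 + 935  # rounded task hours from the stats above
instances = debian_hours / HOURS_PER_MONTH
vcpus = instances * 4  # the tasks currently run on 4-vCPU instances

print(f"~{instances:.1f} instances busy around the clock (~{vcpus:.0f} vCPUs)")
```

Rounding up to whole machines, and leaving some headroom for queueing peaks, is what leads to the ~16 CPU threads mentioned above.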
}, { "msg_contents": "Hi,\n\nOn 2023-08-08 11:58:25 +0300, Heikki Linnakangas wrote:\n> On 08/08/2023 05:15, Andres Freund wrote:\n> > With the improvements detailed below, cirrus' free CI would last\n> > about ~65 runs / month.\n>\n> I think that's plenty.\n\nNot so sure, I would regularly exceed it, I think. But it definitely will\nsuffice for more casual contributors.\n\n\n> > Potential paths forward for cfbot, in addition to the above:\n> > \n> > - Pay for compute / ask the various cloud providers to grant us compute\n> > credits. At least some of the cloud providers can be used via cirrus-ci.\n> > \n> > - Host (some) CI runners ourselves. Particularly with macos and windows, that\n> > could provide significant savings.\n> > \n> > - Build our own system, using buildbot, jenkins or whatnot.\n> > \n> > \n> > Opinions as to what to do?\n> \n> The resources for running our own system aren't free either. I'm sure we can\n> get sponsors for the cirrus-ci credits, or use donations.\n\nAs outlined in my reply to Alvaro, just using credits likely is financially\nnot viable...\n\n\n> > 5) Move use of -Dsegsize_blocks=6 from macos to linux\n> > \n> > Macos is expensive, -Dsegsize_blocks=6 slows things down. Alternatively we\n> > could stop covering both meson and autoconf segsize_blocks. It does affect\n> > runtime on linux as well.\n> \n> Could we have a comment somewhere on why we use -Dsegsize_blocks on these\n> particular CI runs? It seems pretty random. I guess the idea is to have one\n> autoconf task and one meson task with that option, to check that the\n> autoconf/meson option works?\n\nHm, some of that was in the commit message, but I should have added it to\n.cirrus.yml as well.\n\nNormally, the \"relation segment\" code basically has no coverage in our tests,\nbecause we (quite reasonably) don't generate tables large enough. We've had\nplenty of bugs that we didn't notice due to the code not being exercised much. 
So it\nseemed useful to add CI coverage, by making the segments very small.\n\nI chose the tasks by looking at how long they took at the time, I\nthink. Adding them to the slower ones.\n\n\n> > 6) Disable write cache flushes on windows\n> > \n> > It's a bit ugly to do this without using the UI... Shaves off about 30s\n> > from the tests.\n> \n> A brief comment would be nice: \"We don't care about persistence over hard\n> crashes in the CI, so disable write cache flushes to speed it up.\"\n\nTurns out that patch doesn't work on its own anyway, at least not\nreliably... I tested it by interactively logging into a windows vm and testing\nit there. It doesn't actually seem to suffice when run in isolation, because\nthe relevant registry key doesn't yet exist. I haven't yet figured out the\nmagic incantations for adding the missing \"intermediary\", but I'm getting\nthere...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Aug 2023 18:26:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On Tue, Aug 8, 2023 at 9:26 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-08-08 11:58:25 +0300, Heikki Linnakangas wrote:\n> > On 08/08/2023 05:15, Andres Freund wrote:\n> > > With the improvements detailed below, cirrus' free CI would last\n> > > about ~65 runs / month.\n> >\n> > I think that's plenty.\n>\n> Not so sure, I would regularly exceed it, I think. But it definitely will\n> suffice for more casual contributors.\n>\n>\n> > > Potential paths forward for cfbot, in addition to the above:\n> > >\n> > > - Pay for compute / ask the various cloud providers to grant us compute\n> > > credits. At least some of the cloud providers can be used via cirrus-ci.\n> > >\n> > > - Host (some) CI runners ourselves. 
Particularly with macos and windows, that\n> > > could provide significant savings.\n> > >\n> > > - Build our own system, using buildbot, jenkins or whatnot.\n> > >\n> > >\n> > > Opinions as to what to do?\n> >\n> > The resources for running our own system aren't free either. I'm sure we can\n> > get sponsors for the cirrus-ci credits, or use donations.\n>\n> As outlined in my reply to Alvaro, just using credits likely is financially\n> not viable...\n>\n>\n\nIn case it's helpful, from an SPI oriented perspective, $7K/month is\nprobably an order of magnitude more than what we can sustain, so I\ndon't see a way to make that work without some kind of additional\nmagic that includes other non-profits and/or commercial companies\nchanging donation habits between now and September.\n\nPurchasing a couple of mac-mini's (and/or similar gear) would be near\ntrivial though, just a matter of figuring out where/how to host it\n(but I think infra can chime in on that if that's what gets decided).\n\nThe other likely option would be to seek out cloud credits from one of\nthe big three (or others); Amazon has continually said they would be\nhappy to donate more credits to us if we had a use, and I think some\nof the other hosting providers have said similarly at times; so we'd\nneed to ask and hope it's not too bureaucratic.\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Tue, 8 Aug 2023 22:29:50 -0400", "msg_from": "Robert Treat <rob@xzilla.net>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" 
}, { "msg_contents": "Hi,\n\nOn 2023-08-08 22:29:50 -0400, Robert Treat wrote:\n> In case it's helpful, from an SPI oriented perspective, $7K/month is\n> probably an order of magnitude more than what we can sustain, so I\n> don't see a way to make that work without some kind of additional\n> magic that includes other non-profits and/or commercial companies\n> changing donation habits between now and September.\n\nYea, I think that'd make no sense, even if we could afford it. I think the\npatches I've written should drop it to 1/2 already. Thomas added some\nthrottling to push it down further.\n\n\n> Purchasing a couple of mac-mini's (and/or similar gear) would be near\n> trivial though, just a matter of figuring out where/how to host it\n> (but I think infra can chime in on that if that's what get's decided).\n\nCool. Because of the limitation of running two VMs at a time on macos and the\ncomparatively low cost of mac minis, it seems they beat alternative models by\na fair bit.\n\nPginfra/sysadmin: ^\n\n\nBased on being off by an order of magnitude, as you mention earlier, it seems\nthat:\n\n1) reducing test runtime etc, as already in progress\n2) getting 2 mac minis as runners\n3) using ~350 USD / mo in GCP costs for windows, linux, freebsd (*)\n\nWould be viable for a month or three? 
I hope we can get some cloud providers\nto chip in for 3), but I'd like to have something in place that doesn't depend\non that.\n\nGiven the cost of macos VMs at AWS, the only one of the big cloud providers to\nhave macos instances, I think we'd burn through credits pointlessly quickly if\nwe used VMs for that.\n\n(*) I think we should be able to get below that, but ...\n\n\n> The other likely option would be to seek out cloud credits from one of\n> the big three (or others); Amazon has continually said they would be\n> happy to donate more credits to us if we had a use, and I think some\n> of the other hosting providers have said similarly at times; so we'd\n> need to ask and hope it's not too bureaucratic.\n\nYep.\n\nI tried to start that process within microsoft, fwiw. Perhaps Joe and\nJonathan know how to start within AWS? And perhaps Noah inside GCP?\n\nIt'd be the least work to get it up and running in GCP, as it's already\nrunning there, but should be quite doable at the others as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 8 Aug 2023 19:59:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On Wed, Aug 9, 2023 at 3:26 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > > 6) Disable write cache flushes on windows\n> > >\n> > >     It's a bit ugly to do this without using the UI... Shaves off\n> about 30s\n> > >     from the tests.\n> >\n> > A brief comment would be nice: \"We don't care about persistence over hard\n> > crashes in the CI, so disable write cache flushes to speed it up.\"\n>\n> Turns out that patch doesn't work on its own anyway, at least not\n> reliably... I tested it by interactively logging into a windows vm and\n> testing\n> it there. It doesn't actually seem to suffice when run in isolation,\n> because\n> the relevant registry key doesn't yet exist. 
I haven't yet figured out the\n> magic incantations for adding the missing \"intermediary\", but I'm getting\n> there...\n>\n>\nYou can find a good example on how to accomplish this in:\n\nhttps://github.com/farag2/Utilities/blob/master/Enable_disk_write_caching.ps1\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 9 Aug 2023 09:22:45 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hello\n\nSo pginfra had a little chat about this. Firstly, there's consensus\nthat it makes sense for pginfra to help out with some persistent workers\nin our existing VM system; however there are some aspects that need\nsome further discussion, to avoid destabilizing the rest of the\ninfrastructure. We're looking into it and we'll let you know.\n\nHosting a couple of Mac Minis is definitely a possibility, if some\nentity like SPI buys them. 
Let's take this off-list to arrange the\ndetails.\n\nRegards\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Wed, 9 Aug 2023 19:21:53 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On Tue, Aug 08, 2023 at 07:59:55PM -0700, Andres Freund wrote:\n> On 2023-08-08 22:29:50 -0400, Robert Treat wrote:\n\n> 3) using ~350 USD / mo in GCP costs for windows, linux, freebsd (*)\n\n> > The other likely option would be to seek out cloud credits\n\n> I tried to start that progress within microsoft, fwiw. Perhaps Joe and\n> Jonathan know how to start within AWS? And perhaps Noah inside GCP?\n> \n> It'd be the least work to get it up and running in GCP, as it's already\n> running there\n\nI'm looking at this. Thanks for bringing it to my attention.\n\n\n", "msg_date": "Thu, 10 Aug 2023 19:44:20 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-07 19:15:41 -0700, Andres Freund wrote:\n> As some of you might have seen when running CI, cirrus-ci is restricting how\n> much CI cycles everyone can use for free (announcement at [1]). This takes\n> effect September 1st.\n>\n> This obviously has consequences both for individual users of CI as well as\n> cfbot.\n>\n> [...]\n\n> Potential paths forward for individual CI:\n>\n> - migrate wholesale to another CI provider\n>\n> - split CI tasks across different CI providers, rely on github et al\n> displaying the CI status for different platforms\n>\n> - give up\n>\n>\n> Potential paths forward for cfbot, in addition to the above:\n>\n> - Pay for compute / ask the various cloud providers to grant us compute\n> credits. At least some of the cloud providers can be used via cirrus-ci.\n>\n> - Host (some) CI runners ourselves. 
Particularly with macos and windows, that\n> could provide significant savings.\n>\n> - Build our own system, using buildbot, jenkins or whatnot.\n\nTo make that possible, we need to make the compute resources for CI\nconfigurable on a per-repository basis. After experimenting with a bunch of\nways to do that, I got stuck on that for a while. But as of today we have\nsufficient macos runners for cfbot available, so... I think the approach I\nfinally settled on is decent, although not great. It's described in the \"main\"\ncommit message:\n ci: Prepare to make compute resources for CI configurable\n\n cirrus-ci will soon restrict the amount of free resources every user gets (as\n have many other CI providers). For most users of CI that should not be an\n issue. But e.g. for cfbot it will be an issue.\n\n To allow configuring different resources on a per-repository basis, introduce\n infrastructure for overriding the task execution environment. Unfortunately\n this is not entirely trivial, as yaml anchors have to be defined before their\n use, and cirrus-ci only allows injecting additional contents at the end of\n .cirrus.yml.\n\n To deal with that, move the definition of the CI tasks to\n .cirrus.tasks.yml. The main .cirrus.yml is loaded first, then, if defined, the\n file referenced by the REPO_CI_CONFIG_GIT_URL variable will be added,\n followed by the contents of .cirrus.tasks.yml. That allows\n REPO_CI_CONFIG_GIT_URL to override the yaml anchors defined in .cirrus.yml.\n\n Unfortunately git's default merge / rebase strategy does not handle copied\n files, just renamed ones. To avoid painful rebasing over this change, this\n commit just renames .cirrus.yml to .cirrus.tasks.yml, without adding a new\n .cirrus.yml. That's done in the followup commit, which moves the relevant\n portion of .cirrus.tasks.yml to .cirrus.yml. 
Until that is done,\n REPO_CI_CONFIG_GIT_URL does not fully work.\n\n The subsequent commit adds documentation for how to configure custom compute\n resources to src/tools/ci/README\n\n Discussion: https://postgr.es/m/20230808021541.7lbzdefvma7qmn3w@awork3.anarazel.de\n Backpatch: 15-, where CI support was added\n\n\nI don't love moving most of the contents of .cirrus.yml into a new file, but I\ndon't see another way. I did implement it without that as well (see [1]), but\nthat ends up considerably harder to understand, and hardcodes what cfbot\nneeds. Splitting the commit, as explained above, at least makes git rebase\nfairly painless. FWIW, I did merge the changes into 15, with only reasonable\nconflicts (due to new tasks, autoconf->meson).\n\n\nA prerequisite commit converts \"SanityCheck\" and \"CompilerWarnings\" to use a\nfull VM instead of a container - that way providing custom compute resources\ndoesn't have to deal with containers in addition to VMs. It also looks like\nthe increased startup overhead is outweighed by the reduction in runtime\noverhead.\n\n\nI'm hoping to push this fairly soon, as I'll be on vacation the last week of\nAugust. I'll be online intermittently though, if there are issues, I can react\n(very limited connectivity for midday Aug 29th - midday Aug 31st though). I'd\nappreciate a quick review or two.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/commit/b95fd302161b951f1dc14d586162ed3d85564bfc", "msg_date": "Tue, 22 Aug 2023 23:58:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 23 Aug 2023, at 08:58, Andres Freund <andres@anarazel.de> wrote:\n\n> I'm hoping to push this fairly soon, as I'll be on vacation the last week of\n> August. 
I'll be online intermittently though, if there are issues, I can react\n> (very limited connectivity for middday Aug 29th - midday Aug 31th though). I'd\n> appreciate a quick review or two.\n\nI've been reading over these and the thread, and while not within my area of\nexpertise, nothing really sticks out.\n\nI'll do another pass, but below are a few small comments so far.\n\nI don't know Windows well enough to know the implications, but should the below file have\nsome sort of warning about not doing that for production/shared systems, only\nfor dedicated test instances?\n\n+++ b/src/tools/ci/windows_write_cache.ps1\n@@ -0,0 +1,20 @@\n+# Define the write cache to be power protected. This reduces the rate of cache\n+# flushes, which seems to help metadata heavy workloads on NTFS. We're just\n+# testing here anyway, so ...\n+#\n+# Let's do so for all disks, this could be useful beyond cirrus-ci.\n\nOne thing in 0010 caught my eye, and while not introduced in this patchset it\nmight be of interest here. In the below hunks we loop X ticks around\nsystem(psql), with the loop assuming the server can come up really quickly and\nsleeping if it doesn't. 
On my systems I always reach the pg_usleep after\nfailing the check, but if I reverse the check such that it first sleeps and then\nchecks, I only need to check once instead of twice.\n\n@@ -2499,7 +2502,7 @@ regression_main(int argc, char *argv[],\n \t\telse\n \t\t\twait_seconds = 60;\n \n-\t\tfor (i = 0; i < wait_seconds; i++)\n+\t\tfor (i = 0; i < wait_seconds * WAITS_PER_SEC; i++)\n \t\t{\n \t\t\t/* Done if psql succeeds */\n \t\t\tfflush(NULL);\n@@ -2519,7 +2522,7 @@ regression_main(int argc, char *argv[],\n \t\t\t\t\t outputdir);\n \t\t\t}\n \n-\t\t\tpg_usleep(1000000L);\n+\t\t\tpg_usleep(1000000L / WAITS_PER_SEC);\n \t\t}\n \t\tif (i >= wait_seconds)\n \t\t{\n\nIt's a micro-optimization, but if we're changing things here to chase cycles it\nmight perhaps be worth doing?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 14:48:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-23 14:48:26 +0200, Daniel Gustafsson wrote:\n> > On 23 Aug 2023, at 08:58, Andres Freund <andres@anarazel.de> wrote:\n>\n> > I'm hoping to push this fairly soon, as I'll be on vacation the last week of\n> > August. I'll be online intermittently though, if there are issues, I can react\n> > (very limited connectivity for middday Aug 29th - midday Aug 31th though). 
I'd\n> > appreciate a quick review or two.\n>\n> I've been reading over these and the thread, and while not within my area of\n> expertise, nothing really sticks out.\n\nThanks!\n\n\n> I'll do another pass, but below are a few small comments so far.\n>\n> I don't know Windows to know the implications, but should the below file have\n> some sort of warning about not doing that for production/shared systems, only\n> for dedicated test instances?\n\nAh, I should have explained that: I'm not planning to apply\n- regress: Check for postgres startup completion more often\n- ci: windows: Disabling write cache flushing during test\nright now. Compared to the other patches the wins are much smaller and/or more\nwork is needed to make them good.\n\nI think it might be worth going for\n- ci: switch tasks to debugoptimized build\nbecause that provides a fair bit of gain. But it might be more hurtful than\nhelpful due to costing more when ccache doesn't work...\n\n\n> +++ b/src/tools/ci/windows_write_cache.ps1\n> @@ -0,0 +1,20 @@\n> +# Define the write cache to be power protected. This reduces the rate of cache\n> +# flushes, which seems to help metadata heavy workloads on NTFS. We're just\n> +# testing here anyway, so ...\n> +#\n> +# Let's do so for all disks, this could be useful beyond cirrus-ci.\n>\n> One thing in 0010 caught my eye, and while not introduced in this patchset it\n> might be of interest here. In the below hunks we loop X ticks around\n> system(psql), with the loop assuming the server can come up really quickly and\n> sleeping if it doesn't. On my systems I always reach the pg_usleep after\n> failing the check, but if I reverse the check such it first sleeps and then\n> checks I only need to check once instead of twice.\n\nI think there's more effective ways to make this cheaper. The basic thing\nwould be to use libpq instead of forking of psql to make a connection\ncheck. 
Medium term, I think we should invent a way for pg_ctl and other\ntooling (including pg_regress) to wait for the service to come up. E.g. having\na named pipe that postmaster opens once the server is up, which should allow\nmultiple clients to use select/epoll/... to wait for it without looping.\n\nISTM making pg_regress use libpq w/ PQping() should be a pretty simple patch?\nThe non-polling approach obviously is even better, but also requires more\nthought (and documentation and ...).\n\n\n> @@ -2499,7 +2502,7 @@ regression_main(int argc, char *argv[],\n> \t\telse\n> \t\t\twait_seconds = 60;\n>\n> -\t\tfor (i = 0; i < wait_seconds; i++)\n> +\t\tfor (i = 0; i < wait_seconds * WAITS_PER_SEC; i++)\n> \t\t{\n> \t\t\t/* Done if psql succeeds */\n> \t\t\tfflush(NULL);\n> @@ -2519,7 +2522,7 @@ regression_main(int argc, char *argv[],\n> \t\t\t\t\t outputdir);\n> \t\t\t}\n>\n> -\t\t\tpg_usleep(1000000L);\n> +\t\t\tpg_usleep(1000000L / WAITS_PER_SEC);\n> \t\t}\n> \t\tif (i >= wait_seconds)\n> \t\t{\n>\n> It's a micro-optimization, but if we're changing things here to chase cycles it\n> might perhaps be worth doing?\n\nI wouldn't quite call not waiting for 1s for the server to start, when it does\nso within a few ms, chasing cycles ;). For short tests it's a substantial\nfraction of the overall runtime...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Aug 2023 12:22:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" 
}, { "msg_contents": "> On 23 Aug 2023, at 21:22, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-08-23 14:48:26 +0200, Daniel Gustafsson wrote:\n\n>> I'll do another pass, but below are a few small comments so far.\n>> \n>> I don't know Windows to know the implications, but should the below file have\n>> some sort of warning about not doing that for production/shared systems, only\n>> for dedicated test instances?\n> \n> Ah, I should have explained that: I'm not planning to apply\n> - regress: Check for postgres startup completion more often\n> - ci: windows: Disabling write cache flushing during test\n> right now. Compared to the other patches the wins are much smaller and/or more\n> work is needed to make them good.\n> \n> I think it might be worth going for\n> - ci: switch tasks to debugoptimized build\n> because that provides a fair bit of gain. But it might be more hurtful than\n> helpful due to costing more when ccache doesn't work...\n\nGotcha.\n\n>> +++ b/src/tools/ci/windows_write_cache.ps1\n>> @@ -0,0 +1,20 @@\n>> +# Define the write cache to be power protected. This reduces the rate of cache\n>> +# flushes, which seems to help metadata heavy workloads on NTFS. We're just\n>> +# testing here anyway, so ...\n>> +#\n>> +# Let's do so for all disks, this could be useful beyond cirrus-ci.\n>> \n>> One thing in 0010 caught my eye, and while not introduced in this patchset it\n>> might be of interest here. In the below hunks we loop X ticks around\n>> system(psql), with the loop assuming the server can come up really quickly and\n>> sleeping if it doesn't. On my systems I always reach the pg_usleep after\n>> failing the check, but if I reverse the check such it first sleeps and then\n>> checks I only need to check once instead of twice.\n> \n> I think there's more effective ways to make this cheaper. 
The basic thing\n> would be to use libpq instead of forking of psql to make a connection\n> check.\n\nI had it in my head that not using libpq in pg_regress was a deliberate choice,\nbut I fail to find a reference to it in the archives.\n\n>> It's a micro-optimization, but if we're changing things here to chase cycles it\n>> might perhaps be worth doing?\n> \n> I wouldn't quite call not waiting for 1s for the server to start, when it does\n> so within a few ms, chasing cycles ;). For short tests it's a substantial\n> fraction of the overall runtime...\n\nAbsolutely, I was referring to shifting the sleep before the test to avoid the\nextra test, not the reduction of the pg_usleep. Reducing the sleep is a clear\nwin.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 22:11:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nThanks for the patch!\n\nOn Wed, 23 Aug 2023 at 09:58, Andres Freund <andres@anarazel.de> wrote:\n> I'm hoping to push this fairly soon, as I'll be on vacation the last week of\n> August. I'll be online intermittently though, if there are issues, I can react\n> (very limited connectivity for middday Aug 29th - midday Aug 31th though). 
I'd\n> appreciate a quick review or two.\n\nPatch looks good to me besides some minor points.\n\nv3-0004-ci-Prepare-to-make-compute-resources-for-CI-confi.patch:\ndiff --git a/.cirrus.star b/.cirrus.star\n+ \"\"\"The main function is executed by cirrus-ci after loading\n.cirrus.yml and can\n+ extend the CI definition further.\n+\n+ As documented in .cirrus.yml, the final CI configuration is composed of\n+\n+ 1) the contents of this file\n\nInstead of '1) the contents of this file' comment, '1) the contents\nof .cirrus.yml file' could be better since this comment appears in\n.cirrus.star file.\n\n+ if repo_config_url != None:\n+ print(\"loading additional configuration from\n\\\"{}\\\"\".format(repo_config_url))\n+ output += config_from(repo_config_url)\n+ else:\n+ output += \"n# REPO_CI_CONFIG_URL was not set\\n\"\n\nPossible typo at output += \"n# REPO_CI_CONFIG_URL was not set\\n\".\n\nv3-0008-ci-switch-tasks-to-debugoptimized-build.patch:\nJust thinking of possible optimizations and thought can't we create\nsomething like 'buildtype: xxx' to override default buildtype using\n.cirrus.star? This could be better for PG developers. For sure that\ncould be the subject of another patch.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 23 Aug 2023 23:55:15 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 23 Aug 2023, at 21:22, Andres Freund <andres@anarazel.de> wrote:\n>> I think there's more effective ways to make this cheaper. The basic thing\n>> would be to use libpq instead of forking of psql to make a connection\n>> check.\n\n> I had it in my head that not using libpq in pg_regress was a deliberate choice,\n> but I fail to find a reference to it in the archives.\n\nI have a vague feeling that you are right about that. 
Perhaps the\nconcern was that under \"make installcheck\", pg_regress might be\nusing a build-tree copy of libpq rather than the one from the\nsystem under test. As long as we're just trying to ping the server,\nthat shouldn't matter too much I think ... unless we hit problems\nwith, say, a different default port number or socket path compiled into\none copy vs. the other? That seems like it's probably a \"so don't\ndo that\" case, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Aug 2023 17:02:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 23 Aug 2023, at 23:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 23 Aug 2023, at 21:22, Andres Freund <andres@anarazel.de> wrote:\n>>> I think there's more effective ways to make this cheaper. The basic thing\n>>> would be to use libpq instead of forking of psql to make a connection\n>>> check.\n> \n>> I had it in my head that not using libpq in pg_regress was a deliberate choice,\n>> but I fail to find a reference to it in the archives.\n> \n> I have a vague feeling that you are right about that. Perhaps the\n> concern was that under \"make installcheck\", pg_regress might be\n> using a build-tree copy of libpq rather than the one from the\n> system under test. As long as we're just trying to ping the server,\n> that shouldn't matter too much I think ... unless we hit problems\n> with, say, a different default port number or socket path compiled into\n> one copy vs. the other? That seems like it's probably a \"so don't\n> do that\" case, though.\n\nAh yes, that does ring a familiar bell. I agree that using it for pinging the\nserver should be safe either way, but we should document the use-with-caution\nin pg_regress.c if/when we go down that path. 
I'll take a stab at changing the\npsql retry loop for pinging tomorrow to see what it would look like.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 23:12:50 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-23 17:02:51 -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > On 23 Aug 2023, at 21:22, Andres Freund <andres@anarazel.de> wrote:\n> >> I think there's more effective ways to make this cheaper. The basic thing\n> >> would be to use libpq instead of forking of psql to make a connection\n> >> check.\n>\n> > I had it in my head that not using libpq in pg_regress was a deliberate choice,\n> > but I fail to find a reference to it in the archives.\n>\n> I have a vague feeling that you are right about that. Perhaps the\n> concern was that under \"make installcheck\", pg_regress might be\n> using a build-tree copy of libpq rather than the one from the\n> system under test. As long as we're just trying to ping the server,\n> that shouldn't matter too much I think\n\nOr perhaps the opposite? That an installcheck pg_regress run might use the\nsystem libpq, which doesn't have the symbols, or such?\n\nEither way, with a function like PQping(), which existing in well beyond the\nsupported branches, that shouldn't be an issue?\n\n\n> ... unless we hit problems with, say, a different default port number or\n> socket path compiled into one copy vs. the other? That seems like it's\n> probably a \"so don't do that\" case, though.\n\nIf we were to find such a case, it seems we could just add whatever missing\nparameter to the connection string? 
I think we would likely already hit such\nproblems though, the psql started by an installcheck pg_regress might use the\nsystem libpq, I think?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Aug 2023 14:43:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-23 23:55:15 +0300, Nazir Bilal Yavuz wrote:\n> On Wed, 23 Aug 2023 at 09:58, Andres Freund <andres@anarazel.de> wrote:\n> > I'm hoping to push this fairly soon, as I'll be on vacation the last week of\n> > August. I'll be online intermittently though, if there are issues, I can react\n> > (very limited connectivity for middday Aug 29th - midday Aug 31th though). I'd\n> > appreciate a quick review or two.\n> \n> Patch looks good to me besides some minor points.\n\nThanks for looking!\n\n\n> v3-0004-ci-Prepare-to-make-compute-resources-for-CI-confi.patch:\n> diff --git a/.cirrus.star b/.cirrus.star\n> + \"\"\"The main function is executed by cirrus-ci after loading\n> .cirrus.yml and can\n> + extend the CI definition further.\n> +\n> + As documented in .cirrus.yml, the final CI configuration is composed of\n> +\n> + 1) the contents of this file\n> \n> Instead of '1) the contents of this file' comment, '1) the contents\n> of .cirrus.yml file' could be better since this comment appears in\n> .cirrus.star file.\n\nGood catch.\n\n> + if repo_config_url != None:\n> + print(\"loading additional configuration from\n> \\\"{}\\\"\".format(repo_config_url))\n> + output += config_from(repo_config_url)\n> + else:\n> + output += \"n# REPO_CI_CONFIG_URL was not set\\n\"\n> \n> Possible typo at output += \"n# REPO_CI_CONFIG_URL was not set\\n\".\n\nFixed.\n\n\n> v3-0008-ci-switch-tasks-to-debugoptimized-build.patch:\n> Just thinking of possible optimizations and thought can't we create\n> something like 'buildtype: xxx' to override default buildtype using\n> 
.cirrus.star? This could be better for PG developers. For sure that\n> could be the subject of another patch.\n\nWe could, but I'm not sure what the use would be?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Aug 2023 14:48:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-08-23 17:02:51 -0400, Tom Lane wrote:\n>> ... unless we hit problems with, say, a different default port number or\n>> socket path compiled into one copy vs. the other? That seems like it's\n>> probably a \"so don't do that\" case, though.\n\n> If we were to find such a case, it seems we could just add whatever missing\n> parameter to the connection string? I think we would likely already hit such\n> problems though, the psql started by an installcheck pg_regress might use the\n> system libpq, I think?\n\nThe trouble with that approach is that in \"make installcheck\", we\ndon't really want to assume we know what the installed libpq's default\nconnection parameters are. So we don't explicitly know where that\nlibpq will connect.\n\nAs I said, we might be able to start treating installed-libpq-not-\ncompatible-with-build as a \"don't do it\" case. Another idea is to try\nto ensure that pg_regress uses the same libpq that the psql-under-test\ndoes; but I'm not sure how to implement that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Aug 2023 17:55:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-23 17:55:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-08-23 17:02:51 -0400, Tom Lane wrote:\n> >> ... 
unless we hit problems with, say, a different default port number or\n> >> socket path compiled into one copy vs. the other? That seems like it's\n> >> probably a \"so don't do that\" case, though.\n>\n> > If we were to find such a case, it seems we could just add whatever missing\n> > parameter to the connection string? I think we would likely already hit such\n> > problems though, the psql started by an installcheck pg_regress might use the\n> > system libpq, I think?\n>\n> The trouble with that approach is that in \"make installcheck\", we\n> don't really want to assume we know what the installed libpq's default\n> connection parameters are. So we don't explicitly know where that\n> libpq will connect.\n\nStepping back: I don't think installcheck matters for the concrete use of\nlibpq we're discussing - the only time we wait for server startup is the\nnon-installcheck case.\n\nThere are other potential uses for libpq in pg_regress though - I'd e.g. like\nto have a \"monitoring\" session open, which we could use to detect that the\nserver crashed (by waiting for the FD to be become invalid). Where the\nconnection default issue could matter more?\n\nI was wondering if we could create an unambiguous connection info, but that\nseems like it'd be hard to do, without creating cross version hazards.\n\n\n> As I said, we might be able to start treating installed-libpq-not-\n> compatible-with-build as a \"don't do it\" case. Another idea is to try\n> to ensure that pg_regress uses the same libpq that the psql-under-test\n> does; but I'm not sure how to implement that.\n\nI don't think that's likely to work, psql could use a libpq with a different\nsoversion. 
We could dlopen() libpq, etc, but that seems way too complicated.\n\n\nWhat's the reason we don't force psql to come from the same build as\npg_regress?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Aug 2023 15:06:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-08-23 17:55:53 -0400, Tom Lane wrote:\n>> The trouble with that approach is that in \"make installcheck\", we\n>> don't really want to assume we know what the installed libpq's default\n>> connection parameters are. So we don't explicitly know where that\n>> libpq will connect.\n\n> Stepping back: I don't think installcheck matters for the concrete use of\n> libpq we're discussing - the only time we wait for server startup is the\n> non-installcheck case.\n\nOh, that's an excellent point. So for the immediately proposed use-case,\nthere's no issue. (We don't have a mode where we try to start a server\nusing already-installed executables.)\n\n> There are other potential uses for libpq in pg_regress though - I'd e.g. like\n> to have a \"monitoring\" session open, which we could use to detect that the\n> server crashed (by waiting for the FD to be become invalid). Where the\n> connection default issue could matter more?\n\nMeh. I don't find that idea compelling enough to justify adding\nrestrictions on what test scenarios will work. It's seldom hard to\ntell from the test output whether the server crashed.\n\n> I was wondering if we could create an unambiguous connection info, but that\n> seems like it'd be hard to do, without creating cross version hazards.\n\nHmm, we don't expect the regression test suite to work against other\nserver versions, so maybe that could be made to work --- that is, we\ncould run the psql under test and get a full set of connection\nparameters out of it? 
But I'm still not finding this worth the\ntrouble.\n\n> What's the reason we don't force psql to come from the same build as\n> pg_regress?\n\nBecause the point of installcheck is to check the installed binaries\n--- including the installed psql and libpq.\n\n(Thinks for a bit...) Maybe we should add pg_regress to the installed\nfileset, and use that copy not the in-tree copy for installcheck?\nThen we could assume it's using the same libpq as psql. IIRC there\nhave already been suggestions to do that for the benefit of PGXS\ntesting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Aug 2023 18:32:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-23 18:32:26 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > There are other potential uses for libpq in pg_regress though - I'd e.g. like\n> > to have a \"monitoring\" session open, which we could use to detect that the\n> > server crashed (by waiting for the FD to be become invalid). Where the\n> > connection default issue could matter more?\n> \n> Meh. I don't find that idea compelling enough to justify adding\n> restrictions on what test scenarios will work. It's seldom hard to\n> tell from the test output whether the server crashed.\n\nI find it pretty painful to wade through a several-megabyte regression.diffs\nto find the cause of a crash. I think we ought to use\nrestart_after_crash=false, since after a crash there's no hope for the tests\nto succeed, but even in that case, we end up with a lot of pointless contents\nin regression.diffs. 
If we instead realized that we shouldn't start further\ntests, we'd limit that by a fair bit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 23 Aug 2023 15:56:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "On 24.08.23 00:56, Andres Freund wrote:\n> Hi,\n> \n> On 2023-08-23 18:32:26 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> There are other potential uses for libpq in pg_regress though - I'd e.g. like\n>>> to have a \"monitoring\" session open, which we could use to detect that the\n>>> server crashed (by waiting for the FD to be become invalid). Where the\n>>> connection default issue could matter more?\n>>\n>> Meh. I don't find that idea compelling enough to justify adding\n>> restrictions on what test scenarios will work. It's seldom hard to\n>> tell from the test output whether the server crashed.\n> \n> I find it pretty painful to wade through a several-megabyte regression.diffs\n> to find the cause of a crash. I think we ought to use\n> restart_after_crash=false, since after a crash there's no hope for the tests\n> to succeed, but even in that case, we end up with a lot of pointless contents\n> in regression.diffs. If we instead realized that we shouldn't start further\n> tests, we'd limit that by a fair bit.\n\nI once coded it up so that if the server crashes during a test, it would \nwait until it recovers before running the next test. I found that \nuseful. I agree the current behavior is not useful in any case.\n\n\n\n", "msg_date": "Thu, 24 Aug 2023 08:10:14 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" 
}, { "msg_contents": "Hi,\n\nOn Thu, 24 Aug 2023 at 00:48, Andres Freund <andres@anarazel.de> wrote:\n> > v3-0008-ci-switch-tasks-to-debugoptimized-build.patch:\n> > Just thinking of possible optimizations and thought can't we create\n> > something like 'buildtype: xxx' to override default buildtype using\n> > .cirrus.star? This could be better for PG developers. For sure that\n> > could be the subject of another patch.\n>\n> We could, but I'm not sure what the use would be?\n\nMy main idea behind this was that PG developers could choose\n'buildtype: debug' while working on their patches and that\noptimization makes it easier to choose the buildtype.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 24 Aug 2023 19:41:00 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-22 23:58:33 -0700, Andres Freund wrote:\n> To make that possible, we need to make the compute resources for CI\n> configurable on a per-repository basis. After experimenting with a bunch of\n> ways to do that, I got stuck on that for a while. But since today we have\n> sufficient macos runners for cfbot available, so... I think the approach I\n> finally settled on is decent, although not great. It's described in the \"main\"\n> commit message:\n> [...]\n> ci: Prepare to make compute resources for CI configurable\n> I'm hoping to push this fairly soon, as I'll be on vacation the last week of\n> August. I'll be online intermittently though, if there are issues, I can react\n> (very limited connectivity for middday Aug 29th - midday Aug 31th though). 
I'd\n> appreciate a quick review or two.\n\nI pushed this yesterday.\n\nAnd then I utilized it to make cfbot use\n1) macos persistent workers, hosted by two community members\n2) our own GCP account for all the other operating systems\n\nThere were a few issues initially (needed to change how to run multiple jobs\non a single mac, and it looks like there were some issues with macos going to\nsleep while processing jobs...). But it now seems to be chugging along ok.\n\nOne of the nice things is that with our own compute we also control how much\nstorage can be used, making things like generating docs or code coverage as\npart of cfbot more realistic. And we could enable mingw by default when run as\npart of cfbot...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Aug 2023 12:03:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 23 Aug 2023, at 23:12, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 23 Aug 2023, at 23:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 23 Aug 2023, at 21:22, Andres Freund <andres@anarazel.de> wrote:\n>>>> I think there's more effective ways to make this cheaper. The basic thing\n>>>> would be to use libpq instead of forking of psql to make a connection\n>>>> check.\n>> \n>>> I had it in my head that not using libpq in pg_regress was a deliberate choice,\n>>> but I fail to find a reference to it in the archives.\n>> \n>> I have a vague feeling that you are right about that. Perhaps the\n>> concern was that under \"make installcheck\", pg_regress might be\n>> using a build-tree copy of libpq rather than the one from the\n>> system under test. As long as we're just trying to ping the server,\n>> that shouldn't matter too much I think ... 
unless we hit problems\n>> with, say, a different default port number or socket path compiled into\n>> one copy vs. the other? That seems like it's probably a \"so don't\n>> do that\" case, though.\n> \n> Ah yes, that does ring a familiar bell. I agree that using it for pinging the\n> server should be safe either way, but we should document the use-with-caution\n> in pg_regress.c if/when we go down that path. I'll take a stab at changing the\n> psql retry loop for pinging tomorrow to see what it would look like.\n\nAttached is a patch with a quick PoC for using PQping instead of using psql for\nconnection checks in pg_regress. In order to see performance, it also includes\na diag output for \"Time to first test\" which contains all setup costs. This\nmight not make it into a commit but it was quite helpful in hacking so I left\nit in for now.\n\nThe patch incorporates Andres' idea for finer granularity of checks by checking\nTICKS times per second rather than once per second; it also shifts the\npg_usleep around to require just one ping in most cases compared to two today.\n\nOn my relatively tired laptop this speeds up pg_regress setup by 100+ms, with\nmuch bigger wins on Windows in the CI. While it does add a dependency on\nlibpq, I think it's a fairly decent price to pay for running tests faster.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 28 Aug 2023 14:32:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" 
}, { "msg_contents": "> On 28 Aug 2023, at 14:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Attached is a patch with a quick PoC for using PQPing instead of using psql for\n> connection checks in pg_regress.\n\nThe attached v2 fixes a silly mistake which led to a compiler warning.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 30 Aug 2023 10:57:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Hi,\n\nOn 2023-08-30 10:57:10 +0200, Daniel Gustafsson wrote:\n> > On 28 Aug 2023, at 14:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> > Attached is a patch with a quick PoC for using PQPing instead of using psql for\n> > connection checks in pg_regress.\n> \n> The attached v2 fixes a silly mistake which led to a compiler warning.\n\nStill seems like a good idea to me. To see what impact it has, I measured the\ntime running the pg_regress tests that take less than 6s on my machine - I\nexcluded the slower ones (like the main regression tests) because they'd hide\nany overall difference.\n\nninja && m test --suite setup --no-rebuild && tests=$(m test --no-rebuild --list|grep -E '/regress'|grep -vE '(regress|postgres_fdw|test_integerset|intarray|amcheck|test_decoding)/regress'|cut -d' ' -f 3) && time m test --no-rebuild $tests\n\nTime for:\n\n\nmaster:\n\ncassert:\nreal\t0m5.265s\nuser\t0m8.422s\nsys\t0m8.381s\n\noptimized:\nreal\t0m4.926s\nuser\t0m6.356s\nsys\t0m8.263s\n\n\nmy patch (probing every 100ms with psql):\n\ncassert:\nreal\t0m3.465s\nuser\t0m8.827s\nsys\t0m8.579s\n\noptimized:\nreal\t0m2.932s\nuser\t0m6.596s\nsys\t0m8.458s\n\n\nDaniel's (probing every 50ms with PQping()):\n\ncassert:\nreal\t0m3.347s\nuser\t0m8.373s\nsys\t0m8.354s\n\noptimized:\nreal\t0m2.527s\nuser\t0m6.156s\nsys\t0m8.315s\n\n\nMy patch increased user/sys time a bit (likely due to a higher number of\nfutile psql forks), but Daniel's doesn't. 
And it does show a nice overall wall\nclock time saving.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 12 Sep 2023 16:49:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 13 Sep 2023, at 01:49, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-08-30 10:57:10 +0200, Daniel Gustafsson wrote:\n>>> On 28 Aug 2023, at 14:32, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> Attached is a patch with a quick PoC for using PQPing instead of using psql for\n>>> connection checks in pg_regress.\n>> \n>> The attached v2 fixes a silly mistake which led to a compiler warning.\n> \n> Still seems like a good idea to me. To see what impact it has, I measured the\n> time running the pg_regress tests that take less than 6s on my machine - I\n> excluded the slower ones (like the main regression tests) because they'd hide\n> any overall difference.\n\n> My patch increased user/sys time a bit (likely due to a higher number of\n> futile psql forks), but Daniel's doesn't. And it does show a nice overall wall\n> clock time saving.\n\nWhile it does add a lib dependency I think it's worth doing, so I propose we go\nahead with this for master.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 18 Sep 2023 15:12:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 13 Sep 2023, at 01:49, Andres Freund <andres@anarazel.de> wrote:\n\n> My patch increased user/sys time a bit (likely due to a higher number of\n> futile psql forks), but Daniel's doesn't. And it does show a nice overall wall\n> clock time saving.\n\nI went ahead and applied this on master, thanks for review! 
Now to see if\nthere will be any noticeable difference in resource usage.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 22:25:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I went ahead and applied this on master, thanks for review! Now to see if\n> there will be any noticeable difference in resource usage.\n\nI think that tools like Coverity are likely to whine about your\nuse of sprintf instead of snprintf. Sure, it's perfectly safe,\nbut that won't stop the no-sprintf-ever crowd from complaining.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Oct 2023 16:34:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" }, { "msg_contents": "> On 24 Oct 2023, at 22:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I went ahead and applied this on master, thanks for review! Now to see if\n>> there will be any noticeable difference in resource usage.\n> \n> I think that tools like Coverity are likely to whine about your\n> use of sprintf instead of snprintf. Sure, it's perfectly safe,\n> but that won't stop the no-sprintf-ever crowd from complaining.\n\nFair point, that's probably quite likely to happen. I can apply an snprintf()\nconversion change like this in the two places introduced by this:\n\n- sprintf(s, \"%d\", port);\n+ snprintf(s, sizeof(s), \"%d\", port);\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 22:45:48 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cirrus-ci is lowering free CI cycles - what to do with cfbot,\n etc?" } ]
[ { "msg_contents": "Hi,\n\nI updated the comment in 'SnapshotSetCommandId' in this patch which specifies the reason why it\nis not necessary to update 'curcid' of CatalogSnapshot.\n\nBest regards, xiaoran", "msg_date": "Tue, 8 Aug 2023 05:11:20 +0000", "msg_from": "Xiaoran Wang <wxiaoran@vmware.com>", "msg_from_op": true, "msg_subject": "[PATCH] update the comment in SnapshotSetCommandId" } ]
[ { "msg_contents": "Hi All,\nI have been looking at memory consumed when planning a partitionwise join\n[1], [2] and [3]. I am using the attached patch to measure the memory\nconsumption. The patch has been useful to detect an increased memory\nconsumption in other patches e.g. [4] and [5]. The patch looks useful by\nitself. So I propose this enhancement in EXPLAIN ANALYZE.\n\nComments/suggestions welcome.\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5tHqEf3ASVqvFFcghYGPfpy7o3xnvhHwBGbJFMRH8KjNw@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAExHW5tUcVsBkq9qT=L5vYz4e-cwQNw=KAGJrtSyzOp3F=XacA@mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/CAExHW5s=bCLMMq8n_bN6iU+Pjau0DS3z_6Dn6iLE69ESmsPMJQ@mail.gmail.com\n[4]\nhttps://www.postgresql.org/message-id/CAExHW5uVZ3E5RT9cXHaxQ_DEK7tasaMN=D6rPHcao5gcXanY5w@mail.gmail.com\n[5]\nhttps://www.postgresql.org/message-id/CAExHW5s4EqY43oB=ne6B2=-xLgrs9ZGeTr1NXwkGFt2j-OmaQQ@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 8 Aug 2023 11:40:45 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi David,\nReplying to your comments on this thread.\n\n> On Tue, Aug 8, 2023 at 11:40 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> Hi All,\n>> I have been looking at memory consumed when planning a partitionwise join [1], [2] and [3]. I am using the attached patch to measure the memory consumption. The patch has been useful to detect an increased memory consumption in other patches e.g. [4] and [5]. The patch looks useful by itself. So I propose this enhancement in EXPLAIN ANALYZE.\n>>\n\nOn Wed, Aug 9, 2023 at 10:12 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I see you're recording the difference in the CurrentMemoryContext of\n> palloc'd memory before and after planning. 
That won't really alert us\n> to problems if the planner briefly allocates, say 12GBs of memory\n> during, say the join search then quickly pfree's it again. unless\n> it's an oversized chunk, aset.c won't free() any memory until\n> MemoryContextReset(). Chunks just go onto a freelist for reuse later.\n> So at the end of planning, the context may still have that 12GBs\n> malloc'd, yet your new EXPLAIN ANALYZE property might end up just\n> reporting a tiny fraction of that.\n>\n> I wonder if it would be more useful to just have ExplainOneQuery()\n> create a new memory context and switch to that and just report the\n> context's mem_allocated at the end.\n\nThe memory allocated but not used is still available for use in the rest\nof the query processing stages. The memory context which is\nCurrentMemoryContext at the time of planning is also\nCurrentMemoryContext at the time of its execution if it's not\nPREPAREd. So it's not exactly \"consumed\" memory. But your point is\nvalid, that it indicates how much was allocated. Just reporting\nallocated memory won't be enough since it might have been allocated\nbefore planning began. How about reporting both used and net allocated\nmemory? If we use a separate memory context as you suggest, context's\nmem_allocated would be net allocated.\n\n>\n> It's also slightly annoying that these planner-related summary outputs\n> are linked to EXPLAIN ANALYZE. We could be showing them in EXPLAIN\n> without ANALYZE. If we were to change that now, it might be a bit\n> annoying for the regression tests as we'd need to go and add SUMMARY\n> OFF in a load of places...\n\nWe do report planning time when SUMMARY ON without ANALYZE. 
Am I\nmissing something?\n\n#select version();\n version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n(1 row)\n#explain (summary on) select * from pg_class;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..18.13 rows=413 width=273)\n Planning Time: 0.282 ms\n(2 rows)\n#explain (summary off) select * from pg_class;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..18.13 rows=413 width=273)\n(1 row)\n\nI need to just make sure that the Planning Memory is reported with SUMMARY ON.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 9 Aug 2023 20:09:13 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Wed, Aug 9, 2023 at 8:09 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi David,\n> Replying to your comments on this thread.\n>\n> > On Tue, Aug 8, 2023 at 11:40 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> >>\n> >> Hi All,\n> >> I have been looking at memory consumed when planning a partitionwise join [1], [2] and [3]. I am using the attached patch to measure the memory consumption. The patch has been useful to detect an increased memory consumption in other patches e.g. [4] and [5]. The patch looks useful by itself. So I propose this enhancement in EXPLAIN ANALYZE.\n> >>\n>\n> On Wed, Aug 9, 2023 at 10:12 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I see you're recording the difference in the CurrentMemoryContext of\n> > palloc'd memory before and after planning. 
That won't really alert us\n> > to problems if the planner briefly allocates, say 12GBs of memory\n> > during, say the join search then quickly pfree's it again. unless\n> > it's an oversized chunk, aset.c won't free() any memory until\n> > MemoryContextReset(). Chunks just go onto a freelist for reuse later.\n> > So at the end of planning, the context may still have that 12GBs\n> > malloc'd, yet your new EXPLAIN ANALYZE property might end up just\n> > reporting a tiny fraction of that.\n> >\n> > I wonder if it would be more useful to just have ExplainOneQuery()\n> > create a new memory context and switch to that and just report the\n> > context's mem_allocated at the end.\n>\n> The memory allocated but not used is still available for use in rest\n> of the query processing stages. The memory context which is\n> CurrentMemoryContext at the time of planning is also\n> CurrentMemoryContext at the time of its execution if it's not\n> PREPAREd. So it's not exactly \"consumed\" by memory. But your point is\n> valid, that it indicates how much was allocated. Just reporting\n> allocated memory wont' be enough since it might have been allocated\n> before planning began. How about reporting both used and net allocated\n> memory? If we use a separate memory context as you suggest, context's\n> mem_allocated would be net allocated.\n\nThinking more about it, I think memory used is the only right metrics.\nIt's an optimization in MemoryContext implementation that malloc'ed\nmemory is not freed when it is returned using pfree().\n\nIf we have to report allocated memory, maybe we should report as two\nproperties Planning Memory used, Planning Memory allocated\nrespectively. 
But again the latter is an implementation detail which\nmay not be relevant.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 9 Aug 2023 20:41:58 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Thinking more about it, I think memory used is the only right metrics.\n> It's an optimization in MemoryContext implementation that malloc'ed\n> memory is not freed when it is returned using free().\n\nI guess it depends on the problem you're trying to solve. I had\nthought you were trying to do some work to reduce the memory used by\nthe planner, so I imagined you wanted this so you could track your\nprogress and also to help ensure we don't make too many mistakes in\nthe future which makes memory consumption worse again. For that, I\nimagined you'd want to know how much memory is held to ransom by the\ncontext with malloc(), not palloc(). Is it really useful to reduce the\npalloc'd memory by the end of planning if it does not reduce the\nmalloc'd memory?\n\nAnother way we might go about reducing planner memory is to make\nchanges to the allocators themselves. For example, aset rounds up to\nthe next power of 2. 
If we just tracked what was consumed by\npalloc() that would appear to save us nothing, but it might actually\nsave us several malloced blocks.\n\nDavid\n\n\n", "msg_date": "Thu, 10 Aug 2023 03:26:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Wed, Aug 9, 2023 at 8:56 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 10 Aug 2023 at 03:12, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Thinking more about it, I think memory used is the only right metrics.\n> > It's an optimization in MemoryContext implementation that malloc'ed\n> > memory is not freed when it is returned using free().\n>\n> I guess it depends on the problem you're trying to solve. I had\n> thought you were trying to do some work to reduce the memory used by\n> the planner, so I imagined you wanted this so you could track your\n> progress and also to help ensure we don't make too many mistakes in\n> the future which makes memory consumption worse again.\n\nRight.\n\n> For that, I\n> imagined you'd want to know how much memory is held to ransom by the\n> context with malloc(), not palloc(). Is it really useful to reduce the\n> palloc'd memory by the end of planning if it does not reduce the\n> malloc'd memory?\n\nAFAIU, the memory freed by the end of planning can be used by later\nstages of query processing. The memory malloc'ed during planning can\nalso be used at the time of execution if it was not palloc'ed or\npalloc'ed but pfree'd. So I think it's useful to reduce palloc'ed\nmemory by the end of planning.\n\n\n\n>\n> Another way we might go about reducing planner memory is to make\n> changes to the allocators themselves. For example, aset rounds up to\n> the next power of 2. 
If we decided to do something like add more\n> freelists to double the number so we could add a mid-way point\n> freelist between each power of 2, then we might find it reduces the\n> planner memory by, say 12.5%. If we just tracked what was consumed by\n> palloc() that would appear to save us nothing, but it might actually\n> save us several malloced blocks.\n>\n\nThis won't just affect planner but every subsystem of PostgreSQL, not\njust planner, unless we devise a new context type for planner.\n\nMy point is what's relevant here is how much net memory planner asked\nfor. Internally how much memory PostgreSQL allocated seems relevant\nfor the memory context infra as such but not so much for planner\nalone.\n\nIf you think that memory allocated is important, I will add both used\nand (net) allocated memory to EXPLAIN output.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:03:11 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi David,\n\nOn Wed, Aug 9, 2023 at 8:09 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> I need to just make sure that the Planning Memory is reported with SUMMARY ON.\n>\nThe patch reports planning memory in EXPLAIN without ANALYZE when SUMMARY = ON.\n\n#explain (summary on) select * from a, b where a.c1 = b.c1 and a.c1 < b.c2;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Append (cost=55.90..245.70 rows=1360 width=24)\n -> Hash Join (cost=55.90..119.45 rows=680 width=24)\n Hash Cond: (a_1.c1 = b_1.c1)\n Join Filter: (a_1.c1 < b_1.c2)\n -> Seq Scan on a_p1 a_1 (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=30.40..30.40 rows=2040 width=12)\n -> Seq Scan on b_p1 b_1 (cost=0.00..30.40 rows=2040 width=12)\n -> Hash Join (cost=55.90..119.45 rows=680 width=24)\n Hash Cond: (a_2.c1 = b_2.c1)\n Join Filter: (a_2.c1 < 
b_2.c2)\n -> Seq Scan on a_p2 a_2 (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=30.40..30.40 rows=2040 width=12)\n -> Seq Scan on b_p2 b_2 (cost=0.00..30.40 rows=2040 width=12)\n Planning Time: 2.220 ms\n Planning Memory: 124816 bytes\n(15 rows)\n\nWe are good there.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:04:31 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 10/8/2023 15:33, Ashutosh Bapat wrote:\n> On Wed, Aug 9, 2023 at 8:56 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Thu, 10 Aug 2023 at 03:12, Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>> I guess it depends on the problem you're trying to solve. I had\n>> thought you were trying to do some work to reduce the memory used by\n>> the planner, so I imagined you wanted this so you could track your\n>> progress and also to help ensure we don't make too many mistakes in\n>> the future which makes memory consumption worse again.\n>> Another way we might go about reducing planner memory is to make\n>> changes to the allocators themselves. For example, aset rounds up to\n>> the next power of 2. If we decided to do something like add more\n>> freelists to double the number so we could add a mid-way point\n>> freelist between each power of 2, then we might find it reduces the\n>> planner memory by, say 12.5%. If we just tracked what was consumed by\n>> palloc() that would appear to save us nothing, but it might actually\n>> save us several malloced blocks.\n>>\n> \n> This won't just affect planner but every subsystem of PostgreSQL, not\n> just planner, unless we device a new context type for planner.\n> \n> My point is what's relevant here is how much net memory planner asked\n> for. 
Internally how much memory PostgreSQL allocated seems relevant\n> for the memory context infra as such but not so much for planner\n> alone.\n> \n> If you think that memory allocated is important, I will add both used\n> and (net) allocated memory to EXPLAIN output.\nAs a developer, I like having as much internal info in my EXPLAIN as \npossible. But this command is designed mainly for users, not core \ndevelopers. The size of memory allocated usually doesn't make sense for \nusers. On the other hand, developers can watch it easily in different \nways, if needed. So, I vote for the 'memory used' metric only.\n\nThe patch looks good, passes the tests.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 11 Aug 2023 12:11:21 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Fri, Aug 11, 2023 at 10:41 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> >>\n> >\n> > This won't just affect planner but every subsystem of PostgreSQL, not\n> > just planner, unless we device a new context type for planner.\n> >\n> > My point is what's relevant here is how much net memory planner asked\n> > for. Internally how much memory PostgreSQL allocated seems relevant\n> > for the memory context infra as such but not so much for planner\n> > alone.\n> >\n> > If you think that memory allocated is important, I will add both used\n> > and (net) allocated memory to EXPLAIN output.\n> As a developer, I like having as much internal info in my EXPLAIN as\n> possible. But this command is designed mainly for users, not core\n> developers. The size of memory allocated usually doesn't make sense for\n> users. On the other hand, developers can watch it easily in different\n> ways, if needed. 
So, I vote for the 'memory used' metric only.\n>\n> The patch looks good, passes the tests.\n\nThanks Andrey.\n\nDavid, are you against reporting \"memory used\"? If you are not against\nreporting used memory and still think that memory allocated is worth\nreporting, I am fine reporting allocated memory too.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 11 Aug 2023 18:50:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> My point is what's relevant here is how much net memory planner asked\n> for.\n\nBut that's not what your patch is reporting. All you're reporting is\nthe difference in memory that's *currently* palloc'd from before and\nafter the planner ran. If we palloc'd 600 exabytes then pfree'd it\nagain, your metric won't change.\n\nI'm struggling a bit to understand your goals here. If your goal is\nto make a series of changes that reduces the amount of memory that's\npalloc'd at the end of planning, then your patch seems to suit that\ngoal, but per the quote above, it seems you care about how many bytes\nare palloc'd during planning and your patch does not seem to track that.\n\nWith your patch as it is, to improve the metric you're reporting we\ncould go off and do things like pfree Paths once createplan.c is done,\nbut really, why would we do that? Just to make the \"Planning Memory\"\nmetric look better doesn't seem like a worthy goal.\n\nInstead, if we reported the context's mem_allocated, then it would\ngive us the flexibility to make changes to the memory context code to\nhave the metric look better. It might also alert us to planner\ninefficiencies and problems with new code that may cause a large spike\nin the amount of memory that gets allocated. Now, I'm not saying we\nshould add a patch that shows mem_allocated. 
I'm just questioning if\nyour proposed patch meets the goals you're trying to achieve. I just\nsuggested that you might want to consider something else as a metric\nfor your memory usage reduction work.\n\nDavid\n\n\n", "msg_date": "Mon, 14 Aug 2023 11:53:02 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 14/8/2023 06:53, David Rowley wrote:\n> On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>> My point is what's relevant here is how much net memory planner asked\n>> for.\n> \n> But that's not what your patch is reporting. All you're reporting is\n> the difference in memory that's *currently* palloc'd from before and\n> after the planner ran. If we palloc'd 600 exabytes then pfree'd it\n> again, your metric won't change.\n> \n> I'm struggling a bit to understand your goals here. If your goal is\n> to make a series of changes that reduces the amount of memory that's\n> palloc'd at the end of planning, then your patch seems to suit that\n> goal, but per the quote above, it seems you care about how many bytes\n> are palloc'd during planning and your patch does not seem track that.\n> \n> With your patch as it is, to improve the metric you're reporting we\n> could go off and do things like pfree Paths once createplan.c is done,\n> but really, why would we do that? Just to make the \"Planning Memory\"\n> metric looks better doesn't seem like a worthy goal.\n> \n> Instead, if we reported the context's mem_allocated, then it would\n> give us the flexibility to make changes to the memory context code to\n> have the metric look better. It might also alert us to planner\n> inefficiencies and problems with new code that may cause a large spike\n> in the amount of memory that gets allocated. Now, I'm not saying we\n> should add a patch that shows mem_allocated. 
I'm just questioning if\n> your proposed patch meets the goals you're trying to achieve. I just\n> suggested that you might want to consider something else as a metric\n> for your memory usage reduction work.\nReally, the current approach with the final value of consumed memory \nsmooths peaks of memory consumption. I recall examples like massive \nmillion-sized arrays or reparameterization with many partitions where \nthe optimizer consumes much additional memory during planning.\nIdeally, to dive into the planner issues, we should have something like \na report-in-progress in the vacuum, reporting on memory consumption at \neach subquery and join level. But it looks too much for typical queries.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Mon, 14 Aug 2023 09:52:47 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Aug 14, 2023 at 5:23 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > My point is what's relevant here is how much net memory planner asked\n> > for.\n>\n> But that's not what your patch is reporting. All you're reporting is\n> the difference in memory that's *currently* palloc'd from before and\n> after the planner ran. If we palloc'd 600 exabytes then pfree'd it\n> again, your metric won't change.\n\nMaybe I didn't use the right phrase \"asked for\". But I expected that\nto be read in the context of \"net memory\" - net as an adjective.\nAnyway, I will make it more clear below.\n\n>\n> I'm struggling a bit to understand your goals here. 
If your goal is\n> to make a series of changes that reduces the amount of memory that's\n> palloc'd at the end of planning, then your patch seems to suit that\n> goal, but per the quote above, it seems you care about how many bytes\n> are palloc'd during planning and your patch does not seem track that.\n\nThere are three metrics here. M1: The total number of bytes that the\nplanner \"requests\". M2: The total number of bytes that \"remain\npalloc'ed\" at a given point in time. M3: Maximum number of bytes that\nwere palloc'ed at any point in time during planning. The following\nsequence of operations will explain the difference:\np1 palloc: 100, M1 = 100, M2 = 100, M3 = 100\np2 palloc: 100, M1 = 200, M2 = 200, M3 = 200\np3 pfree: 100, M1 = 200, M2 = 100, M3 = 200\np4 palloc: 100, M1 = 300, M2 = 200, M3 = 200\np5 palloc: 100, M1 = 400, M2 = 300, M3 = 300\np6 pfree: 100, M1 = 400, M2 = 200, M3 = 300\n\nThe patch reports M2 at the end of planning.\nMemoryContextData::mem_allocated is not exactly the same as M3 but\ngives indication of M3.\n\nMy goal is to reduce all three M1, M2 and M3. I don't think it's easy to\ntrack M1 and also worth the complexity. M2 and M3 instead act as rough\nindicators of M1.\n>\n> With your patch as it is, to improve the metric you're reporting we\n> could go off and do things like pfree Paths once createplan.c is done,\n> but really, why would we do that? Just to make the \"Planning Memory\"\n> metric looks better doesn't seem like a worthy goal.\n>\n\nAs I mentioned earlier the CurrentMemoryContext at the time of\nplanning is also used during query execution. There are other contexts\nlike per row, per operator contexts but anything which is specific to\nthe running query and does not fit those contexts is allocated in this\ncontext. If we reduce memory that remains palloc'ed (M2) at the end of\nthe planning, it can be used during execution. 
So it looks like a\nworthy goal.\n\n> Instead, if we reported the context's mem_allocated, then it would\n> give us the flexibility to make changes to the memory context code to\n> have the metric look better. It might also alert us to planner\n> inefficiencies and problems with new code that may cause a large spike\n> in the amount of memory that gets allocated.\n\nThis is M3. I agree with your reasoning here. We should report M3 as\nwell. I will make changes to the patch.\n\n> Now, I'm not saying we\n> should add a patch that shows mem_allocated. I'm just questioning if\n> your proposed patch meets the goals you're trying to achieve. I just\n> suggested that you might want to consider something else as a metric\n> for your memory usage reduction work.\n\nOk. Thanks for the suggestions. More suggestions are welcome too.\n\n[1] https://www.merriam-webster.com/dictionary/net\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 14 Aug 2023 12:33:35 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Aug 14, 2023 at 8:22 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> Really, the current approach with the final value of consumed memory\n> smooths peaks of memory consumption. I recall examples like massive\n> million-sized arrays or reparameterization with many partitions where\n> the optimizer consumes much additional memory during planning.\n> Ideally, to dive into the planner issues, we should have something like\n> a report-in-progress in the vacuum, reporting on memory consumption at\n> each subquery and join level. But it looks too much for typical queries.\n\nPlanners usually finish within a second. When partitioning is\ninvolved it might take a few dozen seconds but it's still within a\nminute and we are working to reduce that as well to a couple hundred\nmilliseconds at max. 
Tracking memory usage during this small time may\nnot be worth it. The tracking itself might make the planning\ninefficient and we might still miss the spikes in memory allocations,\nif they are very short lived. If the planner runs for more than a few\nminutes, maybe we could add some tracking.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 14 Aug 2023 12:39:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Aug 14, 2023 at 3:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, Aug 14, 2023 at 8:22 AM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> >\n> > Really, the current approach with the final value of consumed memory\n> > smooths peaks of memory consumption. I recall examples like massive\n> > million-sized arrays or reparameterization with many partitions where\n> > the optimizer consumes much additional memory during planning.\n> > Ideally, to dive into the planner issues, we should have something like\n> > a report-in-progress in the vacuum, reporting on memory consumption at\n> > each subquery and join level. But it looks too much for typical queries.\n>\n> Planners usually finish within a second. When partitioning is\n> involved it might take a few dozen seconds but it's still within a\n> minute and we are working to reduce that as well to a couple hundred\n> milliseconds at max. Tracking memory usage during this small time may\n> not be worth it. The tracking itself might make the planning\n> inefficient and we might still miss the spikes in memory allocations,\n> if they are very short lived. If the planner runs for more than a few\n> minutes, maybe we could add some tracking.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n>\n\nHi. I tested it.\nnot sure if the following is desired behavior. 
first run with explain,\nthen run with explain(summary on).\nthe second time, Planning Memory: 0 bytes.\n\nregression=# PREPARE q4 AS SELECT 1 AS a;\nexplain EXECUTE q4;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n(1 row)\n\nregression=# explain(summary on) EXECUTE q4;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n Planning Time: 0.009 ms\n Planning Memory: 0 bytes\n(3 rows)\n---------------------------------------------\npreviously, if you want stats of a given memory context and its\nchildren, you can only use MemoryContextStatsDetail.\nbut it will only go to stderr or LOG_SERVER_ONLY.\nNow, MemoryContextMemUsed is being exposed. I can do something like:\n\nmem_consumed = MemoryContextMemUsed(CurrentMemoryContext);\n//do stuff.\nmem_consumed = MemoryContextMemUsed(CurrentMemoryContext) - mem_consumed;\n\nit will give me the NET memory consumed by doing stuff in between. Is\nmy understanding correct?\n\n\n", "msg_date": "Tue, 22 Aug 2023 15:46:44 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Sorry for the late reply. I was working on David's suggestion.\n\nHere's a response to your questions and also a new set of patches.\n\nOn Tue, Aug 22, 2023 at 1:16 PM jian he <jian.universality@gmail.com> wrote:\n> Hi. I tested it.\n> not sure if the following is desired behavior. 
first run with explain,\n> then run with explain(summary on).\n> the second time, Planning Memory: 0 bytes.\n>\n> regression=# PREPARE q4 AS SELECT 1 AS a;\n> explain EXECUTE q4;\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=4)\n> (1 row)\n>\n> regression=# explain(summary on) EXECUTE q4;\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=4)\n> Planning Time: 0.009 ms\n> Planning Memory: 0 bytes\n> (3 rows)\n> ---------------------------------------------\n\nYes. This is expected since the plan is already available and no\nmemory is required to fetch it from the cache. I imagine, if there\nwere parameters to the prepared plan, it would consume some memory to\nevaluate those parameters and some more memory if replanning was\nrequired.\n\n\n> previously, if you want stats of a given memory context and its\n> children, you can only use MemoryContextStatsDetail.\n> but it will only go to stderr or LOG_SERVER_ONLY.\n> Now, MemoryContextMemUsed is being exposed. I can do something like:\n>\n> mem_consumed = MemoryContextMemUsed(CurrentMemoryContext);\n> //do stuff.\n> mem_consumed = MemoryContextMemUsed(CurrentMemoryContext) - mem_consumed;\n>\n> it will give me the NET memory consumed by doing staff in between. Is\n> my understanding correct?\n\nYes.\n\nHere are three patches based on the latest master.\n\n0001\n====\nthis is same as the previous patch with few things fixed. 1. Call\nMemoryContextMemUsed() before INSTR_TIME_SET_CURRENT so that the time\ntaken by MemoryContextMemUsed() is not counted in planning time. 2. In\nExplainOnePlan, use a separate code block for reporting memory.\n\n0002\n====\nThis patch reports both memory allocated and memory used in the\nCurrentMemoryContext at the time of planning. 
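(An aside to make the used/allocated distinction concrete: an allocation-set context obtains whole blocks from the operating system and hands out chunks from them, so 'allocated' normally runs ahead of 'used'. The toy model below is illustrative arithmetic only, with an invented block size; it is not the server code.)

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of allocation-set bookkeeping.  'allocated' counts the
 * whole blocks obtained so far; 'used' is allocated minus the free
 * space still left inside those blocks.  Invented for illustration;
 * assumes each request is smaller than one block.
 */
#define TOY_BLOCK_SIZE 8192

typedef struct ToyContext
{
	size_t		totalspace;		/* bytes obtained in whole blocks */
	size_t		freespace;		/* part of those blocks not handed out */
} ToyContext;

/* Satisfy one chunk request, grabbing another block if needed. */
static void
toy_alloc(ToyContext *cxt, size_t size)
{
	if (cxt->freespace < size)
	{
		cxt->totalspace += TOY_BLOCK_SIZE;
		cxt->freespace += TOY_BLOCK_SIZE;
	}
	cxt->freespace -= size;
}

/* 'used' in the sense reported here: allocated minus free. */
static size_t
toy_mem_used(const ToyContext *cxt)
{
	return cxt->totalspace - cxt->freespace;
}
```

Two requests of 100 and 200 bytes leave used at 300 while allocated jumps straight to one whole block, which mirrors why the two figures in the sample output differ.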
It converts \"Planning\nMemory\" into a section with two values reported as \"used\" and\n\"allocated\" as below\n\n#explain (summary on) select * from pg_class c, pg_type t where\nc.reltype = t.oid;\n QUERY PLAN\n--------------------------------------------------------------------------\n Hash Join (cost=28.84..47.08 rows=414 width=533)\n ... snip ...\n Planning Time: 9.274 ms\n Planning Memory: used=80848 bytes allocated=90112 bytes\n(7 rows)\n\nIn JSON format\n#explain (summary on, format json) select * from pg_class c, pg_type t\nwhere c.reltype = t.oid;\n QUERY PLAN\n-----------------------------------------------\n [ +\n { +\n \"Plan\": { +\n ... snip ...\n }, +\n \"Planning Time\": 0.466, +\n \"Planning Memory\": { +\n \"Used\": 80608, +\n \"Allocated\": 90112 +\n } +\n } +\n ]\n(1 row)\n\nPFA explain and explain analyze output in all the formats.\n\nThe patch adds MemoryContextMemConsumed() which is similar to\nMemoryContextStats() or MemoryContextStatsDetails() except 1. the\nlater always prints the memory statistics to either stderr or to the\nserver error log and 2. it doesn't return MemoryContextCounters that\nit gathered. We should probably change MemoryContextStats or\nMemoryContextStatsDetails() according to those two points and not add\nMemoryContextMemConsumed().\n\nI have not merged this into 0001 yet. But once we agree upon whether\nthis is the right thing to do, I will merge it into 0001.\n\n0003\n====\nWhen reporting memory allocated, a confusion may arise as to whether\nto report the \"net\" memory allocated between start and end of planner\nOR only the memory that remains allocated after end. This confusion\ncan be avoided by using an exclusive memory context (child of\nCurrentMemoryContext) for planning. That's what 0003 implements as\nsuggested by David. 
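(A sketch of why the dedicated context removes the ambiguity: with a shared context the planner's share can only be obtained by subtracting a snapshot taken before planning, whereas a child context created just for planning starts at zero, so its final counters are the planner's consumption by construction. Invented toy types below, not the patch itself.)

```c
#include <assert.h>
#include <stddef.h>

/* Invented stand-in for a context's allocation counter. */
typedef struct ToyCounters
{
	size_t		totalspace;		/* bytes allocated so far */
} ToyCounters;

/*
 * Shared-context strategy: the planner's share is the growth of the
 * context across planning, so a 'before' snapshot must be kept.
 */
static size_t
net_allocated(ToyCounters before, ToyCounters after)
{
	return after.totalspace - before.totalspace;
}

/*
 * Dedicated-context strategy: a context created only for planning
 * starts empty, so whatever it reports afterwards is the planner's
 * allocation, with no snapshot bookkeeping at all.
 */
static size_t
dedicated_allocated(ToyCounters planner_cxt)
{
	return planner_cxt.totalspace;
}
```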
As mentioned in one of the comments in the patch,\nin order to measure memory allocated accurately the new MemoryContext\nhas to be of the same type as the memory context that will be used\notherwise by the planner i.e. CurrentMemoryContext, and should use the\nsame parameters. IIUC, it will always be AllocSet. So the patch uses\nAllocSet and Asserts so. But in case this changes in future, we will\nneed a way to create a new memory context with the same properties as\nthe CurrentMemoryContext, a functionality missing right now. Once we\nagree upon the approach, the patch will need to be merged into 0002\nand in turn 0001.\n\n--\nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 24 Aug 2023 16:01:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Using your patch I found out one redundant memory usage in the planner [1]. It can be interesting as an example of how this patch can detect problems.\n\n[1] Optimize planner memory consumption for huge arrays\nhttps://www.postgresql.org/message-id/flat/em9939439a-441a-4b27-a977-ebdf5987dc49%407d14f008.com\n\n-- \nRegards,\nAndrei Lepikhov\n\nOn Thu, Aug 24, 2023, at 5:31 PM, Ashutosh Bapat wrote:\n> Sorry for the late reply. I was working on David's suggestion.\n>\n> Here's a response to your questions and also a new set of patches.\n>\n> On Tue, Aug 22, 2023 at 1:16 PM jian he <jian.universality@gmail.com> wrote:\n>> Hi. I tested it.\n>> not sure if following is desired behavior. 
first run with explain,\n>> then run with explain(summary on).\n>> the second time, Planning Memory: 0 bytes.\n>>\n>> regression=# PREPARE q4 AS SELECT 1 AS a;\n>> explain EXECUTE q4;\n>> QUERY PLAN\n>> ------------------------------------------\n>> Result (cost=0.00..0.01 rows=1 width=4)\n>> (1 row)\n>>\n>> regression=# explain(summary on) EXECUTE q4;\n>> QUERY PLAN\n>> ------------------------------------------\n>> Result (cost=0.00..0.01 rows=1 width=4)\n>> Planning Time: 0.009 ms\n>> Planning Memory: 0 bytes\n>> (3 rows)\n>> ---------------------------------------------\n>\n> Yes. This is expected since the plan is already available and no\n> memory is required to fetch it from the cache. I imagine, if there\n> were parameters to the prepared plan, it would consume some memory to\n> evaluate those parameters and some more memory if replanning was\n> required.\n>\n>\n>> previously, if you want stats of a given memory context and its\n>> children, you can only use MemoryContextStatsDetail.\n>> but it will only go to stderr or LOG_SERVER_ONLY.\n>> Now, MemoryContextMemUsed is being exposed. I can do something like:\n>>\n>> mem_consumed = MemoryContextMemUsed(CurrentMemoryContext);\n>> //do stuff.\n>> mem_consumed = MemoryContextMemUsed(CurrentMemoryContext) - mem_consumed;\n>>\n>> it will give me the NET memory consumed by doing staff in between. Is\n>> my understanding correct?\n>\n> Yes.\n>\n> Here are three patches based on the latest master.\n>\n> 0001\n> ====\n> this is same as the previous patch with few things fixed. 1. Call\n> MemoryContextMemUsed() before INSTR_TIME_SET_CURRENT so that the time\n> taken by MemoryContextMemUsed() is not counted in planning time. 2. In\n> ExplainOnePlan, use a separate code block for reporting memory.\n>\n> 0002\n> ====\n> This patch reports both memory allocated and memory used in the\n> CurrentMemoryContext at the time of planning. 
It converts \"Planning\n> Memory\" into a section with two values reported as \"used\" and\n> \"allocated\" as below\n>\n> #explain (summary on) select * from pg_class c, pg_type t where\n> c.reltype = t.oid;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Hash Join (cost=28.84..47.08 rows=414 width=533)\n> ... snip ...\n> Planning Time: 9.274 ms\n> Planning Memory: used=80848 bytes allocated=90112 bytes\n> (7 rows)\n>\n> In JSON format\n> #explain (summary on, format json) select * from pg_class c, pg_type t\n> where c.reltype = t.oid;\n> QUERY PLAN\n> -----------------------------------------------\n> [ +\n> { +\n> \"Plan\": { +\n> ... snip ...\n> }, +\n> \"Planning Time\": 0.466, +\n> \"Planning Memory\": { +\n> \"Used\": 80608, +\n> \"Allocated\": 90112 +\n> } +\n> } +\n> ]\n> (1 row)\n>\n> PFA explain and explain analyze output in all the formats.\n>\n> The patch adds MemoryContextMemConsumed() which is similar to\n> MemoryContextStats() or MemoryContextStatsDetails() except 1. the\n> later always prints the memory statistics to either stderr or to the\n> server error log and 2. it doesn't return MemoryContextCounters that\n> it gathered. We should probably change MemoryContextStats or\n> MemoryContextStatsDetails() according to those two points and not add\n> MemoryContextMemConsumed().\n>\n> I have not merged this into 0001 yet. But once we agree upon whether\n> this is the right thing to do, I will merge it into 0001.\n>\n> 0003\n> ====\n> When reporting memory allocated, a confusion may arise as to whether\n> to report the \"net\" memory allocated between start and end of planner\n> OR only the memory that remains allocated after end. This confusion\n> can be avoided by using an exclusive memory context (child of\n> CurrentMemoryContext) for planning. That's what 0003 implements as\n> suggested by David. 
As mentioned in one of the comments in the patch,\n> in order to measure memory allocated accurately the new MemoryContext\n> has to be of the same type as the memory context that will be used\n> otherwise by the planner i.e. CurrentMemoryContext, and should use the\n> same parameters. IIUC, it will always be AllocSet. So the patch uses\n> AllocSet and Asserts so. But in case this changes in future, we will\n> need a way to create a new memory context with the same properties as\n> the CurrentMemoryContext, a functionality missing right now. Once we\n> agree upon the approach, the patch will need to be merged into 0002\n> and in turn 0001.\n\n\n", "msg_date": "Tue, 05 Sep 2023 10:18:13 +0700", "msg_from": "\"Lepikhov Andrei\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi Ashutosh,\n\nThanks for the patch!\n\nI think the reason why others are not excited about this is because the\nbenefit is not clear enough to them after the first glance and plus the\nchosen wolds \"Planning Memory: used \" is still confusing for different\npeople. I admit it is clear to you absolutely, just not for others. Here\nare some minor feedback from me:\n\n1). The commit message of patch 1 just says how it does but doesn't\nsay why it does. After reading through the discussion, I suggest making\nit clearer to others.\n\n...\nAfter the planning is done, it may still occupy lots of memory which is\nallocated but not pfree-d. All of these memories can't be used in the later\nstage. This patch will report such memory usage in order to making some\nfuture mistakes can be found in an easier way.\n...\n\n(a description of how it works just like what you have done in the commit\nmessage).\n\nSo if the people read the commit message, they can understand where the\nfunction is used for.\n\n2). 
at the second level, it would be cool that the user can understand the\nmetric without reading the commit message.\n\nPlanning Memory: used=15088 bytes allocated=16384 bytes\n\nWord 'Planning' is kind of a time duration, so the 'used' may be the\n'used' during the duration or 'used' after the duration. Obviously you\nmeans the later one but it is not surprising others think it in the other\nway. So I'd like to reword the metrics to:\n\nMemory Occupied (Now): Parser: 1k Planner: 10k\n\n'Now' is a cleaner signal that is a time point rather than a time\nduration. 'Parser' and 'Planner' also show a timeline about the\nimportant time point. At the same time, it cost very little coding\neffort based on patch 01. Different people may have different\nfeelings about these words, I just hope I describe the goal clearly.\n\nAbout patch 2 & patch 3, since I didn't find a good usage of it, so I\ndidn't put much effort on it. interesting probably is not a enough\nreason for adding it IMO, since there are too many interesting things.\n\nPersonally I am pretty like patch 1, we just need to refactor some words\nto make everything clearer.\n\n--\nBest Regards\nAndy Fan\n\n", "msg_date": "Fri, 22 Sep 2023 10:52:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi Andy,\nThanks for your feedback.\n\nOn Fri, Sep 22, 2023 at 8:22 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> 1). The commit message of patch 1 just says how it does but doesn't\n> say why it does.
After reading through the discussion, I suggest making\n> it clearer to others.\n>\n> ...\n> After the planning is done, it may still occupy lots of memory which is\n> allocated but not pfree-d. All of these memories can't be used in the later\n> stage. This patch will report such memory usage in order to making some\n> future mistakes can be found in an easier way.\n> ...\n\nThat's a good summary of how the memory report can be used. Will\ninclude a line about usage in the commit message.\n\n>\n> Planning Memory: used=15088 bytes allocated=16384 bytes\n>\n> Word 'Planning' is kind of a time duration, so the 'used' may be the\n> 'used' during the duration or 'used' after the duration. Obviously you\n> means the later one but it is not surprising others think it in the other\n> way. So I'd like to reword the metrics to:\n\nWe report \"PLanning Time\" hence used \"Planning memory\". Would\n\"Planner\" be good instead of \"Planning\"?\n\n>\n> Memory Occupied (Now): Parser: 1k Planner: 10k\n>\n> 'Now' is a cleaner signal that is a time point rather than a time\n> duration. 'Parser' and 'Planner' also show a timeline about the\n> important time point. At the same time, it cost very little coding\n> effort based on patch 01. Different people may have different\n> feelings about these words, I just hope I describe the goal clearly.\n\nParsing happens before planning and that memory is not measured by\nthis patch. 
May be separately but it's out of scope of this work.\n\"used\" and \"allocated\" are MemoryContext terms indicating memory\nactually used vs memory allocated.\n\n>\n> Personally I am pretty like patch 1, we just need to refactor some words\n> to make everything clearer.\n\nThanks.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 22 Sep 2023 15:26:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi Ashutosh,\n\nOn Fri, Sep 22, 2023 at 5:56 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi Andy,\n> Thanks for your feedback.\n>\n> On Fri, Sep 22, 2023 at 8:22 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > 1). The commit message of patch 1 just says how it does but doesn't\n> > say why it does. After reading through the discussion, I suggest making\n> > it clearer to others.\n> >\n> > ...\n> > After the planning is done, it may still occupy lots of memory which is\n> > allocated but not pfree-d. All of these memories can't be used in the\n> later\n> > stage. This patch will report such memory usage in order to making some\n> > future mistakes can be found in an easier way.\n> > ...\n>\n> That's a good summary of how the memory report can be used. Will\n> include a line about usage in the commit message.\n>\n> >\n> > Planning Memory: used=15088 bytes allocated=16384 bytes\n> >\n> > Word 'Planning' is kind of a time duration, so the 'used' may be the\n> > 'used' during the duration or 'used' after the duration. Obviously you\n> > means the later one but it is not surprising others think it in the other\n> > way. So I'd like to reword the metrics to:\n>\n> We report \"PLanning Time\" hence used \"Planning memory\". 
Would\n> \"Planner\" be good instead of \"Planning\"?\n>\n\nI agree \"Planner\" is better than \"Planning\" personally.\n\n>\n> > Memory Occupied (Now): Parser: 1k Planner: 10k\n> >\n> > 'Now' is a cleaner signal that is a time point rather than a time\n> > duration. 'Parser' and 'Planner' also show a timeline about the\n> > important time point. At the same time, it cost very little coding\n> > effort based on patch 01. Different people may have different\n> > feelings about these words, I just hope I describe the goal clearly.\n>\n> Parsing happens before planning and that memory is not measured by\n> this patch. May be separately but it's out of scope of this work.\n> \"used\" and \"allocated\" are MemoryContext terms indicating memory\n> actually used vs memory allocated.\n\n\nYes, I understand the terms come from MemoryContext, the complex\nthing here is between time duration vs time point case. Since English\nis not my native language, so I'm not too confident about insisting on\nthis.\nSo I think we can keep it as it is.\n\n\n>\n>\n> > Personally I am pretty like patch 1, we just need to refactor some words\n> > to make everything clearer.\n>\n> Thanks.\n>\n\nDavid challenged something at the begining, but IIUC he also agree the\nvalue of patch 01 as the previous statement after discussion. Since the\npatch\nis mild itself, so I will mark this commitfest entry as \"Ready for\ncommitter\".\nPlease correct me if anything is wrong.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Mon, 25 Sep 2023 11:35:17 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": ">\n>\n> David challenged something at the begining, but IIUC he also agree the\n> value of patch 01 as the previous statement after discussion. Since the patch\n> is mild itself, so I will mark this commitfest entry as \"Ready for committer\".\n> Please correct me if anything is wrong.\n>\nThanks Andy.\n\nHere's rebased patches. A conflict in explain.out resolved.\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 30 Oct 2023 20:13:30 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "I gave this a quick look. I think the usefulness aspect is already\nestablished in general terms; the bit I'm not so sure about is whether\nwe want it enabled by default. For many cases it'd just be noise.\nPerhaps we want it hidden behind something like \"EXPLAIN (MEMORY)\" or\nsuch, particularly since things like \"allocated\" (which, per David,\nseems to be the really useful metric) seems too much a PG-developer\nvalue rather than an end-user value.\n\nIf EXPLAIN (MEMORY) is added, then probably auto_explain needs a\ncorresponding flag, and doc updates.\n\nSome code looks to be in weird places. Why is calc_mem_usage, which\noperates on MemoryContextCounters, in explain.c instead of mcxt.c?\nwhy is MemUsage in explain.h instead of memnodes.h? I moved both.
I\nalso renamed them, to MemoryContextSizeDifference() and MemoryUsage\nrespectively; fixup patch attached.\n\nI see no reason for this to be three separate patches anymore.\n\nThe EXPLAIN docs (explain.sgml) need an update to mention the new flag\nand the new output, too.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Tue, 21 Nov 2023 19:24:58 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Hi Alvaro,\nThanks for the review and the edits. Sorry for replying late.\n\nOn Tue, Nov 21, 2023 at 11:56 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I gave this a quick look. I think the usefulness aspect is already\n> established in general terms; the bit I'm not so sure about is whether\n> we want it enabled by default. For many cases it'd just be noise.\n> Perhaps we want it hidden behind something like \"EXPLAIN (MEMORY)\" or\n> such, particularly since things like \"allocated\" (which, per David,\n> seems to be the really useful metric) seems too much a PG-developer\n> value rather than an end-user value.\n\nIt is not default but hidden behind \"SUMMARY\".Do you still think it\nrequires another sub-flag MEMORY?\n\n>\n> If EXPLAIN (MEMORY) is added, then probably auto_explain needs a\n> corresponding flag, and doc updates.\n>\n> Some code looks to be in weird places. Why is calc_mem_usage, which\n> operates on MemoryContextCounters, in explain.c instead of mcxt.c?\n> why is MemUsage in explain.h instead of memnodes.h? I moved both. I\n> also renamed them, to MemoryContextSizeDifference() and MemoryUsage\n> respectively; fixup patch attached.\n\nThat looks better. 
The functions are now available outside explain.\n\n>\n> I see no reason for this to be three separate patches anymore.\n\nSquashed into one along with your changes.\n\n>\n> The EXPLAIN docs (explain.sgml) need an update to mention the new flag\n> and the new output, too.\n\nUpdated section describing \"SUMMARY\" flag.\n\nPFA updated patch.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 30 Nov 2023 16:35:11 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 2023-Nov-30, Ashutosh Bapat wrote:\n\n> Hi Alvaro,\n> Thanks for the review and the edits. Sorry for replying late.\n> \n> On Tue, Nov 21, 2023 at 11:56 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > I gave this a quick look. I think the usefulness aspect is already\n> > established in general terms; the bit I'm not so sure about is whether\n> > we want it enabled by default. For many cases it'd just be noise.\n> > Perhaps we want it hidden behind something like \"EXPLAIN (MEMORY)\" or\n> > such, particularly since things like \"allocated\" (which, per David,\n> > seems to be the really useful metric) seems too much a PG-developer\n> > value rather than an end-user value.\n> \n> It is not default but hidden behind \"SUMMARY\".Do you still think it\n> requires another sub-flag MEMORY?\n\nWell, SUMMARY is enabled by default with ANALYZE, and I'd rather not\nhave planner memory consumption displayed by default with all EXPLAIN\nANALYZEs. So yeah, I still think this deserves its own option.\n\nBut let's hear others' opinions on this point. 
I'm only one vote here.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)\n\n\n", "msg_date": "Thu, 30 Nov 2023 12:40:28 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 30/11/2023 18:40, Alvaro Herrera wrote:\n> Well, SUMMARY is enabled by default with ANALYZE, and I'd rather not\n> have planner memory consumption displayed by default with all EXPLAIN\n> ANALYZEs. So yeah, I still think this deserves its own option.\n> \n> But let's hear others' opinions on this point. I'm only one vote here.\n\nI agree; it should be disabled by default. The fact that memory \nconsumption outputs with byte precision (very uncomfortable for \nperception) is a sign that the primary audience is developers and for \ndebugging purposes.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 1 Dec 2023 09:57:13 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Fri, Dec 1, 2023 at 8:27 AM Andrei Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 30/11/2023 18:40, Alvaro Herrera wrote:\n> > Well, SUMMARY is enabled by default with ANALYZE, and I'd rather not\n> > have planner memory consumption displayed by default with all EXPLAIN\n> > ANALYZEs. So yeah, I still think this deserves its own option.\n> >\n> > But let's hear others' opinions on this point. I'm only one vote here.\n>\n> I agree; it should be disabled by default. The fact that memory\n> consumption outputs with byte precision (very uncomfortable for\n> perception) is a sign that the primary audience is developers and for\n> debugging purposes.\n\nThat's 2 vs 1. Here's patch with MEMORY option added. 
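(A stand-alone sketch of the gating this implies; the types are invented rather than the real ExplainState, and the point is only that nothing is measured unless the MEMORY option was requested, so the default path pays no overhead.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Invented stand-in for the explain options/state involved here. */
typedef struct ToyExplainState
{
	bool		memory;			/* was the MEMORY option given? */
	size_t		mem_used;
	size_t		mem_allocated;
} ToyExplainState;

/* Record planner memory figures only when the option asked for them. */
static void
toy_report_planner_memory(ToyExplainState *es, size_t used, size_t allocated)
{
	if (!es->memory)
		return;					/* default path: no measuring, no output */
	es->mem_used = used;
	es->mem_allocated = allocated;
}
```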
Replying to\nAlvaro's earlier relevant comments.\n\n> If EXPLAIN (MEMORY) is added, then probably auto_explain needs a\n> corresponding flag, and doc updates.\n\nauto_explain does not implement planner_hook and hence can not gather\ninformation about planner's memory usage. So no new flag and doc\nupdate to auto_explain. I have added a comment auto_explain.c but\nprobably even that's not needed. We are considering this option only\nfor developers and auto_explain is largely for users. So I didn't feel\nlike adding an implementation of planner_hook in auto_explain and use\nMEMORY option. If we feel that planner's memory usage report is useful\nin auto_explain, it should easy to do that in future.\n\n> The EXPLAIN docs (explain.sgml) need an update to mention the new flag\n> and the new output, too.\n\nDone.\n\n0001 is as is except explain.out and explain.sgml changes reverted.\n\n0002 following changes. It's a separate patch for ease of review.\na. implements MEMORY option, adds tests in explain.sql and also\nupdates explain.sgml.\nb. show_planning_memory renamed to show_planner_memory to be in sync\nwith the output\nc. small indentation change by pgindent.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 4 Dec 2023 12:54:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Dec 4, 2023 at 3:24 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Dec 1, 2023 at 8:27 AM Andrei Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> >\n> > On 30/11/2023 18:40, Alvaro Herrera wrote:\n> > > Well, SUMMARY is enabled by default with ANALYZE, and I'd rather not\n> > > have planner memory consumption displayed by default with all EXPLAIN\n> > > ANALYZEs. So yeah, I still think this deserves its own option.\n> > >\n> > > But let's hear others' opinions on this point. 
I'm only one vote here.\n> >\n> > I agree; it should be disabled by default. The fact that memory\n> > consumption outputs with byte precision (very uncomfortable for\n> > perception) is a sign that the primary audience is developers and for\n> > debugging purposes.\n>\n> That's 2 vs 1. Here's patch with MEMORY option added. Replying to\n> Alvaro's earlier relevant comments.\n>\n\n\"Include information on planner's memory consumption. Specially,\ninclude the total memory allocated by the planner and net memory that\nremains used at the end of the planning. It defaults to\n<literal>FALSE</literal>.\n\"\ndoc/src/sgml/ref/explain.sgml\nI can view MemoryContextSizeDifference, figure out the meaning.\n\nBut I am not sure \"net memory that remains used at the end of the\nplanning\" is the correct description.\n(I am not native english speaker).\n\n\n", "msg_date": "Mon, 11 Dec 2023 16:36:11 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Dec 11, 2023 at 2:06 PM jian he <jian.universality@gmail.com> wrote:\n>\n> On Mon, Dec 4, 2023 at 3:24 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Fri, Dec 1, 2023 at 8:27 AM Andrei Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> > >\n> > > On 30/11/2023 18:40, Alvaro Herrera wrote:\n> > > > Well, SUMMARY is enabled by default with ANALYZE, and I'd rather not\n> > > > have planner memory consumption displayed by default with all EXPLAIN\n> > > > ANALYZEs. So yeah, I still think this deserves its own option.\n> > > >\n> > > > But let's hear others' opinions on this point. I'm only one vote here.\n> > >\n> > > I agree; it should be disabled by default. The fact that memory\n> > > consumption outputs with byte precision (very uncomfortable for\n> > > perception) is a sign that the primary audience is developers and for\n> > > debugging purposes.\n> >\n> > That's 2 vs 1. 
Here's patch with MEMORY option added. Replying to\n> > Alvaro's earlier relevant comments.\n> >\n>\n> \"Include information on planner's memory consumption. Specially,\n> include the total memory allocated by the planner and net memory that\n> remains used at the end of the planning. It defaults to\n> <literal>FALSE</literal>.\n> \"\n> doc/src/sgml/ref/explain.sgml\n> I can view MemoryContextSizeDifference, figure out the meaning.\n>\n> But I am not sure \"net memory that remains used at the end of the\n> planning\" is the correct description.\n> (I am not native english speaker).\n\nThe word \"net\" is used as an adjective, see [1].\n\n[1] https://www.merriam-webster.com/dictionary/net (as an adjective)\n\nDoes that help? Do you have any other wording proposal?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 11 Dec 2023 19:05:32 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "I would replace the text in explain.sgml with this:\n\n+ Include information on memory consumption by the query planning phase.\n+ This includes the precise amount of storage used by planner in-memory\n+ structures, as well as overall total consumption of planner memory,\n+ including allocation overhead.\n+ This parameter defaults to <literal>FALSE</literal>.\n\nand remove the new example you're adding; it's not really necessary.\n\nIn struct ExplainState, I'd put the new member just below summary.\n\nIf we're creating a new memory context for planner, we don't need the\n\"mem_counts_start\" thing, and we don't need the\nMemoryContextSizeDifference thing. Just add the\nMemoryContextMemConsumed() function (which ISTM should be just below\nMemoryContextMemAllocated() instead of in the middle of the\nMemoryContextStats implementation), and\njust report the values we get there. 
I think the SizeDifference stuff\nincreases surface damage for no good reason.\n\nExplainOnePlan() is given a MemoryUsage object (or, if we do as above\nand no longer have a MemoryUsage struct at all in the first place, a\nMemoryContextCounters object) even when the MEMORY option is false.\nThis means we waste computing memory usage when not needed. Let's avoid\nthat useless overhead.\n\nI'd also do away with the comment you added in explain_ExecutorEnd() and\ndo just this, below setting of es->summary:\n\n+ /* No support for MEMORY option */\n+ /* es->memory = false; */\n\nWe don't need to elaborate more at present.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n", "msg_date": "Tue, 12 Dec 2023 21:11:31 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 2023-Dec-12, Alvaro Herrera wrote:\n\n> I would replace the text in explain.sgml with this:\n> \n> + Include information on memory consumption by the query planning phase.\n> + This includes the precise amount of storage used by planner in-memory\n> + structures, as well as overall total consumption of planner memory,\n> + including allocation overhead.\n> + This parameter defaults to </literal>FALSE</literal>.\n\n(This can still use a lot more wordsmithing of course, such as avoiding\nrepeated use of \"include/ing\", ugh)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n", "msg_date": "Tue, 12 Dec 2023 21:20:24 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN 
ANALYZE" }, { "msg_contents": "On Wed, Dec 13, 2023 at 1:41 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I would replace the text in explain.sgml with this:\n>\n> + Include information on memory consumption by the query planning phase.\n> + This includes the precise amount of storage used by planner in-memory\n> + structures, as well as overall total consumption of planner memory,\n> + including allocation overhead.\n> + This parameter defaults to </literal>FALSE</literal>.\n\nTo me, consumption in the \"... total consumption ...\" part is the same as\nallocation, and that ultimately includes allocation overhead as well as any memory\nthat was allocated but remained unused (see discussion [1] if you\nhaven't already). Mentioning \"total consumption\" and\n\"allocation overhead\" seems more confusing.\n\nHow about\n+ Include information on memory consumption by the query planning phase.\n+ Report the precise amount of storage used by planner in-memory\n+ structures, and total memory considering allocation overhead.\n+ The parameter defaults to <literal>FALSE</literal>.\n\nThat takes care of your complaint about multiple include/ing as well.\n\n>\n> and remove the new example you're adding; it's not really necessary.\n\nDone.\n\n>\n> In struct ExplainState, I'd put the new member just below summary.\n\nDone.\n\n>\n> If we're creating a new memory context for planner, we don't need the\n> \"mem_counts_start\" thing, and we don't need the\n> MemoryContextSizeDifference thing. Just add the\n> MemoryContextMemConsumed() function (which ISTM should be just below\n> MemoryContextMemAllocated() instead of in the middle of the\n> MemoryContextStats implementation), and\n> just report the values we get there. I think the SizeDifference stuff\n> increases surface damage for no good reason.\n\n240 bytes are used right after creating a memory context, i.e.\nmem_counts_start.totalspace = 8192 and mem_counts_start.freespace =\n7952. 
To account for that I used two counters and calculated the\ndifference. If no planning is involved in EXECUTE <prepared statement>\nit will show 240 bytes used instead of 0. Barring that, for all\npractical purposes 240 bytes is negligible. E.g. 240 bytes is 4% of the\namount of memory used for planning a simple query \" select 'x' \". But\nyour suggestion simplifies the code a lot. An error of 240 bytes seems\nworth the code simplification. So I did it that way.\n\n>\n> ExplainOnePlan() is given a MemoryUsage object (or, if we do as above\n> and no longer have a MemoryUsage struct at all in the first place, a\n> MemoryContextCounters object) even when the MEMORY option is false.\n> This means we waste computing memory usage when not needed. Let's avoid\n> that useless overhead.\n\nDone.\n\nAlso avoided creating a memory context and switching to it when\nes->memory = false.\n\n>\n> I'd also do away with the comment you added in explain_ExecutorEnd() and\n> do just this, below setting of es->summary:\n>\n> + /* No support for MEMORY option */\n> + /* es->memory = false; */\n\nDone.\n\nI ended up rewriting most of the code, so I squashed everything into a\nsingle patch as attached.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 13 Dec 2023 16:21:30 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "OK, I propose the following further minor tweaks. (I modified the docs\nfollowing the wording we have for COSTS and BUFFERS).\n\nThere are two things that still trouble me a bit. First, we assume that\nthe planner is using an AllocSet context, which I guess is true, but if\nsomebody runs the planner in a context of a different memcxt type, it's\ngoing to be a problem. So far we don't have infrastructure for creating\na context of the same type as another context. 
Maybe it's too fine a\npoint to worry about, for sure.\n\nThe other question is about trying to support the EXPLAIN EXECUTE case.\nDo you find that case really useful? In a majority of cases planning is\nnot going to happen because it was already done by PREPARE (where we\n_don't_ report memory, because we don't have EXPLAIN there), so it seems\na bit weird. I suppose you could make it useful if you instructed the\nuser to set plan_cache_mode to custom, assuming that does actually work\n(I didn't try).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)", "msg_date": "Sun, 17 Dec 2023 17:27:47 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Sun, Dec 17, 2023 at 10:31 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> OK, I propose the following further minor tweaks. (I modified the docs\n> following the wording we have for COSTS and BUFFERS).\n\nLGTM. Included in the attached patch.\n\n>\n> There are two things that still trouble me a bit. First, we assume that\n> the planner is using an AllocSet context, which I guess is true, but if\n> somebody runs the planner in a context of a different memcxt type, it's\n> going to be a problem. So far we don't have infrastructure for creating\n> a context of the same type as another context. Maybe it's too fine a\n> point to worry about, for sure.\n\nI had considered this point. Different contexts take different\narguments for creation, so some jugglery is required to create a\ncontext based on type. It looked more than necessary for the limited\nscope of this patch. That's why I settled on the assertion. If we see\nthe need in future we can always add that support.\n\n>\n> The other question is about trying to support the EXPLAIN EXECUTE case.\n> Do you find that case really useful? 
In a majority of cases planning is\n> not going to happen because it was already done by PREPARE (where we\n> _don't_ report memory, because we don't have EXPLAIN there), so it seems\n> a bit weird. I suppose you could make it useful if you instructed the\n> user to set plan_cache_mode to custom, assuming that does actually work\n> (I didn't try).\n\nIf we set plan_cache_mode to force_custom_plan, we always plan the\nstatement and thus report memory.\n\nYou are right that we don't always plan the statement when EXECUTE is\nissued. But it seems we create a plan underneath EXECUTE more often than\nI expected. And the report looks mildly useful and interesting.\n\npostgres@21258=#prepare stmt as select * from pg_class where oid = $1;\nPREPARE\npostgres@21258=#explain (memory) execute stmt(1); -- first time\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = '1'::oid)\n Planner Memory: used=40448 bytes allocated=81920 bytes\n(3 rows)\n\n\npostgres@21258=#explain (memory) execute stmt(1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = '1'::oid)\n Planner Memory: used=40368 bytes allocated=81920 bytes\n(3 rows)\n\nObserve that the memory used is slightly different from the first\ntime. So when the plan is created again something happens that eats a\nfew bytes less. 
I didn't investigate what.\n\nThe same output repeats if the statement is executed 3 more times.\nThat's as many times a custom plan is created for a statement by\ndefault.\n\npostgres@21258=#explain (memory) execute stmt(1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = $1)\n Planner Memory: used=40272 bytes allocated=81920 bytes\n(3 rows)\n\nObserve that the memory used is less here again. So when creating the\ngeneric plan something happened which causes the change in memory\nconsumption. Didn't investigate.\n\n\npostgres@21258=#explain (memory) execute stmt(1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = $1)\n Planner Memory: used=3520 bytes allocated=24576 bytes\n(3 rows)\n\nAnd now the planner is settled on very low value but still non-zero or\n240 bytes. 
I think the parameter evaluation takes that much memory.\nHaven't verified.\n\nIf we use a non-parameterized statement:\npostgres@21258=#prepare stmt as select * from pg_class where oid = 2345;\nPREPARE\npostgres@21258=#explain (memory) execute stmt;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = '2345'::oid)\n Planner Memory: used=37200 bytes allocated=65536 bytes\n(3 rows)\n\nThe first time, memory is consumed by the planner.\n\npostgres@21258=#explain (memory) execute stmt;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=273)\n Index Cond: (oid = '2345'::oid)\n Planner Memory: used=240 bytes allocated=8192 bytes\n(3 rows)\n\nNext time onwards it has settled on the custom plan.\n\nI think there's something to learn and investigate from the memory numbers\nhere. 
First, we assume that\n> > the planner is using an AllocSet context, which I guess is true, but if\n> > somebody runs the planner in a context of a different memcxt type, it's\n> > going to be a problem. So far we don't have infrastructure for creating\n> > a context of the same type as another context. Maybe it's too fine a\n> > point to worry about, for sure.\n>\n> I had considered this point. Different contexts take different\n> arguments for creation, so some jugglery is required to create a\n> context based on type. It looked more than necessary for the limited\n> scope of this patch. That's why I settled on the assertion. If we see\n> the need in future we can always add that support.\n>\n> >\n> > The other question is about trying to support the EXPLAIN EXECUTE case.\n> > Do you find that case really useful? In a majority of cases planning is\n> > not going to happen because it was already done by PREPARE (where we\n> > _don't_ report memory, because we don't have EXPLAIN there), so it seems\n> > a bit weird. I suppose you could make it useful if you instructed the\n> > user to set plan_cache_mode to custom, assuming that does actually work\n> > (I didn't try).\n>\n> If we set plan_cache_mode to force_custom_plan, we always plan the\n> statement and thus report memory.\n>\n> You are right that we don't always plan the statement when EXECUTE Is\n> issued. But it seems we create plan underneath EXECUTE more often that\n> I expected. 
And the report looks mildly useful and interesting.\n>\n> postgres@21258=#prepare stmt as select * from pg_class where oid = $1;\n> PREPARE\n> postgres@21258=#explain (memory) execute stmt(1); -- first time\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = '1'::oid)\n> Planner Memory: used=40448 bytes allocated=81920 bytes\n> (3 rows)\n>\n>\n> postgres@21258=#explain (memory) execute stmt(1);\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = '1'::oid)\n> Planner Memory: used=40368 bytes allocated=81920 bytes\n> (3 rows)\n>\n> observe that the memory used is slightly different from the first\n> time. So when the plan is created again something happens that eats\n> few bytes less. I didn't investigate what.\n>\n> The same output repeats if the statement is executed 3 more times.\n> That's as many times a custom plan is created for a statement by\n> default.\n>\n> postgres@21258=#explain (memory) execute stmt(1);\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = $1)\n> Planner Memory: used=40272 bytes allocated=81920 bytes\n> (3 rows)\n>\n> Observe that the memory used is less here again. So when creating the\n> generic plan something happened which causes the change in memory\n> consumption. 
Didn't investigate.\n>\n>\n> postgres@21258=#explain (memory) execute stmt(1);\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = $1)\n> Planner Memory: used=3520 bytes allocated=24576 bytes\n> (3 rows)\n>\n> And now the planner is settled on very low value but still non-zero or\n> 240 bytes. I think the parameter evaluation takes that much memory.\n> Haven't verified.\n>\n> If we use an non-parameterized statement\n> postgres@21258=#prepare stmt as select * from pg_class where oid = 2345;\n> PREPARE\n> postgres@21258=#explain (memory) execute stmt;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = '2345'::oid)\n> Planner Memory: used=37200 bytes allocated=65536 bytes\n> (3 rows)\n>\n> first time memory is consumed by the planner.\n>\n> postgres@21258=#explain (memory) execute stmt;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\n> rows=1 width=273)\n> Index Cond: (oid = '2345'::oid)\n> Planner Memory: used=240 bytes allocated=8192 bytes\n> (3 rows)\n>\n> Next time onwards it has settled on the custom plan.\n>\n> I think there's something to learn and investigate from memory numbers\n> here. 
So not completely meaningless and useless.\n>\n> I added that support on lines of \"planning time\".\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 18 Dec 2023 12:57:58 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "I think this patch is mostly OK and decided to use it to test something\nelse. In doing so I noticed one small thing that bothers me: if\nplanning uses buffers, and those were requested to be reported, we get\nsomething like this at the end of the explain\n\n Planning:\n Buffers: shared hit=120 read=30\n Planning Memory: used=67944 bytes allocated=73728 bytes\n Planning Time: 0.892 ms\n\nThis looks a bit unpleasant. Wouldn't it be much nicer if the Planning\nthingies were all reported together in the Planning group?\n\n Planning:\n Buffers: shared hit=120 read=30\n Memory: used=67944 bytes allocated=73728 bytes\n Time: 0.892 ms\n\nThis is easier said than done. First, moving \"Time\" to be in the same\ngroup would break tools that are already parsing the current format. So\nmaybe we should leave that one alone. So that means we end up with\nthis:\n\n Planning:\n Buffers: shared hit=120 read=30\n Memory: used=67944 bytes allocated=73728 bytes\n Planning Time: 0.892 ms\n\nBetter than the original, I think, even if not perfect. 
Looking at the\ncode, this is a bit of a mess to implement, because of the way\nshow_buffer_usage is coded; currently it has an internal\nincrement/decrement of indentation, so in order to report memory we\nwould have to move the indentation change to outside show_buffer_usage\n(I think it does belong outside, but the determination of whether to\nprint or not is quite messy, so we'd need a new function returning\nboolean).\n\nAlternatively we could move the memory reportage to within\nshow_buffer_usage, but that seems worse.\n\nOr we could leave it as you have it, but to me that's akin to giving up\non doing it nicely.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n\n\n", "msg_date": "Fri, 12 Jan 2024 17:52:27 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "At 2024-01-12 17:52:27 +0100, alvherre@alvh.no-ip.org wrote:\n>\n> I think this patch is mostly OK\n\n(After the last few rounds of changes, it looks fine to me too.)\n\n> Planning:\n> Buffers: shared hit=120 read=30\n> Memory: used=67944 bytes allocated=73728 bytes\n> Planning Time: 0.892 ms\n>\n> […]\n>\n> Or we could leave it as you have it, but to me that's akin to giving up\n> on doing it nicely.\n\nFor completeness, there's a third option, which is easier to write and a\nbit more friendly to the sort of thing that might already be parsing\n\"Planning Time\", viz.,\n\n Planning Buffers: shared hit=120 read=30\n Planning Memory: used=67944 bytes allocated=73728 bytes\n Planning Time: 0.892 ms\n\n(Those \"bytes\" look slightly odd to me in the midst of all the x=y\npieces, but that's probably not worth thinking about.)\n\n-- Abhijit\n\n\n", "msg_date": "Fri, 12 Jan 2024 22:53:08 +0530", "msg_from": "Abhijit 
Menon-Sen <ams@toroid.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Replying to both Alvaro and Abhijit in this email.\n\nOn Fri, Jan 12, 2024 at 10:22 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I think this patch is mostly OK and decided to use it to test something\n> else. In doing so I noticed one small thing that bothers me: if\n> planning uses buffers, and those were requested to be reported, we get\n> something like this at the end of the explain\n>\n> Planning:\n> Buffers: shared hit=120 read=30\n> Planning Memory: used=67944 bytes allocated=73728 bytes\n> Planning Time: 0.892 ms\n>\n> This looks a bit unpleasant. Wouldn't it be much nicer if the Planning\n> thingies were all reported together in the Planning group?\n>\n> Planning:\n> Buffers: shared hit=120 read=30\n> Memory: used=67944 bytes allocated=73728 bytes\n> Time: 0.892 ms\n>\n> This is easier said than done. First, moving \"Time\" to be in the same\n> group would break tools that are already parsing the current format. So\n> maybe we should leave that one alone. So that means we end up with\n> this:\n\n\nWe have not been careful when reporting buffer usage. Both Buffers and\nI/O timings are related to buffers. Ideally we should have created a\nseparate section for buffers and reported the usage and timings stats\nunder it. If done that way, the output actually looks better.\n\nPlanning Buffers:\n usage: shared hit = ...\n i/o timings: shared read =...\nPlanning Memory:\nPlanning Time:\n\nFurther, we may move the Buffers, Memory and Timing under the \"Planning\"\nsection, so it looks like:\nPlanning\n Buffers\n Usage\n I/O Timings\n Memory\n Time\n\nWe could bite the bullet and fix it in the next major release when\nsuch compatibility breakage is expected if not welcome. 
If we go this\nroute, it will be good to commit this patch and then work on\nrefactoring.\n\n\n>\n> Planning:\n> Buffers: shared hit=120 read=30\n> Memory: used=67944 bytes allocated=73728 bytes\n> Planning Time: 0.892 ms\n>\n> Better than the original, I think, even if not perfect. Looking at the\n> code, this is a bit of a mess to implement, because of the way\n> show_buffer_usage is coded; currently it has an internal\n> increment/decrement of indentation, so in order to report memory we\n> would have to move the indentation change to outside show_buffer_usage\n> (I think it does belong outside, but the determination of whether to\n> print or not is quite messy, so we'd need a new function returning\n> boolean).\n\nIf we could change show_buffer_usage() to print something like below\nwhen no buffers are used or no time is spent in I/O, we won't need the\nboolean and all the complexity in showing \"Planning\" and indenting.\nPlanning:\n Buffers: no buffers used\n I/O timings: none\n Memory: used ...\n\nThat would be a backward compatibility break, but impact would be\nminor. 
Still that will bring I/O Timings at the same level as Memory\nwhich is wrong.\n\nUsing a boolean return and moving es->indent-- outside of\nshow_buffer_usage would be less elegant.\n\n>\n> Alternatively we could move the memory reportage to within\n> show_buffer_usage, but that seems worse.\n\nAgreed.\n\nOn Fri, Jan 12, 2024 at 10:53 PM Abhijit Menon-Sen <ams@toroid.org> wrote:\n>\n> At 2024-01-12 17:52:27 +0100, alvherre@alvh.no-ip.org wrote:\n> >\n> > I think this patch is mostly OK\n>\n> (After the last few rounds of changes, it looks fine to me too.)\n>\n> > Planning:\n> > Buffers: shared hit=120 read=30\n> > Memory: used=67944 bytes allocated=73728 bytes\n> > Planning Time: 0.892 ms\n> >\n> > […]\n> >\n> > Or we could leave it as you have it, but to me that's akin to giving up\n> > on doing it nicely.\n>\n> For completeness, there's a third option, which is easier to write and a\n> bit more friendly to the sort of thing that might already be parsing\n> \"Planning Time\", viz.,\n>\n> Planning Buffers: shared hit=120 read=30\n> Planning Memory: used=67944 bytes allocated=73728 bytes\n> Planning Time: 0.892 ms\n\nAnd\nPlanning I/O Timings: shared ... .\n\nTo me it looks similar to Alvaro's first option.\n\n>\n> (Those \"bytes\" look slightly odd to me in the midst of all the x=y\n> pieces, but that's probably not worth thinking about.)\n\nHow about separating those with \",\". 
That would add a minor\ninconsistency with how values in Buffers and I/O Timings sections are\nreported.\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 16 Jan 2024 20:19:14 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Tue, Jan 16, 2024 at 8:19 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Using a boolean return and moving es->indent-- outside of\n> show_buffer_usage would be less elegant.\n>\n\nI went ahead with this option so as not to break backward\ncompatibility in any manner. Attached patch 0002 has the code and\nexpected output changes.\n\nI have continued to use the variable \"show_planning\" and the new\nvariable is named accordingly. Other options I could think of, like\nopen_planning_section, all ended up being too long and were rejected.\n\nI considered adding a test for EXPLAIN(memory, buffers) but\nexplain.sql filters out Buffers: and Planning: lines, so it's not of\nmuch use.\n\nReturning from show_planning_buffers() is not necessary, but it looks\nconsistent with show_buffer_usage() in the code block using them\ntogether.\n\nHere's how the output looks now.\n\nIn text format:\n#explain (memory, buffers) select * from pg_class;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..18.15 rows=415 width=273)\n Planning:\n Buffers: shared hit=134 read=15\n I/O Timings: shared read=0.213\n Memory: used=22928 bytes, allocated=32768 bytes\n(5 rows)\n\nWith planning time:\n#explain (analyze, memory, buffers) select * from pg_class;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..18.15 rows=415 width=273) (actual\ntime=0.020..0.378 rows=415 loops=1)\n Buffers: shared hit=14\n Planning:\n Buffers: shared hit=134 read=15\n I/O Timings: shared 
read=0.635\n Memory: used=22928 bytes, allocated=32768 bytes\n Planning Time: 4.328 ms\n Execution Time: 1.262 ms\n(8 rows)\n\nJust memory option\n#explain (memory) select * from pg_class;\n QUERY PLAN\n-------------------------------------------------------------\n Seq Scan on pg_class (cost=0.00..18.15 rows=415 width=273)\n Planning:\n Memory: used=22928 bytes, allocated=32768 bytes\n(3 rows)\n\nIn JSON format\n#explain (memory, buffers, format json) select * from pg_class;\n(Notice Memory Used and Memory Allocated properties.\n QUERY PLAN\n---------------------------------------\n [ +\n { +\n \"Plan\": { +\n... snip ...\n }, +\n \"Planning\": { +\n \"Shared Hit Blocks\": 152, +\n \"Shared Read Blocks\": 0, +\n \"Shared Dirtied Blocks\": 0, +\n \"Shared Written Blocks\": 0, +\n \"Local Hit Blocks\": 0, +\n \"Local Read Blocks\": 0, +\n \"Local Dirtied Blocks\": 0, +\n \"Local Written Blocks\": 0, +\n \"Temp Read Blocks\": 0, +\n \"Temp Written Blocks\": 0, +\n \"Shared I/O Read Time\": 0.000, +\n \"Shared I/O Write Time\": 0.000,+\n \"Local I/O Read Time\": 0.000, +\n \"Local I/O Write Time\": 0.000, +\n \"Temp I/O Read Time\": 0.000, +\n \"Temp I/O Write Time\": 0.000, +\n \"Memory Used\": 22928, +\n \"Memory Allocated\": 32768 +\n } +\n } +\n ]\n(1 row)\n\nJSON format with planning time\n#explain (analyze, memory, buffers, format json) select * from pg_class;\n QUERY PLAN\n---------------------------------------\n [ +\n { +\n \"Plan\": { +\n ... 
snip ...\n }, +\n \"Planning\": { +\n \"Shared Hit Blocks\": 152, +\n \"Shared Read Blocks\": 0, +\n \"Shared Dirtied Blocks\": 0, +\n \"Shared Written Blocks\": 0, +\n \"Local Hit Blocks\": 0, +\n \"Local Read Blocks\": 0, +\n \"Local Dirtied Blocks\": 0, +\n \"Local Written Blocks\": 0, +\n \"Temp Read Blocks\": 0, +\n \"Temp Written Blocks\": 0, +\n \"Shared I/O Read Time\": 0.000, +\n \"Shared I/O Write Time\": 0.000,+\n \"Local I/O Read Time\": 0.000, +\n \"Local I/O Write Time\": 0.000, +\n \"Temp I/O Read Time\": 0.000, +\n \"Temp I/O Write Time\": 0.000, +\n \"Memory Used\": 22928, +\n \"Memory Allocated\": 32768 +\n }, +\n \"Planning Time\": 3.840, +\n \"Triggers\": [ +\n ], +\n \"Execution Time\": 1.266 +\n } +\n ]\n(1 row)\n\n>\n> How about separating those with \",\". That would add a minor\n> inconsistency with how values in Buffers and I/O Timings sections are\n> reported.\n>\n\ndid this way.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 18 Jan 2024 15:25:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 2024-Jan-18, Ashutosh Bapat wrote:\n\nHmm ... 
TBH I don't like the \"showed_planning\" thing very much, but if\nwe need to conditionalize the printing of \"Planning:\" on whether we\nprint either of buffers or memory, maybe there's no way around something\nlike what you propose.\n\nHowever, I don't understand this output change, and I think it indicates\na bug in the logic there:\n\n> diff --git a/src/test/regress/expected/explain.out b/src/test/regress/expected/explain.out\n> index 86bfdfd29e..55694505a7 100644\n> --- a/src/test/regress/expected/explain.out\n> +++ b/src/test/regress/expected/explain.out\n> @@ -331,15 +331,15 @@ select explain_filter('explain (memory) select * from int8_tbl i8');\n> explain_filter \n> ---------------------------------------------------------\n> Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> - Planner Memory: used=N bytes allocated=N bytes\n> + Memory: used=N bytes, allocated=N bytes\n> (2 rows)\n\nDoes this really make sense?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n", "msg_date": "Thu, 18 Jan 2024 12:12:03 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Thu, Jan 18, 2024 at 4:42 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Jan-18, Ashutosh Bapat wrote:\n>\n> Hmm ... 
TBH I don't like the \"showed_planning\" thing very much, but if\n> we need to conditionalize the printing of \"Planning:\" on whether we\n> print either of buffers or memory, maybe there's no way around something\n> like what you propose.\n\nright.\n\n>\n> However, I don't understand this output change, and I think it indicates\n> a bug in the logic there:\n>\n> > diff --git a/src/test/regress/expected/explain.out b/src/test/regress/expected/explain.out\n> > index 86bfdfd29e..55694505a7 100644\n> > --- a/src/test/regress/expected/explain.out\n> > +++ b/src/test/regress/expected/explain.out\n> > @@ -331,15 +331,15 @@ select explain_filter('explain (memory) select * from int8_tbl i8');\n> > explain_filter\n> > ---------------------------------------------------------\n> > Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> > - Planner Memory: used=N bytes allocated=N bytes\n> > + Memory: used=N bytes, allocated=N bytes\n> > (2 rows)\n>\n> Does this really make sense?\n\nThe EXPLAIN output produces something like below\n explain_filter\n ---------------------------------------------------------\n Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n Planning:\n Memory: used=N bytes, allocated=N bytes\n (3 rows)\n\nbut function explain_filter(), defined in explain.sql, removes line\ncontaining Planning: and we end up with output\n explain_filter\n ---------------------------------------------------------\n Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n Memory: used=N bytes, allocated=N bytes\n (2 rows)\n\nHence this weird difference.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:58:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 2024-Jan-18, Ashutosh Bapat wrote:\n\n> The EXPLAIN output produces something like below\n> explain_filter\n> 
---------------------------------------------------------\n> Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> Planning:\n> Memory: used=N bytes, allocated=N bytes\n> (3 rows)\n> \n> but function explain_filter(), defined in explain.sql, removes line\n> containing Planning: and we end up with output\n> explain_filter\n> ---------------------------------------------------------\n> Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> Memory: used=N bytes, allocated=N bytes\n> (2 rows)\n> \n> Hence this weird difference.\n\nAh, okay, that makes sense. (I actually think this is a bit insane, and\nI would hope that we can revert commit eabba4a3eb71 eventually, but I\ndon't think that needs to hold up your proposed patch.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n", "msg_date": "Thu, 18 Jan 2024 12:58:05 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Thu, Jan 18, 2024 at 5:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Jan-18, Ashutosh Bapat wrote:\n>\n> > The EXPLAIN output produces something like below\n> > explain_filter\n> > ---------------------------------------------------------\n> > Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> > Planning:\n> > Memory: used=N bytes, allocated=N bytes\n> > (3 rows)\n> >\n> > but function explain_filter(), defined in explain.sql, removes line\n> > containing Planning: and we end up with output\n> > explain_filter\n> > ---------------------------------------------------------\n> > Seq Scan on int8_tbl i8 (cost=N.N..N.N rows=N width=N)\n> > Memory: used=N bytes, allocated=N bytes\n> > (2 rows)\n> >\n> > Hence this weird difference.\n>\n> Ah, okay, that makes sense. 
(I actually think this is a bit insane, and\n> I would hope that we can revert commit eabba4a3eb71 eventually, but I\n> don't think that needs to hold up your proposed patch.)\n>\n\nThanks. Attached is rebased and squashed patch.\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 23 Jan 2024 10:16:49 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "Okay, so I gave this another look and concluded that I definitely didn't\nlike the whole business of having one level open the explain group and\nreturn outwards whether it had been done so that the other level would\nclose it. So I made the code do what I said I thought it should do\n(adding a new function peek_buffer_usage to report whether BUFFERS would\nprint anything), and I have to say that it looks much better to me with\nthat.\n\nI also added a trivial test for EXPLAIN EXECUTE, which was uncovered,\nand some other minor stylistic changes.\n\nAnd with that I pushed it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n#error \"Operator lives in the wrong universe\"\n (\"Use of cookies in real-time system development\", M. Gleixner, M. Mc Guire)\n\n\n", "msg_date": "Mon, 29 Jan 2024 18:13:35 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On Mon, Jan 29, 2024 at 10:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Okay, so I gave this another look and concluded that I definitely didn't\n> like the whole business of having one level open the explain group and\n> return outwards whether it had been done so that the other level would\n> close it. 
So I made the code do what I said I thought it should do\n> (adding a new function peek_buffer_usage to report whether BUFFERS would\n> print anything), and I have to say that it looks much better to me with\n> that.\n\nHmm. ExplainOnePlan certainly looks better with this.\n\n>\n> I also added a trivial test for EXPLAIN EXECUTE, which was uncovered,\n> and some other minor stylistic changes.\n>\n\nThanks. Looks fine to me.\n\n> And with that I pushed it.\n\nThanks a lot.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 30 Jan 2024 12:09:52 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" }, { "msg_contents": "On 2024-Jan-30, Ashutosh Bapat wrote:\n\n> On Mon, Jan 29, 2024 at 10:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I also added a trivial test for EXPLAIN EXECUTE, which was uncovered,\n> > and some other minor stylistic changes.\n> \n> Thanks. Looks fine to me.\n\nThanks for looking!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n¡Ay, ay, ay! Con lo mucho que yo lo quería (bis)\nse fue de mi vera ... se fue para siempre, pa toíta ... pa toíta la vida\n¡Ay Camarón! ¡Ay Camarón! (Paco de Lucía)\n\n\n", "msg_date": "Tue, 30 Jan 2024 11:18:32 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Report planning memory in EXPLAIN ANALYZE" } ]
[ { "msg_contents": "Dear hackers,\n(CC: Julien, Sawada-san, Amit)\n\nThis is a fork thread from \"[PoC] pg_upgrade: allow to upgrade publisher node\" [1].\n\n# Background\n\n[1] is the patch which allows replicating logical replication slots from the old node to the new node.\nThe following describes the rough steps:\n\n1. Boot old node as binary-upgrade mode\n2. Check confirmed_lsn of all the slots, and confirm all WALs are replicated to downstream\n3. Dump slot info to sql file\n4. Stop old node\n5. Boot new node as binary-upgrade mode\n...\n\nHere, step 2 was introduced to avoid data loss. If there are some WAL records\nahead of confirmed_lsn, such records would not be replicated anymore - it may be dangerous.\n\nSo in the current patch, pg_upgrade fails if records other than SHUTDOWN_CHECKPOINT\nexist after any confirmed_flush_lsn.\n\n# Problem\n\nWe found that the following three records might be generated during the upgrade.\n\n* RUNNING_XACT\n* CHECKPOINT_ONLINE\n* XLOG_FPI_FOR_HINT\n\nRUNNING_XACT might be written by the background writer. Conditions for the generation are:\n\na. 15 seconds have elapsed since the last WAL creation or bootstrapping of the process, and either of them:\nb-1. The process had never created the RUNNING_XACT record, or\nb-2. Some \"important WALs\" were created after the last RUNNING_XACT record\n\n\nCHECKPOINT_ONLINE might be written by the checkpointer. Conditions for the generation are:\n\na. checkpoint_timeout seconds have elapsed since the last creation or bootstrapping, and either of them:\nb-1. The process had never created the CHECKPOINT_ONLINE record, or\nb-2. Some \"important WALs\" were created after the last CHECKPOINT record\n\n\nXLOG_FPI_FOR_HINT, which was raised by Sawada-san, might be generated by backend processes.\nConditions for the generation are:\n\na. Backend processes scanned any tuples (even if it was the system catalog), and either of them:\nb-1. Data checksums were enabled, or\nb-2. 
wal_log_hints was set to on\n\n# Solution\n\nI wanted to suppress the generation of these WALs during the upgrade, because of the \"# Background\".\n\nRegarding the RUNNING_XACT and CHECKPOINT_ONLINE, it might be OK to remove the\ncondition b-1. The duration between bootstrap and the initial {RUNNING_XACT|CHECKPOINT_ONLINE}\nbecomes longer, but I could not find any impact from it.\n\nAs for the XLOG_FPI_FOR_HINT, the simplest way I came up with is not to call\nXLogSaveBufferForHint() during binary upgrade. Considerations may not be enough,\nbut I attached the patch for the fix. It passed CI on my repository.\n\n\nDo you have any other considerations about it?\nAn approach, which adds \"if (IsBinaryUpgrade)\" in XLogInsertAllowed(), was proposed in [2]. \nBut I'm not sure it could really solve the issue - e.g., XLogInsertRecord() just\nraises an ERROR if !XLogInsertAllowed().\n\n[1]: https://commitfest.postgresql.org/44/4273/\n[2]: https://www.postgresql.org/message-id/flat/20210121152357.s6eflhqyh4g5e6dv%40dalibo.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Tue, 8 Aug 2023 10:43:06 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "Suppress generating WAL records during the upgrade" } ]
[ { "msg_contents": "Since 3d351d916 (pg14), reltuples -1 means that the rel has never been\nvacuumed nor analyzed.\n\nBut since 4496020e6d (backpatched to pg15), following pg_upgrade, vacuum\ncan leave reltuples=-1.\n\ncommit 4496020e6dfaffe8217e4d3f85567bb2b6927b45\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDate: Fri Aug 19 09:26:06 2022 -0700\n\n Avoid reltuples distortion in very small tables.\n\n$ /usr/pgsql-15/bin/initdb -N -D ./pg15.dat2\n$ /usr/pgsql-15/bin/initdb -N -D ./pg15.dat3\n\n$ /usr/pgsql-15/bin/postgres -c logging_collector=no -p 5678 -k /tmp -D ./pg15.dat2& # old cluster, pre-upgrade\npostgres=# CREATE TABLE t AS SELECT generate_series(1,9999);\npostgres=# SELECT reltuples FROM pg_class WHERE oid='t'::regclass;\nreltuples | -1\npostgres=# VACUUM FREEZE t;\npostgres=# SELECT reltuples FROM pg_class WHERE oid='t'::regclass;\nreltuples | 9999\n\n$ /usr/pgsql-15/bin/pg_upgrade -b /usr/pgsql-15/bin -d ./pg15.dat2 -D./pg15.dat3 # -c logging_collector=no -p 5678 -k /tmp&\n\n$ /usr/pgsql-15/bin/postgres -c logging_collector=no -p 5678 -k /tmp -D ./pg15.dat3& # new cluster, post-upgrade\npostgres=# VACUUM FREEZE VERBOSE t;\npostgres=# SELECT reltuples FROM pg_class WHERE oid='t'::regclass;\nreltuples | -1\n\nThe problem isn't that reltuples == -1 after the upgrade (which is\nnormal). The issue is that if VACUUM skips all the pages, it can leave\nreltuples -1. 
My expectation is that after running \"vacuum\", no tables\nare left in the \"never have been vacuumed\" state.\n\nIf the table was already frozen, then VACUUM (FREEZE) is inadequate to\nfix it, and you need to use DISABLE_PAGE_SKIPPING.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Aug 2023 22:43:28 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg15: reltuples stuck at -1 after pg_upgrade and VACUUM" }, { "msg_contents": "On Tue, Aug 8, 2023 at 8:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The problem isn't that reltuples == -1 after the upgrade (which is\n> normal). The issue is that if VACUUM skips all the pages, it can leave\n> reltuples -1. My expectation is that after running \"vacuum\", no tables\n> are left in the \"never have been vacuumed\" state.\n\nBut -1 isn't the \"never have been vacuumed\" state, exactly. At best it\nis the \"never been vacuumed or analyzed\" state.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 8 Aug 2023 21:07:58 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15: reltuples stuck at -1 after pg_upgrade and VACUUM" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Aug 8, 2023 at 8:43 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> The problem isn't that reltuples == -1 after the upgrade (which is\n>> normal). The issue is that if VACUUM skips all the pages, it can leave\n>> reltuples -1. My expectation is that after running \"vacuum\", no tables\n>> are left in the \"never have been vacuumed\" state.\n\n> But -1 isn't the \"never have been vacuumed\" state, exactly. At best it\n> is the \"never been vacuumed or analyzed\" state.\n\nYeah. 
-1 effectively pleads ignorance about the number of live tuples.\nIf VACUUM has skipped every page, then it is still ignorant about\nthe true number of live tuples, so setting the value to something\nelse would be inappropriate.\n\nPerhaps, though, there's a case for forcing all pages to be visited\nif we start with reltuples == -1? I'm not sure it matters much\nthough given that you also need an ANALYZE run to be really in a\ngood place after pg_upgrade. The ANALYZE should update this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Aug 2023 09:18:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg15: reltuples stuck at -1 after pg_upgrade and VACUUM" }, { "msg_contents": "On Wed, Aug 9, 2023 at 6:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps, though, there's a case for forcing all pages to be visited\n> if we start with reltuples == -1? I'm not sure it matters much\n> though given that you also need an ANALYZE run to be really in a\n> good place after pg_upgrade. The ANALYZE should update this.\n\nRight. VACUUM is sometimes much less efficient than just using ANALYZE\nto establish an initial reltuples. Other times it is much less\naccurate. I can't see any argument for opting to use VACUUM instead of\nANALYZE after an upgrade.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 9 Aug 2023 08:41:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: pg15: reltuples stuck at -1 after pg_upgrade and VACUUM" } ]
[ { "msg_contents": "In function `BackendXidGetPid`, when looping every proc's\n TransactionId, there is no need to access its PGPROC since there\n is shared memory access: `arrayP->pgprocnos[index]`.\n\n Though the compiler can optimize this kind of inefficiency, I\n believe we should ship with better code.\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Wed, 9 Aug 2023 12:00:34 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> In function `BackendXidGetPid`, when looping every proc's\n> TransactionId, there is no need to access its PGPROC since there\n> is shared memory access: `arrayP->pgprocnos[index]`.\n>\n> Though the compiler can optimize this kind of inefficiency, I\n> believe we should ship with better code.\n>\n\nLooks good to me. However, I would just move the variable declaration\nwith their assignments inside the if () rather than combing the\nexpressions. It more readable that way.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 9 Aug 2023 20:16:46 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > In function `BackendXidGetPid`, when looping every proc's\n> > TransactionId, there is no need to access its PGPROC since there\n> > is shared memory access: `arrayP->pgprocnos[index]`.\n> >\n> > Though the compiler can optimize this kind of inefficiency, I\n> > believe we should ship with better code.\n> >\n>\n> Looks good to me. 
However, I would just move the variable declaration\n> with their assignments inside the if () rather than combing the\n> expressions. It more readable that way.\n\nyeah, make sense, also checked elsewhere using the original style,\nattachment file\nkeep that style, thanks ;)\n\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Wed, 9 Aug 2023 23:06:48 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "Please add this to commitfest so that it's not forgotten.\n\nOn Wed, Aug 9, 2023 at 8:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > >\n> > > In function `BackendXidGetPid`, when looping every proc's\n> > > TransactionId, there is no need to access its PGPROC since there\n> > > is shared memory access: `arrayP->pgprocnos[index]`.\n> > >\n> > > Though the compiler can optimize this kind of inefficiency, I\n> > > believe we should ship with better code.\n> > >\n> >\n> > Looks good to me. However, I would just move the variable declaration\n> > with their assignments inside the if () rather than combing the\n> > expressions. 
It more readable that way.\n>\n> yeah, make sense, also checked elsewhere using the original style,\n> attachment file\n> keep that style, thanks ;)\n>\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 10 Aug 2023 13:41:03 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Thu, Aug 10, 2023 at 4:11 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Please add this to commitfest so that it's not forgotten.\n>\n\nAdded [1], thanks\n\n[1]: https://commitfest.postgresql.org/44/4495/\n\n> On Wed, Aug 9, 2023 at 8:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > >\n> > > > In function `BackendXidGetPid`, when looping every proc's\n> > > > TransactionId, there is no need to access its PGPROC since there\n> > > > is shared memory access: `arrayP->pgprocnos[index]`.\n> > > >\n> > > > Though the compiler can optimize this kind of inefficiency, I\n> > > > believe we should ship with better code.\n> > > >\n> > >\n> > > Looks good to me. However, I would just move the variable declaration\n> > > with their assignments inside the if () rather than combing the\n> > > expressions. 
It more readable that way.\n> >\n> > yeah, make sense, also checked elsewhere using the original style,\n> > attachment file\n> > keep that style, thanks ;)\n> >\n> > >\n> > > --\n> > > Best Wishes,\n> > > Ashutosh Bapat\n> >\n> >\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 10 Aug 2023 16:18:24 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "Hi Junwang,\nWe leave a line blank after variable declaration as in the attached patch.\n\nOtherwise the patch looks good to me.\n\nThe function modified by the patch is only used by extension\npgrowlocks. Given that the function will be invoked as many times as\nthe number of locked rows in the relation, the patch may show some\nimprovement and thus be more compelling. One way to measure\nperformance is to create a table with millions of rows, SELECT all\nrows with FOR SHARE/UPDATE clause. Then run pgrowlock() on that\nrelation. This will invoke the given function a million times. 
That\nway we might be able to catch some miniscule improvement per row.\n\nIf the performance is measurable, we can mark the CF entry as ready\nfor committer.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Aug 10, 2023 at 1:48 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> On Thu, Aug 10, 2023 at 4:11 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Please add this to commitfest so that it's not forgotten.\n> >\n>\n> Added [1], thanks\n>\n> [1]: https://commitfest.postgresql.org/44/4495/\n>\n> > On Wed, Aug 9, 2023 at 8:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > > >\n> > > > > In function `BackendXidGetPid`, when looping every proc's\n> > > > > TransactionId, there is no need to access its PGPROC since there\n> > > > > is shared memory access: `arrayP->pgprocnos[index]`.\n> > > > >\n> > > > > Though the compiler can optimize this kind of inefficiency, I\n> > > > > believe we should ship with better code.\n> > > > >\n> > > >\n> > > > Looks good to me. However, I would just move the variable declaration\n> > > > with their assignments inside the if () rather than combing the\n> > > > expressions. 
It more readable that way.\n> > >\n> > > yeah, make sense, also checked elsewhere using the original style,\n> > > attachment file\n> > > keep that style, thanks ;)\n> > >\n> > > >\n> > > > --\n> > > > Best Wishes,\n> > > > Ashutosh Bapat\n> > >\n> > >\n> > >\n> > > --\n> > > Regards\n> > > Junwang Zhao\n> >\n> >\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n\n\n", "msg_date": "Thu, 7 Sep 2023 13:22:07 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Thu, Sep 07, 2023 at 01:22:07PM +0530, Ashutosh Bapat wrote:\n> Otherwise the patch looks good to me.\n> \n> The function modified by the patch is only used by extension\n> pgrowlocks. Given that the function will be invoked as many times as\n> the number of locked rows in the relation, the patch may show some\n> improvement and thus be more compelling. One way to measure\n> performance is to create a table with millions of rows, SELECT all\n> rows with FOR SHARE/UPDATE clause. Then run pgrowlock() on that\n> relation. This will invoke the given function a million times. That\n> way we might be able to catch some miniscule improvement per row.\n> \n> If the performance is measurable, we can mark the CF entry as ready\n> for committer.\n\nSo, is the difference measurable? Assuming that your compiler does\nnot optimize that, my guess is no because the cycles are going to be\neaten by the other system calls in pgrowlocks. You could increase the \nnumber of proc entries and force a large loop around\nBackendXidGetPid() to see a difference of runtime, but that's going\nto require a lot more proc entries to see any kind of difference\noutside the scope of a usual ~1% (somewhat magic number!) noise with\nsuch micro benchmarks. 
\n\n- int pgprocno = arrayP->pgprocnos[index];\n- PGPROC *proc = &allProcs[pgprocno];\n-\n if (other_xids[index] == xid)\n {\n+ PGPROC *proc = &allProcs[arrayP->pgprocnos[index]];\n result = proc->pid;\n break;\n\nSaying that, moving the declarations into the inner loop is usually a\ngood practice, but I would just keep two variables instead of one for\nthe sake of readability. That's a nit, sure.\n--\nMichael", "msg_date": "Thu, 7 Sep 2023 17:04:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Thu, Sep 7, 2023 at 4:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 07, 2023 at 01:22:07PM +0530, Ashutosh Bapat wrote:\n> > Otherwise the patch looks good to me.\n> >\n> > The function modified by the patch is only used by extension\n> > pgrowlocks. Given that the function will be invoked as many times as\n> > the number of locked rows in the relation, the patch may show some\n> > improvement and thus be more compelling. One way to measure\n> > performance is to create a table with millions of rows, SELECT all\n> > rows with FOR SHARE/UPDATE clause. Then run pgrowlock() on that\n> > relation. This will invoke the given function a million times. That\n> > way we might be able to catch some miniscule improvement per row.\n> >\n> > If the performance is measurable, we can mark the CF entry as ready\n> > for committer.\n>\n> So, is the difference measurable? Assuming that your compiler does\nI have checked my compiler, this patch gives the same assembly code\nas before.\n> not optimize that, my guess is no because the cycles are going to be\n> eaten by the other system calls in pgrowlocks. 
You could increase the\n> number of proc entries and force a large loop around\n> BackendXidGetPid() to see a difference of runtime, but that's going\n> to require a lot more proc entries to see any kind of difference\n> outside the scope of a usual ~1% (somewhat magic number!) noise with\n> such micro benchmarks.\n>\n> - int pgprocno = arrayP->pgprocnos[index];\n> - PGPROC *proc = &allProcs[pgprocno];\n> -\n> if (other_xids[index] == xid)\n> {\n> + PGPROC *proc = &allProcs[arrayP->pgprocnos[index]];\n> result = proc->pid;\n> break;\n>\n> Saying that, moving the declarations into the inner loop is usually a\n> good practice, but I would just keep two variables instead of one for\n> the sake of readability. That's a nit, sure.\n\nI remember I split this into two lines in v2 patch. Whatever, I attached\na v3 with a suggestion from Ashutosh Bapat.\n\n> --\n> Michael\n\n\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Thu, 7 Sep 2023 16:17:33 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "Forgot to attach the patch.\n\nOn Thu, Sep 7, 2023 at 1:22 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Junwang,\n> We leave a line blank after variable declaration as in the attached patch.\n>\n> Otherwise the patch looks good to me.\n>\n> The function modified by the patch is only used by extension\n> pgrowlocks. Given that the function will be invoked as many times as\n> the number of locked rows in the relation, the patch may show some\n> improvement and thus be more compelling. One way to measure\n> performance is to create a table with millions of rows, SELECT all\n> rows with FOR SHARE/UPDATE clause. Then run pgrowlock() on that\n> relation. This will invoke the given function a million times. 
That\n> way we might be able to catch some miniscule improvement per row.\n>\n> If the performance is measurable, we can mark the CF entry as ready\n> for committer.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n> On Thu, Aug 10, 2023 at 1:48 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > On Thu, Aug 10, 2023 at 4:11 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > Please add this to commitfest so that it's not forgotten.\n> > >\n> >\n> > Added [1], thanks\n> >\n> > [1]: https://commitfest.postgresql.org/44/4495/\n> >\n> > > On Wed, Aug 9, 2023 at 8:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n> > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > > > >\n> > > > > > In function `BackendXidGetPid`, when looping every proc's\n> > > > > > TransactionId, there is no need to access its PGPROC since there\n> > > > > > is shared memory access: `arrayP->pgprocnos[index]`.\n> > > > > >\n> > > > > > Though the compiler can optimize this kind of inefficiency, I\n> > > > > > believe we should ship with better code.\n> > > > > >\n> > > > >\n> > > > > Looks good to me. However, I would just move the variable declaration\n> > > > > with their assignments inside the if () rather than combing the\n> > > > > expressions. 
It more readable that way.\n> > > >\n> > > > yeah, make sense, also checked elsewhere using the original style,\n> > > > attachment file\n> > > > keep that style, thanks ;)\n> > > >\n> > > > >\n> > > > > --\n> > > > > Best Wishes,\n> > > > > Ashutosh Bapat\n> > > >\n> > > >\n> > > >\n> > > > --\n> > > > Regards\n> > > > Junwang Zhao\n> > >\n> > >\n> > >\n> > > --\n> > > Best Wishes,\n> > > Ashutosh Bapat\n> >\n> >\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 7 Sep 2023 14:37:02 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Thu, Sep 7, 2023 at 5:07 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Forgot to attach the patch.\n\nLGTM\n\nShould I change the status to ready for committer now?\n\n>\n> On Thu, Sep 7, 2023 at 1:22 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi Junwang,\n> > We leave a line blank after variable declaration as in the attached patch.\n> >\n> > Otherwise the patch looks good to me.\n> >\n> > The function modified by the patch is only used by extension\n> > pgrowlocks. Given that the function will be invoked as many times as\n> > the number of locked rows in the relation, the patch may show some\n> > improvement and thus be more compelling. One way to measure\n> > performance is to create a table with millions of rows, SELECT all\n> > rows with FOR SHARE/UPDATE clause. Then run pgrowlock() on that\n> > relation. This will invoke the given function a million times. 
That\n> > way we might be able to catch some miniscule improvement per row.\n> >\n> > If the performance is measurable, we can mark the CF entry as ready\n> > for committer.\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n> >\n> > On Thu, Aug 10, 2023 at 1:48 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 10, 2023 at 4:11 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > Please add this to commitfest so that it's not forgotten.\n> > > >\n> > >\n> > > Added [1], thanks\n> > >\n> > > [1]: https://commitfest.postgresql.org/44/4495/\n> > >\n> > > > On Wed, Aug 9, 2023 at 8:37 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Aug 9, 2023 at 10:46 PM Ashutosh Bapat\n> > > > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Aug 9, 2023 at 9:30 AM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > > > > > >\n> > > > > > > In function `BackendXidGetPid`, when looping every proc's\n> > > > > > > TransactionId, there is no need to access its PGPROC since there\n> > > > > > > is shared memory access: `arrayP->pgprocnos[index]`.\n> > > > > > >\n> > > > > > > Though the compiler can optimize this kind of inefficiency, I\n> > > > > > > believe we should ship with better code.\n> > > > > > >\n> > > > > >\n> > > > > > Looks good to me. However, I would just move the variable declaration\n> > > > > > with their assignments inside the if () rather than combing the\n> > > > > > expressions. 
It is more readable that way.\n> > > > >\n> > > > > yeah, makes sense, also checked elsewhere using the original style,\n> > > > > the attached file\n> > > > > keeps that style, thanks ;)\n> > > > >\n> > > > > >\n> > > > > > --\n> > > > > > Best Wishes,\n> > > > > > Ashutosh Bapat\n> > > > >\n> > > > >\n> > > > >\n> > > > > --\n> > > > > Regards\n> > > > > Junwang Zhao\n> > > >\n> > > >\n> > > >\n> > > > --\n> > > > Best Wishes,\n> > > > Ashutosh Bapat\n> > >\n> > >\n> > >\n> > > --\n> > > Regards\n> > > Junwang Zhao\n>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 7 Sep 2023 17:34:46 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" }, { "msg_contents": "On Thu, Sep 7, 2023 at 3:05 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> On Thu, Sep 7, 2023 at 5:07 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Forgot to attach the patch.\n>\n> LGTM\n>\n> Should I change the status to ready for committer now?\n>\n\nI would still love to see some numbers. It's not that hard. Either\nusing my micro benchmark or Michael's. We may or may not see an\nimprovement. But at least we tried. 
But Michael feels otherwise,\n> changing it to RoC is fine.\n\nI have hardcoded a small SQL function that calls BackendXidGetPid() in\na tight loop a few hundred million times, and I see no difference\nin runtime between HEAD and the patch under -O2. The argument about\nreadability still applies for me, so applied.\n--\nMichael", "msg_date": "Fri, 8 Sep 2023 10:05:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BackendXidGetPid] only access allProcs when xid matches" } ]
[ { "msg_contents": "Hi hackers,\n\nMost catcache and relcache entries (other than index info etc.) currently\ngo straight into CacheMemoryContext. And I believe these two caches can be\nthe ones with the largest contribution to the memory usage of\nCacheMemoryContext most of the time. For example, in cases where we have\nlots of database objects accessed in a long-lived connection,\nCacheMemoryContext tends to increase significantly.\n\nWhile I've been working on another patch for pg_backend_memory_contexts\nview, we thought that it would also be better to see the memory usages of\ndifferent kinds of caches broken down into their own contexts. The attached\npatch implements this and aims to easily keep track of the memory used by\nrelcache and catcache\n\nTo quickly show how pg_backend_memory_contexts would look like, I did the\nfollowing:\n\n-Create some tables:\nSELECT 'BEGIN;' UNION ALL SELECT format('CREATE TABLE %1$s(id serial\nprimary key, data text not null unique)', 'test_'||g.i) FROM\ngenerate_series(0, 1000) g(i) UNION ALL SELECT 'COMMIT;';\\gexec\n\n-Open a new connection and query pg_backend_memory_contexts [1]:\nThis is what you'll see before and after the patch.\n-- HEAD:\n name | used_bytes | free_bytes | total_bytes\n--------------------+------------+------------+-------------\n CacheMemoryContext | 467656 | 56632 | 524288\n index info | 111760 | 46960 | 158720\n relation rules | 4416 | 3776 | 8192\n(3 rows)\n\n-- Patch:\n name | used_bytes | free_bytes | total_bytes\n-----------------------+------------+------------+-------------\n CatCacheMemoryContext | 217696 | 44448 | 262144\n RelCacheMemoryContext | 248264 | 13880 | 262144\n index info | 111760 | 46960 | 158720\n CacheMemoryContext | 2336 | 5856 | 8192\n relation rules | 4416 | 3776 | 8192\n(5 rows)\n\n- Run select on all tables\nSELECT format('SELECT count(*) FROM %1$s', 'test_'||g.i) FROM\ngenerate_series(0, 1000) g(i);\\gexec\n\n- Then check pg_backend_memory_contexts [1] again:\n--HEAD\n 
name | used_bytes | free_bytes | total_bytes\n--------------------+------------+------------+-------------\n CacheMemoryContext | 8197344 | 257056 | 8454400\n index info | 2102160 | 113776 | 2215936\n relation rules | 4416 | 3776 | 8192\n(3 rows)\n\n--Patch\n name | used_bytes | free_bytes | total_bytes\n-----------------------+------------+------------+-------------\n RelCacheMemoryContext | 4706464 | 3682144 | 8388608\n CatCacheMemoryContext | 3489384 | 770712 | 4260096\n index info | 2102160 | 113776 | 2215936\n CacheMemoryContext | 2336 | 5856 | 8192\n relation rules | 4416 | 3776 | 8192\n(5 rows)\n\nYou can see that CacheMemoryContext does not use much memory without\ncatcache and relcache (at least in cases similar to above), and it's easy\nto bloat catcache and relcache. That's why I think it would be useful to\nsee their usage separately.\n\nAny feedback would be appreciated.\n\n[1]\nSELECT\n\nname,\nsum(used_bytes) AS used_bytes,\nsum(free_bytes) AS free_bytes,\nsum(total_bytes) AS total_bytes\n\nFROM pg_backend_memory_contexts\nWHERE name LIKE '%CacheMemoryContext%' OR parent LIKE '%CacheMemoryContext%'\nGROUP BY name\nORDER BY total_bytes DESC;\n\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 9 Aug 2023 15:02:31 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Separate memory contexts for relcache and catcache" }, { "msg_contents": ">\n> Most catcache and relcache entries (other than index info etc.) currently\n> go straight into CacheMemoryContext. And I believe these two caches can be\n> the ones with the largest contribution to the memory usage of\n> CacheMemoryContext most of the time. 
For example, in cases where we have\n> lots of database objects accessed in a long-lived connection,\n> CacheMemoryContext tends to increase significantly.\n>\n> While I've been working on another patch for pg_backend_memory_contexts\n> view, we thought that it would also be better to see the memory usages of\n> different kinds of caches broken down into their own contexts. The attached\n> patch implements this and aims to easily keep track of the memory used by\n> relcache and catcache\n>\n>\n+ 1 for the idea, this would be pretty useful as proof of which\ncontext is consuming most of the memory and it doesn't cost\nmuch. It would be handier than estimating that by something\nlike select count(*) from pg_class.\n\nI think, for example, if we find relcache using too much memory,\nit is a signal that the user may use too many partitioned tables.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 9 Aug 2023 20:21:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "On 2023-Aug-09, Melih Mutlu wrote:\n\n> --Patch\n> name | used_bytes | free_bytes | total_bytes\n> -----------------------+------------+------------+-------------\n> RelCacheMemoryContext | 4706464 | 3682144 | 8388608\n> CatCacheMemoryContext | 3489384 | 770712 | 4260096\n> index info | 2102160 | 113776 | 2215936\n> CacheMemoryContext | 2336 | 5856 | 8192\n> relation rules | 4416 | 3776 | 8192\n> (5 rows)\n\nHmm, is this saying that there's too much fragmentation in the relcache\ncontext? Maybe it would improve things to make it a SlabContext instead\nof AllocSet. Or, more precisely, a bunch of SlabContexts, each with the\nappropriate chunkSize for the object being stored. (I don't say this\nbecause I know for a fact that Slab is better for these purposes; it's\njust that I happened to read its comments yesterday and they stated that\nit behaves better in terms of fragmentation. Maybe Andres or Tomas have\nan opinion on this.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. 
:-)\"\n(David Garamond)\n\n\n", "msg_date": "Wed, 9 Aug 2023 15:22:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "On Thu, 10 Aug 2023 at 01:23, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Aug-09, Melih Mutlu wrote:\n>\n> > --Patch\n> > name | used_bytes | free_bytes | total_bytes\n> > -----------------------+------------+------------+-------------\n> > RelCacheMemoryContext | 4706464 | 3682144 | 8388608\n> > CatCacheMemoryContext | 3489384 | 770712 | 4260096\n> > index info | 2102160 | 113776 | 2215936\n> > CacheMemoryContext | 2336 | 5856 | 8192\n> > relation rules | 4416 | 3776 | 8192\n> > (5 rows)\n>\n> Hmm, is this saying that there's too much fragmentation in the relcache\n> context?\n\nfree_bytes is just the space in the blocks that are not being used by\nany allocated chunks or chunks on the freelist.\n\nIt looks like RelCacheMemoryContext has 10 blocks including the 8kb\ninitial block:\n\npostgres=# select 8192 + sum(8192*power(2,x)) as total_bytes from\ngenerate_series(0,9) x;\n total_bytes\n-------------\n 8388608\n\nThe first 2 blocks are 8KB as we only start doubling after we malloc\nthe first 8kb block after the keeper block.\n\nIf there was 1 fewer block then total_bytes would be 4194304, which is\nless than the used_bytes for that context, so those 10 block look\nneeded.\n\n> Maybe it would improve things to make it a SlabContext instead\n> of AllocSet. Or, more precisely, a bunch of SlabContexts, each with the\n> appropriate chunkSize for the object being stored.\n\nIt would at least save from having to do the power of 2 rounding that\naset does. However, on a quick glance, it seems not all the size\nrequests in relcache.c are fixed. 
I see a datumCopy() in\nRelationBuildTupleDesc() for the attmissingval stuff, so we couldn't\nSlabAlloc that.\n\nIt could be worth looking at the size classes of the fixed-sized\nallocations to estimate how much memory we might save by using slab to\navoid the power-2 rounding that aset.c does. However, if there are too\nmany contexts then we may end up using more memory with all the\nmostly-empty contexts for backends that only query a tiny number of\ntables. That might not be good. Slab also does not do block doubling\nlike aset does, so it might be hard to choose a good block size.\n\n> (I don't say this\n> because I know for a fact that Slab is better for these purposes; it's\n> just that I happened to read its comments yesterday and they stated that\n> it behaves better in terms of fragmentation. Maybe Andres or Tomas have\n> an opinion on this.)\n\nI'm not sure of the exact comment, but I was in there recently and\nthere's a chance that I wrote that comment. Slab prioritises putting\nnew chunks on fuller blocks and may free() blocks once they become\nempty of any chunks. Aset does no free()ing of blocks unless a block\nwas malloc()ed especially for a chunk above allocChunkLimit. 
That\nmeans aset might hold a lot of malloc'ed memory for chunks that just\nsit on freelists which might never be used ever again, meanwhile,\nother request sizes may have to malloc new blocks.\n\nDavid\n\n\n", "msg_date": "Thu, 10 Aug 2023 02:14:59 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "Hi,\n\nOn 2023-08-09 15:02:31 +0300, Melih Mutlu wrote:\n> To quickly show how pg_backend_memory_contexts would look like, I did the\n> following:\n> \n> -Create some tables:\n> SELECT 'BEGIN;' UNION ALL SELECT format('CREATE TABLE %1$s(id serial\n> primary key, data text not null unique)', 'test_'||g.i) FROM\n> generate_series(0, 1000) g(i) UNION ALL SELECT 'COMMIT;';\\gexec\n> \n> -Open a new connection and query pg_backend_memory_contexts [1]:\n> This is what you'll see before and after the patch.\n> -- HEAD:\n> name | used_bytes | free_bytes | total_bytes\n> --------------------+------------+------------+-------------\n> CacheMemoryContext | 467656 | 56632 | 524288\n> index info | 111760 | 46960 | 158720\n> relation rules | 4416 | 3776 | 8192\n> (3 rows)\n> \n> -- Patch:\n> name | used_bytes | free_bytes | total_bytes\n> -----------------------+------------+------------+-------------\n> CatCacheMemoryContext | 217696 | 44448 | 262144\n> RelCacheMemoryContext | 248264 | 13880 | 262144\n> index info | 111760 | 46960 | 158720\n> CacheMemoryContext | 2336 | 5856 | 8192\n> relation rules | 4416 | 3776 | 8192\n> (5 rows)\n\nHave you checked what the source of the remaining allocations in\nCacheMemoryContext are?\n\n\nOne thing that I had observed previously and reproduced with this patch, is\nthat the first backend starting after a restart uses considerably more memory:\n\nfirst:\n┌───────────────────────┬────────────┬────────────┬─────────────┐\n│ name │ used_bytes │ free_bytes │ total_bytes 
│\n├───────────────────────┼────────────┼────────────┼─────────────┤\n│ CatCacheMemoryContext │ 370112 │ 154176 │ 524288 │\n│ RelCacheMemoryContext │ 244136 │ 18008 │ 262144 │\n│ index info │ 104392 │ 45112 │ 149504 │\n│ CacheMemoryContext │ 2304 │ 5888 │ 8192 │\n│ relation rules │ 3856 │ 240 │ 4096 │\n└───────────────────────┴────────────┴────────────┴─────────────┘\n\nsecond:\n┌───────────────────────┬────────────┬────────────┬─────────────┐\n│ name │ used_bytes │ free_bytes │ total_bytes │\n├───────────────────────┼────────────┼────────────┼─────────────┤\n│ CatCacheMemoryContext │ 215072 │ 47072 │ 262144 │\n│ RelCacheMemoryContext │ 243856 │ 18288 │ 262144 │\n│ index info │ 104944 │ 47632 │ 152576 │\n│ CacheMemoryContext │ 2304 │ 5888 │ 8192 │\n│ relation rules │ 3856 │ 240 │ 4096 │\n└───────────────────────┴────────────┴────────────┴─────────────┘\n\nThis isn't caused by this patch, but it does make it easier to pinpoint than\nbefore. The reason is fairly simple: On the first start we start without\nbeing able to use relcache init files, in later starts we can. The reason the\nsize increase is in CatCacheMemoryContext, rather than RelCacheMemoryContext,\nis simple: When using the init file the catcache isn't used, when not, we have\nto query the catcache a lot to build the initial relcache contents.\n\n\nGiven the size of both CatCacheMemoryContext and RelCacheMemoryContext in a\nnew backend, I think it might be worth using non-default aset parameters. 
A\nbit ridiculous to increase block sizes from 8k upwards in every single\nconnection made to postgres ever.\n\n\n> - Run select on all tables\n> SELECT format('SELECT count(*) FROM %1$s', 'test_'||g.i) FROM\n> generate_series(0, 1000) g(i);\\gexec\n> \n> - Then check pg_backend_memory_contexts [1] again:\n> --HEAD\n> name | used_bytes | free_bytes | total_bytes\n> --------------------+------------+------------+-------------\n> CacheMemoryContext | 8197344 | 257056 | 8454400\n> index info | 2102160 | 113776 | 2215936\n> relation rules | 4416 | 3776 | 8192\n> (3 rows)\n> \n> --Patch\n> name | used_bytes | free_bytes | total_bytes\n> -----------------------+------------+------------+-------------\n> RelCacheMemoryContext | 4706464 | 3682144 | 8388608\n> CatCacheMemoryContext | 3489384 | 770712 | 4260096\n> index info | 2102160 | 113776 | 2215936\n> CacheMemoryContext | 2336 | 5856 | 8192\n> relation rules | 4416 | 3776 | 8192\n> (5 rows)\n> \n> You can see that CacheMemoryContext does not use much memory without\n> catcache and relcache (at least in cases similar to above), and it's easy\n> to bloat catcache and relcache. That's why I think it would be useful to\n> see their usage separately.\n\nYes, I think it'd be quite useful. 
There's ways to bloat particularly catcache\nmuch further, and it's hard to differentiate that from other sources of bloat\nright now.\n\n\n> +static void\n> +CreateCatCacheMemoryContext()\n\nWe typically use (void) to differentiate from an older way of function\ndeclarations that didn't have argument types.\n\n\n> +{\n> +\tif (!CacheMemoryContext)\n> +\t\tCreateCacheMemoryContext();\n\nI wish we just made sure that cache memory context were created in the right\nplace, instead of spreading this check everywhere...\n\n\n> @@ -3995,9 +3998,9 @@ RelationCacheInitializePhase2(void)\n> \t\treturn;\n> \n> \t/*\n> -\t * switch to cache memory context\n> +\t * switch to relcache memory context\n> \t */\n> -\toldcxt = MemoryContextSwitchTo(CacheMemoryContext);\n> +\toldcxt = MemoryContextSwitchTo(RelCacheMemoryContext);\n> \n> \t/*\n> \t * Try to load the shared relcache cache file. If unsuccessful, bootstrap\n> @@ -4050,9 +4053,9 @@ RelationCacheInitializePhase3(void)\n> \tRelationMapInitializePhase3();\n> \n> \t/*\n> -\t * switch to cache memory context\n> +\t * switch to relcache memory context\n> \t */\n> -\toldcxt = MemoryContextSwitchTo(CacheMemoryContext);\n> +\toldcxt = MemoryContextSwitchTo(RelCacheMemoryContext);\n> \n> \t/*\n> \t * Try to load the local relcache cache file. 
If unsuccessful, bootstrap\n\nI'd just delete these comments, they're just pointlessly restating the code.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:01:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "Hi,\n\nI also think this change would be helpful.\n\nI imagine you're working on the Andres's comments and you already notice \nthis, but v1 patch cannot be applied to HEAD.\nFor the convenience of other reviewers, I marked it 'Waiting on Author'.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Mon, 04 Dec 2023 13:59:07 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "Hi,\n\ntorikoshia <torikoshia@oss.nttdata.com>, 4 Ara 2023 Pzt, 07:59 tarihinde\nşunu yazdı:\n\n> Hi,\n>\n> I also think this change would be helpful.\n>\n> I imagine you're working on the Andres's comments and you already notice\n> this, but v1 patch cannot be applied to HEAD.\n> For the convenience of other reviewers, I marked it 'Waiting on Author'.\n>\n\nThanks for letting me know. I rebased the patch. PFA new version.\n\n\n\n\nAndres Freund <andres@anarazel.de>, 12 Eki 2023 Per, 20:01 tarihinde şunu\nyazdı:\n\n> Hi,\n>\n> Have you checked what the source of the remaining allocations in\n> CacheMemoryContext are?\n>\n\nIt's mostly typecache, around 2K. Do you think typecache also needs a\nseparate context?\n\nGiven the size of both CatCacheMemoryContext and RelCacheMemoryContext in a\n> new backend, I think it might be worth using non-default aset parameters. 
A\n> bit ridiculous to increase block sizes from 8k upwards in every single\n> connection made to postgres ever.\n>\n\nConsidering it starts from ~262K, what would be better for init size?\n256K?\n\n> +static void\n> > +CreateCatCacheMemoryContext()\n>\n> We typically use (void) to differentiate from an older way of function\n> declarations that didn't have argument types.\n>\nDone.\n\n> +{\n> > + if (!CacheMemoryContext)\n> > + CreateCacheMemoryContext();\n>\n> I wish we just made sure that cache memory context were created in the\n> right\n> place, instead of spreading this check everywhere...\n>\n\nThat would be nice. Do you have a suggestion about where that right place\nwould be?\n\nI'd just delete these comments, they're just pointlessly restating the code.\n>\n\nDone.\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 3 Jan 2024 14:26:34 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "On Wed, 3 Jan 2024 at 16:56, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi,\n>\n> torikoshia <torikoshia@oss.nttdata.com>, 4 Ara 2023 Pzt, 07:59 tarihinde şunu yazdı:\n>>\n>> Hi,\n>>\n>> I also think this change would be helpful.\n>>\n>> I imagine you're working on the Andres's comments and you already notice\n>> this, but v1 patch cannot be applied to HEAD.\n>> For the convenience of other reviewers, I marked it 'Waiting on Author'.\n>\n>\n> Thanks for letting me know. I rebased the patch. 
PFA new version.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n729439607ad210dbb446e31754e8627d7e3f7dda ===\n=== applying patch\n./v2-0001-Separate-memory-contexts-for-relcache-and-catcach.patch\npatching file src/backend/utils/cache/catcache.c\n...\nHunk #8 FAILED at 1933.\nHunk #9 succeeded at 2253 (offset 84 lines).\n1 out of 9 hunks FAILED -- saving rejects to file\nsrc/backend/utils/cache/catcache.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4554.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 08:31:13 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Separate memory contexts for relcache and catcache" }, { "msg_contents": "vignesh C <vignesh21@gmail.com>, 27 Oca 2024 Cmt, 06:01 tarihinde şunu\nyazdı:\n\n> On Wed, 3 Jan 2024 at 16:56, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 729439607ad210dbb446e31754e8627d7e3f7dda ===\n> === applying patch\n> ./v2-0001-Separate-memory-contexts-for-relcache-and-catcach.patch\n> patching file src/backend/utils/cache/catcache.c\n> ...\n> Hunk #8 FAILED at 1933.\n> Hunk #9 succeeded at 2253 (offset 84 lines).\n> 1 out of 9 hunks FAILED -- saving rejects to file\n> src/backend/utils/cache/catcache.c.rej\n>\n> Please post an updated version for the same.\n>\n> [1] - http://cfbot.cputube.org/patch_46_4554.log\n>\n> Regards,\n> Vignesh\n>\n\nRebased. PSA.\n\n\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Wed, 3 Apr 2024 16:12:49 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Separate memory contexts for relcache and catcache" } ]
[ { "msg_contents": "Hello pgsql-hackers,\n\nWhile testing out some WAL archiving and PITR scenarios, it was observed that\nenabling WAL archiving for the first time on a primary that was on a timeline\nhigher than 1 would not initially archive the timeline history file for the\ntimeline it was currently on. While this might be okay for most use cases, there\nare scenarios where this leads to unexpected failures that seem to expose some\nflaws in the logic.\n\nScenario 1:\nTake a backup of a primary on timeline 2 with `pg_basebackup -Xnone`. Create a\nstandby with that backup that will be continuously restoring from the WAL\narchives, the standby will not contain the timeline 2 history file. The standby\nwill operate normally but if you try to create a cascading standby off it using\nstreaming replication, the cascade standby's WAL receiver will continuously\nFATAL trying to request the timeline 2 history file that the main standby does\nnot have.\n\nScenario 2:\nTake a backup of a primary on timeline 2 with `pg_basebackup -Xnone`. Then try\nto create a new node by doing PITR with recovery_target_timeline set to\n'current' or 'latest' which will succeed. However, doing PITR with\nrecovery_target_timeline = '2' will fail since it is unable to find the timeline\n2 history file in the WAL archives. This may be a bit contradicting since we\nallow 'current' and 'latest' to recover but explicitly setting the\nrecovery_target_timeline to the control file's timeline id ends up with failure.\n\nAttached is a patch containing two TAP tests that demonstrate the scenarios.\n\nMy questions are:\n1. Why doesn't the archiver process try to archive timeline history files when\n WAL archiving is first configured and/or continually check (maybe when the\n archiver process gets started before the main loop)?\n2. Why does explicitly setting the recovery_target_timeline to the control\n file's timeline id not follow the same logic as recovery_target_timeline set\n to 'current'?\n3. 
Why does a cascaded standby require the timeline history file of its control\n file's timeline id (startTLI) when the main replica is able to operate fine\n without the timeline history file?\n\nNote that my initial observations came from testing with pgBackRest (copying\npg_wal/ during backup is disabled by default) but using `pg_basebackup -Xnone`\nreproduced the issues similarly and is what I present in the TAP tests. At the\nmoment, the only workaround I can think of is to manually run the\narchive_command on the missing timeline history file(s).\n\nAre these valid issues that should be looked into or are they expected? Scenario\n2 seems like it could be easily fixed if we determine that the\nrecovery_target_timeline numeric value is equal to the control file's timeline\nid (compare rtli and recoveryTargetTLI in validateRecoveryParameters()?) but I\nwasn't sure if maybe the opposite was true where we should make 'current' and\n'latest' require retrieving the timeline history files instead to help prevent\nScenario 1.\n\nRegards,\nJimmy Yih", "msg_date": "Thu, 10 Aug 2023 00:00:44 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Should the archiver process always make sure that the timeline\n history files exist in the archive?" }, { "msg_contents": "Hello pgsql-hackers,\n\nAfter doing some more debugging on the matter, I believe this issue might be a\nminor regression from commit 5332b8cec541. Prior to that commit, the archiver\nprocess when first started on a previously promoted primary would have all the\ntimeline history files marked as ready for immediate archiving. If that had\nhappened, none of my mentioned failure scenarios would be theoretically possible\n(barring someone manually deleting the timeline history files). 
With that in\nmind, I decided to look more into my Question 1 and created a patch proposal.\nThe attached patch will try to archive the current timeline history file if it\nhas not been archived yet when the archiver process starts up.\n\nRegards,\nJimmy Yih\n\n________________________________________\nFrom: Jimmy Yih <jyih@vmware.com>\nSent: Wednesday, August 9, 2023 5:00 PM\nTo: pgsql-hackers@postgresql.org\nSubject: Should the archiver process always make sure that the timeline history files exist in the archive?\n\nHello pgsql-hackers,\n\nWhile testing out some WAL archiving and PITR scenarios, it was observed that\nenabling WAL archiving for the first time on a primary that was on a timeline\nhigher than 1 would not initially archive the timeline history file for the\ntimeline it was currently on. While this might be okay for most use cases, there\nare scenarios where this leads to unexpected failures that seem to expose some\nflaws in the logic.\n\nScenario 1:\nTake a backup of a primary on timeline 2 with `pg_basebackup -Xnone`. Create a\nstandby with that backup that will be continuously restoring from the WAL\narchives, the standby will not contain the timeline 2 history file. The standby\nwill operate normally but if you try to create a cascading standby off it using\nstreaming replication, the cascade standby's WAL receiver will continuously\nFATAL trying to request the timeline 2 history file that the main standby does\nnot have.\n\nScenario 2:\nTake a backup of a primary on timeline 2 with `pg_basebackup -Xnone`. Then try\nto create a new node by doing PITR with recovery_target_timeline set to\n'current' or 'latest' which will succeed. However, doing PITR with\nrecovery_target_timeline = '2' will fail since it is unable to find the timeline\n2 history file in the WAL archives. 
This may be a bit contradicting since we\nallow 'current' and 'latest' to recover but explicitly setting the\nrecovery_target_timeline to the control file's timeline id ends up with failure.\n\nAttached is a patch containing two TAP tests that demonstrate the scenarios.\n\nMy questions are:\n1. Why doesn't the archiver process try to archive timeline history files when\n WAL archiving is first configured and/or continually check (maybe when the\n archiver process gets started before the main loop)?\n2. Why does explicitly setting the recovery_target_timeline to the control\n file's timeline id not follow the same logic as recovery_target_timeline set\n to 'current'?\n3. Why does a cascaded standby require the timeline history file of its control\n file's timeline id (startTLI) when the main replica is able to operate fine\n without the timeline history file?\n\nNote that my initial observations came from testing with pgBackRest (copying\npg_wal/ during backup is disabled by default) but using `pg_basebackup -Xnone`\nreproduced the issues similarly and is what I present in the TAP tests. At the\nmoment, the only workaround I can think of is to manually run the\narchive_command on the missing timeline history file(s).\n\nAre these valid issues that should be looked into or are they expected? Scenario\n2 seems like it could be easily fixed if we determine that the\nrecovery_target_timeline numeric value is equal to the control file's timeline\nid (compare rtli and recoveryTargetTLI in validateRecoveryParameters()?) but I\nwasn't sure if maybe the opposite was true where we should make 'current' and\n'latest' require retrieving the timeline history files instead to help prevent\nScenario 1.\n\nRegards,\nJimmy Yih", "msg_date": "Wed, 16 Aug 2023 07:33:29 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" 
}, { "msg_contents": "At Wed, 16 Aug 2023 07:33:29 +0000, Jimmy Yih <jyih@vmware.com> wrote in \n> Hello pgsql-hackers,\n> \n> After doing some more debugging on the matter, I believe this issue might be a\n> minor regression from commit 5332b8cec541. Prior to that commit, the archiver\n> process when first started on a previously promoted primary would have all the\n> timeline history files marked as ready for immediate archiving. If that had\n> happened, none of my mentioned failure scenarios would be theoretically possible\n> (barring someone manually deleting the timeline history files). With that in\n> mind, I decided to look more into my Question 1 and created a patch proposal.\n> The attached patch will try to archive the current timeline history file if it\n> has not been archived yet when the archiver process starts up.\n\nIn essence, after taking a series of subtle but not necessarily wrong steps,\nthere's a case where a primary server lacks the timeline history file\nfor the current timeline in both pg_wal and archive, even if that\ntimeline is larger than 1. This primary can start, but a new standby\ncreated from the primary cannot start streaming, as it can't fetch the\ntimeline history file for the initial TLI.\n\nA. The OP suggests archiving the timeline history file for the current\n timeline every time the archiver starts. However, I don't think we\n want to keep archiving the same file over and over. (Granted, we're\n not always perfect at avoiding that..)\n\nB. Given that the steps are valid, I concur with what is described in the\n test script provided: standbys don't really need that history file\n for the initial TLI (though I have yet to fully verify this). If the\n walreceiver just overlooks a fetch error for this file, the standby\n can successfully start. (Just skipping the first history file seems\n to work, but it feels a tad aggressive to me.)\n\nC. 
If those steps aren't valid, we might want to add a note stating\n that -X none basebackups do need the timeline history file for the\n initial TLI. And don't forget to enable archive mode before the\n latest timeline switch if any.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 24 Aug 2023 17:15:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" }, { "msg_contents": "Thanks for the insightful response! I have attached an updated patch\nthat moves the proposed logic to the end of StartupXLOG where it seems\nmore correct to do this. It also helps with backporting (if it's\nneeded) since the archiver process only has access to shared memory\nstarting from Postgres 14.\n\nKyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> A. The OP suggests archiving the timeline history file for the current\n> timeline every time the archiver starts. However, I don't think we\n> want to keep archiving the same file over and over. (Granted, we're\n> not always perfect at avoiding that..)\n\nWith the updated proposed patch, we'll be checking if the current\ntimeline history file needs to be archived at the end of StartupXLOG\nif archiving is enabled. If it detects that a .ready or .done file\nalready exists, then it won't do anything (which will be the common\ncase). I agree though that this may be an excessive check since it'll\nbe a no-op the majority of the time. However, it shouldn't execute\noften and seems like a quick safe preventive measure. Could you give\nmore details on why this would be too cumbersome?\n\nKyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> B. 
Given that the steps valid, I concur to what is described in the\n> test script provided: standbys don't really need that history file\n> for the initial TLI (though I have yet to fully verify this). If the\n> walreceiver just overlooks a fetch error for this file, the standby\n> can successfully start. (Just skipping the first history file seems\n> to work, but it feels a tad aggressive to me.)\n\nThis was my initial thought as well but I wasn't sure if it was okay\nto overlook the fetch error. Initial testing and brainstorming seems\nto show that it's okay. I think the main bad thing is that these new\nstandbys will not have their initial timeline history files which can\nbe useful for administration. I've attached a patch that attempts this\napproach if we want to switch to this approach as the solution. The\npatch contains an updated TAP test as well to better showcase the\nissue and fix.\n\nKyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> C. If those steps aren't valid, we might want to add a note stating\n> that -X none basebackups do need the timeline history file for the\n> initial TLI.\n\nThe difficult thing about only documenting this is that it forces the\nuser to manually store and track the timeline history files. It can be\na bit cumbersome for WAL archiving users to recognize this scenario\nwhen they're just trying to optimize their basebackups by using\n-Xnone. But then again -Xnone does seem like it's designed for\nadvanced users so this might be okay.\n\nKyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> And don't forget to enable archive mode before the latest timeline\n> switch if any.\n\nThis might not be reasonable since a user could've been using\nstreaming replication and doing failover/failbacks as part of general\nhigh availability to manage their Postgres without knowing they were\ngoing to enable WAL archiving later on. 
The user would need to\nconfigure archiving and force a failover which may not be\nstraightforward.\n\nRegards,\nJimmy Yih", "msg_date": "Tue, 29 Aug 2023 00:58:56 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" }, { "msg_contents": "On Mon, Aug 28, 2023 at 8:59 PM Jimmy Yih <jyih@vmware.com> wrote:\n> Thanks for the insightful response! I have attached an updated patch\n> that moves the proposed logic to the end of StartupXLOG where it seems\n> more correct to do this. It also helps with backporting (if it's\n> needed) since the archiver process only has access to shared memory\n> starting from Postgres 14.\n\nHmm. Do I understand correctly that the two patches you attached are\nalternatives to each other, i.e. we need one or the other to fix the\nissue, but not both?\n\nIt seems to me that trying to fetch a timeline history file and then\nignoring any error has got to be wrong. Either the file is needed or\nit isn't. If it's needed, then failing to fetch it is a problem. If\nit's not needed, there's no reason to try fetching it in the first\nplace. So I feel like we could either try to archive the file at the\nend of recovery, as you propose in\nv2-0001-Archive-current-timeline-history-file-after-recovery.patch.\nAlternatively, we could try to find a way not to request the file in\nthe first place, if it's not required. But\nv1-0001-Allow-recovery-to-proceed-when-initial-timeline-hist.patch\ndoesn't seem good to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 14:35:23 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" 
}, { "msg_contents": "On Tue, 29 Aug 2023 at 06:29, Jimmy Yih <jyih@vmware.com> wrote:\n>\n> Thanks for the insightful response! I have attached an updated patch\n> that moves the proposed logic to the end of StartupXLOG where it seems\n> more correct to do this. It also helps with backporting (if it's\n> needed) since the archiver process only has access to shared memory\n> starting from Postgres 14.\n>\n> Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > A. The OP suggests archiving the timeline history file for the current\n> > timeline every time the archiver starts. However, I don't think we\n> > want to keep archiving the same file over and over. (Granted, we're\n> > not always perfect at avoiding that..)\n>\n> With the updated proposed patch, we'll be checking if the current\n> timeline history file needs to be archived at the end of StartupXLOG\n> if archiving is enabled. If it detects that a .ready or .done file\n> already exists, then it won't do anything (which will be the common\n> case). I agree though that this may be an excessive check since it'll\n> be a no-op the majority of the time. However, it shouldn't execute\n> often and seems like a quick safe preventive measure. Could you give\n> more details on why this would be too cumbersome?\n>\n> Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > B. Given that the steps valid, I concur to what is described in the\n> > test script provided: standbys don't really need that history file\n> > for the initial TLI (though I have yet to fully verify this). If the\n> > walreceiver just overlooks a fetch error for this file, the standby\n> > can successfully start. (Just skipping the first history file seems\n> > to work, but it feels a tad aggressive to me.)\n>\n> This was my initial thought as well but I wasn't sure if it was okay\n> to overlook the fetch error. Initial testing and brainstorming seems\n> to show that it's okay. 
I think the main bad thing is that these new\n> standbys will not have their initial timeline history files which can\n> be useful for administration. I've attached a patch that attempts this\n> approach if we want to switch to this approach as the solution. The\n> patch contains an updated TAP test as well to better showcase the\n> issue and fix.\n>\n> Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > C. If those steps aren't valid, we might want to add a note stating\n> > that -X none basebackups do need the timeline history file for the\n> > initial TLI.\n>\n> The difficult thing about only documenting this is that it forces the\n> user to manually store and track the timeline history files. It can be\n> a bit cumbersome for WAL archiving users to recognize this scenario\n> when they're just trying to optimize their basebackups by using\n> -Xnone. But then again -Xnone does seem like it's designed for\n> advanced users so this might be okay.\n>\n> Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > And don't forget to enable archive mode before the latest timeline\n> > switch if any.\n>\n> This might not be reasonable since a user could've been using\n> streaming replication and doing failover/failbacks as part of general\n> high availability to manage their Postgres without knowing they were\n> going to enable WAL archiving later on. The user would need to\n> configure archiving and force a failover which may not be\n> straightforward.\n\nI have changed the status of the patch to \"Waiting on Author\" as\nRobert's suggestions have not yet been addressed. Feel free to address\nthe suggestions and update the status accordingly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 20:38:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" 
}, { "msg_contents": "On Thu, 11 Jan 2024 at 20:38, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 29 Aug 2023 at 06:29, Jimmy Yih <jyih@vmware.com> wrote:\n> >\n> > Thanks for the insightful response! I have attached an updated patch\n> > that moves the proposed logic to the end of StartupXLOG where it seems\n> > more correct to do this. It also helps with backporting (if it's\n> > needed) since the archiver process only has access to shared memory\n> > starting from Postgres 14.\n> >\n> > Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > > A. The OP suggests archiving the timeline history file for the current\n> > > timeline every time the archiver starts. However, I don't think we\n> > > want to keep archiving the same file over and over. (Granted, we're\n> > > not always perfect at avoiding that..)\n> >\n> > With the updated proposed patch, we'll be checking if the current\n> > timeline history file needs to be archived at the end of StartupXLOG\n> > if archiving is enabled. If it detects that a .ready or .done file\n> > already exists, then it won't do anything (which will be the common\n> > case). I agree though that this may be an excessive check since it'll\n> > be a no-op the majority of the time. However, it shouldn't execute\n> > often and seems like a quick safe preventive measure. Could you give\n> > more details on why this would be too cumbersome?\n> >\n> > Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > > B. Given that the steps valid, I concur to what is described in the\n> > > test script provided: standbys don't really need that history file\n> > > for the initial TLI (though I have yet to fully verify this). If the\n> > > walreceiver just overlooks a fetch error for this file, the standby\n> > > can successfully start. 
(Just skipping the first history file seems\n> > > to work, but it feels a tad aggressive to me.)\n> >\n> > This was my initial thought as well but I wasn't sure if it was okay\n> > to overlook the fetch error. Initial testing and brainstorming seems\n> > to show that it's okay. I think the main bad thing is that these new\n> > standbys will not have their initial timeline history files which can\n> > be useful for administration. I've attached a patch that attempts this\n> > approach if we want to switch to this approach as the solution. The\n> > patch contains an updated TAP test as well to better showcase the\n> > issue and fix.\n> >\n> > Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > > C. If those steps aren't valid, we might want to add a note stating\n> > > that -X none basebackups do need the timeline history file for the\n> > > initial TLI.\n> >\n> > The difficult thing about only documenting this is that it forces the\n> > user to manually store and track the timeline history files. It can be\n> > a bit cumbersome for WAL archiving users to recognize this scenario\n> > when they're just trying to optimize their basebackups by using\n> > -Xnone. But then again -Xnone does seem like it's designed for\n> > advanced users so this might be okay.\n> >\n> > Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com> wrote:\n> > > And don't forget to enable archive mode before the latest timeline\n> > > switch if any.\n> >\n> > This might not be reasonable since a user could've been using\n> > streaming replication and doing failover/failbacks as part of general\n> > high availability to manage their Postgres without knowing they were\n> > going to enable WAL archiving later on. The user would need to\n> > configure archiving and force a failover which may not be\n> > straightforward.\n>\n> I have changed the status of the patch to \"Waiting on Author\" as\n> Robert's suggestions have not yet been addressed. 
Feel free to address\n> the suggestions and update the status accordingly.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 23:53:26 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Should the archiver process always make sure that the timeline\n history files exist in the archive?" } ]
[ { "msg_contents": "Hello,\n\nCurrently, the psql's test of query cancelling (src/bin/psql/t/020_cancel.pl)\ngets the PPID of a running psql by using \"\\!\" meta command, and sends SIGINT to\nthe process by using \"kill\". However, IPC::Run provides signal() routine that\nsends a signal to a running process, so I think we can rewrite the test using\nthis routine in a simpler fashion, as in the attached patch. \n\nWhat do you think about it?\n\n\nUse of signal() was suggested by Fabien COELHO during the discussion of\nquery cancelling of pgbench [1].\n\n[1] https://www.postgresql.org/message-id/7691ade8-78-dd4-e26-135ccfbf0e9%40mines-paristech.fr\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Thu, 10 Aug 2023 12:59:35 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Make psql's qeury canceling test simple by using signal() routine\n of IPC::Run" }, { "msg_contents": "\nHello Yugo-san,\n\n> Currently, the psql's test of query cancelling (src/bin/psql/t/020_cancel.pl)\n> gets the PPID of a running psql by using \"\\!\" meta command, and sends SIGINT to\n> the process by using \"kill\". 
However, IPC::Run provides signal() routine that\n> sends a signal to a running process, so I think we can rewrite the test using\n> this routine to more simple fashion as attached patch.\n>\n> What do you think about it?\n\nI'm the one who pointed out signal(), so I'm a little biased, \nnevertheless, regarding the patch:\n\nPatch applies with \"patch\".\n\nTest code is definitely much simpler.\n\nTest run is ok on my Ubuntu laptop.\n\nLet's drop 25 lines of perl!\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 13 Aug 2023 11:22:33 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal() routine\n of IPC::Run" }, { "msg_contents": "On Sun, Aug 13, 2023 at 11:22:33AM +0200, Fabien COELHO wrote:\n> Test run is ok on my Ubuntu laptop.\n\nI have a few comments about this patch.\n\nOn HEAD and even after this patch, we still have the following:\nSKIP: { skip \"cancel test requires a Unix shell\", 2 if $windows_os;\n\nCould the SKIP be removed for $windows_os? If not, this had better be\ndocumented because the reason for the skip becomes incorrect.\n\nThe comment at the top of the SKIP block still states the following:\n# There is, as of this writing, no documented way to get the PID of\n# the process from IPC::Run. 
As a workaround, we have psql print its\n# own PID (which is the parent of the shell launched by psql) to a\n# file.\n\nThis is also incorrect.\n--\nMichael", "msg_date": "Mon, 14 Aug 2023 08:29:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" }, { "msg_contents": "Bonjour Michaël,\n\n> On Sun, Aug 13, 2023 at 11:22:33AM +0200, Fabien COELHO wrote:\n>> Test run is ok on my Ubuntu laptop.\n>\n> I have a few comments about this patch.\n\nArgh, sorry!\n\nI looked at what was removed (a lot) from the previous version, not what \nwas remaining and should also have been removed.\n\n-- \nFabien.", "msg_date": "Mon, 14 Aug 2023 02:39:30 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal() routine\n of IPC::Run" }, { "msg_contents": "On Mon, 14 Aug 2023 23:37:25 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Aug 13, 2023 at 11:22:33AM +0200, Fabien COELHO wrote:\n> > Test run is ok on my Ubuntu laptop.\n> \n> I have a few comments about this patch.\n> \n> On HEAD and even after this patch, we still have the following:\n> SKIP: { skip \"cancel test requires a Unix shell\", 2 if $windows_os;\n> \n> Could the SKIP be removed for $windows_os? If not, this had better be\n> documented because the reason for the skip becomes incorrect.\n> \n> The comment at the top of the SKIP block still states the following:\n> # There is, as of this writing, no documented way to get the PID of\n> # the process from IPC::Run. 
As a workaround, we have psql print its\n> # own PID (which is the parent of the shell launched by psql) to a\n> # file.\n> \n> This is also incorrect.\n\nThank you for your comments\n\nI will check whether the test works in Windows and remove SKIP if possible.\nAlso, I'll fix the comment in either case.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 14 Aug 2023 23:37:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" }, { "msg_contents": "On Mon, 14 Aug 2023 23:37:25 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Mon, 14 Aug 2023 08:29:25 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Sun, Aug 13, 2023 at 11:22:33AM +0200, Fabien COELHO wrote:\n> > > Test run is ok on my Ubuntu laptop.\n> > \n> > I have a few comments about this patch.\n> > \n> > On HEAD and even after this patch, we still have the following:\n> > SKIP: { skip \"cancel test requires a Unix shell\", 2 if $windows_os;\n> > \n> > Could the SKIP be removed for $windows_os? If not, this had better be\n> > documented because the reason for the skip becomes incorrect.\n> > \n> > The comment at the top of the SKIP block still states the following:\n> > # There is, as of this writing, no documented way to get the PID of\n> > # the process from IPC::Run. As a workaround, we have psql print its\n> > # own PID (which is the parent of the shell launched by psql) to a\n> > # file.\n> > \n> > This is also incorrect.\n> \n> Thank you for your comments\n> \n> I will check whether the test works in Windows and remove SKIP if possible.\n> Also, I'll fix the comment in either case.\n\nI checked if the test using IPC::Run::signal can work on Windows, and\nconfirmed that this didn't work because sending SIGINT caused the test\nitself to terminate. Here are the results:\n\nt/001_basic.pl ........... 
ok \nt/010_tab_completion.pl .. skipped: readline is not supported by this build \nt/020_cancel.pl .......... Terminating on signal SIGINT(2)\n\nTherefore, this test should be skipped on Windows.\n\nI attached the update patch. I removed the incorrect comments and\nunnecessary lines. Also, I rewrote the test to use \"skip_all\" instead\nof SKIP because we skip the whole test rather than a part of it.\n\nRegards,\nYugo Nagata\n\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 6 Sep 2023 00:45:24 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" }, { "msg_contents": "On Wed, Sep 06, 2023 at 12:45:24AM +0900, Yugo NAGATA wrote:\n> I attached the update patch. I removed the incorrect comments and\n> unnecessary lines. Also, I rewrote the test to use \"skip_all\" instead\n> of SKIP because we skip the whole test rather than a part of it.\n\nThanks for checking how IPC::Run behaves in this case on Windows!\n\nRight. This test is currently setting up a node for nothing, so let's\nskip this test entirely under $windows_os and move on. I'll backpatch\nthat down to 15 once the embargo on REL_16_STABLE is lifted with the\n16.0 tag.\n--\nMichael", "msg_date": "Tue, 12 Sep 2023 15:18:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" }, { "msg_contents": "On Tue, Sep 12, 2023 at 03:18:05PM +0900, Michael Paquier wrote:\n> On Wed, Sep 06, 2023 at 12:45:24AM +0900, Yugo NAGATA wrote:\n> > I attached the update patch. I removed the incorrect comments and\n> > unnecessary lines. 
Also, I rewrote the test to use \"skip_all\" instead\n> > of SKIP because we skip the whole test rather than a part of it.\n> \n> Thanks for checking how IPC::Run behaves in this case on Windows!\n> \n> Right. This test is currently setting up a node for nothing, so let's\n> skip this test entirely under $windows_os and move on. I'll backpatch\n> that down to 15 once the embargo on REL_16_STABLE is lifted with the\n> 16.0 tag.\n\nAt the end, I have split this change into two:\n- One to disable the test to run on Windows, skipping the wasted node\ninitialization, and applied that down to 15.\n- One to switch to signal(), only for HEAD to see what happens in the\nbuildfarm once the test is able to run on platforms that do not\nsupport PPID. I am wondering as well how IPC::Run::signal is stable,\nas it is the first time we would use it, AFAIK.\n--\nMichael", "msg_date": "Wed, 13 Sep 2023 10:20:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" }, { "msg_contents": "On Wed, 13 Sep 2023 10:20:23 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 12, 2023 at 03:18:05PM +0900, Michael Paquier wrote:\n> > On Wed, Sep 06, 2023 at 12:45:24AM +0900, Yugo NAGATA wrote:\n> > > I attached the update patch. I removed the incorrect comments and\n> > > unnecessary lines. Also, I rewrote the test to use \"skip_all\" instead\n> > > of SKIP because we skip the whole test rather than a part of it.\n> > \n> > Thanks for checking how IPC::Run behaves in this case on Windows!\n> > \n> > Right. This test is currently setting up a node for nothing, so let's\n> > skip this test entirely under $windows_os and move on. 
I'll backpatch\n> > that down to 15 once the embargo on REL_16_STABLE is lifted with the\n> > 16.0 tag.\n> \n> At the end, I have split this change into two:\n> - One to disable the test to run on Windows, skipping the wasted node\n> initialization, and applied that down to 15.\n> - One to switch to signal(), only for HEAD to see what happens in the\n> buildfarm once the test is able to run on platforms that do not\n> support PPID. I am wondering as well how IPC::Run::signal is stable,\n> as it is the first time we would use it, AFAIK.\n\nThank you for pushing them.\nI agree with the suspicion about IPC::Run::signal, so I'll also\ncheck the buildfarm result.\n\nRegards,\nYugo Nagata\n\n> --\n> Michael\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 13 Sep 2023 12:58:13 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Make psql's qeury canceling test simple by using signal()\n routine of IPC::Run" } ]
[ { "msg_contents": "We have a lot of Perl scripts in the tree, mostly code generation and \nTAP tests. Occasionally, these scripts produce warnings. These are \nAFAICT always mistakes on the developer side (true positives). Typical \nexamples are warnings from genbki.pl or related when you make a mess in \nthe catalog files during development, or warnings from tests when they \nmassage a config file that looks different on different hosts, or \nmistakes during merges (e.g., duplicate subroutine definitions), or just \nmistakes that weren't noticed, because, you know, there is a lot of \noutput in a verbose build.\n\nI wanted to figure out if we can catch these more reliably, in the style \nof -Werror. AFAICT, there is no way to automatically turn all warnings \ninto fatal errors. But there is a way to do it per script, by replacing\n\n use warnings;\n\nby\n\n use warnings FATAL => 'all';\n\nSee attached patch to try it out.\n\nThe documentation at <https://perldoc.perl.org/warnings#Fatal-Warnings> \nappears to sort of hand-wave against doing that. Their argument appears \nto be something like, the modules you use might in the future produce \nadditional warnings, thus breaking your scripts. On balance, I'd take \nthat risk, if it means I would notice the warnings in a more timely and \nrobust way. But that's just me at a moment in time.\n\nThoughts?\n\n\nNow to some funny business. If you apply this patch, the Cirrus CI \nLinux tasks will fail, because they get an undefined return from \ngetprotobyname() in PostgreSQL/Test/Cluster.pm. Huh? 
I need this patch \nto get past that:\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm \nb/src/test/perl/PostgreSQL/Test/Cluster.pm\nindex 3fa679ff97..dfe7bc7b1a 100644\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -1570,7 +1570,7 @@ sub can_bind\n my ($host, $port) = @_;\n my $iaddr = inet_aton($host);\n my $paddr = sockaddr_in($port, $iaddr);\n- my $proto = getprotobyname(\"tcp\");\n+ my $proto = getprotobyname(\"tcp\") || 6;\n\n socket(SOCK, PF_INET, SOCK_STREAM, $proto)\n or die \"socket failed: $!\";\n\nWhat is going on there? Does this host not have /etc/protocols, or \nsomething like that?\n\n\nThere are also a couple of issues in the MSVC legacy build system that \nwould need to be tightened up in order to survive with fatal Perl \nwarnings. Obviously, there is a question whether it's worth spending \nany time on that anymore.", "msg_date": "Thu, 10 Aug 2023 07:58:27 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Make all Perl warnings fatal" }, { "msg_contents": "To avoid a complete bloodbath on cfbot, here is an updated patch set \nthat includes a workaround for the getprotobyname() issue mentioned below.\n\n\nOn 10.08.23 07:58, Peter Eisentraut wrote:\n> We have a lot of Perl scripts in the tree, mostly code generation and \n> TAP tests.  Occasionally, these scripts produce warnings.  These are \n> AFAICT always mistakes on the developer side (true positives).  
Typical \n> examples are warnings from genbki.pl or related when you make a mess in \n> the catalog files during development, or warnings from tests when they \n> massage a config file that looks different on different hosts, or \n> mistakes during merges (e.g., duplicate subroutine definitions), or just \n> mistakes that weren't noticed, because, you know, there is a lot of \n> output in a verbose build.\n> \n> I wanted to figure put if we can catch these more reliably, in the style \n> of -Werror.  AFAICT, there is no way to automatically turn all warnings \n> into fatal errors.  But there is a way to do it per script, by replacing\n> \n>     use warnings;\n> \n> by\n> \n>     use warnings FATAL => 'all';\n> \n> See attached patch to try it out.\n> \n> The documentation at <https://perldoc.perl.org/warnings#Fatal-Warnings> \n> appears to sort of hand-wave against doing that.  Their argument appears \n> to be something like, the modules you use might in the future produce \n> additional warnings, thus breaking your scripts.  On balance, I'd take \n> that risk, if it means I would notice the warnings in a more timely and \n> robust way.  But that's just me at a moment in time.\n> \n> Thoughts?\n> \n> \n> Now to some funny business.  If you apply this patch, the Cirrus CI \n> Linux tasks will fail, because they get an undefined return from \n> getprotobyname() in PostgreSQL/Test/Cluster.pm.  Huh?  
I need this patch \n> to get past that:\n> \n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm \n> b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 3fa679ff97..dfe7bc7b1a 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -1570,7 +1570,7 @@ sub can_bind\n>     my ($host, $port) = @_;\n>     my $iaddr = inet_aton($host);\n>     my $paddr = sockaddr_in($port, $iaddr);\n> -   my $proto = getprotobyname(\"tcp\");\n> +   my $proto = getprotobyname(\"tcp\") || 6;\n> \n>     socket(SOCK, PF_INET, SOCK_STREAM, $proto)\n>       or die \"socket failed: $!\";\n> \n> What is going on there?  Does this host not have /etc/protocols, or \n> something like that?\n> \n> \n> There are also a couple of issues in the MSVC legacy build system that \n> would need to be tightened up in order to survive with fatal Perl \n> warnings.  Obviously, there is a question whether it's worth spending \n> any time on that anymore.", "msg_date": "Mon, 21 Aug 2023 08:20:08 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 2023-08-21 Mo 02:20, Peter Eisentraut wrote:\n> To avoid a complete bloodbath on cfbot, here is an updated patch set \n> that includes a workaround for the getprotobyname() issue mentioned \n> below.\n>\n>\n> On 10.08.23 07:58, Peter Eisentraut wrote:\n>> We have a lot of Perl scripts in the tree, mostly code generation and \n>> TAP tests.  Occasionally, these scripts produce warnings.  These are \n>> AFAICT always mistakes on the developer side (true positives).  
\n>> Typical examples are warnings from genbki.pl or related when you make \n>> a mess in the catalog files during development, or warnings from \n>> tests when they massage a config file that looks different on \n>> different hosts, or mistakes during merges (e.g., duplicate \n>> subroutine definitions), or just mistakes that weren't noticed, \n>> because, you know, there is a lot of output in a verbose build.\n>>\n>> I wanted to figure put if we can catch these more reliably, in the \n>> style of -Werror.  AFAICT, there is no way to automatically turn all \n>> warnings into fatal errors.  But there is a way to do it per script, \n>> by replacing\n>>\n>>      use warnings;\n>>\n>> by\n>>\n>>      use warnings FATAL => 'all';\n>>\n>> See attached patch to try it out.\n>>\n>> The documentation at \n>> <https://perldoc.perl.org/warnings#Fatal-Warnings> appears to sort of \n>> hand-wave against doing that.  Their argument appears to be something \n>> like, the modules you use might in the future produce additional \n>> warnings, thus breaking your scripts.  On balance, I'd take that \n>> risk, if it means I would notice the warnings in a more timely and \n>> robust way.  
But that's just me at a moment in time.\n>>\n>> Thoughts?\n\n\nIt's not really the same as -Werror, because many warnings can be \ngenerated at runtime rather than compile-time.\n\nStill, I guess that might not matter too much since apart from plperl we \nonly use perl for building / testing.\n\nRegarding the dangers mentioned, I guess we can undo it if it proves a \nnuisance.\n\n+1 to getting rid if the unnecessary call to getprotobyname().\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Mon, 21 Aug 2023 11:51:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On Mon, Aug 21, 2023 at 11:51:24AM -0400, Andrew Dunstan wrote:\n> It's not really the same as -Werror, because many warnings can be generated\n> at runtime rather than compile-time.\n> \n> Still, I guess that might not matter too much since apart from plperl we\n> only use perl for building / testing.\n\nHowever, is it possible to trust the out-of-core perl modules posted\non CPAN, assuming that these will never produce warnings? I've never\nseen any issues with IPC::Run in these last years, so perhaps that's\nOK in the long-run.\n\n> Regarding the dangers mentioned, I guess we can undo it if it proves a\n> nuisance.\n\nYeah. I am wondering what the buildfarm would say with this change.\n\n> +1 to getting rid if the unnecessary call to getprotobyname().\n\nLooking around here..\nhttps://perldoc.perl.org/perlipc#Sockets%3A-Client%2FServer-Communication\n\nHmm. Are you sure that this is OK even in the case where the TAP\ntests run on Windows without unix-domain socket support? 
The CI runs\non Windows, but always with unix domain sockets around as far as I\nknow.\n--\nMichael", "msg_date": "Tue, 22 Aug 2023 13:05:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 2023-08-22 Tu 00:05, Michael Paquier wrote:\n> On Mon, Aug 21, 2023 at 11:51:24AM -0400, Andrew Dunstan wrote:\n>> It's not really the same as -Werror, because many warnings can be generated\n>> at runtime rather than compile-time.\n>>\n>> Still, I guess that might not matter too much since apart from plperl we\n>> only use perl for building / testing.\n> However, is it possible to trust the out-of-core perl modules posted\n> on CPAN, assuming that these will never produce warnings? I've never\n> seen any issues with IPC::Run in these last years, so perhaps that's\n> OK in the long-run.\n\n\nIf we do find any such issues then warnings can be turned off locally. \nWe already do that in several places.\n\n\n>> Regarding the dangers mentioned, I guess we can undo it if it proves a\n>> nuisance.\n> Yeah. I am wondering what the buildfarm would say with this change.\n>\n>> +1 to getting rid if the unnecessary call to getprotobyname().\n> Looking around here..\n> https://perldoc.perl.org/perlipc#Sockets%3A-Client%2FServer-Communication\n>\n> Hmm. Are you sure that this is OK even in the case where the TAP\n> tests run on Windows without unix-domain socket support? The CI runs\n> on Windows, but always with unix domain sockets around as far as I\n> know.\n\n\nThe socket call in question is for a PF_INET socket, so this has nothing \nat all to do with unix domain sockets. See the man page for socket() (2) \nfor an explanation of why 0 is ok in this case. 
(There's only one \nprotocol that matches the rest of the parameters).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Tue, 22 Aug 2023 09:11:25 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 2023-Aug-10, Peter Eisentraut wrote:\n\n> I wanted to figure put if we can catch these more reliably, in the style of\n> -Werror. AFAICT, there is no way to automatically turn all warnings into\n> fatal errors. But there is a way to do it per script, by replacing\n> \n> use warnings;\n> \n> by\n> \n> use warnings FATAL => 'all';\n> \n> See attached patch to try it out.\n\nBTW in case we do find that there's some unforeseen problem and we want\nto roll back, it would be great to have a way to disable this without\nhaving to edit every single Perl file again later. However, I didn't\nfind a way to do it -- I thought about creating a separate PgWarnings.pm\nfile that would do the \"use warnings FATAL => 'all'\" dance and which\nevery other Perl file would use or include; but couldn't make it work.\nMaybe some Perl expert knows a good answer to this.\n\nMaybe the BEGIN block of each file can `eval` a new PgWarnings.pm that\nemits the \"use warnings\" line?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"\n\n\n", "msg_date": "Tue, 22 Aug 2023 15:20:28 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 2023-08-22 Tu 09:20, Alvaro Herrera wrote:\n> On 2023-Aug-10, Peter Eisentraut wrote:\n>\n>> I wanted to figure put if we can catch these more reliably, in the style of\n>> -Werror. 
AFAICT, there is no way to automatically turn all warnings into\n>> fatal errors. But there is a way to do it per script, by replacing\n>>\n>> use warnings;\n>>\n>> by\n>>\n>> use warnings FATAL => 'all';\n>>\n>> See attached patch to try it out.\n> BTW in case we do find that there's some unforeseen problem and we want\n> to roll back, it would be great to have a way to disable this without\n> having to edit every single Perl file again later. However, I didn't\n> find a way to do it -- I thought about creating a separate PgWarnings.pm\n> file that would do the \"use warnings FATAL => 'all'\" dance and which\n> every other Perl file would use or include; but couldn't make it work.\n> Maybe some Perl expert knows a good answer to this.\n>\n> Maybe the BEGIN block of each file can `eval` a new PgWarnings.pm that\n> emits the \"use warnings\" line?\n\n\nOnce we try it, I doubt we would want to revoke it globally, and if we \ndid I'd rather not be left with a wart like this. As I mentioned \nupthread, it is possible to override the setting locally. The manual \npage for the warnings pragma contains details.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Tue, 22 Aug 2023 14:46:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 21.08.23 17:51, Andrew Dunstan wrote:\n> Still, I guess that might not matter too much since apart from plperl we \n> only use perl for building / testing.\n> \n> Regarding the dangers mentioned, I guess we can undo it if it proves a \n> nuisance.\n> \n> +1 to getting rid if the unnecessary call to getprotobyname().\n\nI have committed the latter part.\n\nThe rest would still depend on some fixes for the MSVC build system, so \nI'll hold that until we decide what to do with that.\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 21:05:43 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> On 2023-Aug-10, Peter Eisentraut wrote:\n>\n>> I wanted to figure put if we can catch these more reliably, in the style of\n>> -Werror. AFAICT, there is no way to automatically turn all warnings into\n>> fatal errors. 
But there is a way to do it per script, by replacing\n>> \n>> use warnings;\n>> \n>> by\n>> \n>> use warnings FATAL => 'all';\n>> \n>> See attached patch to try it out.\n>\n> BTW in case we do find that there's some unforeseen problem and we want\n> to roll back, it would be great to have a way to disable this without\n> having to edit every single Perl file again later. However, I didn't\n> find a way to do it -- I thought about creating a separate PgWarnings.pm\n> file that would do the \"use warnings FATAL => 'all'\" dance and which\n> every other Perl file would use or include; but couldn't make it work.\n> Maybe some Perl expert knows a good answer to this.\n\nLike most pragmas (all-lower-case module names), `warnings` affects the\ncurrently-compiling lexical scope, so to have a module like PgWarnings\ninject it into the module that uses it, you'd call warnings->import in\nits import method (which gets called when the `use PgWarnings;``\nstatement is compiled, e.g.:\n\npackage PgWarnings;\n\nsub import {\n\twarnings->import(FATAL => 'all');\n}\n\nI wouldn't bother with a whole module just for that, but if we have a\ngroup of pragmas or modules we always want to enable/import and have the\nability to change this set without having to edit all the files, it's\nquite common to have a ProjectName::Policy module that does that. For\nexample, to exclude warnings that are unsafe, pointless, or impossible\nto fatalise (c.f. 
https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS):\n\npackage PostgreSQL::Policy;\n\nsub import {\n\tstrict->import;\n\twarnings->import(\n\t\tFATAL => 'all',\n\t\tNONFATAL => qw(exec internal malloc recursion),\n\t);\n\twarnings->uniport(qw(once));\n}\n\nNow that we require Perl 5.14, we might want to consider enabling its\nfeature bundle as well, with:\n\n\tfeature->import(':5.14')\n\nAlthough the only features of note that adds are:\n\n - say: the `say` function, like `print` but appends a newline\n\n - state: `state` variables, like `my` but only initialised the first\n time the function they're in is called, and the value persists\n between calls (like function-scoped `static` variables in C)\n\n - unicode_strings: use unicode semantics for characters in the\n 128-255 range, regardless of internal representation\n\n> Maybe the BEGIN block of each file can `eval` a new PgWarnings.pm that\n> emits the \"use warnings\" line?\n\nThat's ugly as sin, and thankfully not necessary.\n\n-ilmari\n\n\n", "msg_date": "Fri, 25 Aug 2023 21:49:34 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 2023-08-25 Fr 16:49, Dagfinn Ilmari Mannsåker wrote:\n> Alvaro Herrera<alvherre@alvh.no-ip.org> writes:\n>\n>> On 2023-Aug-10, Peter Eisentraut wrote:\n>>\n>>> I wanted to figure put if we can catch these more reliably, in the style of\n>>> -Werror. AFAICT, there is no way to automatically turn all warnings into\n>>> fatal errors. But there is a way to do it per script, by replacing\n>>>\n>>> use warnings;\n>>>\n>>> by\n>>>\n>>> use warnings FATAL => 'all';\n>>>\n>>> See attached patch to try it out.\n>> BTW in case we do find that there's some unforeseen problem and we want\n>> to roll back, it would be great to have a way to disable this without\n>> having to edit every single Perl file again later. 
However, I didn't\n>> find a way to do it -- I thought about creating a separate PgWarnings.pm\n>> file that would do the \"use warnings FATAL => 'all'\" dance and which\n>> every other Perl file would use or include; but couldn't make it work.\n>> Maybe some Perl expert knows a good answer to this.\n> Like most pragmas (all-lower-case module names), `warnings` affects the\n> currently-compiling lexical scope, so to have a module like PgWarnings\n> inject it into the module that uses it, you'd call warnings->import in\n> its import method (which gets called when the `use PgWarnings;``\n> statement is compiled, e.g.:\n>\n> package PgWarnings;\n>\n> sub import {\n> \twarnings->import(FATAL => 'all');\n> }\n>\n> I wouldn't bother with a whole module just for that, but if we have a\n> group of pragmas or modules we always want to enable/import and have the\n> ability to change this set without having to edit all the files, it's\n> quite common to have a ProjectName::Policy module that does that. 
For\n> example, to exclude warnings that are unsafe, pointless, or impossible\n> to fatalise (c.f.https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS):\n>\n> package PostgreSQL::Policy;\n>\n> sub import {\n> \tstrict->import;\n> \twarnings->import(\n> \t\tFATAL => 'all',\n> \t\tNONFATAL => qw(exec internal malloc recursion),\n> \t);\n> \twarnings->uniport(qw(once));\n> }\n>\n> Now that we require Perl 5.14, we might want to consider enabling its\n> feature bundle as well, with:\n>\n> \tfeature->import(':5.14')\n>\n> Although the only features of note that adds are:\n>\n> - say: the `say` function, like `print` but appends a newline\n>\n> - state: `state` variables, like `my` but only initialised the first\n> time the function they're in is called, and the value persists\n> between calls (like function-scoped `static` variables in C)\n>\n> - unicode_strings: use unicode semantics for characters in the\n> 128-255 range, regardless of internal representation\n\n\nWe'd probably have to modify the perlcritic rules to account for it. See \n<https://metacpan.org/pod/Perl::Critic::Policy::TestingAndDebugging::RequireUseStrict> \nand similarly for RequireUseWarnings. In any case, it seems a bit like \noverkill.\n\n\n>> Maybe the BEGIN block of each file can `eval` a new PgWarnings.pm that\n>> emits the \"use warnings\" line?\n> That's ugly as sin, and thankfully not necessary.\n>\n\nAgreed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Sun, 27 Aug 2023 07:23:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 10.08.23 07:58, Peter Eisentraut wrote:\n> There are also a couple of issues in the MSVC legacy build system that \n> would need to be tightened up in order to survive with fatal Perl \n> warnings.  
Obviously, there is a question whether it's worth spending \n> any time on that anymore.\n\nIt looks like there are no principled objections to the overall idea \nhere, but given this dependency on the MSVC build system removal, I'm \ngoing to set this patch to Returned with feedback for now.\n\n\n\n", "msg_date": "Tue, 12 Sep 2023 07:42:45 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 12.09.23 07:42, Peter Eisentraut wrote:\n> On 10.08.23 07:58, Peter Eisentraut wrote:\n>> There are also a couple of issues in the MSVC legacy build system that \n>> would need to be tightened up in order to survive with fatal Perl \n>> warnings.  Obviously, there is a question whether it's worth spending \n>> any time on that anymore.\n> \n> It looks like there are no principled objections to the overall idea \n> here, but given this dependency on the MSVC build system removal, I'm \n> going to set this patch to Returned with feedback for now.\n\nNow that that is done, here is an updated patch for this.\n\nI found one more bug in the Perl code because of this, a fix for which \nis included here.\n\nWith this fix, this passes all the CI tests on all platforms.", "msg_date": "Fri, 22 Dec 2023 22:33:18 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 22.12.23 22:33, Peter Eisentraut wrote:\n> On 12.09.23 07:42, Peter Eisentraut wrote:\n>> On 10.08.23 07:58, Peter Eisentraut wrote:\n>>> There are also a couple of issues in the MSVC legacy build system \n>>> that would need to be tightened up in order to survive with fatal \n>>> Perl warnings.  
Obviously, there is a question whether it's worth \n>>> spending any time on that anymore.\n>>\n>> It looks like there are no principled objections to the overall idea \n>> here, but given this dependency on the MSVC build system removal, I'm \n>> going to set this patch to Returned with feedback for now.\n> \n> Now that that is done, here is an updated patch for this.\n> \n> I found one more bug in the Perl code because of this, a fix for which \n> is included here.\n> \n> With this fix, this passes all the CI tests on all platforms.\n\ncommitted\n\n\n", "msg_date": "Fri, 29 Dec 2023 20:27:31 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On Sat, Dec 30, 2023 at 12:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> committed\n\nWith the commit c5385929 converting perl warnings to FATAL, use of\npsql/safe_psql with timeout parameters [1] fail with the following\nerror:\n\nUse of uninitialized value $ret in bitwise and (&) at\n/home/ubuntu/postgres/src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\nline 2015.\n\nPerhaps assigning a default error code to $ret instead of undef in\nPostgreSQL::Test::Cluster - psql() function is the solution.\n\n[1]\nuse strict;\nuse warnings FATAL => 'all';\nuse PostgreSQL::Test::Cluster;\nuse PostgreSQL::Test::Utils;\nuse Test::More;\n\nmy $node = PostgreSQL::Test::Cluster->new('test');\n$node->init;\n$node->start;\n\nmy ($stdout, $stderr, $timed_out);\nmy $cmdret = $node->psql('postgres', q[SELECT pg_sleep(600)],\n stdout => \\$stdout, stderr => \\$stderr,\n timeout => 5,\n timed_out => \\$timed_out,\n extra_params => ['--single-transaction'],\n on_error_die => 1);\nprint \"pg_sleep timed out\" if $timed_out;\n\ndone_testing();\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 16:59:24 
+0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 11.01.24 12:29, Bharath Rupireddy wrote:\n> On Sat, Dec 30, 2023 at 12:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> committed\n> \n> With the commit c5385929 converting perl warnings to FATAL, use of\n> psql/safe_psql with timeout parameters [1] fail with the following\n> error:\n> \n> Use of uninitialized value $ret in bitwise and (&) at\n> /home/ubuntu/postgres/src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\n> line 2015.\n\nI think what is actually relevant is the timed_out parameter, otherwise \nthe psql/safe_psql function ends up calling \"die\" and you don't get any \nfurther.\n\n> Perhaps assigning a default error code to $ret instead of undef in\n> PostgreSQL::Test::Cluster - psql() function is the solution.\n\nI would put this code\n\n my $core = $ret & 128 ? \" (core dumped)\" : \"\";\n die \"psql exited with signal \"\n . ($ret & 127)\n . 
\"$core: '$$stderr' while running '@psql_params'\"\n if $ret & 127;\n $ret = $ret >> 8;\n\ninside a if (defined $ret) block.\n\nThen the behavior would be that the whole function returns undef on \ntimeout, which is usefully different from returning 0 (and matches \nprevious behavior).\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 16:33:44 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On Fri, Jan 12, 2024 at 9:03 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 11.01.24 12:29, Bharath Rupireddy wrote:\n> > On Sat, Dec 30, 2023 at 12:57 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >>\n> >> committed\n> >\n> > With the commit c5385929 converting perl warnings to FATAL, use of\n> > psql/safe_psql with timeout parameters [1] fail with the following\n> > error:\n> >\n> > Use of uninitialized value $ret in bitwise and (&) at\n> > /home/ubuntu/postgres/src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\n> > line 2015.\n>\n> I think what is actually relevant is the timed_out parameter, otherwise\n> the psql/safe_psql function ends up calling \"die\" and you don't get any\n> further.\n>\n> > Perhaps assigning a default error code to $ret instead of undef in\n> > PostgreSQL::Test::Cluster - psql() function is the solution.\n>\n> I would put this code\n>\n> my $core = $ret & 128 ? \" (core dumped)\" : \"\";\n> die \"psql exited with signal \"\n> . ($ret & 127)\n> . 
\"$core: '$$stderr' while running '@psql_params'\"\n> if $ret & 127;\n> $ret = $ret >> 8;\n>\n> inside a if (defined $ret) block.\n>\n> Then the behavior would be that the whole function returns undef on\n> timeout, which is usefully different from returning 0 (and matches\n> previous behavior).\n\nWFM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 21:21:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On Fri, Jan 12, 2024 at 9:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jan 12, 2024 at 9:03 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >\n> > I would put this code\n> >\n> > my $core = $ret & 128 ? \" (core dumped)\" : \"\";\n> > die \"psql exited with signal \"\n> > . ($ret & 127)\n> > . \"$core: '$$stderr' while running '@psql_params'\"\n> > if $ret & 127;\n> > $ret = $ret >> 8;\n> >\n> > inside a if (defined $ret) block.\n> >\n> > Then the behavior would be that the whole function returns undef on\n> > timeout, which is usefully different from returning 0 (and matches\n> > previous behavior).\n>\n> WFM.\n\nI've attached a patch for the above change.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 16 Jan 2024 16:38:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make all Perl warnings fatal" }, { "msg_contents": "On 16.01.24 12:08, Bharath Rupireddy wrote:\n> On Fri, Jan 12, 2024 at 9:21 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Fri, Jan 12, 2024 at 9:03 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>>\n>>> I would put this code\n>>>\n>>> my $core 
= $ret & 128 ? \" (core dumped)\" : \"\";\n>>> die \"psql exited with signal \"\n>>> . ($ret & 127)\n>>> . \"$core: '$$stderr' while running '@psql_params'\"\n>>> if $ret & 127;\n>>> $ret = $ret >> 8;\n>>>\n>>> inside a if (defined $ret) block.\n>>>\n>>> Then the behavior would be that the whole function returns undef on\n>>> timeout, which is usefully different from returning 0 (and matches\n>>> previous behavior).\n>>\n>> WFM.\n> \n> I've attached a patch for the above change.\n\nCommitted, thanks.\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 08:52:33 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "Re: Make all Perl warnings fatal" } ]
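An editor's aside on the fix discussed at the end of this thread: the masks in the Cluster.pm snippet (`$ret & 127`, `$ret & 128`, `$ret >> 8`) are the standard decoding of a raw Unix wait status. The sketch below illustrates that decoding, and why an undefined status from a timeout must be checked before masking; it is written in Python rather than the thread's Perl purely for illustration, and the function name is invented, not part of any PostgreSQL API.

```python
# Decode a raw Unix wait status the way the Cluster.pm snippet does:
# low 7 bits = terminating signal, bit 0x80 = "core dumped",
# high 8 bits = normal exit code.
def decode_wait_status(ret):
    if ret is None:  # the timeout case discussed in the thread
        return None  # masking here would be the warning the patch fixes
    signal = ret & 127
    core_dumped = bool(ret & 128)
    exit_code = ret >> 8
    return signal, core_dumped, exit_code

# A clean exit with status 3 is encoded as 3 << 8.
assert decode_wait_status(3 << 8) == (0, False, 3)
# Death by SIGSEGV (signal 11) with a core dump sets bit 0x80.
assert decode_wait_status(11 | 128) == (11, True, 0)
# An undefined status (psql timed out) yields no decode at all.
assert decode_wait_status(None) is None
```

With fatal warnings enabled, the `None`/undef guard is what separates a clean "timed out" result from a `Use of uninitialized value` death inside the helper.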
[ { "msg_contents": "Hey hackers,\n\nI'm looking at the code of the free space map and the visibility map; both modules\ncall `ExtendBufferedRelTo` to extend their files when necessary. What confuses\nme is that `vm_extend` has an extra step that sends a shared invalidation message to force\nother backends to close their smgr references.\n\nMy question is: why does `fsm_extend` not need the invalidation step?\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 10 Aug 2023 16:36:21 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[question] difference between vm_extend and fsm_extend" } ]
[ { "msg_contents": "I have an range partition and query below:\ncreate table p_range(a int, b int) partition by range (a,b); create table\np_range1 partition of p_range for values from (1,1) to (3,3); create table\np_range2 partition of p_range for values from (4,4) to (6,6); explain\nselect * from p_range where b =2;\n QUERY PLAN\n--------------------------------------------------------------------------\n Append (cost=0.00..76.61 rows=22 width=8)\n -> Seq Scan on p_range1 p_range_1 (cost=0.00..38.25 rows=11 width=8)\n Filter: (b = 2)\n -> Seq Scan on p_range2 p_range_2 (cost=0.00..38.25 rows=11 width=8)\n Filter: (b = 2)\n(5 rows)\n\nThe result of EXPLAIN shows that no partition prune happened.\nAnd gen_prune_steps_from_opexps() has comments that can answer the result.\n/*\n* For range partitioning, if we have no clauses for the current key,\n* we can't consider any later keys either, so we can stop here.\n*/\nif (part_scheme->strategy == PARTITION_STRATEGY_RANGE &&\nclauselist == NIL)\nbreak;\n\nBut I want to know why we don't prune when just have latter partition key\nin whereClause.\nThanks.", "msg_date": "Thu, 10 Aug 2023 18:16:00 +0800", "msg_from": "tender wang <tndrwang@gmail.com>", "msg_from_op": true, "msg_subject": "[question] multil-column range partition prune" }, { "msg_contents": "On Thu, 10 Aug 2023 at 12:16, tender wang <tndrwang@gmail.com> wrote:\n>\n> I have an range partition and query below:\n> create table p_range(a int, b int) partition by range (a,b); create table p_range1 partition of p_range for values from (1,1) to (3,3); create table p_range2 partition of p_range for values from (4,4) to (6,6); explain select * from p_range where b =2;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> Append (cost=0.00..76.61 rows=22 width=8)\n> -> Seq Scan on p_range1 p_range_1 (cost=0.00..38.25 rows=11 width=8)\n> Filter: (b = 2)\n> -> Seq Scan on p_range2 p_range_2 (cost=0.00..38.25 rows=11 width=8)\n> Filter: (b = 2)\n> (5 rows)\n>\n> The result of EXPLAIN shows that no partition prune happened.\n> And gen_prune_steps_from_opexps() has comments that can answer the result.\n> /*\n> * For range partitioning, if we have no clauses for the current key,\n> * we can't consider any later keys either, so we can stop here.\n> */\n> if (part_scheme->strategy == PARTITION_STRATEGY_RANGE &&\n> clauselist == NIL)\n> break;\n>\n> But I want to know why we don't prune when just have latter partition key in whereClause.\n> Thanks.\n\nMulti-column range partitioning uses row compares for range\npartitions. For single columns that doesn't matter much, but for\nmultiple columns it is slightly less intuitive. 
But because they are\nrow compares, that means for the given partitions, the values\ncontained would be:\n\np_range1 contains rows with\n- A = 1, B >= 1\n- A > 1 and A < 3, B: any value\n- A = 3, B < 3\n\np_range2 contains rows with:\n- A = 4, B >= 4\n- A > 4 and A < 6, B: any value\n- A = 6, B < 6\n\nAs you can see, each partition contains a set of rows that may have\nany value for B, and thus these partitions cannot be pruned based on\nthe predicate.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 10 Aug 2023 12:30:53 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [question] multil-column range partition prune" }, { "msg_contents": "## tender wang (tndrwang@gmail.com):\n\n> But I want to know why we don't prune when just have latter partition key\n> in whereClause.\n\nStart with the high level documentation\nhttps://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-PARTITION\nwhere the 5th paragraph points you to\nhttps://www.postgresql.org/docs/current/functions-comparisons.html#ROW-WISE-COMPARISON\nwhich has a detailed explanation of row comparison.\n\nRegards,\nChristoph\n\n-- \nSpare Space.\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:45:54 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: [question] multil-column range partition prune" } ]
[ { "msg_contents": "Hi,\n\nPG CI is added starting from PG 15, adding PG CI to PG 14 and below\ncould be beneficial. So, firstly I tried adding it to the\nREL_14_STABLE branch. If that makes sense, I will try to add PG CI to\nother old PG releases.\n\n'Add PG CI to PG 14' patch is attached. I merged both CI commits and\nthe least amount of commits to pass the CI on all OSes.\n\nAddition to CI commits, other changes are:\n\n1_ 76e38b37a5f179d4c9d2865ff31b79130407530b is added for debugging\nWindows. Also a couple of SSL tests were failing without this because\nthe log file is empty. Example failures on 001_ssltests.pl:\n\n# Failed test 'certificate authorization succeeds with DN mapping:\nlog matches'\n# at c:/cirrus/src/test/perl/PostgresNode.pm line 2195.\n# ''\n# doesn't match '(?^:connection authenticated:\nidentity=\"CN=ssltestuser-dn,OU=Testing,OU=Engineering,O=PGDG\"\nmethod=cert)'\n\n# Failed test 'certificate authorization succeeds with CN mapping:\nlog matches'\n# at c:/cirrus/src/test/perl/PostgresNode.pm line 2195.\n# ''\n# doesn't match '(?^:connection authenticated:\nidentity=\"CN=ssltestuser-dn,OU=Testing,OU=Engineering,O=PGDG\"\nmethod=cert)'\n\n# Failed test 'certificate authorization fails with client cert\nbelonging to another user: log matches'\n# at c:/cirrus/src/test/perl/PostgresNode.pm line 2243.\n# ''\n# doesn't match '(?^:connection authenticated:\nidentity=\"CN=ssltestuser\" method=cert)'\n# Looks like you failed 3 tests of 110.\n\n2_ 45f52709d86ceaaf282a440f6311c51fc526340b is added for fixing socket\ndirectories on Windows.\n\n3_ I removed '--with-zstd' since it was not supported yet.\n\nAny kind of feedback would be appreciated.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 10 Aug 2023 17:43:24 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Add PG CI to older PG releases" }, { "msg_contents": "On 10.08.23 16:43, Nazir Bilal Yavuz wrote:\n> 1_ 
76e38b37a5f179d4c9d2865ff31b79130407530b is added for debugging\n> Windows. Also a couple of SSL tests were failing without this because\n> the log file is empty. Example failures on 001_ssltests.pl:\n\nI see only one attachment, so it's not clear what these commit hashes\nrefer to.\n\n\n", "msg_date": "Thu, 10 Aug 2023 17:05:37 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Add PG CI to older PG releases" }, { "msg_contents": "Hi,\n\nOn Thu, 10 Aug 2023 at 18:05, Peter Eisentraut <peter@eisentraut.org> wrote:\n> I see only one attachment, so it's not clear what these commit hashes\n> refer to.\n\nI split the commit into 3 parts.\n\nv2-0001-windows-Only-consider-us-to-be-running-as-service.patch is an\nolder commit (59751ae47fd43add30350a4258773537e98d4063). A couple of\ntests were failing without this because the log file was empty and the\ntests were comparing strings against the log files (like in the first\nmail).\n\nv2-0002-Make-PG_TEST_USE_UNIX_SOCKETS-work-for-tap-tests-.patch is a\nslightly modified version of 45f52709d86ceaaf282a440f6311c51fc526340b\nto backpatch it to PG 14. Without this, Windows tests were failing\nwhen they tried to create sockets with the 'unix_socket_directories' path.\n\nv2-0003-ci-Add-PG-CI-to-PG-14.patch adds the files that are needed for CI\nto work. I copied these files from the REL_15_STABLE branch. Then I\nremoved '--with-zstd' from .cirrus.yml since it was not supported yet.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 10 Aug 2023 19:55:15 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add PG CI to older PG releases" }, { "msg_contents": "Nazir Bilal Yavuz <byavuz81@gmail.com> writes:\n> PG CI is added starting from PG 15, adding PG CI to PG 14 and below\n> could be beneficial. So, firstly I tried adding it to the\n> REL_14_STABLE branch. 
If that makes sense, I will try to add PG CI to\n> other old PG releases.\n\nI'm not actually sure this is worth spending time on. We don't\ndo new development in three-release-back branches, nor do we have\nso many spare CI cycles that we can routinely run CI testing in\nthose branches [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20230808021541.7lbzdefvma7qmn3w%40awork3.anarazel.de\n\n\n", "msg_date": "Thu, 10 Aug 2023 18:09:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add PG CI to older PG releases" }, { "msg_contents": "Hi,\n\nOn 2023-08-10 19:55:15 +0300, Nazir Bilal Yavuz wrote:\n> v2-0001-windows-Only-consider-us-to-be-running-as-service.patch is an\n> older commit (59751ae47fd43add30350a4258773537e98d4063). A couple of\n> tests were failing without this because the log file was empty and the\n> tests were comparing strings with the log files (like in the first\n> mail).\n\nHm. I'm a bit worried about backpatching this one, it's not a small\nbehavioural change. But the prior behaviour is pretty awful and could very\njustifiably be viewed as a bug.\n\nAt the very least this would need to be combined with\n\ncommit 950e64fa46b164df87b5eb7c6e15213ab9880f87\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2022-07-18 17:06:34 -0700\n\n Use STDOUT/STDERR_FILENO in most of syslogger.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Aug 2023 16:00:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add PG CI to older PG releases" }, { "msg_contents": "Hi,\n\nOn 2023-08-10 18:09:03 -0400, Tom Lane wrote:\n> Nazir Bilal Yavuz <byavuz81@gmail.com> writes:\n> > PG CI is added starting from PG 15, adding PG CI to PG 14 and below\n> > could be beneficial. So, firstly I tried adding it to the\n> > REL_14_STABLE branch. 
If that makes sense, I will try to add PG CI to\n> > other old PG releases.\n> \n> I'm not actually sure this is worth spending time on. We don't\n> do new development in three-release-back branches, nor do we have\n> so many spare CI cycles that we can routinely run CI testing in\n> those branches [1].\n\nI find it much less stressful to backpatch if I can test the patches via CI\nfirst. I don't think I'm alone in that - Alvaro for example has previously\ndone this in a more limited fashion (no windows):\nhttps://postgr.es/m/20220928192226.4c6zeenujaoqq4bq%40alvherre.pgsql\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Aug 2023 17:44:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add PG CI to older PG releases" }, { "msg_contents": "Hi,\n\nOn Fri, 11 Aug 2023 at 02:00, Andres Freund <andres@anarazel.de> wrote:\n> At the very least this would need to be combined with\n>\n> commit 950e64fa46b164df87b5eb7c6e15213ab9880f87\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2022-07-18 17:06:34 -0700\n>\n> Use STDOUT/STDERR_FILENO in most of syslogger.\n\n950e64fa46b164df87b5eb7c6e15213ab9880f87 needs to be combined with\n92f478657c5544eba560047c39eba8a030ddb83e. So, I combined\n59751ae47fd43add30350a4258773537e98d4063,\n950e64fa46b164df87b5eb7c6e15213ab9880f87 and\n92f478657c5544eba560047c39eba8a030ddb83e in a single commit [1].\n\nv2-0001-windows-Fix-windows-logging-problems.patch is a combination of\n[1] and the rest are the same.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 14 Aug 2023 12:46:47 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add PG CI to older PG releases" } ]
[ { "msg_contents": "By default, PostgreSQL doesn't explicitly choose whether to link\nstatically or dynamically against LLVM when LLVM JIT is enabled (e.g.:\n`./configure --with-llvm`).\n\n`llvm-config` will choose to dynamically link by default.\n\nIn order to statically link, one must pass `--link-static` to\n`llvm-config` when listing linker flags (`--ldflags`) and libraries\n(`--libs`).\n\nThis patch enables custom flags to be passed to `llvm-config` linker\nrelated invocations through the environment variable\n`LLVM_CONFIG_LINK_ARGS`.\n\nTo statically link against LLVM it suffices, then, to call `configure`\nwith environment variable `LLVM_CONFIG_LINK_ARGS=--link-static`.\n---\n config/llvm.m4 | 5 +++--\n configure | 6 ++++--\n 2 files changed, 7 insertions(+), 4 deletions(-)\n\ndiff --git a/config/llvm.m4 b/config/llvm.m4\nindex 3a75cd8b4d..712bd3de6c 100644\n--- a/config/llvm.m4\n+++ b/config/llvm.m4\n@@ -13,6 +13,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n AC_REQUIRE([AC_PROG_AWK])\n \n AC_ARG_VAR(LLVM_CONFIG, [path to llvm-config command])\n+ AC_ARG_VAR(LLVM_CONFIG_LINK_ARGS, [extra arguments for llvm-config linker related flags])\n PGAC_PATH_PROGS(LLVM_CONFIG, llvm-config llvm-config-7 llvm-config-6.0 llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n \n # no point continuing if llvm wasn't found\n@@ -52,7 +53,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n esac\n done\n \n- for pgac_option in `$LLVM_CONFIG --ldflags`; do\n+ for pgac_option in `$LLVM_CONFIG --ldflags $LLVM_CONFIG_LINK_ARGS`; do\n case $pgac_option in\n -L*) LDFLAGS=\"$LDFLAGS $pgac_option\";;\n esac\n@@ -84,7 +85,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n # And then get the libraries that need to be linked in for the\n # selected components. 
They're large libraries, we only want to\n # link them into the LLVM using shared library.\n- for pgac_option in `$LLVM_CONFIG --libs --system-libs $pgac_components`; do\n+ for pgac_option in `$LLVM_CONFIG --libs --system-libs $LLVM_CONFIG_LINK_ARGS $pgac_components`; do\n case $pgac_option in\n -l*) LLVM_LIBS=\"$LLVM_LIBS $pgac_option\";;\n esac\ndiff --git a/configure b/configure\nindex 86ffccb1ee..974b7f2d4e 100755\n--- a/configure\n+++ b/configure\n@@ -1595,6 +1595,8 @@ Some influential environment variables:\n CXX C++ compiler command\n CXXFLAGS C++ compiler flags\n LLVM_CONFIG path to llvm-config command\n+ LLVM_CONFIG_LINK_ARGS\n+ extra arguments for llvm-config linker related flags\n CLANG path to clang compiler to generate bitcode\n CPP C preprocessor\n PKG_CONFIG path to pkg-config utility\n@@ -5200,7 +5202,7 @@ fi\n esac\n done\n \n- for pgac_option in `$LLVM_CONFIG --ldflags`; do\n+ for pgac_option in `$LLVM_CONFIG --ldflags $LLVM_CONFIG_LINK_ARGS`; do\n case $pgac_option in\n -L*) LDFLAGS=\"$LDFLAGS $pgac_option\";;\n esac\n@@ -5232,7 +5234,7 @@ fi\n # And then get the libraries that need to be linked in for the\n # selected components. 
They're large libraries, we only want to\n # link them into the LLVM using shared library.\n- for pgac_option in `$LLVM_CONFIG --libs --system-libs $pgac_components`; do\n+ for pgac_option in `$LLVM_CONFIG --libs --system-libs $LLVM_CONFIG_LINK_ARGS $pgac_components`; do\n case $pgac_option in\n -l*) LLVM_LIBS=\"$LLVM_LIBS $pgac_option\";;\n esac\n-- \n2.40.1\n\n\n\n", "msg_date": "Thu, 10 Aug 2023 14:45:47 -0500", "msg_from": "Marcelo Juchem <juchem@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Support static linking against LLVM" }, { "msg_contents": "Andres, Tom, I found your names in the git history for JIT and LLVM.\nAny chance one of you could take a look at the patch?\n\n-mj\n\n\nOn Thu, Aug 10, 2023 at 2:45 PM Marcelo Juchem <juchem@gmail.com> wrote:\n\n> By default, PostgreSQL doesn't explicitly choose whether to link\n> statically or dynamically against LLVM when LLVM JIT is enabled (e.g.:\n> `./configure --with-llvm`).\n>\n> `llvm-config` will choose to dynamically link by default.\n>\n> In order to statically link, one must pass `--link-static` to\n> `llvm-config` when listing linker flags (`--ldflags`) and libraries\n> (`--libs`).\n>\n> This patch enables custom flags to be passed to `llvm-config` linker\n> related invocations through the environment variable\n> `LLVM_CONFIG_LINK_ARGS`.\n>\n> To statically link against LLVM it suffices, then, to call `configure`\n> with environment variable `LLVM_CONFIG_LINK_ARGS=--link-static`.\n> ---\n> config/llvm.m4 | 5 +++--\n> configure | 6 ++++--\n> 2 files changed, 7 insertions(+), 4 deletions(-)\n>\n> diff --git a/config/llvm.m4 b/config/llvm.m4\n> index 3a75cd8b4d..712bd3de6c 100644\n> --- a/config/llvm.m4\n> +++ b/config/llvm.m4\n> @@ -13,6 +13,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n> AC_REQUIRE([AC_PROG_AWK])\n>\n> AC_ARG_VAR(LLVM_CONFIG, [path to llvm-config command])\n> + AC_ARG_VAR(LLVM_CONFIG_LINK_ARGS, [extra arguments for llvm-config\n> linker related flags])\n> PGAC_PATH_PROGS(LLVM_CONFIG, 
llvm-config llvm-config-7 llvm-config-6.0\n> llvm-config-5.0 llvm-config-4.0 llvm-config-3.9)\n>\n> # no point continuing if llvm wasn't found\n> @@ -52,7 +53,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n> esac\n> done\n>\n> - for pgac_option in `$LLVM_CONFIG --ldflags`; do\n> + for pgac_option in `$LLVM_CONFIG --ldflags $LLVM_CONFIG_LINK_ARGS`; do\n> case $pgac_option in\n> -L*) LDFLAGS=\"$LDFLAGS $pgac_option\";;\n> esac\n> @@ -84,7 +85,7 @@ AC_DEFUN([PGAC_LLVM_SUPPORT],\n> # And then get the libraries that need to be linked in for the\n> # selected components. They're large libraries, we only want to\n> # link them into the LLVM using shared library.\n> - for pgac_option in `$LLVM_CONFIG --libs --system-libs\n> $pgac_components`; do\n> + for pgac_option in `$LLVM_CONFIG --libs --system-libs\n> $LLVM_CONFIG_LINK_ARGS $pgac_components`; do\n> case $pgac_option in\n> -l*) LLVM_LIBS=\"$LLVM_LIBS $pgac_option\";;\n> esac\n> diff --git a/configure b/configure\n> index 86ffccb1ee..974b7f2d4e 100755\n> --- a/configure\n> +++ b/configure\n> @@ -1595,6 +1595,8 @@ Some influential environment variables:\n> CXX C++ compiler command\n> CXXFLAGS C++ compiler flags\n> LLVM_CONFIG path to llvm-config command\n> + LLVM_CONFIG_LINK_ARGS\n> + extra arguments for llvm-config linker related flags\n> CLANG path to clang compiler to generate bitcode\n> CPP C preprocessor\n> PKG_CONFIG path to pkg-config utility\n> @@ -5200,7 +5202,7 @@ fi\n> esac\n> done\n>\n> - for pgac_option in `$LLVM_CONFIG --ldflags`; do\n> + for pgac_option in `$LLVM_CONFIG --ldflags $LLVM_CONFIG_LINK_ARGS`; do\n> case $pgac_option in\n> -L*) LDFLAGS=\"$LDFLAGS $pgac_option\";;\n> esac\n> @@ -5232,7 +5234,7 @@ fi\n> # And then get the libraries that need to be linked in for the\n> # selected components. 
They're large libraries, we only want to\n>    # link them into the LLVM using shared library.\n> -  for pgac_option in `$LLVM_CONFIG --libs --system-libs\n> $pgac_components`; do\n> +  for pgac_option in `$LLVM_CONFIG --libs --system-libs\n> $LLVM_CONFIG_LINK_ARGS $pgac_components`; do\n>     case $pgac_option in\n>       -l*) LLVM_LIBS=\"$LLVM_LIBS $pgac_option\";;\n>     esac\n> --\n> 2.40.1\n>\n>\n", "msg_date": "Fri, 11 Aug 2023 11:30:32 -0500", "msg_from": "Marcelo Juchem <juchem@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support static linking against LLVM" }, { "msg_contents": "Hi,\n\nOn 2023-08-10 14:45:47 -0500, Marcelo Juchem wrote:\n> By default, PostgreSQL doesn't explicitly choose whether to link\n> statically or dynamically against LLVM when LLVM JIT is enabled (e.g.:\n> `./configure --with-llvm`).\n> \n> `llvm-config` will choose to dynamically link by default.\n> \n> In order to statically link, one must pass `--link-static` to\n> `llvm-config` when listing linker flags (`--ldflags`) and libraries\n> (`--libs`).\n> \n> This patch enables custom flags to be passed to `llvm-config` linker\n> related invocations through the environment variable\n> `LLVM_CONFIG_LINK_ARGS`.\n> \n> To statically link against LLVM it suffices, then, to call `configure`\n> with environment variable `LLVM_CONFIG_LINK_ARGS=--link-static`.\n\nI'm not opposed to it, but I'm not sure I see much need for it either. 
Do you\nhave a specific use case for this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Aug 2023 10:43:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support static linking against LLVM" }, { "msg_contents": "In my case, my product has a very controlled environment.\nWe build all our infrastructure from source and we avoid dynamic linking by\ndesign, except where technically not viable (e.g.: pgsql extensions).\n\nLLVM is one of the libraries we're specifically required to statically link.\nUnfortunately I can't share the specifics of why that's the case.\n\nWithout static link support by PostgreSQL we can't enable LLVM JIT.\n\n\nOn a side note, I'd like to take this opportunity to ask you if there's any\nwork being done towards migrating away from deprecated LLVM APIs.\nIf there's no work being done on that front, I might take a stab at it if\nthere's any interest from the PostgreSQL community in that contribution.\n\nMore specifically, there are some deprecated C APIs that are used in\nPostgreSQL (more details in\nhttps://llvm.org/docs/OpaquePointers.html#frontends).\nFor that reason, PostgreSQL LLVM JIT support will fail to build starting\nwith the next version of LLVM (version 17).\n\nMigrating to the new APIs will have PostgreSQL require at the minimum\nversion 8 of LLVM (released in March 2019).\n\nRegards,\n\n-mj\n\n\nOn Fri, Aug 11, 2023 at 12:43 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-08-10 14:45:47 -0500, Marcelo Juchem wrote:\n> > By default, PostgreSQL doesn't explicitly choose whether to link\n> > statically or dynamically against LLVM when LLVM JIT is enabled (e.g.:\n> > `./configure --with-llvm`).\n> >\n> > `llvm-config` will choose to dynamically link by default.\n> >\n> > In order to statically link, one must pass `--link-static` to\n> > `llvm-config` when listing linker flags (`--ldflags`) and libraries\n> > (`--libs`).\n> >\n> > This patch 
enables custom flags to be passed to `llvm-config` linker\n> > related invocations through the environment variable\n> > `LLVM_CONFIG_LINK_ARGS`.\n> >\n> > To statically link against LLVM it suffices, then, to call `configure`\n> > with environment variable `LLVM_CONFIG_LINK_ARGS=--link-static`.\n>\n> I'm not opposed to it, but I'm not sure I see much need for it either. Do\n> you\n> have a specific use case for this?\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Fri, 11 Aug 2023 12:59:27 -0500", "msg_from": "Marcelo Juchem <juchem@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support static linking against LLVM" }, { "msg_contents": "Hi,\n\nOn 2023-08-11 12:59:27 -0500, Marcelo Juchem wrote:\n> In my case, my product has a very controlled environment.\n> We build all our infrastructure from source and we avoid dynamic linking by\n> design, except where technically not viable (e.g.: pgsql extensions).\n> \n> LLVM is one of the libraries we're specifically required to statically link.\n> Unfortunately I can't share the specifics of why that's the case.\n\nIf you build llvm with just static lib support it should work as-is?\n\n\n> On a side note, I'd like to take this opportunity to ask you if there's any\n> work being done towards migrating away from deprecated LLVM APIs.\n> If there's no work being done on that front, I might take a stab at it if\n> there's any interest from the PostgreSQL community in that contribution.\n\nYes: https://postgr.es/m/CA%2BhUKGKNX_%3Df%2B1C4r06WETKTq0G4Z_7q4L4Fxn5WWpMycDj9Fw%40mail.gmail.com\n\n\n> Migrating to the new APIs will have PostgreSQL require at the minimum\n> version 8 of LLVM (released in March 2019).\n\nI think Thomas' patch abstracts it enough so that older versions continue to\nwork...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Aug 2023 11:13:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", 
"msg_from_op": false, "msg_subject": "Re: [PATCH] Support static linking against LLVM" }, { "msg_contents": "On Fri, Aug 11, 2023 at 4:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-08-11 15:06:44 -0500, Marcelo Juchem wrote:\n> > On Fri, Aug 11, 2023 at 2:53 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >\n> > > Hi,\n> > >\n> > > On 2023-08-11 13:43:17 -0500, Marcelo Juchem wrote:\n> > > > I'm not sure I know a good way to cleanly do that.\n> > >\n> > > Have you tried the approach I proposed here:\n> > >\n> https://postgr.es/m/20230811183154.vlyn5kvteklhym3v%40awork3.anarazel.de\n> > > ?\n> > >\n> >\n> > I want to check it out but the link is not working for me.\n>\n> Oh, I hadn't realized you had dropped the list from CC in the email prior,\n> that's why it's not archived. My proposal was:\n>\n\nSorry, that wasn't intentional. I'll add it back.\n\n\n>\n> On 2023-08-11 11:31:54 -0700, Andres Freund wrote:\n> > I'd prefer a patch that worked around that oddity, rather than adding a\n> > separate \"argument\" that requires everyone encountering it to figure out\n> the\n> > argument exists and to specify it.\n> >\n> > I don't have a static-only llvm around right now, but I do have a\n> \"dynamic\n> > only\" llvm around. 
It errors out when using \"--link-static --libs\" -\n> assuming\n> that's the case with the static-only llvm, we could infer the need to\n> specify\n> --link-static based on --link-dynamic erroring out?\n>\n> Does your static only llvm error out if you do llvm-config --link-dynamic\n> --libs?\n>\n\nYes, it does not work.\n\nI understand the final decision is not up to me, and I respect whatever\ndirection you and the community want to go with, but I'd like to humbly\noffer my perspective:\n\nThe issue I'm facing may well be a corner case or a transitional issue\nwith `llvm-config` or the LLVM build system.\nI've recently submitted a couple of LLVM patches for a different build system\nissue related to static linking (it has to do with `iovisor/bcc`).\nIn my experience, static linking support is not as ironed out as shared\nlinking in LLVM.\nI'm not sure it is in the best interest of PostgreSQL to pick up the slack.\n\nInstead of optimizing for my use case, what about instead simply offering\n\"default\" (current behavior), \"static\" and \"shared\" (explicit choice)?\n\nIt seems to me it is easier to implement, and less intrusive towards the\nPostgreSQL build system, as opposed to automatically detecting a possibly\nodd environment.\nIt also feels more general, since in the average case \"default\"\n(--with-llvm) should just work.\nBut if someone is intentional about the link mode or, as in my case, needs to\nwork around an issue, then explicitly choosing `--with-llvm=static` or\n`--with-llvm=shared` should do the job just fine.\n\nWhat do you think?\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n", "msg_date": "Fri, 11 Aug 2023 16:57:56 -0500", "msg_from": "Marcelo Juchem <juchem@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support static linking against LLVM" } ]
[ { "msg_contents": "Hi Hackers, is there a function that would lookup the proc that\nimplements an operator?\n\nThanks,\n-cktan\n\n\n", "msg_date": "Thu, 10 Aug 2023 13:17:03 -0700", "msg_from": "CK Tan <cktan@vitessedata.com>", "msg_from_op": true, "msg_subject": "obtaining proc oid given a oper id" }, { "msg_contents": "Found it.\n\n/*\n * get_opcode\n *\n * Returns the regproc id of the routine used to implement an\n * operator given the operator oid.\n */\nRegProcedure\nget_opcode(Oid opno)\n\nOn Thu, Aug 10, 2023 at 1:17 PM CK Tan <cktan@vitessedata.com> wrote:\n>\n> Hi Hackers, is there a function that would lookup the proc that\n> implements an operator?\n>\n> Thanks,\n> -cktan\n\n\n", "msg_date": "Thu, 10 Aug 2023 15:12:28 -0700", "msg_from": "CK Tan <cktan@vitessedata.com>", "msg_from_op": true, "msg_subject": "Re: obtaining proc oid given a oper id" } ]