[
{
"msg_contents": "FYI, I wasn't able to make this work yet.\n(gdb) print *segment_map->header\nCannot access memory at address 0x7f347e554000\n\nHowever I *did* reproduce the error in an isolated, non-production postgres\ninstance. It's a totally empty, untuned v11.1 initdb just for this, running ONLY\na few simultaneous loops around just one query. It looks like the simultaneous\nloops sometimes (but not always) fail together. This has happened a couple\ntimes. \n\nIt looks like one query failed due to \"could not attach\" in the leader, one failed\ndue to the same in a worker, and one failed with \"not pinned\", which I hadn't seen\nbefore and appears to be related to DSM, not DSA...\n\n|ERROR: dsa_area could not attach to segment\n|ERROR: cannot unpin a segment that is not pinned\n|ERROR: dsa_area could not attach to segment\n|CONTEXT: parallel worker\n|\n|[2] Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n| :;\n|done > /dev/null\n|[5]- Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN 
pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n| :;\n|done > /dev/null\n|[6]+ Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n\nI'm also trying to reproduce on other production servers. But so far nothing\nelse has shown the bug, including the other server which hit our original\n(other) DSA error with the queued_alters query. 
So I tentatively think there\nreally may be something specific to the server (not the hypervisor so maybe the\nOS, libraries, kernel, scheduler, ??).\n\nFind the schema for that table here:\nhttps://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\n\nNote, for unrelated reasons, that query was also previously discussed here:\nhttps://www.postgresql.org/message-id/20171110204043.GS8563%40telsasoft.com\n\nJustin\n\n",
"msg_date": "Wed, 6 Feb 2019 19:47:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 12:47 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> However I *did* reproduce the error in an isolated, non-production postgres\n> instance. It's a total empty, untuned v11.1 initdb just for this, running ONLY\n> a few simultaneous loops around just one query It looks like the simultaneous\n> loops sometimes (but not always) fail together. This has happened a couple\n> times.\n>\n> It looks like one query failed due to \"could not attach\" in leader, one failed\n> due to same in worker, and one failed with \"not pinned\", which I hadn't seen\n> before and appears to be related to DSM, not DSA...\n\nHmm. I hadn't considered that angle... Some kind of interference\nbetween unrelated DSA areas, or other DSM activity? I will also try\nto repro that here...\n\n> I'm also trying to reproduce on other production servers. But so far nothing\n> else has shown the bug, including the other server which hit our original\n> (other) DSA error with the queued_alters query. So I tentatively think there\n> really may be something specific to the server (not the hypervisor so maybe the\n> OS, libraries, kernel, scheduler, ??).\n\nInitially I thought these might be two symptoms of the same corruption\nbut I'm now starting to wonder if there are two bugs here: \"could not\nallocate %d pages\" (rare) might be a logic bug in the computation of\ncontiguous_pages that requires a particular allocation pattern to hit,\nand \"dsa_area could not attach to segment\" (rarissimo) might be\nsomething else requiring concurrency/a race.\n\nOne thing that might be useful would be to add a call to\ndsa_dump(area) just before the errors are raised, which will write a\nbunch of stuff out to stderr and might give us some clues. And to\nprint out the variable \"index\" from get_segment_by_index() when it\nfails. I'm also going to try to work up some better assertions.\n--\nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Thu, 7 Feb 2019 14:31:39 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Wed, Feb 06, 2019 at 07:47:19PM -0600, Justin Pryzby wrote:\n> FYI, I wasn't yet able to make this work yet.\n> (gdb) print *segment_map->header\n> Cannot access memory at address 0x7f347e554000\n\nI'm still not able to make this work. Actually, even this doesn't work:\n\n(gdb) print *segment_map\nCannot access memory at address 0x4227dcdd0\n\nThomas thought it was due to coredump_filter, but 0xff doesn't work (actually\n0x7f seems to be the max here). Any other ideas? The core is not being\ntruncated, since this is on a \"toy\" instance with 128MB buffers.\n\n-rw-r-----. 1 pryzbyj root 279M Feb 7 09:52 coredump\n\n[pryzbyj@telsasoft-db postgresql]$ ~/src/postgresql.bin/bin/pg_ctl -c start -D /var/lib/pgsql/test -o '-c operator_precedence_warning=on -c maintenance_work_mem=1GB -c max_wal_size=16GB -c full_page_writes=off -c autovacuum=off -c fsync=off -c port=5678 -c unix_socket_directories=/tmp' waiting for server to start....2019-02-07 09:25:45.745 EST [30741] LOG: listening on IPv6 address \"::1\", port 5678 2019-02-07 09:25:45.745 EST [30741] LOG: listening on IPv4 address \"127.0.0.1\", port 5678 2019-02-07 09:25:45.746 EST [30741] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5678\"\n.2019-02-07 09:25:46.798 EST [30741] LOG: redirecting log output to logging collector process\n2019-02-07 09:25:46.798 EST [30741] HINT: Future log output will appear in directory \"log\".\n done\nserver started\n\n[pryzbyj@telsasoft-db postgresql]$ echo 0xff |sudo tee /proc/30741/coredump_filter\n\nJustin\n\n",
"msg_date": "Thu, 7 Feb 2019 09:08:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
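For reference, the coredump_filter bits discussed in the message above are documented in core(5). A small sketch to decode a mask and check whether file-backed shared mappings (where POSIX DSM segments under /dev/shm live) would be dumped — the bit numbering here is taken from the man page, not verified against any particular kernel:

```python
# Decode a Linux /proc/<pid>/coredump_filter mask.
# Bit meanings are from the core(5) man page (an assumption of this sketch).
COREDUMP_FILTER_BITS = {
    0: "anonymous private mappings",
    1: "anonymous shared mappings",
    2: "file-backed private mappings",
    3: "file-backed shared mappings",  # POSIX shm (/dev/shm), i.e. DSM segments
    4: "ELF headers",
    5: "hugetlb private mappings",
    6: "hugetlb shared mappings",
}

def decode_coredump_filter(mask):
    """Return the mapping types that WILL be included in a core dump."""
    return [name for bit, name in sorted(COREDUMP_FILTER_BITS.items())
            if mask & (1 << bit)]

if __name__ == "__main__":
    for mask in (0x33, 0x7F):
        print(f"{mask:#04x}: {', '.join(decode_coredump_filter(mask))}")
```

Note that 0x7f already includes bit 3 (file-backed shared mappings), so if the bit semantics above are right, the filter alone wouldn't explain the unreadable segment memory.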
{
"msg_contents": "Hi,\n\nOn Mon, Feb 11, 2019 at 11:11:32AM +1100, Thomas Munro wrote:\n> I haven't ever managed to reproduce that one yet. It's great you have\n> a reliable repro... Let's discuss it on the #15585 thread.\n\nI realized that I gave bad information (at least to Thomas). On the server\nwhere I've been reproducing this, it wasn't in an empty DB cluster, but one\nwhere I'd restored our DB schema. I think that's totally irrelevant, except\nthat pg_attribute needs to be big enough to get parallel scan.\n\nHere's confirmed steps to reproduce\n\ninitdb -D /var/lib/pgsql/test\npg_ctl -c start -D /var/lib/pgsql/test -o '-c operator_precedence_warning=on -c maintenance_work_mem=1GB -c max_wal_size=16GB -c full_page_writes=off -c autovacuum=off -c fsync=off -c port=5678 -c unix_socket_directories=/tmp'\nPGPORT=5678 PGHOST=/tmp psql postgres -c 'CREATE TABLE queued_alters(child text,parent text); CREATE TABLE queued_alters_child()INHERITS(queued_alters); ANALYZE queued_alters, pg_attribute'\n\n# Inflate pg_attribute to nontrivial size:\necho \"CREATE TABLE t(`for c in $(seq 1 222); do echo \"c$c int,\"; done |xargs |sed 's/,$//'`)\" |PGHOST=/tmp PGPORT=5678 psql postgres \nfor a in `seq 1 999`; do echo \"CREATE TABLE t$a() INHERITS(t);\"; done |PGHOST=/tmp PGPORT=5678 psql -q postgres\n\nwhile PGOPTIONS='-cmin_parallel_table_scan_size=0' PGPORT=5678 PGHOST=/tmp psql postgres -c \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER 
BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\3\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do :; done >/dev/null &\n\n# Verify this is planning parallel workers, then repeat 10-20x.\n\nTypically fails on this server in under 10min.\n\nSorry for the error.\n\nJustin\n\nOn Wed, Feb 06, 2019 at 07:47:19PM -0600, Justin Pryzby wrote:\n> FYI, I wasn't yet able to make this work yet.\n> (gdb) print *segment_map->header\n> Cannot access memory at address 0x7f347e554000\n> \n> However I *did* reproduce the error in an isolated, non-production postgres\n> instance. It's a total empty, untuned v11.1 initdb just for this, running ONLY\n> a few simultaneous loops around just one query It looks like the simultaneous\n> loops sometimes (but not always) fail together. This has happened a couple\n> times. \n> \n> It looks like one query failed due to \"could not attach\" in leader, one failed\n> due to same in worker, and one failed with \"not pinned\", which I hadn't seen\n> before and appears to be related to DSM, not DSA...\n> \n> |ERROR: dsa_area could not attach to segment\n> |ERROR: cannot unpin a segment that is not pinned\n> |ERROR: dsa_area could not attach to segment\n> |CONTEXT: parallel worker\n> |\n> |[2] Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', 
regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n> | :;\n> |done > /dev/null\n> |[5]- Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n> | :;\n> |done > /dev/null\n> |[6]+ Done while PGHOST=/tmp PGPORT=5678 psql postgres -c \"SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld ON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_replace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\\\3\\\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\"; do\n> \n> I'm also trying to reproduce on other production servers. 
But so far nothing\n> else has shown the bug, including the other server which hit our original\n> (other) DSA error with the queued_alters query. So I tentatively think there\n> really may be something specific to the server (not the hypervisor so maybe the\n> OS, libraries, kernel, scheduler, ??).\n> \n> Find the schema for that table here:\n> https://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\n> \n> Note, for unrelated reasons, that query was also previously discussed here:\n> https://www.postgresql.org/message-id/20171110204043.GS8563%40telsasoft.com\n\n",
"msg_date": "Sun, 10 Feb 2019 22:01:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
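The shell one-liners above that inflate pg_attribute are dense; as a readability aid, here is a sketch that builds the same DDL strings in Python (it only generates the SQL, it does not run it — the table name t, 222 columns, and 999 children are taken from the steps above):

```python
def wide_table_ddl(ncols=222, name="t"):
    """DDL for one table with many int columns, mirroring the `seq 1 222` loop."""
    cols = ", ".join(f"c{i} int" for i in range(1, ncols + 1))
    return f"CREATE TABLE {name}({cols});"

def child_table_ddl(nchildren=999, parent="t"):
    """One CREATE TABLE ... INHERITS per child, mirroring the `seq 1 999` loop."""
    return [f"CREATE TABLE {parent}{i}() INHERITS({parent});"
            for i in range(1, nchildren + 1)]

if __name__ == "__main__":
    # Each child contributes the parent's 222 columns to pg_attribute,
    # which is what makes the parallel scan of pg_attribute nontrivial.
    print(wide_table_ddl()[:60], "...")
    print(len(child_table_ddl()), "child tables")
```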
{
"msg_contents": "Hi\n\n> Here's confirmed steps to reproduce\n\nWow, i confirm this testcase is reproducible for me. On my 4-core desktop i see \"dsa_area could not attach to segment\" error after minute or two.\nOn current REL_11_STABLE branch with PANIC level i see this backtrace for failed parallel process:\n\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f3b36983535 in __GI_abort () at abort.c:79\n#2 0x000055f03ab87a4e in errfinish (dummy=dummy@entry=0) at elog.c:555\n#3 0x000055f03ab899e0 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x55f03ad86900 \"dsa_area could not attach to segment\") at elog.c:1376\n#4 0x000055f03abaa1e2 in get_segment_by_index (area=area@entry=0x55f03cdd6bf0, index=index@entry=7) at dsa.c:1743\n#5 0x000055f03abaa8ab in get_best_segment (area=area@entry=0x55f03cdd6bf0, npages=npages@entry=8) at dsa.c:1993\n#6 0x000055f03ababdb8 in dsa_allocate_extended (area=0x55f03cdd6bf0, size=size@entry=32768, flags=flags@entry=0) at dsa.c:701\n#7 0x000055f03a921469 in ExecParallelHashTupleAlloc (hashtable=hashtable@entry=0x55f03cdfd498, size=104, shared=shared@entry=0x7ffc9f355748) at nodeHash.c:2837\n#8 0x000055f03a9219fc in ExecParallelHashTableInsertCurrentBatch (hashtable=hashtable@entry=0x55f03cdfd498, slot=<optimized out>, hashvalue=2522126815) at nodeHash.c:1747\n#9 0x000055f03a9227ef in ExecParallelHashJoinNewBatch (hjstate=hjstate@entry=0x55f03cde17b0) at nodeHashjoin.c:1153\n#10 0x000055f03a924115 in ExecHashJoinImpl (parallel=true, pstate=0x55f03cde17b0) at nodeHashjoin.c:534\n#11 ExecParallelHashJoin (pstate=0x55f03cde17b0) at nodeHashjoin.c:581\n#12 0x000055f03a90d91c in ExecProcNodeFirst (node=0x55f03cde17b0) at execProcnode.c:445\n#13 0x000055f03a905f3b in ExecProcNode (node=0x55f03cde17b0) at ../../../src/include/executor/executor.h:247\n#14 ExecutePlan (estate=estate@entry=0x55f03cde0d38, planstate=0x55f03cde17b0, use_parallel_mode=<optimized out>, operation=operation@entry=CMD_SELECT, 
sendTuples=sendTuples@entry=true, numberTuples=numberTuples@entry=0, \n direction=ForwardScanDirection, dest=0x55f03cd7e4e8, execute_once=true) at execMain.c:1723\n#15 0x000055f03a906b4d in standard_ExecutorRun (queryDesc=0x55f03cdd13e0, direction=ForwardScanDirection, count=0, execute_once=execute_once@entry=true) at execMain.c:364\n#16 0x000055f03a906c08 in ExecutorRun (queryDesc=queryDesc@entry=0x55f03cdd13e0, direction=direction@entry=ForwardScanDirection, count=<optimized out>, execute_once=execute_once@entry=true) at execMain.c:307\n#17 0x000055f03a90b44f in ParallelQueryMain (seg=seg@entry=0x55f03cd320a8, toc=toc@entry=0x7f3b2d877000) at execParallel.c:1402\n#18 0x000055f03a7ce4cc in ParallelWorkerMain (main_arg=<optimized out>) at parallel.c:1409\n#19 0x000055f03a9e11cb in StartBackgroundWorker () at bgworker.c:834\n#20 0x000055f03a9eea1a in do_start_bgworker (rw=rw@entry=0x55f03cd2d460) at postmaster.c:5698\n#21 0x000055f03a9eeb5b in maybe_start_bgworkers () at postmaster.c:5911\n#22 0x000055f03a9ef5f0 in sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5091\n#23 <signal handler called>\n#24 0x00007f3b36a52327 in __GI___select (nfds=nfds@entry=6, readfds=readfds@entry=0x7ffc9f356160, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7ffc9f356150)\n at ../sysdeps/unix/sysv/linux/select.c:41\n#25 0x000055f03a9effaa in ServerLoop () at postmaster.c:1670\n#26 0x000055f03a9f1285 in PostmasterMain (argc=3, argv=<optimized out>) at postmaster.c:1379\n#27 0x000055f03a954f3d in main (argc=3, argv=0x55f03cd03200) at main.c:228\n\nregards, Sergei\n\n",
"msg_date": "Mon, 11 Feb 2019 17:51:09 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 1:51 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> > Here's confirmed steps to reproduce\n>\n> Wow, i confirm this testcase is reproducible for me. On my 4-core desktop i see \"dsa_area could not attach to segment\" error after minute or two.\n\nWell that's something -- thanks for this report. I've had 3 different\nmachines (laptops and servers, with and without optimisation enabled,\nclang and gcc, 3 different OSes) grinding away on Justin's test case\nfor many hours today, without seeing the problem.\n\n> On current REL_11_STABLE branch with PANIC level i see this backtrace for failed parallel process:\n>\n> #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> #1 0x00007f3b36983535 in __GI_abort () at abort.c:79\n> #2 0x000055f03ab87a4e in errfinish (dummy=dummy@entry=0) at elog.c:555\n> #3 0x000055f03ab899e0 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x55f03ad86900 \"dsa_area could not attach to segment\") at elog.c:1376\n> #4 0x000055f03abaa1e2 in get_segment_by_index (area=area@entry=0x55f03cdd6bf0, index=index@entry=7) at dsa.c:1743\n> #5 0x000055f03abaa8ab in get_best_segment (area=area@entry=0x55f03cdd6bf0, npages=npages@entry=8) at dsa.c:1993\n> #6 0x000055f03ababdb8 in dsa_allocate_extended (area=0x55f03cdd6bf0, size=size@entry=32768, flags=flags@entry=0) at dsa.c:701\n\nOk, this contains some clues I didn't have before. Here we see that a\nrequest for a 32KB chunk of memory led to a traversal of the linked list\nof segments in a given bin, and at some point we followed a link to\nsegment index number 7, which turned out to be bogus. We tried to\nattach to the segment whose handle is stored in\narea->control->segment_handles[7] and it was not known to dsm.c. It\nwasn't DSM_HANDLE_INVALID, or you'd have got a different error\nmessage. That means that it wasn't a segment that had been freed by\ndestroy_superblock(), or it'd hold DSM_HANDLE_INVALID.\n\nHmm. 
So perhaps the bin list was corrupted (the segment index was bad\ndue to some bogus list manipulation logic or memory overrun or...), or\nwe corrupted our array of handles, or there is some missing locking\nsomewhere (all bin manipulation and traversal should be protected by\nthe area lock), or a valid DSM handle was unexpectedly missing (dsm.c\nbug, bogus shm_open() EEXIST from the OS).\n\nCan we please see the stderr output of dsa_dump(area), added just\nbefore the PANIC? Can we see the value of \"handle\" when the error is\nraised, and the directory listing for /dev/shm (assuming Linux) after\nthe crash (maybe you need restart_after_crash = off to prevent\nautomatic cleanup)?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Feb 2019 10:57:51 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
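Thomas's reasoning about the two distinct failure paths can be restated as a toy model (the names and the sentinel value below are illustrative only, not the actual dsa.c/dsm.c code): the segment-handle slot is read first, and only a non-invalid handle that dsm.c doesn't recognize produces this exact error:

```python
# Toy model (NOT the dsa.c code) of get_segment_by_index()'s two failure modes:
# a freed slot holds DSM_HANDLE_INVALID and gives one error, while the error in
# the backtrace means the slot held a plausible handle unknown to dsm.c.
DSM_HANDLE_INVALID = 0  # hypothetical sentinel value for this sketch

def attach_segment(segment_handles, known_segments, index):
    """Look up the handle for a segment slot, then 'attach' via a dsm.c stand-in."""
    handle = segment_handles[index]
    if handle == DSM_HANDLE_INVALID:
        # Slot freed (e.g. by destroy_superblock()): a different error message.
        raise LookupError("dsa_area could not attach to a freed segment slot")
    if handle not in known_segments:
        # The observed case: the handle looks valid but dsm.c doesn't know it.
        raise LookupError("dsa_area could not attach to segment")
    return known_segments[handle]
```

So a corrupted bin list, a corrupted handle array, or a missing-locking race all end up here looking the same: a handle that dsm.c has no segment for.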
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:57 AM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> bogus shm_open() EEXIST from the OS\n\nStrike that particular idea... it'd be the non-DSM_OP_CREATE case, and\nif the file was somehow bogusly not visible to us we'd get ENOENT and\nthat'd raise an error, and we aren't seeing that.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Feb 2019 11:07:35 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
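The ENOENT point above is easy to illustrate with POSIX shared memory from Python, whose multiprocessing.shared_memory module also goes through shm_open() on POSIX systems: attaching to a name that was never created fails loudly rather than silently (the segment name below is made up for the demo):

```python
import errno
from multiprocessing import shared_memory

# Attaching (create=False) to a name that does not exist raises ENOENT,
# which would surface as an error rather than a silently-missing segment.
try:
    shared_memory.SharedMemory(name="pg_dsa_no_such_segment", create=False)
except FileNotFoundError as e:
    print("attach failed:", errno.errorcode[e.errno])
```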
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:57:51AM +1100, Thomas Munro wrote:\n> > On current REL_11_STABLE branch with PANIC level i see this backtrace for failed parallel process:\n> >\n> > #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> > #1 0x00007f3b36983535 in __GI_abort () at abort.c:79\n> > #2 0x000055f03ab87a4e in errfinish (dummy=dummy@entry=0) at elog.c:555\n> > #3 0x000055f03ab899e0 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x55f03ad86900 \"dsa_area could not attach to segment\") at elog.c:1376\n> > #4 0x000055f03abaa1e2 in get_segment_by_index (area=area@entry=0x55f03cdd6bf0, index=index@entry=7) at dsa.c:1743\n> > #5 0x000055f03abaa8ab in get_best_segment (area=area@entry=0x55f03cdd6bf0, npages=npages@entry=8) at dsa.c:1993\n> > #6 0x000055f03ababdb8 in dsa_allocate_extended (area=0x55f03cdd6bf0, size=size@entry=32768, flags=flags@entry=0) at dsa.c:701\n> \n> Ok, this contains some clues I didn't have before. Here we see that a\n> request for a 32KB chunk of memory led to a traversal the linked list\n> of segments in a given bin, and at some point we followed a link to\n> segment index number 7, which turned out to be bogus. We tried to\n> attach to the segment whose handle is stored in\n> area->control->segment_handles[7] and it was not known to dsm.c. It\n> wasn't DSM_HANDLE_INVALID, or you'd have got a different error\n> message. That means that it wasn't a segment that had been freed by\n> destroy_superblock(), or it'd hold DSM_HANDLE_INVALID.\n> \n> Hmm. So perhaps the bin list was corrupted (the segment index was bad\n\nI think there is corruption *somewhere* due to never being able to do\nthis (and looks very broken?)\n\n(gdb) p segment_map\n$1 = (dsa_segment_map *) 0x1\n\n(gdb) print segment_map->header \nCannot access memory at address 0x11\n\n> Can we please see the stderr output of dsa_dump(area), added just\n> before the PANIC? 
Can we see the value of \"handle\" when the error is\n> raised, and the directory listing for /dev/shm (assuming Linux) after\n> the crash (maybe you need restart_after_crash = off to prevent\n> automatic cleanup)?\n\nPANIC: dsa_area could not attach to segment index:8 handle:1076305344\n\nI think it needs to be:\n\n| if (segment == NULL) {\n| LWLockRelease(DSA_AREA_LOCK(area));\n| dsa_dump(area);\n| elog(PANIC, \"dsa_area could not attach to segment index:%zd handle:%d\", index, handle);\n| }\n\n..but that triggers recursion:\n\n#0 0x00000037b9c32495 in raise () from /lib64/libc.so.6\n#1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\n#2 0x0000000000a395c0 in errfinish (dummy=0) at elog.c:567\n#3 0x0000000000a3bbf6 in elog_finish (elevel=22, fmt=0xc9faa0 \"dsa_area could not attach to segment index:%zd handle:%d\") at elog.c:1389\n#4 0x0000000000a6b97a in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1747\n#5 0x0000000000a6a3dc in dsa_dump (area=0x1659200) at dsa.c:1093\n#6 0x0000000000a6b946 in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1744\n[...]\n#717 0x0000000000a6a3dc in dsa_dump (area=0x1659200) at dsa.c:1093\n#718 0x0000000000a6b946 in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1744\n#719 0x0000000000a6a3dc in dsa_dump (area=0x1659200) at dsa.c:1093\n#720 0x0000000000a6b946 in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1744\n#721 0x0000000000a6c150 in get_best_segment (area=0x1659200, npages=8) at dsa.c:1997\n#722 0x0000000000a69680 in dsa_allocate_extended (area=0x1659200, size=32768, flags=0) at dsa.c:701\n#723 0x00000000007052eb in ExecParallelHashTupleAlloc (hashtable=0x7f56ff9b40e8, size=112, shared=0x7fffda8c36a0) at nodeHash.c:2837\n#724 0x00000000007034f3 in ExecParallelHashTableInsert (hashtable=0x7f56ff9b40e8, slot=0x1608948, hashvalue=2677813320) at nodeHash.c:1693\n#725 0x0000000000700ba3 in MultiExecParallelHash (node=0x1607f40) at nodeHash.c:288\n#726 0x00000000007007ce in MultiExecHash 
(node=0x1607f40) at nodeHash.c:112\n#727 0x00000000006e94d7 in MultiExecProcNode (node=0x1607f40) at execProcnode.c:501\n[...]\n\n[pryzbyj@telsasoft-db postgresql]$ ls -lt /dev/shm |head\ntotal 353056\n-rw-------. 1 pryzbyj pryzbyj 1048576 Feb 11 13:51 PostgreSQL.821164732\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 11 13:51 PostgreSQL.1990121974\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 11 12:54 PostgreSQL.847060172\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 11 12:48 PostgreSQL.1369859581\n-rw-------. 1 postgres postgres 21328 Feb 10 21:00 PostgreSQL.1155375187\n-rw-------. 1 pryzbyj pryzbyj 196864 Feb 10 18:52 PostgreSQL.2136009186\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.1648026537\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.827867206\n-rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.1684837530\n\nJustin\n\n",
"msg_date": "Mon, 11 Feb 2019 20:14:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
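The recursion reported above happens because the debugging dsa_dump() call itself re-enters get_segment_by_index() on the same bad segment. One way to keep the diagnostic while avoiding the loop is a re-entrancy guard; a sketch in Python, standing in for what would be a static bool in the C code:

```python
# Sketch of a re-entrancy guard for the debugging patch: if the dump itself
# hits the bad segment, fail without calling the dump again.
_dumping = False  # would be a static bool inside dsa.c

def report_attach_failure(dump, index, handle):
    """Dump diagnostics at most once, then raise; never recurse into the dump."""
    global _dumping
    if _dumping:
        raise RuntimeError(
            f"could not attach to segment index:{index} (while dumping)")
    _dumping = True
    try:
        dump()  # may itself hit the bad segment and call back in here
    finally:
        _dumping = False
    raise RuntimeError(
        f"dsa_area could not attach to segment index:{index} handle:{handle}")
```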
{
"msg_contents": "On Mon, Feb 11, 2019 at 08:14:28PM -0600, Justin Pryzby wrote:\n> > Can we please see the stderr output of dsa_dump(area), added just\n> > before the PANIC? Can we see the value of \"handle\" when the error is\n> > raised, and the directory listing for /dev/shm (assuming Linux) after\n> > the crash (maybe you need restart_after_crash = off to prevent\n> > automatic cleanup)?\n> \n> PANIC: dsa_area could not attach to segment index:8 handle:1076305344\n> \n> I think it needs to be:\n> \n> | if (segment == NULL) {\n> | LWLockRelease(DSA_AREA_LOCK(area));\n> | dsa_dump(area);\n> | elog(PANIC, \"dsa_area could not attach to segment index:%zd handle:%d\", index, handle);\n> | }\n> \n> ..but that triggers recursion:\n\nHere's my dsa_log (which is repeated many times and 400kB total)..\n\ndsa_area handle 0:\n max_total_segment_size: 18446744073709551615\n total_segment_size: 15740928\n refcnt: 2\n pinned: f\n segment bins:\n segment bin 0 (at least -2147483648 contiguous pages free):\n segment index 2, usable_pages = 256, contiguous_pages = 0, mapped at 0x7f56ff9d5000\n segment index 0, usable_pages = 0, contiguous_pages = 0, mapped at 0x7f56ffbd6840\n segment bin 3 (at least 4 contiguous pages free):\n segment index 7, usable_pages = 510, contiguous_pages = 6, mapped at 0x7f56ff0b4000\n segment index 6, usable_pages = 510, contiguous_pages = 6, mapped at 0x7f56ff2b4000\n segment index 5, usable_pages = 510, contiguous_pages = 5, mapped at 0x7f56ff4b4000\n segment index 4, usable_pages = 510, contiguous_pages = 5, mapped at 0x7f56ff6b4000\n segment index 3, usable_pages = 255, contiguous_pages = 6, mapped at 0x7f56ff8b4000\n segment index 1, usable_pages = 255, contiguous_pages = 6, mapped at 0x7f56ffad6000\n segment bin 10 (at least 512 contiguous pages free):\n\nNote negative pages. Let me know if you want more of it (span descriptors?)\n\nJustin\n\n",
"msg_date": "Mon, 11 Feb 2019 20:36:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
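The "at least -2147483648 contiguous pages free" line for bin 0 looks like 32-bit shift arithmetic in the dump's label rather than corruption of the bin itself: if the label is computed as something like 1 << (bin - 1) in C (an assumption — not verified against dsa.c), then for bin 0 the shift count of -1 is undefined behaviour, which x86 resolves by masking it to 31, wrapping the result to INT_MIN. With that assumption, bins 0, 3 and 10 reproduce exactly the numbers in the dump above:

```python
def int32(n):
    """Wrap to a signed 32-bit integer, as C int arithmetic would."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

def bin_min_pages_label(segment_bin):
    """Hypothetical 'at least N contiguous pages' label: 1 << (bin - 1) in C.
    A shift count of -1 is UB in C; x86 masks the count to 31 bits."""
    shift = (segment_bin - 1) & 31  # x86 behaviour for the UB case
    return int32(1 << shift)

if __name__ == "__main__":
    for b in (0, 3, 10):
        print(f"bin {b}: at least {bin_min_pages_label(b)} contiguous pages")
```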
{
"msg_contents": "On Tue, Feb 12, 2019 at 1:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Feb 12, 2019 at 10:57:51AM +1100, Thomas Munro wrote:\n> > > On current REL_11_STABLE branch with PANIC level i see this backtrace for failed parallel process:\n> > >\n> > > #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n> > > #1 0x00007f3b36983535 in __GI_abort () at abort.c:79\n> > > #2 0x000055f03ab87a4e in errfinish (dummy=dummy@entry=0) at elog.c:555\n> > > #3 0x000055f03ab899e0 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x55f03ad86900 \"dsa_area could not attach to segment\") at elog.c:1376\n> > > #4 0x000055f03abaa1e2 in get_segment_by_index (area=area@entry=0x55f03cdd6bf0, index=index@entry=7) at dsa.c:1743\n> > > #5 0x000055f03abaa8ab in get_best_segment (area=area@entry=0x55f03cdd6bf0, npages=npages@entry=8) at dsa.c:1993\n> > > #6 0x000055f03ababdb8 in dsa_allocate_extended (area=0x55f03cdd6bf0, size=size@entry=32768, flags=flags@entry=0) at dsa.c:701\n> >\n> > Ok, this contains some clues I didn't have before. Here we see that a\n> > request for a 32KB chunk of memory led to a traversal the linked list\n> > of segments in a given bin, and at some point we followed a link to\n> > segment index number 7, which turned out to be bogus. We tried to\n> > attach to the segment whose handle is stored in\n> > area->control->segment_handles[7] and it was not known to dsm.c. It\n> > wasn't DSM_HANDLE_INVALID, or you'd have got a different error\n> > message. That means that it wasn't a segment that had been freed by\n> > destroy_superblock(), or it'd hold DSM_HANDLE_INVALID.\n> >\n> > Hmm. 
So perhaps the bin list was corrupted (the segment index was bad\n>\n> I think there is corruption *somewhere* due to never being able to do\n> this (and looks very broken?)\n>\n> (gdb) p segment_map\n> $1 = (dsa_segment_map *) 0x1\n>\n> (gdb) print segment_map->header\n> Cannot access memory at address 0x11\n\nIf you're in get_segment_by_index() in a core dumped at the \"could not\nattach\" error, the variable segment_map hasn't been assigned a value\nyet so that's uninitialised junk on the stack.\n\n> > Can we please see the stderr output of dsa_dump(area), added just\n> > before the PANIC? Can we see the value of \"handle\" when the error is\n> > raised, and the directory listing for /dev/shm (assuming Linux) after\n> > the crash (maybe you need restart_after_crash = off to prevent\n> > automatic cleanup)?\n>\n> PANIC: dsa_area could not attach to segment index:8 handle:1076305344\n>\n> I think it needs to be:\n>\n> | if (segment == NULL) {\n> | LWLockRelease(DSA_AREA_LOCK(area));\n> | dsa_dump(area);\n> | elog(PANIC, \"dsa_area could not attach to segment index:%zd handle:%d\", index, handle);\n> | }\n\nRight.\n\n> ..but that triggers recursion:\n>\n> #0 0x00000037b9c32495 in raise () from /lib64/libc.so.6\n> #1 0x00000037b9c33c75 in abort () from /lib64/libc.so.6\n> #2 0x0000000000a395c0 in errfinish (dummy=0) at elog.c:567\n> #3 0x0000000000a3bbf6 in elog_finish (elevel=22, fmt=0xc9faa0 \"dsa_area could not attach to segment index:%zd handle:%d\") at elog.c:1389\n> #4 0x0000000000a6b97a in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1747\n> #5 0x0000000000a6a3dc in dsa_dump (area=0x1659200) at dsa.c:1093\n> #6 0x0000000000a6b946 in get_segment_by_index (area=0x1659200, index=8) at dsa.c:1744\n\nOk, that makes sense.\n\n> [pryzbyj@telsasoft-db postgresql]$ ls -lt /dev/shm |head\n> total 353056\n> -rw-------. 1 pryzbyj pryzbyj 1048576 Feb 11 13:51 PostgreSQL.821164732\n> -rw-------. 
1 pryzbyj pryzbyj 2097152 Feb 11 13:51 PostgreSQL.1990121974\n> -rw-------. 1 pryzbyj pryzbyj 2097152 Feb 11 12:54 PostgreSQL.847060172\n> -rw-------. 1 pryzbyj pryzbyj 2097152 Feb 11 12:48 PostgreSQL.1369859581\n> -rw-------. 1 postgres postgres 21328 Feb 10 21:00 PostgreSQL.1155375187\n> -rw-------. 1 pryzbyj pryzbyj 196864 Feb 10 18:52 PostgreSQL.2136009186\n> -rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.1648026537\n> -rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.827867206\n> -rw-------. 1 pryzbyj pryzbyj 2097152 Feb 10 18:49 PostgreSQL.1684837530\n\nHmm. While contemplating this evidence that you have multiple\npostgres clusters running, I started wondering if there could be some\nway for two different DSA areas to get their DSM segments mixed up,\nperhaps related to the way that 11 generates identical sequences of\nrandom() numbers in all parallel workers. But I'm not seeing it; the\nO_CREAT | O_EXCL protocol seems to avoid that (not to mention\npermissions).\n\nYou truncated that list at 10 lines with \"head\"... do you know if\nPostgreSQL.1076305344 was present? Or, if you do it again, check for the one whose\nname matches the value of \"handle\".\n\n\n--\nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Feb 2019 15:09:42 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 08:43:14PM -0600, Justin Pryzby wrote:\n> I have a suspicion that this doesn't happen if\n> parallel_leader_participation=off.\n\nI think this is tentatively confirmed..I ran 20 loops for over 90 minutes with\nno crash when parallel_leader_participation=off.\n\nOn enabling parallel_leader_participation, crash within 10min.\n\nSergei, could you confirm ?\n\nThomas:\n\n2019-02-11 23:56:20.611 EST [12699] PANIC: dsa_area could not attach to segment index:6 handle:1376636277\n[pryzbyj@telsasoft-db postgresql]$ ls /dev/shm/ |grep PostgreSQL.1376636277 || echo Not present.\nNot present.\n\nThanks,\nJustin\n\n",
"msg_date": "Mon, 11 Feb 2019 23:00:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 4:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Feb 11, 2019 at 08:43:14PM -0600, Justin Pryzby wrote:\n> > I have a suspicion that this doesn't happen if\n> > parallel_leader_participation=off.\n>\n> I think this is tentatively confirmed..I ran 20 loops for over 90 minutes with\n> no crash when parallel_leader_participation=off.\n>\n> On enabling parallel_leader_participation, crash within 10min.\n\nThat's quite interesting. I wonder if it's something specific about\nthe leader's behaviour, or if it's just because it takes one more\nprocess to hit the bad behaviour on your system.\n\n> Sergei, could you confirm ?\n>\n> Thomas:\n>\n> 2019-02-11 23:56:20.611 EST [12699] PANIC: dsa_area could not attach to segment index:6 handle:1376636277\n> [pryzbyj@telsasoft-db postgresql]$ ls /dev/shm/ |grep PostgreSQL.1376636277 || echo Not present.\n> Not present.\n\nOk, based on the absence of the file, it seems like we destroyed it\nbut didn't remove the segment from our the list (unless of course it's\na bogus handle that never existed). Perhaps that could happen if the\nDSM segment's ref count got out of whack so it was destroyed too soon,\nor if it didn't but our list manipulation code is borked or somehow we\ndidn't reach it or something concurrent confused it due to\ninsufficient locking or data being overwritten...\n\nI need to reproduce this and load it up with instrumentation and\nchecks. Will keep trying.\n\nWere you running with assertions enabled?\n\n\n--\nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Feb 2019 16:27:01 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 4:27 PM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> On Tue, Feb 12, 2019 at 4:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Mon, Feb 11, 2019 at 08:43:14PM -0600, Justin Pryzby wrote:\n> > > I have a suspicion that this doesn't happen if\n> > > parallel_leader_participation=off.\n> >\n> > I think this is tentatively confirmed..I ran 20 loops for over 90 minutes with\n> > no crash when parallel_leader_participation=off.\n> >\n> > On enabling parallel_leader_participation, crash within 10min.\n>\n> That's quite interesting. I wonder if it's something specific about\n> the leader's behaviour, or if it's just because it takes one more\n> process to hit the bad behaviour on your system.\n\n. o O ( is there some way that getting peppered with signals from the\nshm tuple queue machinery could break something here? )\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Tue, 12 Feb 2019 16:33:31 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi\n\n> I think this is tentatively confirmed..I ran 20 loops for over 90 minutes with\n> no crash when parallel_leader_participation=off.\n>\n> On enabling parallel_leader_participation, crash within 10min.\n>\n> Sergei, could you confirm ?\n\nI still have error with parallel_leader_participation = off. One difference is time: with parallel_leader_participation = on i have error after minute-two, with off - error was after 20 min.\n\nMy desktop is \n- debian testing with actual updates, 4.19.0-2-amd64 #1 SMP Debian 4.19.16-1 (2019-01-17) x86_64 GNU/Linux\n- gcc version 8.2.0 (Debian 8.2.0-16)\n- i build fresh REL_11_STABLE postgresql with ./configure --enable-cassert --enable-debug CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\" --enable-tap-tests --prefix=/...\n\nCan't provide dsa_dump(area) due recursion. With such dirty hack:\n\n fprintf(stderr,\n- \" segment bin %zu (at least %d contiguous pages free):\\n\",\n- i, 1 << (i - 1));\n- segment_index = area->control->segment_bins[i];\n- while (segment_index != DSA_SEGMENT_INDEX_NONE)\n- {\n- dsa_segment_map *segment_map;\n-\n- segment_map =\n- get_segment_by_index(area, segment_index);\n-\n- fprintf(stderr,\n- \" segment index %zu, usable_pages = %zu, \"\n- \"contiguous_pages = %zu, mapped at %p\\n\",\n- segment_index,\n- segment_map->header->usable_pages,\n- fpm_largest(segment_map->fpm),\n- segment_map->mapped_address);\n- segment_index = segment_map->header->next;\n- }\n+ \" segment bin %zu (at least %d contiguous pages free), segment_index=%zu\\n\",\n+ i, 1 << (i - 1), area->control->segment_bins[i]);\n\ni have result:\n\ndsa_area handle 0:\n max_total_segment_size: 18446744073709551615\n total_segment_size: 2105344\n refcnt: 2\n pinned: f\n segment bins:\n segment bin 0 (at least -2147483648 contiguous pages free), segment_index=0\n segment bin 3 (at least 4 contiguous pages free), segment_index=1\n segment bin 8 (at least 128 contiguous pages free), segment_index=2\n pools:\n pool for blocks 
of span objects:\n fullness class 0 is empty\n fullness class 1:\n span descriptor at 0000010000001000, superblock at 0000010000001000, pages = 1, objects free = 54/72\n fullness class 2 is empty\n fullness class 3 is empty\n pool for large object spans:\n fullness class 0 is empty\n fullness class 1:\n span descriptor at 00000100000013b8, superblock at 0000020000009000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001380, superblock at 0000020000001000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001348, superblock at 00000100000f2000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001310, superblock at 00000100000ea000, pages = 8, objects free = 0/0\n span descriptor at 00000100000012d8, superblock at 00000100000e2000, pages = 8, objects free = 0/0\n span descriptor at 00000100000012a0, superblock at 00000100000da000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001268, superblock at 00000100000d2000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001230, superblock at 00000100000ca000, pages = 8, objects free = 0/0\n span descriptor at 00000100000011f8, superblock at 00000100000c2000, pages = 8, objects free = 0/0\n span descriptor at 00000100000011c0, superblock at 00000100000ba000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001188, superblock at 00000100000b2000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001150, superblock at 00000100000aa000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001118, superblock at 00000100000a2000, pages = 8, objects free = 0/0\n span descriptor at 00000100000010e0, superblock at 000001000009a000, pages = 8, objects free = 0/0\n span descriptor at 00000100000010a8, superblock at 0000010000092000, pages = 8, objects free = 0/0\n span descriptor at 0000010000001070, superblock at 0000010000012000, pages = 128, objects free = 0/0\n fullness class 2 is empty\n fullness class 3 is empty\n pool for size class 
32 (object size 3640 bytes):\n fullness class 0 is empty\n fullness class 1:\n span descriptor at 0000010000001038, superblock at 0000010000002000, pages = 16, objects free = 17/18\n fullness class 2 is empty\n fullness class 3 is empty\n\n\"at least -2147483648\" seems surprising.\n\n\nmelkij@melkij:~$ LANG=C ls -lt /dev/shm\ntotal 0\n-rw------- 1 melkij melkij 4194304 Feb 12 11:56 PostgreSQL.1822959854\n\nOnly one segment; restart_after_crash = off and no other postgresql instances running.\n\nregards, Sergei\n\n",
"msg_date": "Tue, 12 Feb 2019 12:14:59 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:15 PM Sergei Kornilov <sk@zsrv.org> wrote:\n> I still have error with parallel_leader_participation = off.\n\nJustin very kindly set up a virtual machine similar to the one where\nhe'd seen the problem so I could experiment with it. Eventually I\nalso managed to reproduce it locally, and have finally understood the\nproblem.\n\nIt doesn't happen on master (hence some of my initial struggle to\nreproduce it) because of commit 197e4af9, which added srandom() to set\na different seed for each parallel workers. Perhaps you see where\nthis is going already...\n\nThe problem is that a DSM handle (ie a random number) can be reused\nfor a new segment immediately after the shared memory object has been\ndestroyed but before the DSM slot has been released. Now two DSM\nslots have the same handle, and dsm_attach() can be confused by the\nold segment and give up.\n\nHere's a draft patch to fix that. It also clears the handle in a case\nwhere it wasn't previously cleared, but that wasn't strictly\nnecessary. It just made debugging less confusing.\n\n\n--\nThomas Munro\nhttp://www.enterprisedb.com",
"msg_date": "Fri, 15 Feb 2019 01:12:35 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi!\n\nGreat work, thank you!\n\nI can not reproduce bug after 30min test long. (without patch bug was after minute-two)\n\nregards Sergei\n\n",
"msg_date": "Thu, 14 Feb 2019 16:31:22 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 01:12:35AM +1300, Thomas Munro wrote:\n> The problem is that a DSM handle (ie a random number) can be reused\n> for a new segment immediately after the shared memory object has been\n> destroyed but before the DSM slot has been released. Now two DSM\n> slots have the same handle, and dsm_attach() can be confused by the\n> old segment and give up.\n> \n> Here's a draft patch to fix that. It also clears the handle in a case\n> where it wasn't previously cleared, but that wasn't strictly\n> necessary. It just made debugging less confusing.\n\nThanks.\n\nDo you think that plausibly explains and resolves symptoms of bug#15585, too?\n\nJustin\n\n",
"msg_date": "Thu, 14 Feb 2019 10:20:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "Hi\n\n> Do you think that plausibly explains and resolves symptoms of bug#15585, too?\n\nI think yes. Bug#15585 raised only after \"dsa_area could not attach to segment\" in different parallel worker. Leader stuck because waiting all parallel workers, but one worker has unexpected recursion in dsm_backend_shutdown [1] and will never shutdown. Backtrace show previous error in this backend: \"cannot unpin a segment that is not pinned\" - root cause is earlier and in a different process.\n\n* https://www.postgresql.org/message-id/70942611548327380%40myt6-7734411c649e.qloud-c.yandex.net\n\nregards, Sergei\n\n",
"msg_date": "Thu, 14 Feb 2019 19:36:55 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 5:36 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> > Do you think that plausibly explains and resolves symptoms of bug#15585, too?\n>\n> I think yes. Bug#15585 raised only after \"dsa_area could not attach to segment\" in different parallel worker. Leader stuck because waiting all parallel workers, but one worker has unexpected recursion in dsm_backend_shutdown [1] and will never shutdown. Backtrace show previous error in this backend: \"cannot unpin a segment that is not pinned\" - root cause is earlier and in a different process.\n\nAgreed. Even though it's an unpleasant failure mode, I'm not entirely\nsure if it's a good idea to make changes to avoid it. We could move\nthe code around so that the error is raised after releasing the lock,\nbut then you'd either blow the stack or loop forever due to longjmp (I\nhaven't checked which). To avoid that you'd have to clean out the\nbook-keeping in shared memory eagerly so that at the next level of\nerror recursion you've at least made progress (and admittedly there\nare examples of things like that in the code), but how far should we\ngo to tolerate cases that shouldn't happen? Practically, if we had\nthat behaviour and this bug, you'd eventually eat all the DSM slots\nwith leaked segments of shared memory, and your system wouldn't work\ntoo well. For now I think it's better to treat the root cause of the\nunexpected error.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Fri, 15 Feb 2019 10:00:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 2:31 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> I can not reproduce bug after 30min test long. (without patch bug was after minute-two)\n\nThank you Justin and Sergei for all your help reproducing and testing this.\n\nFix pushed to all supported releases. It's lightly refactored from\nthe version I posted yesterday. Just doing s/break/continue/ made for\na cute patch, but this way the result is easier to understand IMHO. I\nalso didn't bother with the non-essential change.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Fri, 15 Feb 2019 15:08:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg11.1: dsa_area could not attach to segment"
}
] |
[
{
"msg_contents": "Here is a patch that makes more use of unconstify() by replacing casts\nwhose only purpose is to cast away const. Also a preliminary patch to\nremove casts that were useless to begin with.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 7 Feb 2019 09:14:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "more unconstify use"
},
{
"msg_contents": "On 07/02/2019 09:14, Peter Eisentraut wrote:\n> Here is a patch that makes more use of unconstify() by replacing casts\n> whose only purpose is to cast away const. Also a preliminary patch to\n> remove casts that were useless to begin with.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 13 Feb 2019 12:19:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: more unconstify use"
},
{
"msg_contents": "\n> On Feb 7, 2019, at 12:14 AM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> Here is a patch that makes more use of unconstify() by replacing casts\n> whose only purpose is to cast away const. Also a preliminary patch to\n> remove casts that were useless to begin with.\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> <v1-0001-Remove-useless-casts.patch><v1-0002-More-unconstify-use.patch>\n\nPeter, so sorry I did not review this patch before you committed. There\nare a few places where you unconstify the argument to a function where\nchanging the function to take const seems better to me. I argued for\nsomething similar in 2016,\n\nhttps://www.postgresql.org/message-id/ACF3A030-E3D5-4E68-B744-184E11DE68F3@gmail.com\n\nBack then, Tom replied\n\n> \n> I'd call this kind of a wash, I guess. I'd be more excited about it if\n> the change allowed removal of an actual cast-away-of-constness somewhere.\n\nWe'd be able to remove some of these unconstify calls, and perhaps that\nmeets Tom's criteria from that thread.\n\n\n\nYour change:\n\n-\t\t\tmd5_calc((uint8 *) (input + i), ctxt);\n+\t\t\tmd5_calc(unconstify(uint8 *, (input + i)), ctxt);\n\nPerhaps md5_calc's signature should change to\n\n\tmd5_calc(const uint8 *, md5_ctxt *)\n\nsince it doesn't modify the first argument.\n\n\n\n\nYour change:\n\n-\t\tif (!PageIndexTupleOverwrite(oldpage, oldoff, (Item) newtup, newsz))\n+\t\tif (!PageIndexTupleOverwrite(oldpage, oldoff, (Item) unconstify(BrinTuple *, newtup), newsz))\n\nPerhaps PageIndexTupleOverwrite's third argument should be const, since it\nonly uses it as the const source of a memcpy. 
(This is a bit harder than\nfor md5_calc, above, since the third argument here is of type Item, which\nis itself a typedef to Pointer, and there exists no analogous ConstPointer\nor ConstItem definition in the sources.)\n\n\n\nYour change:\n\n-\t\t\tXLogRegisterBufData(0, (char *) newtup, newsz);\n+\t\t\tXLogRegisterBufData(0, (char *) unconstify(BrinTuple *, newtup), newsz);\n\nPerhaps the third argument to XLogRegisterBufData can be changed to const,\nwith the extra work of changing XLogRecData's data field to const. I'm not\nsure about the amount of code churn this would entail.\n\n\nYour change:\n\n-\t\tnewoff = PageAddItem(newpage, (Item) newtup, newsz,\n+\t\tnewoff = PageAddItem(newpage, (Item) unconstify(BrinTuple *, newtup), newsz,\n \t\t\t\t\t\t\t InvalidOffsetNumber, false, false);\n\n\nAs with PageIndexTupleOverwrite's third argument, the second argument to\nPageAddItem is only ever used as the const source of a memcpy.\n\n\n\nYour change:\n\n-\t\t\tXLogRegisterData((char *) twophase_gid, strlen(twophase_gid) + 1);\n+\t\t\tXLogRegisterData(unconstify(char *, twophase_gid), strlen(twophase_gid) + 1);\n\nThe first argument here gets assigned to XLogRecData.data, similarly to what is done\nabove in XLogRegisterBufData. This is another place where casting away const could\nbe avoided if we accepted the code churn cost of changing XLogRecData's data field\nto const. There are a few more of these in your patch which for brevity I won't quote.\n\n\nYour change:\n\n-\t\tresult = pg_be_scram_exchange(scram_opaq, input, inputlen,\n+\t\tresult = pg_be_scram_exchange(scram_opaq, unconstify(char *, input), inputlen,\n \t\t\t\t\t\t\t\t\t &output, &outputlen,\n \t\t\t\t\t\t\t\t\t logdetail);\n\npg_be_scram_exchange passes the second argument into two functions,\nread_client_first_message and read_client_final_message, both of which take\nit as a non-const argument but immediately pstrdup it such that it might\nas well have been const. 
Perhaps we should just clean up this mess rather\nthan unconstifying.\n\n\n\nmark\n\n\n\n",
"msg_date": "Wed, 13 Feb 2019 10:51:32 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: more unconstify use"
},
{
"msg_contents": "On 13/02/2019 19:51, Mark Dilger wrote:\n> Peter, so sorry I did not review this patch before you committed. There\n> are a few places where you unconstify the argument to a function where\n> changing the function to take const seems better to me. I argued for\n> something similar in 2016,\n\nOne can consider unconstify a \"todo\" marker. Some of these could be\nremoved by some API changes. It needs case-by-case consideration.\n\n> Your change:\n> \n> -\t\t\tmd5_calc((uint8 *) (input + i), ctxt);\n> +\t\t\tmd5_calc(unconstify(uint8 *, (input + i)), ctxt);\n> \n> Perhaps md5_calc's signature should change to\n> \n> \tmd5_calc(const uint8 *, md5_ctxt *)\n> \n> since it doesn't modify the first argument.\n\nFixed, thanks.\n\n> Your change:\n> \n> -\t\tif (!PageIndexTupleOverwrite(oldpage, oldoff, (Item) newtup, newsz))\n> +\t\tif (!PageIndexTupleOverwrite(oldpage, oldoff, (Item) unconstify(BrinTuple *, newtup), newsz))\n> \n> Perhaps PageIndexTupleOverwrite's third argument should be const, since it\n> only uses it as the const source of a memcpy. (This is a bit harder than\n> for md5_calc, above, since the third argument here is of type Item, which\n> is itself a typedef to Pointer, and there exists no analogous ConstPointer\n> or ConstItem definition in the sources.)\n\nYeah, typedefs to a pointer are a poor fit for this. We have been\ngetting rid of them from time to time, but I don't know a general solution.\n\n> Your change:\n> \n> -\t\t\tXLogRegisterBufData(0, (char *) newtup, newsz);\n> +\t\t\tXLogRegisterBufData(0, (char *) unconstify(BrinTuple *, newtup), newsz);\n> \n> Perhaps the third argument to XLogRegisterBufData can be changed to const,\n> with the extra work of changing XLogRecData's data field to const. 
I'm not\n> sure about the amount of code churn this would entail.\n\nIIRC, the XLogRegister stuff is a web of lies with respect to constness.\n Resolving this properly is tricky.\n\n> Your change:\n> \n> -\t\tresult = pg_be_scram_exchange(scram_opaq, input, inputlen,\n> +\t\tresult = pg_be_scram_exchange(scram_opaq, unconstify(char *, input), inputlen,\n> \t\t\t\t\t\t\t\t\t &output, &outputlen,\n> \t\t\t\t\t\t\t\t\t logdetail);\n> \n> pg_be_scram_exchange passes the second argument into two functions,\n> read_client_first_message and read_client_final_message, both of which take\n> it as a non-const argument but immediately pstrdup it such that it might\n> as well have been const. Perhaps we should just clean up this mess rather\n> than unconstifying.\n\nAlso fixed!\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 21:07:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: more unconstify use"
}
] |
[
{
"msg_contents": "I'm wondering whether we should phase out the use of the ossp-uuid\nlibrary? (not the uuid-ossp extension) We have had preferred\nalternatives for a while now, so it shouldn't be necessary to use this\nanymore, except perhaps in some marginal circumstances? As we know,\nossp-uuid isn't maintained anymore, and a few weeks ago the website was\ngone altogether, but it seems to be back now.\n\nI suggest we declare it deprecated in PG12 and remove it altogether in PG13.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 09:26:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "phase out ossp-uuid?"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 8:26 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> I'm wondering whether we should phase out the use of the ossp-uuid\n> library? (not the uuid-ossp extension) We have had preferred\n> alternatives for a while now, so it shouldn't be necessary to use this\n> anymore, except perhaps in some marginal circumstances? As we know,\n> ossp-uuid isn't maintained anymore, and a few weeks ago the website was\n> gone altogether, but it seems to be back now.\n>\n> I suggest we declare it deprecated in PG12 and remove it altogether in PG13.\n\nMuch as I'd like to get rid of it, we don't have an alternative for\nWindows do we? The docs for 11 imply it's required for UUID support\n(though the wording isn't exactly clear, saying it's required for\nUUID-OSSP support!):\nhttps://www.postgresql.org/docs/11/install-windows-full.html#id-1.6.4.8.8\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 7 Feb 2019 09:03:06 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: phase out ossp-uuid?"
},
{
"msg_contents": "On 7/2/19 10:03, Dave Page wrote:\n> On Thu, Feb 7, 2019 at 8:26 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> I'm wondering whether we should phase out the use of the ossp-uuid\n>> library? (not the uuid-ossp extension)\n\nHmm... FWIW, just get it in core altogether? Seems small and useful \nenough... if it carries the opfamily with it, UUID would become really \nconvenient to use for distributed applications.\n\n(being using that functionality for a while, already)\n\n>> We have had preferred\n>> alternatives for a while now, so it shouldn't be necessary to use this\n>> anymore, except perhaps in some marginal circumstances? As we know,\n>> ossp-uuid isn't maintained anymore, and a few weeks ago the website was\n>> gone altogether, but it seems to be back now.\n>>\n>> I suggest we declare it deprecated in PG12 and remove it altogether in PG13.\n> Much as I'd like to get rid of it, we don't have an alternative for\n> Windows do we? The docs for 11 imply it's required for UUID support\n> (though the wording isn't exactly clear, saying it's required for\n> UUID-OSSP support!):\n> https://www.postgresql.org/docs/11/install-windows-full.html#id-1.6.4.8.8\n\nAFAIR, Windows has its own DCE/v4 UUID generation support. UUID v5 can \nbe generated using built-in crypto hashes. v1 are the ones (potentially) \nmore cumbersome to generate.... plus they are the least useful IMHO.\n\n\nJust my .02€\n\nThanks,\n\n / J.L.\n\n\n\n",
"msg_date": "Thu, 7 Feb 2019 11:37:22 +0100",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": false,
"msg_subject": "Re: phase out ossp-uuid?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I'm wondering whether we should phase out the use of the ossp-uuid\n> library?\n\nIt's not really costing us any maintenance effort that I've noticed,\nso I vote no. Whether or not there are any people who can't use\nanother alternative, it would be more work to rip out that code than\nto (continue to) ignore it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 07 Feb 2019 09:49:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: phase out ossp-uuid?"
},
{
"msg_contents": "On 07/02/2019 10:03, Dave Page wrote:\n> Much as I'd like to get rid of it, we don't have an alternative for\n> Windows do we?\n\nYes, that appears to be a significant problem, so we'll have to keep it\nfor the time being.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 22:58:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: phase out ossp-uuid?"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-07 09:03:06 +0000, Dave Page wrote:\n> On Thu, Feb 7, 2019 at 8:26 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > I'm wondering whether we should phase out the use of the ossp-uuid\n> > library? (not the uuid-ossp extension) We have had preferred\n> > alternatives for a while now, so it shouldn't be necessary to use this\n> > anymore, except perhaps in some marginal circumstances? As we know,\n> > ossp-uuid isn't maintained anymore, and a few weeks ago the website was\n> > gone altogether, but it seems to be back now.\n> >\n> > I suggest we declare it deprecated in PG12 and remove it altogether in PG13.\n> \n> Much as I'd like to get rid of it, we don't have an alternative for\n> Windows do we? The docs for 11 imply it's required for UUID support\n> (though the wording isn't exactly clear, saying it's required for\n> UUID-OSSP support!):\n> https://www.postgresql.org/docs/11/install-windows-full.html#id-1.6.4.8.8\n\nGiven that we've now integrated strong crypto support, and are relying\non it for security (scram), perhaps we should just add a core uuidv4?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 7 Feb 2019 14:03:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: phase out ossp-uuid?"
},
{
"msg_contents": "On 7/2/19 23:03, Andres Freund wrote:\n> Hi,\n>\n> On 2019-02-07 09:03:06 +0000, Dave Page wrote:\n>> On Thu, Feb 7, 2019 at 8:26 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> I suggest we declare it deprecated in PG12 and remove it altogether in PG13.\n>>>\n>>> Much as I'd like to get rid of it, we don't have an alternative for\n>>> Windows do we? The docs for 11 imply it's required for UUID support\n>>> (though the wording isn't exactly clear, saying it's required for\n>>> UUID-OSSP support!):\n>>> https://www.postgresql.org/docs/11/install-windows-full.html#id-1.6.4.8.8\n> Given that we've now integrated strong crypto support, and are relying\n> on it for security (scram), perhaps we should just add a core uuidv4?\n\nThis. But just make it \"uuid\" and so both parties will get their own:\n\nOn 7/2/19 11:37, I wrote:\n\n> AFAIR, Windows has its own DCE/v4 UUID generation support if needed.... \n> UUID v5 can be generated using built-in crypto hashes. v1 are the ones \n> (potentially) more cumbersome to generate.... plus they are the least \n> useful IMHO.\n\n- UUIDv3 <- with built-in crypto hashes\n\n- UUIDv4 <- with built-in crypto random\n\n- UUIDv5 <- with built-in crypto hashes\n\nOnly v1 remain. For those use cases one could use ossp-uuid.... so what \nabout:\n\n* Rename the extension's type to ossp_uuid or the like.\n\n* Have uuid in-core (we already got the platform independent required \ncrypto, so I wouldn't expect portability issues)\n\nI reckon that most use cases should be either UUID v4 or UUID v5 these \ndays. 
For those using v1 UUIDs, either implement v1 in core or provide \nsome fallback/migration path; This would only affect the \n\"uuid_generate_v1()\" and \"uuid_generate_v1mc()\" calls AFAICS.\n\n\nMoreover, the documentation (as far back as 9.4) already states:\n\n\"If you only need randomly-generated (version 4) UUIDs, consider using \nthe |gen_random_uuid()| function from the pgcrypto \n<https://www.postgresql.org/docs/9.4/pgcrypto.html> module instead.\"\n\nSo just importing the datatype into core would go a long way towards \nremoving the dependency for most users.\n\n\nThanks,\n\n / J.L.\n\n",
"msg_date": "Fri, 8 Feb 2019 11:27:31 +0100",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": false,
"msg_subject": "Re: phase out ossp-uuid?"
}
]
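The thread above rests on the claim that v3/v4/v5 UUIDs need nothing beyond a crypto hash and a CSPRNG (only time-based v1/v1mc needs more). As an illustrative sketch of that claim — not PostgreSQL's implementation — here are the v4 and v5 constructions from RFC 4122, cross-checked against Python's stdlib `uuid` module:

```python
import hashlib
import os
import uuid

def uuid4_from_random() -> uuid.UUID:
    # Version 4: 122 random bits from a CSPRNG plus fixed version/variant bits.
    b = bytearray(os.urandom(16))
    b[6] = (b[6] & 0x0F) | 0x40   # set version nibble to 4
    b[8] = (b[8] & 0x3F) | 0x80   # set RFC 4122 variant bits
    return uuid.UUID(bytes=bytes(b))

def uuid5_from_hash(namespace: uuid.UUID, name: str) -> uuid.UUID:
    # Version 5: SHA-1 of (namespace bytes || name), truncated to 128 bits,
    # with the same version/variant bit fixing. (v3 is identical with MD5.)
    digest = hashlib.sha1(namespace.bytes + name.encode("utf-8")).digest()
    b = bytearray(digest[:16])
    b[6] = (b[6] & 0x0F) | 0x50   # set version nibble to 5
    b[8] = (b[8] & 0x3F) | 0x80
    return uuid.UUID(bytes=bytes(b))
```

Both helpers reproduce the stdlib's `uuid.uuid4()`/`uuid.uuid5()` behavior, which is the point being made in the thread: a server that already ships strong hashes and strong random (as PostgreSQL does for SCRAM) has all the ingredients for these versions in core.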
[
{
"msg_contents": "As discussed in [0], should we restrict access to pg_stat_ssl to\nsuperusers (and an appropriate pg_ role)?\n\nIf so, is there anything in that view that should be made available to\nnon-superusers? If not, then we could perhaps do this via a simple\npermission change instead of going the route of blanking out individual\ncolumns.\n\n\n[0]:\n<https://www.postgresql.org/message-id/flat/398754d8-6bb5-c5cf-e7b8-22e5f0983caf@2ndquadrant.com>\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 09:30:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On Thu, Feb 07, 2019 at 09:30:38AM +0100, Peter Eisentraut wrote:\n> If so, is there anything in that view that should be made available to\n> non-superusers? If not, then we could perhaps do this via a simple\n> permission change instead of going the route of blanking out individual\n> columns.\n\nHm. It looks sensible to move to a per-permission approach for that\nview. Now, pg_stat_get_activity() is not really actually restricted,\nand would still return the information on direct calls, so the idea\nwould be to split the SSL-related data into its own function?\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 15:40:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On 2019-02-12 07:40, Michael Paquier wrote:\n> On Thu, Feb 07, 2019 at 09:30:38AM +0100, Peter Eisentraut wrote:\n>> If so, is there anything in that view that should be made available to\n>> non-superusers? If not, then we could perhaps do this via a simple\n>> permission change instead of going the route of blanking out individual\n>> columns.\n> \n> Hm. It looks sensible to move to a per-permission approach for that\n> view. Now, pg_stat_get_activity() is not really actually restricted,\n> and would still return the information on direct calls, so the idea\n> would be to split the SSL-related data into its own function?\n\nWe could remove default privileges from pg_stat_get_activity(). Would\nthat be a problem?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 14:04:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 02:04:59PM +0100, Peter Eisentraut wrote:\n> We could remove default privileges from pg_stat_get_activity(). Would\n> that be a problem?\n\nI don't think so, still I am wondering about the impact that this\ncould have for monitoring tools calling it directly as we document\nit.. \n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 12:58:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On 2019-02-18 04:58, Michael Paquier wrote:\n> On Fri, Feb 15, 2019 at 02:04:59PM +0100, Peter Eisentraut wrote:\n>> We could remove default privileges from pg_stat_get_activity(). Would\n>> that be a problem?\n> \n> I don't think so, still I am wondering about the impact that this\n> could have for monitoring tools calling it directly as we document\n> it.. \n\nActually, this approach isn't going to work anyway, because function\npermissions in a view are checked as the current user, not the view owner.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 16:57:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 3:30 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> As discussed in [0], should we restrict access to pg_stat_ssl to\n> superusers (and an appropriate pg_ role)?\n>\n> If so, is there anything in that view that should be made available to\n> non-superusers? If not, then we could perhaps do this via a simple\n> permission change instead of going the route of blanking out individual\n> columns.\n\nShouldn't unprivileged users be able to see their own entries, or\nentries for roles which they could assume via SET ROLE?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 12:44:51 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On 2019-02-19 16:57, Peter Eisentraut wrote:\n> On 2019-02-18 04:58, Michael Paquier wrote:\n>> On Fri, Feb 15, 2019 at 02:04:59PM +0100, Peter Eisentraut wrote:\n>>> We could remove default privileges from pg_stat_get_activity(). Would\n>>> that be a problem?\n>>\n>> I don't think so, still I am wondering about the impact that this\n>> could have for monitoring tools calling it directly as we document\n>> it.. \n> \n> Actually, this approach isn't going to work anyway, because function\n> permissions in a view are checked as the current user, not the view owner.\n\nSo here is a patch doing it the \"normal\" way of nulling out all the rows\nthe user shouldn't see.\n\nI haven't found any documentation of these access restrictions in the\ncontext of pg_stat_activity. Is there something that I'm not seeing or\nsomething that should be added?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 20 Feb 2019 11:51:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 11:51:08AM +0100, Peter Eisentraut wrote:\n> So here is a patch doing it the \"normal\" way of nulling out all the rows\n> the user shouldn't see.\n\nThat looks fine to me.\n\n> I haven't found any documentation of these access restrictions in the\n> context of pg_stat_activity. Is there something that I'm not seeing or\n> something that should be added?\n\nYes, there is nothing. I agree that it would be good to mention that\nsome fields are set to NULL and only visible to superusers or members of\npg_read_all_stats with a note for pg_stat_activity and\npg_stat_get_activity(). Here is an idea:\n\"Backend and SSL information are restricted to superusers and members\nof the <literal>pg_read_all_stats</literal> role. Access may be\ngranted to others using <command>GRANT</command>.\n\nGetting that back-patched would be nice where pg_read_all_stats was\nintroduced.\n--\nMichael",
"msg_date": "Thu, 21 Feb 2019 17:11:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
},
{
"msg_contents": "On 2019-02-21 09:11, Michael Paquier wrote:\n> On Wed, Feb 20, 2019 at 11:51:08AM +0100, Peter Eisentraut wrote:\n>> So here is a patch doing it the \"normal\" way of nulling out all the rows\n>> the user shouldn't see.\n> \n> That looks fine to me.\n\nCommitted, thanks.\n\n>> I haven't found any documentation of these access restrictions in the\n>> context of pg_stat_activity. Is there something that I'm not seeing or\n>> something that should be added?\n> \n> Yes, there is nothing. I agree that it would be good to mention that\n> some fields are set to NULL and only visible to superusers or members of\n> pg_read_all_stats with a note for pg_stat_activity and\n> pg_stat_get_activity(). Here is an idea:\n> \"Backend and SSL information are restricted to superusers and members\n> of the <literal>pg_read_all_stats</literal> role. Access may be\n> granted to others using <command>GRANT</command>.\n> \n> Getting that back-patched would be nice where pg_read_all_stats was\n> introduced.\n\nI added something. I don't know if it's worth backpatching. This\nsituation goes back all the way to when pg_stat_activity was added.\npg_read_all_stats does have documentation, it's just not linked from\neverywhere.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 21 Feb 2019 19:56:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: restrict pg_stat_ssl to superuser?"
}
]
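The approach the thread converges on — nulling out sensitive fields for unprivileged viewers rather than revoking the function or hiding whole rows — can be modeled in a few lines. The field names and the exact visibility rule below are illustrative assumptions for this sketch (check the committed patch for the real column list and semantics):

```python
# Illustrative subset of SSL-related fields, not the actual catalog columns.
SENSITIVE = ("ssl_version", "ssl_cipher", "client_dn")

def filter_stat_row(row: dict, viewer: str, viewer_roles: list) -> dict:
    """Toy model of the per-row filtering discussed above: superusers,
    pg_read_all_stats members, and the session's own user see everything;
    everyone else gets the sensitive fields replaced with NULL (None)."""
    privileged = ("superuser" in viewer_roles
                  or "pg_read_all_stats" in viewer_roles
                  or row["usename"] == viewer)
    if privileged:
        return dict(row)
    return {k: (None if k in SENSITIVE else v) for k, v in row.items()}
```

The design point this captures is Peter's observation upthread: because function permissions in a view are checked as the current user, the filtering has to happen inside the row-producing function itself, not via GRANT/REVOKE on the view or function.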
[
{
"msg_contents": "This was suggested in\n\nhttps://www.postgresql.org/message-id/11766.1545942853%40sss.pgh.pa.us\n\nI also adjusted usage() to match. There might be other scripts that\ncould use this treatment, but I figure this is a good start.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 7 Feb 2019 10:09:08 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "use Getopt::Long for catalog scripts"
},
{
"msg_contents": "On Thu, Feb 07, 2019 at 10:09:08AM +0100, John Naylor wrote:\n> This was suggested in\n> \n> https://www.postgresql.org/message-id/11766.1545942853%40sss.pgh.pa.us\n> \n> I also adjusted usage() to match. There might be other scripts that\n> could use this treatment, but I figure this is a good start.\n> \n> -- \n> John Naylor https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n> diff --git a/src/backend/catalog/Makefile b/src/backend/catalog/Makefile\n> index d9f92ba701..8c4bd0c7ae 100644\n> --- a/src/backend/catalog/Makefile\n> +++ b/src/backend/catalog/Makefile\n> @@ -89,7 +89,8 @@ generated-header-symlinks: $(top_builddir)/src/include/catalog/header-stamp\n> # version number when it changes.\n> bki-stamp: genbki.pl Catalog.pm $(POSTGRES_BKI_SRCS) $(POSTGRES_BKI_DATA) $(top_srcdir)/configure.in\n> \t$(PERL) -I $(catalogdir) $< \\\n> -\t\t-I $(top_srcdir)/src/include/ --set-version=$(MAJORVERSION) \\\n> +\t\t--include-path=$(top_srcdir)/src/include/ \\\n> +\t\t--set-version=$(MAJORVERSION) \\\n> \t\t$(POSTGRES_BKI_SRCS)\n> \ttouch $@\n> \n> diff --git a/src/backend/catalog/genbki.pl b/src/backend/catalog/genbki.pl\n> index be81094ffb..48ba69cd0a 100644\n> --- a/src/backend/catalog/genbki.pl\n> +++ b/src/backend/catalog/genbki.pl\n> @@ -16,6 +16,7 @@\n> \n> use strict;\n> use warnings;\n> +use Getopt::Long;\n> \n> use File::Basename;\n> use File::Spec;\n> @@ -23,41 +24,20 @@ BEGIN { use lib File::Spec->rel2abs(dirname(__FILE__)); }\n> \n> use Catalog;\n> \n> -my @input_files;\n> my $output_path = '';\n> my $major_version;\n> my $include_path;\n> \n> -# Process command line switches.\n> -while (@ARGV)\n> -{\n> -\tmy $arg = shift @ARGV;\n> -\tif ($arg !~ /^-/)\n> -\t{\n> -\t\tpush @input_files, $arg;\n> -\t}\n> -\telsif ($arg =~ /^-I/)\n> -\t{\n> -\t\t$include_path = length($arg) > 2 ? 
substr($arg, 2) : shift @ARGV;\n> -\t}\n> -\telsif ($arg =~ /^-o/)\n> -\t{\n> -\t\t$output_path = length($arg) > 2 ? substr($arg, 2) : shift @ARGV;\n> -\t}\n> -\telsif ($arg =~ /^--set-version=(.*)$/)\n> -\t{\n> -\t\t$major_version = $1;\n> -\t\tdie \"Invalid version string.\\n\"\n> -\t\t if !($major_version =~ /^\\d+$/);\n> -\t}\n> -\telse\n> -\t{\n> -\t\tusage();\n> -\t}\n> -}\n> +GetOptions(\n> +\t'output:s' => \\$output_path,\n> +\t'include-path:s' => \\$include_path,\n> +\t'set-version:s' => \\$major_version) || usage();\n> +\n> +die \"Invalid version string.\\n\"\n> + if !($major_version =~ /^\\d+$/);\n\nMaybe this would be clearer as:\n\ndie \"Invalid version string.\\n\"\n unless $major_version =~ /^\\d+$/;\n\n> # Sanity check arguments.\n> -die \"No input files.\\n\" if !@input_files;\n> +die \"No input files.\\n\" if !@ARGV;\n> die \"--set-version must be specified.\\n\" if !defined $major_version;\n> die \"-I, the header include path, must be specified.\\n\" if !$include_path;\n\nSimilarly,\n\ndie \"No input files.\\n\" unless @ARGV;\ndie \"--set-version must be specified.\\n\" unless defined $major_version;\ndie \"-I, the header include path, must be specified.\\n\" unless $include_path;\n\n> \n> @@ -79,7 +59,7 @@ my @toast_decls;\n> my @index_decls;\n> my %oidcounts;\n> \n> -foreach my $header (@input_files)\n> +foreach my $header (@ARGV)\n> {\n> \t$header =~ /(.+)\\.h$/\n> \t or die \"Input files need to be header files.\\n\";\n> @@ -917,11 +897,11 @@ sub form_pg_type_symbol\n> sub usage\n> {\n> \tdie <<EOM;\n> -Usage: genbki.pl [options] header...\n> +Usage: perl -I [directory of Catalog.pm] genbki.pl [--output/-o <path>] [--include-path/-i include path] header...\n> \n> Options:\n> - -I include path\n> - -o output path\n> + --output Output directory (default '.')\n> + --include-path Include path for source directory\n> --set-version PostgreSQL version number for initdb cross-check\n> \n> genbki.pl generates BKI files and symbol definition\n> diff 
--git a/src/backend/utils/Gen_fmgrtab.pl b/src/backend/utils/Gen_fmgrtab.pl\n> index f17a78ebcd..9aa8714840 100644\n> --- a/src/backend/utils/Gen_fmgrtab.pl\n> +++ b/src/backend/utils/Gen_fmgrtab.pl\n> @@ -18,32 +18,14 @@ use Catalog;\n> \n> use strict;\n> use warnings;\n> +use Getopt::Long;\n> \n> -# Collect arguments\n> -my @input_files;\n> my $output_path = '';\n> my $include_path;\n> \n> -while (@ARGV)\n> -{\n> -\tmy $arg = shift @ARGV;\n> -\tif ($arg !~ /^-/)\n> -\t{\n> -\t\tpush @input_files, $arg;\n> -\t}\n> -\telsif ($arg =~ /^-o/)\n> -\t{\n> -\t\t$output_path = length($arg) > 2 ? substr($arg, 2) : shift @ARGV;\n> -\t}\n> -\telsif ($arg =~ /^-I/)\n> -\t{\n> -\t\t$include_path = length($arg) > 2 ? substr($arg, 2) : shift @ARGV;\n> -\t}\n> -\telse\n> -\t{\n> -\t\tusage();\n> -\t}\n> -}\n> +GetOptions(\n> +\t'output:s' => \\$output_path,\n> +\t'include:s' => \\$include_path) || usage();\n> \n> # Make sure output_path ends in a slash.\n> if ($output_path ne '' && substr($output_path, -1) ne '/')\n> @@ -52,7 +34,7 @@ if ($output_path ne '' && substr($output_path, -1) ne '/')\n> }\n> \n> # Sanity check arguments.\n\nAnd here:\n\n> -die \"No input files.\\n\" if !@input_files;\n> +die \"No input files.\\n\" if !@ARGV;\n> die \"No include path; you must specify -I.\\n\" if !$include_path;\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Thu, 7 Feb 2019 18:53:38 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
},
{
"msg_contents": "Why is this script talking to me?\n\nOn 2019-Feb-07, David Fetter wrote:\n\n> Similarly,\n> \n> die \"-I, the header include path, must be specified.\\n\" unless $include_path;\n\nBut why must thee, oh mighty header include path, be specified?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 7 Feb 2019 15:53:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 6:53 PM David Fetter <david@fetter.org> wrote:\n> [some style suggestions]\n\nI've included these in v2, and also some additional tweaks for consistency.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 8 Feb 2019 08:27:12 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
},
{
"msg_contents": "On 2019-Feb-08, John Naylor wrote:\n\n> On Thu, Feb 7, 2019 at 6:53 PM David Fetter <david@fetter.org> wrote:\n> > [some style suggestions]\n> \n> I've included these in v2, and also some additional tweaks for consistency.\n\nI noticed that because we have this line in the scripts,\n BEGIN { use lib File::Spec->rel2abs(dirname(__FILE__)); }\nthe -I switch to Perl is no longer needed in the makefiles.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 12 Feb 2019 12:11:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
},
{
"msg_contents": "On 2019-Feb-12, Alvaro Herrera wrote:\n\n> On 2019-Feb-08, John Naylor wrote:\n> \n> > On Thu, Feb 7, 2019 at 6:53 PM David Fetter <david@fetter.org> wrote:\n> > > [some style suggestions]\n> > \n> > I've included these in v2, and also some additional tweaks for consistency.\n> \n> I noticed that because we have this line in the scripts,\n> BEGIN { use lib File::Spec->rel2abs(dirname(__FILE__)); }\n> the -I switch to Perl is no longer needed in the makefiles.\n\nPushed, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 12 Feb 2019 12:26:22 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
},
{
"msg_contents": "On 2/12/19, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Pushed, thanks.\n\nThank you.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 12 Feb 2019 17:21:33 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use Getopt::Long for catalog scripts"
}
]
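For comparison only — the patch above is Perl, using `Getopt::Long` — the same consolidation of hand-rolled `@ARGV` parsing into a declarative option spec looks like this in Python's `argparse`. The option names follow the `genbki.pl` discussion in the thread; the `argparse` mapping itself is an illustration, not part of the actual PostgreSQL build:

```python
import argparse

def parse_genbki_args(argv):
    # Loose analogue of the GetOptions() spec from the patch:
    #   'output:s', 'include-path:s', 'set-version:s'
    p = argparse.ArgumentParser(prog="genbki.pl")
    p.add_argument("--output", "-o", default="")
    p.add_argument("--include-path", "-I")
    p.add_argument("--set-version")
    p.add_argument("headers", nargs="*")
    ns = p.parse_args(argv)
    # Mirror the script's die-based sanity checks.
    if ns.set_version is None or not ns.set_version.isdigit():
        raise SystemExit("Invalid version string.")
    if not ns.headers:
        raise SystemExit("No input files.")
    if not ns.include_path:
        raise SystemExit("--include-path must be specified.")
    return ns
```

As in the Perl version, the win is that option definitions, short/long aliases, and positional collection live in one declarative spec instead of a manual `while (@ARGV)` loop.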
[
{
"msg_contents": "Hi Hackers,\n\nDoes increase in Transaction commits per second means good query\nperformance?\nWhy I asked this question is, many monitoring tools display that number of\ntransactions\nper second in the dashboard (including pgadmin).\n\nDuring the testing of bunch of queries with different set of\nconfigurations, I observed that\nTPS of some particular configuration has increased compared to default\nserver configuration, but the overall query execution performance is\ndecreased after comparing all queries run time.\n\nThis is because of larger xact_commit value than default configuration.\nWith the changed server configuration, that leads to generate more parallel\nworkers and every parallel worker operation is treated as an extra commit,\nbecause of this reason, the total number of commits increased, but the\noverall query performance is decreased.\n\nIs there any relation of transaction commits to performance?\n\nIs there any specific reason to consider the parallel worker activity also\nas a transaction commit? Especially in my observation, if we didn't\nconsider the parallel worker activity as separate commits, the test doesn't\nshow an increase in transaction commits.\n\nSuggestions?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 7 Feb 2019 21:31:26 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Transaction commits VS Transaction commits (with parallel) VS query\n mean time"
},
{
"msg_contents": "On Thu, Feb 7, 2019 at 9:31 PM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n> Hi Hackers,\n>\n> Does increase in Transaction commits per second means good query\n> performance?\n> Why I asked this question is, many monitoring tools display that number of\n> transactions\n> per second in the dashboard (including pgadmin).\n>\n> During the testing of bunch of queries with different set of\n> configurations, I observed that\n> TPS of some particular configuration has increased compared to default\n> server configuration, but the overall query execution performance is\n> decreased after comparing all queries run time.\n>\n> This is because of larger xact_commit value than default configuration.\n> With the changed server configuration, that leads to generate more parallel\n> workers and every parallel worker operation is treated as an extra commit,\n> because of this reason, the total number of commits increased, but the\n> overall query performance is decreased.\n>\n> Is there any relation of transaction commits to performance?\n>\n> Is there any specific reason to consider the parallel worker activity also\n> as a transaction commit? Especially in my observation, if we didn't\n> consider the parallel worker activity as separate commits, the test doesn't\n> show an increase in transaction commits.\n>\n\nThe following statements shows the increase in the xact_commit value with\nparallel workers. I can understand that workers updating the seq_scan stats\nas they performed the seq scan. Is the same applied to parallel worker\ntransaction\ncommits also?\n\nThe transaction commit counter is updated with all the internal operations\nlike\nautovacuum, checkpoint and etc. 
The increase in counters with these\noperations\nare not that visible compared to parallel workers.\n\nThe spike of TPS with parallel workers is fine to consider?\n\npostgres=# select relname, seq_scan from pg_stat_user_tables where relname\n= 'tbl';\n relname | seq_scan\n---------+----------\n tbl | 16\n(1 row)\n\npostgres=# begin;\nBEGIN\npostgres=# select xact_commit from pg_stat_database where datname =\n'postgres';\n xact_commit\n-------------\n 524\n(1 row)\n\npostgres=# explain analyze select * from tbl where f1 = 1000;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n Gather (cost=0.00..3645.83 rows=1 width=214) (actual time=1.703..79.736\nrows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on tbl (cost=0.00..3645.83 rows=1 width=214)\n(actual time=28.180..51.672 rows=0 loops=3)\n Filter: (f1 = 1000)\n Rows Removed by Filter: 33333\n Planning Time: 0.090 ms\n Execution Time: 79.776 ms\n(8 rows)\n\npostgres=# commit;\nCOMMIT\npostgres=# select xact_commit from pg_stat_database where datname =\n'postgres';\n xact_commit\n-------------\n 531\n(1 row)\n\npostgres=# select relname, seq_scan from pg_stat_user_tables where relname\n= 'tbl';\n relname | seq_scan\n---------+----------\n tbl | 19\n(1 row)\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 8 Feb 2019 12:24:58 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 6:55 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>\n> On Thu, Feb 7, 2019 at 9:31 PM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>>\n>>\n>> This is because of larger xact_commit value than default configuration. With the changed server configuration, that leads to generate more parallel workers and every parallel worker operation is treated as an extra commit, because of this reason, the total number of commits increased, but the overall query performance is decreased.\n>>\n>> Is there any relation of transaction commits to performance?\n>>\n>> Is there any specific reason to consider the parallel worker activity also as a transaction commit? Especially in my observation, if we didn't consider the parallel worker activity as separate commits, the test doesn't show an increase in transaction commits.\n>\n>\n> The following statements shows the increase in the xact_commit value with\n> parallel workers. I can understand that workers updating the seq_scan stats\n> as they performed the seq scan.\n>\n\nYeah, that seems okay, however, one can say that for the scan they\nwant to consider it as a single scan even if part of the scan is\naccomplished by workers or may be a separate counter for parallel\nworkers scan.\n\n> Is the same applied to parallel worker transaction\n> commits also?\n>\n\nI don't think so. It seems to me that we should consider it as a\nsingle transaction. Do you want to do the leg work for this and try\nto come up with a patch? On a quick look, I think we might want to\nchange AtEOXact_PgStat so that the commits for parallel workers are\nnot considered. I think the caller has that information.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 9 Feb 2019 10:37:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Sat, Feb 9, 2019 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Feb 8, 2019 at 6:55 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n> >\n> > On Thu, Feb 7, 2019 at 9:31 PM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n> >>\n> >>\n> >> This is because of larger xact_commit value than default configuration.\n> With the changed server configuration, that leads to generate more parallel\n> workers and every parallel worker operation is treated as an extra commit,\n> because of this reason, the total number of commits increased, but the\n> overall query performance is decreased.\n> >>\n> >> Is there any relation of transaction commits to performance?\n> >>\n> >> Is there any specific reason to consider the parallel worker activity\n> also as a transaction commit? Especially in my observation, if we didn't\n> consider the parallel worker activity as separate commits, the test doesn't\n> show an increase in transaction commits.\n> >\n> >\n> > The following statements shows the increase in the xact_commit value with\n> > parallel workers. I can understand that workers updating the seq_scan\n> stats\n> > as they performed the seq scan.\n> >\n>\n\nThanks for your opinion.\n\n\n> Yeah, that seems okay, however, one can say that for the scan they\n> want to consider it as a single scan even if part of the scan is\n> accomplished by workers or may be a separate counter for parallel\n> workers scan.\n>\n\nOK.\n\n\n> > Is the same applied to parallel worker transaction\n> > commits also?\n> >\n>\n> I don't think so. It seems to me that we should consider it as a\n> single transaction. Do you want to do the leg work for this and try\n> to come up with a patch? On a quick look, I think we might want to\n> change AtEOXact_PgStat so that the commits for parallel workers are\n> not considered. 
I think the caller has that information.\n>\n\nI try to fix it by adding a check for parallel worker or not and based on it\ncount them into stats. Patch attached.\n\nWith this patch, currently it doesn't count parallel worker transactions,\nand\nrest of the stats like seqscan and etc are still get counted. IMO they\nstill\nmay need to be counted as those stats represent the number of tuples\nreturned and etc.\n\nComments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Sun, 10 Feb 2019 16:24:31 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "Hi Haribabu,\n\nThe latest patch fails while applying header files part. Kindly rebase.\n\nThe patch looks good to me. However, I wonder what are the other scenarios\nwhere xact_commit is incremented because even if I commit a single\ntransaction with your patch applied the increment in xact_commit is > 1. As\nyou mentioned upthread, need to check what internal operation resulted in\nthe increase. But the increase in xact_commit is surely lesser with the\npatch.\n\npostgres=# BEGIN;\nBEGIN\npostgres=# select xact_commit from pg_stat_database where datname =\n'postgres';\n xact_commit\n-------------\n 158\n(1 row)\n\npostgres=# explain analyze select * from foo where i = 1000;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..136417.85 rows=1 width=37) (actual\ntime=4.596..3792.710 rows=1 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on foo (cost=0.00..135417.75 rows=1 width=37)\n(actual time=2448.038..3706.009 rows=0 loops=3)\n Filter: (i = 1000)\n Rows Removed by Filter: 3333333\n Planning Time: 0.353 ms\n Execution Time: 3793.572 ms\n(8 rows)\n\npostgres=# commit;\nCOMMIT\npostgres=# select xact_commit from pg_stat_database where datname =\n'postgres';\n xact_commit\n-------------\n 161\n(1 row)\n\n-- \nRahila Syed\nPerformance Engineer\n2ndQuadrant\nhttp://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 19 Mar 2019 09:02:48 +0530",
"msg_from": "Rahila Syed <rahila.syed@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "Hi Hari-san,\r\n\r\nOn Sunday, February 10, 2019 2:25 PM (GMT+9), Haribabu Kommi wrote:\r\n> I try to fix it by adding a check for parallel worker or not and based on it\r\n> count them into stats. Patch attached.\r\n>\r\n> With this patch, currently it doesn't count parallel worker transactions, and\r\n> rest of the stats like seqscan and etc are still get counted. IMO they still\r\n> may need to be counted as those stats represent the number of tuples\r\n> returned and etc.\r\n>\r\n> Comments?\r\n\r\nI took a look at your patch, and it’s pretty straightforward.\r\nHowever, currently the patch does not apply, so I reattached an updated one\r\nto keep the CFbot happy.\r\n\r\nThe previous patch also had a missing header to detect parallel workers\r\nso I added that. --> #include \"access/parallel.h\"\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Tue, 19 Mar 2019 07:46:52 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 2:32 PM Rahila Syed <rahila.syed@2ndquadrant.com>\nwrote:\n\n> Hi Haribabu,\n>\n> The latest patch fails while applying header files part. Kindly rebase.\n>\n\nThanks for the review.\n\n\n> The patch looks good to me. However, I wonder what are the other scenarios\n> where xact_commit is incremented because even if I commit a single\n> transaction with your patch applied the increment in xact_commit is > 1. As\n> you mentioned upthread, need to check what internal operation resulted in\n> the increase. But the increase in xact_commit is surely lesser with the\n> patch.\n>\n\nCurrently, the transaction counts are also incremented by background processes\nlike autovacuum and the checkpointer.\nWhen you turn autovacuum off, you can see exactly how much the transaction\ncommit count increases.\n\n\n> postgres=# BEGIN;\n> BEGIN\n> postgres=# select xact_commit from pg_stat_database where datname =\n> 'postgres';\n> xact_commit\n> -------------\n> 158\n> (1 row)\n>\n> postgres=# explain analyze select * from foo where i = 1000;\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------\n> Gather (cost=1000.00..136417.85 rows=1 width=37) (actual\n> time=4.596..3792.710 rows=1 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Seq Scan on foo (cost=0.00..135417.75 rows=1 width=37)\n> (actual time=2448.038..3706.009 rows=0 loops=3)\n> Filter: (i = 1000)\n> Rows Removed by Filter: 3333333\n> Planning Time: 0.353 ms\n> Execution Time: 3793.572 ms\n> (8 rows)\n>\n> postgres=# commit;\n> COMMIT\n> postgres=# select xact_commit from pg_stat_database where datname =\n> 'postgres';\n> xact_commit\n> -------------\n> 161\n> (1 row)\n>\n\nThanks for the test and confirmation.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 19 Mar 2019 18:50:09 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 6:47 PM Jamison, Kirk <k.jamison@jp.fujitsu.com>\nwrote:\n\n> Hi Hari-san,\n>\n>\n>\n> On Sunday, February 10, 2019 2:25 PM (GMT+9), Haribabu Kommi wrote:\n>\n> > I try to fix it by adding a check for parallel worker or not and based\n> on it\n>\n> > count them into stats. Patch attached.\n>\n> >\n>\n> > With this patch, currently it doesn't count parallel worker\n> transactions, and\n>\n> > rest of the stats like seqscan and etc are still get counted. IMO they\n> still\n>\n> > may need to be counted as those stats represent the number of tuples\n>\n> > returned and etc.\n>\n> >\n>\n> > Comments?\n>\n>\n>\n> I took a look at your patch, and it’s pretty straightforward.\n>\n> However, currently the patch does not apply, so I reattached an updated one\n>\n> to keep the CFbot happy.\n>\n>\n>\n> The previous patch also had a missing header to detect parallel workers\n>\n> so I added that. --> #include \"access/parallel.h\"\n>\n\nThanks for the update and review, Kirk.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 19 Mar 2019 18:51:15 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "I tried to confirm the patch with the following configuration:\r\nmax_parallel_workers_per_gather = 2\r\nautovacuum = off\r\n\r\n----\r\npostgres=# BEGIN;\r\nBEGIN\r\npostgres=# select xact_commit from pg_stat_database where datname = 'postgres';\r\nxact_commit\r\n-------------\r\n 118\r\n(1 row)\r\n\r\npostgres=# explain analyze select * from tab where b = 6;\r\n QUERY PLAN\r\n\r\n---------------------------------------------------------------------------------------\r\n------------------------------------\r\nGather (cost=1000.00..102331.58 rows=50000 width=8) (actual time=0.984..1666.932 rows\r\n=99815 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n -> Parallel Seq Scan on tab (cost=0.00..96331.58 rows=20833 width=8) (actual time=\r\n0.130..1587.463 rows=33272 loops=3)\r\n Filter: (b = 6)\r\n Rows Removed by Filter: 3300062\r\nPlanning Time: 0.146 ms\r\nExecution Time: 1682.155 ms\r\n(8 rows)\r\n\r\npostgres=# COMMIT;\r\nCOMMIT\r\npostgres=# select xact_commit from pg_stat_database where datname = 'postgres';\r\nxact_commit\r\n-------------\r\n 119\r\n(1 row)\r\n\r\n---------\r\n\r\n[Conclusion]\r\n\r\nI have confirmed that with the patch (with autovacuum=off,\r\nmax_parallel_workers_per_gather = 2), the increment is only 1.\r\nWithout the patch, I have also confirmed that\r\nthe increment in xact_commit > 1.\r\n\r\nIt seems Amit also confirmed that this should be the case,\r\nso the patch works as intended.\r\n\r\n>> Is the same applied to parallel worker transaction\r\n>> commits also?\r\n>\r\n> I don't think so. 
It seems to me that we should consider it as a\r\n> single transaction.\r\n\r\nHowever, I cannot answer if the consideration of parallel worker activity\r\nas a single xact_commit relates to good performance.\r\nBut ISTM, with this improvement we have more accurate stats for xact_commit.\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Wed, 20 Mar 2019 02:28:25 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Sun, Feb 10, 2019 at 10:54 AM Haribabu Kommi\n<kommi.haribabu@gmail.com> wrote:\n> On Sat, Feb 9, 2019 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> I don't think so. It seems to me that we should consider it as a\n>> single transaction. Do you want to do the leg work for this and try\n>> to come up with a patch? On a quick look, I think we might want to\n>> change AtEOXact_PgStat so that the commits for parallel workers are\n>> not considered. I think the caller has that information.\n>\n>\n> I try to fix it by adding a check for parallel worker or not and based on it\n> count them into stats. Patch attached.\n>\n\n@@ -2057,14 +2058,18 @@ AtEOXact_PgStat(bool isCommit)\n {\n..\n+ /* Don't count parallel worker transactions into stats */\n+ if (!IsParallelWorker())\n+ {\n+ /*\n+ * Count transaction commit or abort. (We use counters, not just bools,\n+ * in case the reporting message isn't sent right away.)\n+ */\n+ if (isCommit)\n+ pgStatXactCommit++;\n+ else\n+ pgStatXactRollback++;\n+ }\n..\n}\n\nI wonder why you haven't used the 'is_parallel_worker' flag from the\ncaller, see CommitTransaction/AbortTransaction? The difference is\nthat if we use that then it will just avoid counting transaction for\nthe parallel work (StartParallelWorkerTransaction), otherwise, it\nmight miss the count of any other transaction we started in the\nparallel worker for some intermediate work (for example, the\ntransaction we started to restore library and guc state).\n\nI think it boils down to whether we want to avoid any transaction that\nis started by a parallel worker or just the transaction which is\nshared among leader and worker. 
It seems to me that currently, we do\ncount all the internal transactions (started with\nStartTransactionCommand and CommitTransactionCommand) in this counter,\nso we should just try to avoid the transaction for which state is\nshared between leader and workers.\n\nAnyone else has any opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 20 Mar 2019 14:08:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 7:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sun, Feb 10, 2019 at 10:54 AM Haribabu Kommi\n> <kommi.haribabu@gmail.com> wrote:\n> > On Sat, Feb 9, 2019 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> I don't think so. It seems to me that we should consider it as a\n> >> single transaction. Do you want to do the leg work for this and try\n> >> to come up with a patch? On a quick look, I think we might want to\n> >> change AtEOXact_PgStat so that the commits for parallel workers are\n> >> not considered. I think the caller has that information.\n> >\n> >\n> > I try to fix it by adding a check for parallel worker or not and based\n> on it\n> > count them into stats. Patch attached.\n> >\n>\n\nThanks for the review.\n\n\n> @@ -2057,14 +2058,18 @@ AtEOXact_PgStat(bool isCommit)\n> {\n> ..\n> + /* Don't count parallel worker transactions into stats */\n> + if (!IsParallelWorker())\n> + {\n> + /*\n> + * Count transaction commit or abort. (We use counters, not just bools,\n> + * in case the reporting message isn't sent right away.)\n> + */\n> + if (isCommit)\n> + pgStatXactCommit++;\n> + else\n> + pgStatXactRollback++;\n> + }\n> ..\n> }\n>\n> I wonder why you haven't used the 'is_parallel_worker' flag from the\n> caller, see CommitTransaction/AbortTransaction? The difference is\n> that if we use that then it will just avoid counting transaction for\n> the parallel work (StartParallelWorkerTransaction), otherwise, it\n> might miss the count of any other transaction we started in the\n> parallel worker for some intermediate work (for example, the\n> transaction we started to restore library and guc state).\n>\n\nI understand your comment. 
The current patch just avoids every transaction\nthat is used for the parallel operation.\n\nWith the use of the 'is_parallel_worker' flag, it avoids counting only the actual\ntransaction that performed the parallel operation (all supporting\ntransactions are counted).\n\n\n> I think it boils down to whether we want to avoid any transaction that\n> is started by a parallel worker or just the transaction which is\n> shared among leader and worker. It seems to me that currently, we do\n> count all the internal transactions (started with\n> StartTransactionCommand and CommitTransactionCommand) in this counter,\n> so we should just try to avoid the transaction for which state is\n> shared between leader and workers.\n>\n> Anyone else has any opinion on this matter?\n>\n\nAs I said in my first mail, tools including pgAdmin4 display the TPS as a\nstat on the dashboard. Those TPS values are derived from the xact_commit\ncolumn of the pg_stat_database view.\n\nWith inclusion of parallel worker transactions, the TPS will be a higher\nnumber, so users may take it as better throughput. This is the case with one\nof our tools.\nThe tool changes the configuration randomly to find the best configuration\nfor the server based on a set of workloads. During our tests, with some\nconfigurations the TPS looked good, but the overall performance was slow\nbecause the system had too few resources to keep up with that configuration.\n\nOpinions?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 21 Mar 2019 17:17:33 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 2:18 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n> With Inclusion of parallel worker transactions, the TPS will be a higher number,\n> thus user may find it as better throughput. This is the case with one of our tool.\n> The tool changes the configuration randomly to find out the best configuration\n> for the server based on a set of workload, during our test, with some configurations,\n> the TPS is so good, but the overall performance is slow as the system is having\n> less resources to keep up with that configuration.\n>\n> Opinions?\n\nWell, I think that might be a sign that the data isn't being used\ncorrectly. I don't have a strong position on what the \"right\" thing\nto do here is, but I think if you want to know how many client\ntransactions are being executed, you should count them on the client\nside, as pgbench does. I agree that it's a little funny to count the\nparallel worker commit as a separate transaction, since in a certain\nsense it is part of the same transaction. But if you do that, then as\nalready noted you have to next decide what to do about other\ntransactions that parallel workers use internally. And then you have\nto decide what to do about other background transactions. And there's\nnot really one \"right\" answer to any of these questions, I don't\nthink. You might want to count different things depending on how the\ninformation is going to be used.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Mar 2019 16:03:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On 2019-Mar-21, Robert Haas wrote:\n\n> I don't have a strong position on what the \"right\" thing\n> to do here is, but I think if you want to know how many client\n> transactions are being executed, you should count them on the client\n> side, as pgbench does.\n\nI think counting on the client side is untenable in practical terms.\nThey may not even know what clients there are, or may not have the\nsource code to add such a feature even if they know how. Plus they\nwould have to aggregate data coming from dozens of different systems?\nI don't think this argument has any wings.\n\nOTOH the reason the server offers stats is so that the user can know\nwhat the server activity is, not to display useless internal state.\nIf a user disables an internal option (such as parallel query) and their\nmonitoring system suddenly starts showing half as many transactions as\nbefore, they're legitimately going to think that the server is broken.\nSuch stats are pointless.\n\nThe use case for those stats seems fairly clear to me: display numbers\nin a monitoring system. You seem to be saying that we're just not going\nto help developers of monitoring systems, and that users have to feed\nthem on their own.\n\n> I agree that it's a little funny to count the parallel worker commit\n> as a separate transaction, since in a certain sense it is part of the\n> same transaction.\n\nRight. So you don't count it. This seems crystal clear. 
Doing the\nother thing is outright wrong, there's no doubt about that.\n\n> But if you do that, then as already noted you have to next decide what\n> to do about other transactions that parallel workers use internally.\n\nYou don't count those ones either.\n\n> And then you have to decide what to do about other background\n> transactions.\n\nNot count them if they're implementation details; otherwise count them.\nFor example, IMO autovacuum transactions should definitely be counted\n(as one transaction, even if they run parallel vacuum).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 01:33:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 12:34 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> > And then you have to decide what to do about other background\n> > transactions.\n>\n> Not count them if they're implementation details; otherwise count them.\n> For example, IMO autovacuum transactions should definitely be counted\n> (as one transaction, even if they run parallel vacuum).\n\nHmm, interesting. autovacuum isn't an implementation detail?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Mar 2019 08:41:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 10:04 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Mar-21, Robert Haas wrote:\n>\n> > I agree that it's a little funny to count the parallel worker commit\n> > as a separate transaction, since in a certain sense it is part of the\n> > same transaction.\n>\n> Right. So you don't count it. This seems crystal clear. Doing the\n> other thing is outright wrong, there's no doubt about that.\n>\n\nAgreed, this is a clear problem and we can just fix this and move\nahead, but it is better if we spend some more time to see if we can\ncome up with something better. We might want to just fix this and\nthen deal with the second part of the problem separately along with\nsome other similar cases.\n\n> > But if you do that, then as already noted you have to next decide what\n> > to do about other transactions that parallel workers use internally.\n>\n> You don't count those ones either.\n>\n\nYeah, we can do that but it is not as clear as the previous one. The\nreason is that we do similarly start/commit transaction for various\noperations like autovacuum, cluster, drop index concurrently, etc.\nSo, it doesn't sound good to me to just change this for parallel query\nand leave others as it is.\n\n> > And then you have to decide what to do about other background\n> > transactions.\n>\n> Not count them if they're implementation details; otherwise count them.\n> For example, IMO autovacuum transactions should definitely be counted\n> (as one transaction, even if they run parallel vacuum).\n>\n\nIt appears to me that the definition of what we want to display in\nxact_commit/xact_rollback (for pg_stat_database view) is slightly\nvague. For ex. does it mean that we will show only transactions\nstarted by the user or does it also includes the other transactions\nstarted internally (which you call implementation detail) to perform\nthe various operations? 
I think users would be more interested in the\ntransactions initiated by them. I think some users might also be\ninterested in the write transactions happened in the system,\nbasically, those have consumed xid.\n\nI think we should decide what we want to do for all other internal\ntransactions that are started before fixing the second part of this\nproblem (\"other transactions that parallel workers use internally\").\n\nThanks, Robert and Alvaro to chime in as this needs some discussion to\ndecide what behavior we want to provide to users.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 23 Mar 2019 09:22:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On 2019-Mar-23, Amit Kapila wrote:\n\n> On Fri, Mar 22, 2019 at 10:04 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n\n> > Not count them if they're implementation details; otherwise count them.\n> > For example, IMO autovacuum transactions should definitely be counted\n> > (as one transaction, even if they run parallel vacuum).\n> \n> It appears to me that the definition of what we want to display in\n> xact_commit/xact_rollback (for pg_stat_database view) is slightly\n> vague. For ex. does it mean that we will show only transactions\n> started by the user or does it also includes the other transactions\n> started internally (which you call implementation detail) to perform\n> the various operations? I think users would be more interested in the\n> transactions initiated by them.\n\nYes, you're probably right.\n\n\n> I think some users might also be interested in the write transactions\n> happened in the system, basically, those have consumed xid.\n\nWell, do they really want to *count* these transactions, or is it enough\nto keep an eye on the \"age\" of some XID column? Other than for XID\nfreezing purposes, I don't see such internal transactions as very\ninteresting.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 23 Mar 2019 01:20:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 9:50 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-23, Amit Kapila wrote:\n>\n> > I think some users might also be interested in the write transactions\n> > happened in the system, basically, those have consumed xid.\n>\n> Well, do they really want to *count* these transactions, or is it enough\n> to keep an eye on the \"age\" of some XID column? Other than for XID\n> freezing purposes, I don't see such internal transactions as very\n> interesting.\n>\n\nThat's what I also had in mind. I think doing anything more than just\nfixing the count for the parallel cooperating transaction by parallel\nworkers doesn't seem intuitive to me. I mean if we want we can commit\nthe fix such that all supporting transactions by parallel worker\nshouldn't be counted, but I am not able to convince myself that that\nis the good fix. Instead, I think rather than fixing that one case we\nshould think more broadly about all the supportive transactions\nhappening in the various operations. Also, as that is a kind of\nbehavior change, we should discuss that as a separate topic.\n\nI know what I am proposing here won't completely fix the problem Hari\nis facing, but I am not sure what else we can do here which doesn't\ncreate some form of inconsistency with other parts of the system.\n\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 23 Mar 2019 17:39:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 11:10 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Sat, Mar 23, 2019 at 9:50 AM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >\n> > On 2019-Mar-23, Amit Kapila wrote:\n> >\n> > > I think some users might also be interested in the write transactions\n> > > happened in the system, basically, those have consumed xid.\n> >\n> > Well, do they really want to *count* these transactions, or is it enough\n> > to keep an eye on the \"age\" of some XID column? Other than for XID\n> > freezing purposes, I don't see such internal transactions as very\n> > interesting.\n> >\n>\n> That's what I also had in mind. I think doing anything more than just\n> fixing the count for the parallel cooperating transaction by parallel\n> workers doesn't seem intuitive to me. I mean if we want we can commit\n> the fix such that all supporting transactions by parallel worker\n> shouldn't be counted, but I am not able to convince myself that that\n> is the good fix. Instead, I think rather than fixing that one case we\n> should think more broadly about all the supportive transactions\n> happening in the various operations. Also, as that is a kind of\n> behavior change, we should discuss that as a separate topic.\n>\n\nThanks to everyone for their opinions and suggestions to improve.\n\nWithout parallel workers, there aren't many internal implementation\nlogic code that causes the stats to bloat. Parallel worker stats are not\njust the transactions, it update other stats also, for eg; seqscan stats\nof a relation. 
I also agree that just fixing it for transactions may not\nbe a proper solution.\n\nUsing the XID data may not give the proper TPS of the instance, as it doesn't\ninvolve the read-only transactions information.\n\nHow about adding two additional columns that provide all the internal\nand background worker transactions?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\n",
"msg_date": "Tue, 26 Mar 2019 00:25:19 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 6:55 PM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>\n> On Sat, Mar 23, 2019 at 11:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Sat, Mar 23, 2019 at 9:50 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> >\n>> > On 2019-Mar-23, Amit Kapila wrote:\n>> >\n>> > > I think some users might also be interested in the write transactions\n>> > > happened in the system, basically, those have consumed xid.\n>> >\n>> > Well, do they really want to *count* these transactions, or is it enough\n>> > to keep an eye on the \"age\" of some XID column? Other than for XID\n>> > freezing purposes, I don't see such internal transactions as very\n>> > interesting.\n>> >\n>>\n>> That's what I also had in mind. I think doing anything more than just\n>> fixing the count for the parallel cooperating transaction by parallel\n>> workers doesn't seem intuitive to me. I mean if we want we can commit\n>> the fix such that all supporting transactions by parallel worker\n>> shouldn't be counted, but I am not able to convince myself that that\n>> is the good fix. Instead, I think rather than fixing that one case we\n>> should think more broadly about all the supportive transactions\n>> happening in the various operations. Also, as that is a kind of\n>> behavior change, we should discuss that as a separate topic.\n>\n>\n> Thanks to everyone for their opinions and suggestions to improve.\n>\n> Without parallel workers, there aren't many internal implementation\n> logic code that causes the stats to bloat. Parallel worker stats are not\n> just the transactions, it update other stats also, for eg; seqscan stats\n> of a relation. 
I also eagree that just fixing it for transactions may not\n> be a proper solution.\n>\n> Using of XID data may not give proper TPS of the instance as it doesn't\n> involve the read only transactions information.\n>\n> How about adding additional two columns that provides all the internal\n> and background worker transactions into that column?\n>\n\nI can see the case where the users will be interested in application\ninitiated transactions, so if we want to add new columns, it could be\nto display that information and the current columns can keep\ndisplaying the overall transactions in the database. Here, I think\nwe need to find a way to distinguish between internal and\nuser-initiated transactions. OTOH, I am not sure adding new columns\nis better than changing the meaning of existing columns. We can go\neither way based on what others feel good, but I think we can do that\nas a separate head-only feature. As part of this thread, maybe we can\njust fix the case of the parallel cooperating transaction.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 26 Mar 2019 15:37:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 9:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Mar 25, 2019 at 6:55 PM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n> >\n> >\n> > Thanks to everyone for their opinions and suggestions to improve.\n> >\n> > Without parallel workers, there aren't many internal implementation\n> > logic code that causes the stats to bloat. Parallel worker stats are not\n> > just the transactions, it update other stats also, for eg; seqscan stats\n> > of a relation. I also eagree that just fixing it for transactions may not\n> > be a proper solution.\n> >\n> > Using of XID data may not give proper TPS of the instance as it doesn't\n> > involve the read only transactions information.\n> >\n> > How about adding additional two columns that provides all the internal\n> > and background worker transactions into that column?\n> >\n>\n> I can see the case where the users will be interested in application\n> initiated transactions, so if we want to add new columns, it could be\n> to display that information and the current columns can keep\n> displaying the overall transactions in the database. Here, I think\n> we need to find a way to distinguish between internal and\n> user-initiated transactions. OTOH, I am not sure adding new columns\n> is better than changing the meaning of existing columns. We can go\n> either way based on what others feel good, but I think we can do that\n> as a separate head-only feature.\n\n\nI agree with you. Adding new columns definitely needs more discussions\nof what processes should be skipped and what needs to be added and etc.\n\n\n\n> As part of this thread, maybe we can\n> just fix the case of the parallel cooperating transaction.\n>\n\nWith the current patch, all the parallel implementation transaction are\ngetting\nskipped, in my tests parallel workers are the major factor in the\ntransaction\nstats counter. 
Even before parallelism, the stats of the autovacuum and etc\nare still present but their contribution is not greatly influencing the\nstats.\n\nI agree with you in fixing the stats with parallel workers and improve\nstats.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\n",
"msg_date": "Wed, 27 Mar 2019 12:23:03 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 6:53 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n> On Tue, Mar 26, 2019 at 9:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> As part of this thread, maybe we can\n>> just fix the case of the parallel cooperating transaction.\n>\n>\n> With the current patch, all the parallel implementation transaction are getting\n> skipped, in my tests parallel workers are the major factor in the transaction\n> stats counter. Even before parallelism, the stats of the autovacuum and etc\n> are still present but their contribution is not greatly influencing the stats.\n>\n> I agree with you in fixing the stats with parallel workers and improve stats.\n>\n\nI was proposing to fix only the transaction started with\nStartParallelWorkerTransaction by using is_parallel_worker flag as\ndiscussed above. I understand that it won't completely fix the\nproblem reported by you, but it will be a step in that direction. My\nmain worry is that if we fix it the way you are proposing and we also\ninvent a new way to deal with all other internal transactions, then\nthe fix done by us now needs to be changed/reverted. Note, that this\nfix needs to be backpatched as well, so we should avoid doing any fix\nwhich needs to be changed or reverted.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2019 17:57:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 11:27 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Wed, Mar 27, 2019 at 6:53 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n> > On Tue, Mar 26, 2019 at 9:08 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> As part of this thread, maybe we can\n> >> just fix the case of the parallel cooperating transaction.\n> >\n> >\n> > With the current patch, all the parallel implementation transaction are\n> getting\n> > skipped, in my tests parallel workers are the major factor in the\n> transaction\n> > stats counter. Even before parallelism, the stats of the autovacuum and\n> etc\n> > are still present but their contribution is not greatly influencing the\n> stats.\n> >\n> > I agree with you in fixing the stats with parallel workers and improve\n> stats.\n> >\n>\n> I was proposing to fix only the transaction started with\n> StartParallelWorkerTransaction by using is_parallel_worker flag as\n> discussed above. I understand that it won't completely fix the\n> problem reported by you, but it will be a step in that direction. My\n> main worry is that if we fix it the way you are proposing and we also\n> invent a new way to deal with all other internal transactions, then\n> the fix done by us now needs to be changed/reverted. Note, that this\n> fix needs to be backpatched as well, so we should avoid doing any fix\n> which needs to be changed or reverted.\n>\n\nI tried the approach as your suggested as by not counting the actual\nparallel work\ntransactions by just releasing the resources without touching the counters,\nthe counts are not reduced much.\n\nHEAD - With 4 parallel workers running query generates 13 stats ( 4 * 3 +\n1)\nPatch - With 4 parallel workers running query generates 9 stats ( 4 * 2 +\n1)\nOld approach patch - With 4 parallel workers running query generates 1 stat\n(1)\n\nCurrently the parallel worker start transaction 3 times in the following\nplaces.\n1. InitPostgres\n2. 
ParallelWorkerMain (2)\n\nwith the attached patch, we reduce one count from ParallelWorkerMain.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 28 Mar 2019 17:12:57 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Thursday, March 28, 2019 3:13 PM (GMT+9), Haribabu Kommi wrote:\r\n> I tried the approach as your suggested as by not counting the actual parallel work\r\n> transactions by just releasing the resources without touching the counters,\r\n> the counts are not reduced much.\r\n>\r\n> HEAD - With 4 parallel workers running query generates 13 stats ( 4 * 3 + 1)\r\n> Patch - With 4 parallel workers running query generates 9 stats ( 4 * 2 + 1)\r\n> Old approach patch - With 4 parallel workers running query generates 1 stat (1)\r\n>\r\n> Currently the parallel worker start transaction 3 times in the following places.\r\n> 1. InitPostgres\r\n> 2. ParallelWorkerMain (2)\r\n>\r\n> with the attached patch, we reduce one count from ParallelWorkerMain.\r\n\r\nI'm sorry for the late review of the patch.\r\nPatch applies, compiles cleanly, make check passes too.\r\nI tested the recent patch again using the same method above\r\nand confirmed the increase of generated stats by 9 w/ 4 parallel workers.\r\n\r\npostgres=# BEGIN;\r\npostgres=# select xact_commit from pg_stat_database where datname = 'postgres';\r\nxact_commit\r\n-------------\r\n 60\r\n(1 row)\r\npostgres=# explain analyze select * from tab where b = 6;\r\n[snipped]\r\npostgres=# COMMIT;\r\npostgres=# select xact_commit from pg_stat_database where datname = 'postgres';\r\nxact_commit\r\n-------------\r\n 69\r\n\r\nThe intention of the latest patch is to fix the stat of\r\n(IOW, do not count) parallel cooperating transactions,\r\nor the transactions started by StartParallelWorkerTransaction,\r\nbased from the advice of Amit.\r\nAfter testing, the current patch works as intended.\r\n\r\nHowever, also currently being discussed in the latter mails\r\nis the behavior of how parallel xact stats should be shown.\r\nSo the goal eventually is to improve how we present the\r\nstats for parallel workers by differentiating between the\r\ninternal-initiated (bgworker xact, autovacuum, etc) 
and\r\nuser/application-initiated transactions, apart from keeping\r\nthe overall xact stats.\r\n..So fixing the latter one will change this current thread's fix?\r\n\r\nI personally have no problems with committing this fix first\r\nbefore fixing the latter problem discussed above.\r\nBut I think there should be a consensus regarding that one.\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Tue, 2 Apr 2019 08:00:55 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 11:43 AM Haribabu Kommi\n<kommi.haribabu@gmail.com> wrote:\n>\n> On Wed, Mar 27, 2019 at 11:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Mar 27, 2019 at 6:53 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>> > On Tue, Mar 26, 2019 at 9:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> As part of this thread, maybe we can\n>> >> just fix the case of the parallel cooperating transaction.\n>> >\n>> >\n>> > With the current patch, all the parallel implementation transaction are getting\n>> > skipped, in my tests parallel workers are the major factor in the transaction\n>> > stats counter. Even before parallelism, the stats of the autovacuum and etc\n>> > are still present but their contribution is not greatly influencing the stats.\n>> >\n>> > I agree with you in fixing the stats with parallel workers and improve stats.\n>> >\n>>\n>> I was proposing to fix only the transaction started with\n>> StartParallelWorkerTransaction by using is_parallel_worker flag as\n>> discussed above. I understand that it won't completely fix the\n>> problem reported by you, but it will be a step in that direction. My\n>> main worry is that if we fix it the way you are proposing and we also\n>> invent a new way to deal with all other internal transactions, then\n>> the fix done by us now needs to be changed/reverted. 
Note, that this\n>> fix needs to be backpatched as well, so we should avoid doing any fix\n>> which needs to be changed or reverted.\n>\n>\n> I tried the approach as your suggested as by not counting the actual parallel work\n> transactions by just releasing the resources without touching the counters,\n> the counts are not reduced much.\n>\n> HEAD - With 4 parallel workers running query generates 13 stats ( 4 * 3 + 1)\n> Patch - With 4 parallel workers running query generates 9 stats ( 4 * 2 + 1)\n> Old approach patch - With 4 parallel workers running query generates 1 stat (1)\n>\n> Currently the parallel worker start transaction 3 times in the following places.\n> 1. InitPostgres\n> 2. ParallelWorkerMain (2)\n>\n> with the attached patch, we reduce one count from ParallelWorkerMain.\n>\n\nRight, that is expected from this fix. BTW, why you have changed the\napproach in this patch to not count anything by the parallel worker as\ncompared to the earlier version where you were just ignoring the stats\nfor transactions. As of now, either way is fine, but in future (after\nparallel inserts/updates), we want other stats to be updated. I think\nyour previous idea was better, just you need to start using\nis_parallel_worker flag.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Apr 2019 08:29:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 1:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Mar 28, 2019 at 11:43 AM Haribabu Kommi\n> <kommi.haribabu@gmail.com> wrote:\n> >\n> > On Wed, Mar 27, 2019 at 11:27 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Wed, Mar 27, 2019 at 6:53 AM Haribabu Kommi <\n> kommi.haribabu@gmail.com> wrote:\n> >> > On Tue, Mar 26, 2019 at 9:08 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> >>\n> >> >> As part of this thread, maybe we can\n> >> >> just fix the case of the parallel cooperating transaction.\n> >> >\n> >> >\n> >> > With the current patch, all the parallel implementation transaction\n> are getting\n> >> > skipped, in my tests parallel workers are the major factor in the\n> transaction\n> >> > stats counter. Even before parallelism, the stats of the autovacuum\n> and etc\n> >> > are still present but their contribution is not greatly influencing\n> the stats.\n> >> >\n> >> > I agree with you in fixing the stats with parallel workers and\n> improve stats.\n> >> >\n> >>\n> >> I was proposing to fix only the transaction started with\n> >> StartParallelWorkerTransaction by using is_parallel_worker flag as\n> >> discussed above. I understand that it won't completely fix the\n> >> problem reported by you, but it will be a step in that direction. My\n> >> main worry is that if we fix it the way you are proposing and we also\n> >> invent a new way to deal with all other internal transactions, then\n> >> the fix done by us now needs to be changed/reverted. 
Note, that this\n> >> fix needs to be backpatched as well, so we should avoid doing any fix\n> >> which needs to be changed or reverted.\n> >\n> >\n> > I tried the approach as your suggested as by not counting the actual\n> parallel work\n> > transactions by just releasing the resources without touching the\n> counters,\n> > the counts are not reduced much.\n> >\n> > HEAD - With 4 parallel workers running query generates 13 stats ( 4 * 3\n> + 1)\n> > Patch - With 4 parallel workers running query generates 9 stats ( 4 * 2\n> + 1)\n> > Old approach patch - With 4 parallel workers running query generates 1\n> stat (1)\n> >\n> > Currently the parallel worker start transaction 3 times in the following\n> places.\n> > 1. InitPostgres\n> > 2. ParallelWorkerMain (2)\n> >\n> > with the attached patch, we reduce one count from ParallelWorkerMain.\n> >\n>\n> Right, that is expected from this fix. BTW, why you have changed the\n> approach in this patch to not count anything by the parallel worker as\n> compared to the earlier version where you were just ignoring the stats\n> for transactions. As of now, either way is fine, but in future (after\n> parallel inserts/updates), we want other stats to be updated. I think\n> your previous idea was better, just you need to start using\n> is_parallel_worker flag.\n>\n\nThanks for the review.\n\nWhile changing the approach to use the is_parallel_worker_flag, I thought\nthat the rest of the stats are also not required to be updated and also\nthose\nare any way write operations and those values are zero anyway for parallel\nworkers.\n\nInstead of expanding the patch scope, I just changed to avoid the commit\nor rollback stats as discussed, and later we can target the handling of all\nthe\ninternal transactions and their corresponding stats.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 3 Apr 2019 16:15:00 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 10:45 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:\n>\n> Thanks for the review.\n>\n> While changing the approach to use the is_parallel_worker_flag, I thought\n> that the rest of the stats are also not required to be updated and also those\n> are any way write operations and those values are zero anyway for parallel\n> workers.\n>\n> Instead of expanding the patch scope, I just changed to avoid the commit\n> or rollback stats as discussed, and later we can target the handling of all the\n> internal transactions and their corresponding stats.\n>\n\nThe patch looks good to me. I have changed the commit message and ran\nthe pgindent in the attached patch. Can you once see if that looks\nfine to you? Also, we should backpatch this till 9.6. So, can you\nonce verify if the change is fine in all back branches? Also, test\nwith a force_parallel_mode option. I have already tested it with\nforce_parallel_mode = 'regress' in HEAD, please test it in back\nbranches as well.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Apr 2019 09:59:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Apr 3, 2019 at 10:45 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n> >\n> > Thanks for the review.\n> >\n> > While changing the approach to use the is_parallel_worker_flag, I thought\n> > that the rest of the stats are also not required to be updated and also\n> those\n> > are any way write operations and those values are zero anyway for\n> parallel\n> > workers.\n> >\n> > Instead of expanding the patch scope, I just changed to avoid the commit\n> > or rollback stats as discussed, and later we can target the handling of\n> all the\n> > internal transactions and their corresponding stats.\n> >\n>\n> The patch looks good to me. I have changed the commit message and ran\n> the pgindent in the attached patch. Can you once see if that looks\n> fine to you? Also, we should backpatch this till 9.6. So, can you\n> once verify if the change is fine in all back branches? Also, test\n> with a force_parallel_mode option. I have already tested it with\n> force_parallel_mode = 'regress' in HEAD, please test it in back\n> branches as well.\n>\n\n\nThanks for the updated patch.\nI tested in back branches even with force_parallel_mode and it is working\nas expected. But applying the patches is failing in back branches, so I attached\nthe patches for their branches. For v11 it applies with hunks.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Mon, 8 Apr 2019 10:04:00 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Monday, April 8, 2019 9:04 AM (GMT+9), Haribabu Kommi wrote:\r\n\r\n>On Thu, Apr 4, 2019 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com<mailto:amit.kapila16@gmail.com>> wrote:\r\n>On Wed, Apr 3, 2019 at 10:45 AM Haribabu Kommi <kommi.haribabu@gmail.com<mailto:kommi.haribabu@gmail.com>> wrote:\r\n>>\r\n>> Thanks for the review.\r\n>>\r\n>> While changing the approach to use the is_parallel_worker_flag, I thought\r\n>> that the rest of the stats are also not required to be updated and also those\r\n>> are any way write operations and those values are zero anyway for parallel\r\n>> workers.\r\n>>\r\n>> Instead of expanding the patch scope, I just changed to avoid the commit\r\n>> or rollback stats as discussed, and later we can target the handling of all the\r\n>> internal transactions and their corresponding stats.\r\n>>\r\n\r\n> The patch looks good to me. I have changed the commit message and ran\r\n> the pgindent in the attached patch. Can you once see if that looks\r\n> fine to you? Also, we should backpatch this till 9.6. So, can you\r\n> once verify if the change is fine in all bank branches? Also, test\r\n> with a force_parallel_mode option. I have already tested it with\r\n> force_parallel_mode = 'regress' in HEAD, please test it in back\r\n> branches as well.\r\n>\r\n>\r\n> Thanks for the updated patch.\r\n> I tested in back branches even with force_parallelmode and it is working\r\n> as expected. But the patches apply is failing in back branches, so attached\r\n> the patches for their branches. 
For v11 it applies with hunks.\r\n\r\n\r\nThere are 3 patches for this thread:\r\n_v5: for PG v11 to current head\r\n_10: for PG10 branch\r\n_96: for PG9.6\r\n\r\nI have also tried applying these latest patches, .\r\nThe patch set works correctly from patch application, build to compilation.\r\nI also tested with force_parallel_mode, and it works as intended.\r\n\r\nSo I am marking this thread as “Ready for Committer”.\r\nI hope this makes it on v12 before the feature freeze.\r\n\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Mon, 8 Apr 2019 02:24:33 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 7:54 AM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Monday, April 8, 2019 9:04 AM (GMT+9), Haribabu Kommi wrote:\n>\n> >On Thu, Apr 4, 2019 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > The patch looks good to me. I have changed the commit message and ran\n> > the pgindent in the attached patch. Can you once see if that looks\n> > fine to you? Also, we should backpatch this till 9.6. So, can you\n> > once verify if the change is fine in all bank branches? Also, test\n> > with a force_parallel_mode option. I have already tested it with\n> > force_parallel_mode = 'regress' in HEAD, please test it in back\n> > branches as well.\n> >\n> > Thanks for the updated patch.\n> > I tested in back branches even with force_parallelmode and it is working\n> > as expected. But the patches apply is failing in back branches, so attached\n> > the patches for their branches. For v11 it applies with hunks.\n>\n> There are 3 patches for this thread:\n> _v5: for PG v11 to current head\n> _10: for PG10 branch\n> _96: for PG9.6\n>\n> I have also tried applying these latest patches, .\n> The patch set works correctly from patch application, build to compilation.\n> I also tested with force_parallel_mode, and it works as intended.\n>\n> So I am marking this thread as “Ready for Committer”.\n>\n\nThanks, Hari and Jamison for verification. The patches for\nback-branches looks good to me. I will once again verify them before\ncommit. I will commit this patch tomorrow unless someone has\nobjections. 
Robert/Alvaro, do let me know if you see any problem with\nthis fix?\n\n> I hope this makes it on v12 before the feature freeze.\n>\n\nYes, I think fixing bugs should be fine unless we delay too much.\n\nI see one typo in the commit message (transactions as we that is what\nwe do in ../transactions as that is what we do in ..), will fix it\nbefore commit.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Apr 2019 08:51:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 8:51 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 8, 2019 at 7:54 AM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n> > So I am marking this thread as “Ready for Committer”.\n> >\n>\n> Thanks, Hari and Jamison for verification. The patches for\n> back-branches looks good to me. I will once again verify them before\n> commit. I will commit this patch tomorrow unless someone has\n> objections. Robert/Alvaro, do let me know if you see any problem with\n> this fix?\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:02:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Apr 8, 2019 at 8:51 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Mon, Apr 8, 2019 at 7:54 AM Jamison, Kirk <k.jamison@jp.fujitsu.com>\n> wrote:\n> > > So I am marking this thread as “Ready for Committer”.\n> > >\n> >\n> > Thanks, Hari and Jamison for verification. The patches for\n> > back-branches looks good to me. I will once again verify them before\n> > commit. I will commit this patch tomorrow unless someone has\n> > objections. Robert/Alvaro, do let me know if you see any problem with\n> > this fix?\n> >\n>\n> Pushed.\n>\n\nThanks Amit.\nWill look into it further to handle all the internally generated\ntransactions.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 10 Apr 2019 19:13:10 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction commits VS Transaction commits (with parallel) VS\n query mean time"
}
] |
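The fix discussed in the thread above can be sketched as a toy model (plain Python, not the pgstat C code): a parallel worker runs inside the leader's transaction, so letting each worker bump the commit counter inflates `xact_commit`; guarding on an is-parallel-worker flag counts each user-visible commit exactly once. All names here are illustrative, not PostgreSQL identifiers.

```python
# Toy sketch of the stats fix: skip commit accounting in parallel workers
# so one logical transaction is counted once, regardless of worker count.

class StatsCollector:
    """Hypothetical stand-in for the pgstat commit counter."""

    def __init__(self):
        self.xact_commit = 0

    def report_commit(self, is_parallel_worker):
        # Analogue of the patch's guard: parallel workers commit their
        # internal transactions but do not report them as user commits.
        if not is_parallel_worker:
            self.xact_commit += 1

stats = StatsCollector()
stats.report_commit(is_parallel_worker=False)   # leader commits
for _ in range(3):                              # its three workers exit
    stats.report_commit(is_parallel_worker=True)
assert stats.xact_commit == 1                   # one logical transaction
```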
[
{
"msg_contents": "While reviewing [1] it occurred to me that it might not be clean that\npostgresGetForeignUpperPaths() adds ORDER BY to the remote query when it\nreceives stage=UPPERREL_GROUP_AGG. Shouldn't that only happen for\nUPPERREL_ORDERED? Thus the processing of UPPERREL_ORDERED would have to be\nable to handle grouping too, so some refactoring would be needed. Do I\nmisunderstand anything?\n\n[1] https://commitfest.postgresql.org/22/1950/\n\n-- \nAntonin Houska\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n",
"msg_date": "Thu, 07 Feb 2019 19:33:51 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Handling of ORDER BY by postgres_fdw"
}
] |
[
{
"msg_contents": "While I've been staring at dependency.c, I've realized that this bit\nin findDependentObjects is unsafe:\n\n /*\n * First, release caller's lock on this object and get\n * deletion lock on the owning object. (We must release\n * caller's lock to avoid deadlock against a concurrent\n * deletion of the owning object.)\n */\n ReleaseDeletionLock(object);\n AcquireDeletionLock(&otherObject, 0);\n\n /*\n * The owning object might have been deleted while we waited\n * to lock it; if so, neither it nor the current object are\n * interesting anymore. We test this by checking the\n * pg_depend entry (see notes below).\n */\n if (!systable_recheck_tuple(scan, tup))\n {\n systable_endscan(scan);\n ReleaseDeletionLock(&otherObject);\n return;\n }\n\n /*\n * Okay, recurse to the owning object instead of proceeding.\n\nThe unstated assumption here is that if the pg_depend entry we are looking\nat has gone away, then both the current object and its owning object must\nbe gone too. That was a safe assumption when this code was written,\nfifteen years ago, because nothing except object deletion could cause\na pg_depend entry to disappear. Since then, however, we have merrily\nhanded people a bunch of foot-guns with which they can munge pg_depend\nlike mad, for example ALTER EXTENSION ... DROP.\n\nHence, if the pg_depend entry is gone, that might only mean that somebody\nrandomly decided to remove the dependency. Now, I think it's legit to\ndecide that we needn't remove the previously-owning object in that case.\nBut it's not okay to just pack up shop and return, because if the current\nobject is still there, proceeding with deletion of whatever we were\ndeleting would be bad. 
That could leave us with scenarios like triggers\nwhose function is gone, views referring to a deceased table, indexes\ndependent on a vanished datatype or opclass, yadda yadda.\n\nIt seems like what ought to happen here, if systable_recheck_tuple fails,\nis to reacquire the deletion lock that we gave up on \"object\", then\ncheck to see if \"object\" is still there, and if so continue with deleting\nit. Only if it in fact isn't there is it OK to conclude that \nfindDependentObjects needn't do any more work at this recursion level.\n\nI do not think we have any off-the-shelf way of asking whether an\nObjectAddress's referent still exists. We could probably teach\nobjectaddress.c to provide one --- it seems to have enough infrastructure\nalready to know where to find the object's (main) catalog tuple.\n\nThis issue seems independent of the partition problems I'm working\non right now, so I plan to leave it for later.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 07 Feb 2019 15:53:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Race condition in dependency searches"
}
] |
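The recheck logic proposed above can be condensed into a small decision table (illustrative Python, not dependency.c): once the pg_depend row can vanish on its own (e.g. via ALTER EXTENSION ... DROP), its absence no longer proves the dependent object is gone, so after re-acquiring the lock the code must distinguish three outcomes rather than returning early.

```python
# Sketch of the corrected control flow after systable_recheck_tuple:
# the function and return strings are hypothetical, for illustration only.

def next_step(depend_row_survives, object_still_exists):
    """Decide what findDependentObjects should do after re-locking."""
    if depend_row_survives:
        # pg_depend entry intact: delete the owning object instead.
        return "recurse to owning object"
    if object_still_exists:
        # The dependency edge was removed concurrently, but the object
        # itself remains: deletion must still proceed, or we leave e.g.
        # triggers whose function is gone.
        return "continue deleting object"
    # Only when the object is truly gone is it safe to stop here.
    return "return (object and owner both gone)"

assert next_step(True, True) == "recurse to owning object"
assert next_step(False, True) == "continue deleting object"
assert next_step(False, False) == "return (object and owner both gone)"
```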
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15623\nLogged by: Roger Curley\nEmail address: rocurley@gmail.com\nPostgreSQL version: 11.1\nOperating system: Ubuntu 11.1\nDescription: \n\nSteps to reproduce (run in psql shell):\r\n```\r\nDROP TABLE IF EXISTS test CASCADE;\r\nCREATE TABLE test (\r\n id int PRIMARY KEY,\r\n value int DEFAULT 0\r\n);\r\nCREATE VIEW test_view AS (SELECT * FROM test);\r\n\r\nINSERT INTO test_view VALUES (1, DEFAULT), (2, DEFAULT);\r\nINSERT INTO test VALUES (3, DEFAULT), (4, DEFAULT);\r\nINSERT INTO test_view VALUES (5, DEFAULT);\r\nSELECT * FROM test;\r\n```\r\n\r\nResult:\r\n```\r\n id | value \r\n----+-------\r\n 1 | \r\n 2 | \r\n 3 | 0\r\n 4 | 0\r\n 5 | 0\r\n```\r\n\r\nExpected Result:\r\n```\r\n id | value \r\n----+-------\r\n 1 | 0\r\n 2 | 0\r\n 3 | 0\r\n 4 | 0\r\n 5 | 0\r\n```\r\nIn particular, it's surprising that inserting 1 row into an updatable view\nuses the table default, while inserting 2 uses null.",
"msg_date": "Thu, 07 Feb 2019 21:42:30 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "Hi,\n\nOn 2019/02/08 6:42, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference: 15623\n> Logged by: Roger Curley\n> Email address: rocurley@gmail.com\n> PostgreSQL version: 11.1\n> Operating system: Ubuntu 11.1\n> Description: \n> \n> Steps to reproduce (run in psql shell):\n> ```\n> DROP TABLE IF EXISTS test CASCADE;\n> CREATE TABLE test (\n> id int PRIMARY KEY,\n> value int DEFAULT 0\n> );\n> CREATE VIEW test_view AS (SELECT * FROM test);\n> \n> INSERT INTO test_view VALUES (1, DEFAULT), (2, DEFAULT);\n> INSERT INTO test VALUES (3, DEFAULT), (4, DEFAULT);\n> INSERT INTO test_view VALUES (5, DEFAULT);\n> SELECT * FROM test;\n> ```\n> \n> Result:\n> ```\n> id | value \n> ----+-------\n> 1 | \n> 2 | \n> 3 | 0\n> 4 | 0\n> 5 | 0\n> ```\n> \n> Expected Result:\n> ```\n> id | value \n> ----+-------\n> 1 | 0\n> 2 | 0\n> 3 | 0\n> 4 | 0\n> 5 | 0\n> ```\n> In particular, it's surprising that inserting 1 row into an updatable view\n> uses the table default, while inserting 2 uses null.\n\nThanks for the report. Seems odd indeed.\n\nLooking into this, the reason it works when inserting just one row vs.\nmore than one row is that those two cases are handled by nearby but\ndifferent pieces of code. The code that handles multiple rows seems buggy\nas seen in the above example. Specifically, I think the bug is in\nrewriteValuesRTE() which is a function to replace the default placeholders\nin the input rows by the default values as defined for the target\nrelation. It is called twice when inserting via the view -- first for the\nview relation and then again for the underlying table. This arrangement\nseems to work correctly if the view specifies its own defaults for columns\n(assuming that it's okay for the view's defaults to override the\nunderlying base table's). 
If there are no view-specified defaults, then\nrewriteValuesRTE replaces the default placeholders in the input row by\nNULL constants when called for the first time with the view as target\nrelation and the next invocation for the underlying table finds that it\nhas no work to do, so its defaults are not filled.\n\nAttached find a patch that adjusts rewriteValuesRTE to not replace the\ndefault placeholder if the view has no default value for a given column.\nAlso, adds a test in updatable_views.sql.\n\nThanks,\nAmit",
"msg_date": "Fri, 8 Feb 2019 14:07:28 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Fri, 8 Feb 2019 at 05:07, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> Thanks for the report. Seems odd indeed.\n\nHmm, indeed. That seems to have been broken ever since updatable views\nwere added.\n\n> Looking into this, the reason it works when inserting just one row vs.\n> more than one row is that those two cases are handled by nearby but\n> different pieces of code. The code that handles multiple rows seems buggy\n> as seen in the above example. Specifically, I think the bug is in\n> rewriteValuesRTE() which is a function to replace the default placeholders\n> in the input rows by the default values as defined for the target\n> relation. It is called twice when inserting via the view -- first for the\n> view relation and then again for the underlying table.\n\nRight, except when the view is trigger-updatable. In that case, we do\nhave to explicitly set the column value to NULL when\nrewriteValuesRTE() is called for the view, because it won't be called\nagain for the underlying table -- it is the trigger's responsibility\nto work how (or indeed if) to update the underlying table. IOW, you\nneed to also use view_has_instead_trigger() to check the view,\notherwise your patch breaks this case:\n\nDROP TABLE IF EXISTS test CASCADE;\nCREATE TABLE test (\n id int PRIMARY KEY,\n value int DEFAULT 0\n);\nCREATE VIEW test_view AS (SELECT * FROM test);\n\nCREATE OR REPLACE FUNCTION test_view_ins() RETURNS trigger\nAS\n$$\nBEGIN\n INSERT INTO test VALUES (NEW.id, NEW.value);\n RETURN NEW;\nEND;\n$$\nLANGUAGE plpgsql;\n\nCREATE TRIGGER test_view_trig INSTEAD OF INSERT ON test_view\n FOR EACH ROW EXECUTE FUNCTION test_view_ins();\n\nINSERT INTO test_view VALUES (1, DEFAULT), (2, DEFAULT);\n\nERROR: unrecognized node type: 142\n\n\nWhile playing around with this, I noticed a related bug affecting the\nnew identity columns feature. 
I've not investigated it fully, but It\nlooks almost the same -- if the column is an identity column, and\nwe're inserting a multi-row VALUES set containing DEFAULTS, they will\nget rewritten to NULLs which will then lead to an error if overriding\nthe generated value isn't allowed:\n\nDROP TABLE IF EXISTS foo CASCADE;\nCREATE TABLE foo\n(\n a int,\n b int GENERATED ALWAYS AS IDENTITY\n);\n\nINSERT INTO foo VALUES (1,DEFAULT); -- OK\nINSERT INTO foo VALUES (2,DEFAULT),(3,DEFAULT); -- Fails\n\nI think fixing that should be tackled separately, because it may turn\nout to be subtly different, but it definitely looks like another bug.\n\nRegards,\nDean\n\n",
"msg_date": "Fri, 8 Feb 2019 11:00:47 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "Thanks for looking at this.\n\nOn Fri, Feb 8, 2019 at 8:01 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 8 Feb 2019 at 05:07, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > Thanks for the report. Seems odd indeed.\n>\n> Hmm, indeed. That seems to have been broken ever since updatable views\n> were added.\n>\n> > Looking into this, the reason it works when inserting just one row vs.\n> > more than one row is that those two cases are handled by nearby but\n> > different pieces of code. The code that handles multiple rows seems buggy\n> > as seen in the above example. Specifically, I think the bug is in\n> > rewriteValuesRTE() which is a function to replace the default placeholders\n> > in the input rows by the default values as defined for the target\n> > relation. It is called twice when inserting via the view -- first for the\n> > view relation and then again for the underlying table.\n>\n> Right, except when the view is trigger-updatable. In that case, we do\n> have to explicitly set the column value to NULL when\n> rewriteValuesRTE() is called for the view, because it won't be called\n> again for the underlying table -- it is the trigger's responsibility\n> to work how (or indeed if) to update the underlying table. IOW, you\n> need to also use view_has_instead_trigger() to check the view,\n> otherwise your patch breaks this case:\n>\n> DROP TABLE IF EXISTS test CASCADE;\n> CREATE TABLE test (\n> id int PRIMARY KEY,\n> value int DEFAULT 0\n> );\n> CREATE VIEW test_view AS (SELECT * FROM test);\n>\n> CREATE OR REPLACE FUNCTION test_view_ins() RETURNS trigger\n> AS\n> $$\n> BEGIN\n> INSERT INTO test VALUES (NEW.id, NEW.value);\n> RETURN NEW;\n> END;\n> $$\n> LANGUAGE plpgsql;\n>\n> CREATE TRIGGER test_view_trig INSTEAD OF INSERT ON test_view\n> FOR EACH ROW EXECUTE FUNCTION test_view_ins();\n>\n> INSERT INTO test_view VALUES (1, DEFAULT), (2, DEFAULT);\n>\n> ERROR: unrecognized node type: 142\n\nOops, I missed this bit. 
Updated the patch per your suggestion and\nexpanded the test case to exercise this.\n\n> While playing around with this, I noticed a related bug affecting the\n> new identity columns feature. I've not investigated it fully, but It\n> looks almost the same -- if the column is an identity column, and\n> we're inserting a multi-row VALUES set containing DEFAULTS, they will\n> get rewritten to NULLs which will then lead to an error if overriding\n> the generated value isn't allowed:\n>\n> DROP TABLE IF EXISTS foo CASCADE;\n> CREATE TABLE foo\n> (\n> a int,\n> b int GENERATED ALWAYS AS IDENTITY\n> );\n>\n> INSERT INTO foo VALUES (1,DEFAULT); -- OK\n> INSERT INTO foo VALUES (2,DEFAULT),(3,DEFAULT); -- Fails\n>\n> I think fixing that should be tackled separately, because it may turn\n> out to be subtly different, but it definitely looks like another bug.\n\nI haven't looked into the details of this, but agree about raising a\nthread on -hackers about it.\n\nThanks,\nAmit",
"msg_date": "Sun, 10 Feb 2019 01:57:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Sat, 9 Feb 2019 at 16:57, Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Feb 8, 2019 at 8:01 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > On Fri, 8 Feb 2019 at 05:07, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > > Looking into this, the reason it works when inserting just one row vs.\n> > > more than one row is that those two cases are handled by nearby but\n> > > different pieces of code. The code that handles multiple rows seems buggy\n> > > as seen in the above example. Specifically, I think the bug is in\n> > > rewriteValuesRTE() which is a function to replace the default placeholders\n> > > in the input rows by the default values as defined for the target\n> > > relation. It is called twice when inserting via the view -- first for the\n> > > view relation and then again for the underlying table.\n> >\n> > Right, except when the view is trigger-updatable...\n>\n> Oops, I missed this bit. Updated the patch per your suggestion and\n> expanded the test case to exercise this.\n>\n\nUnfortunately, that's still not quite right because it makes the\nbehaviour of single- and multi-row inserts inconsistent for\nrule-updatable views. Attached is an updated patch that fixes that. I\nadjusted the tests a bit to try to make it clearer which defaults get\napplied, and test all possibilities.\n\nHowever, this is still not the end of the story, because it doesn't\nfix the fact that, if the view has a DO ALSO rule on it, single-row\ninserts behave differently from multi-row inserts. In that case, each\ninsert becomes 2 inserts, and defaults need to be treated differently\nin each of the 2 queries. That's going to need a little more thought.\n\nRegards,\nDean",
"msg_date": "Sun, 10 Feb 2019 00:48:12 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Sun, 10 Feb 2019 at 00:48, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> However, this is still not the end of the story, because it doesn't\n> fix the fact that, if the view has a DO ALSO rule on it, single-row\n> inserts behave differently from multi-row inserts. In that case, each\n> insert becomes 2 inserts, and defaults need to be treated differently\n> in each of the 2 queries. That's going to need a little more thought.\n>\n\nHere's an updated patch to handle that case.\n\nIn case it's not obvious, I'm not intending to try to get this into\nnext week's updates -- more time is needed to be sure of this fix, and\nmore pairs of eyes would definitely be helpful, once those updates\nhave been shipped.\n\nRegards,\nDean",
"msg_date": "Sun, 10 Feb 2019 11:18:46 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Sun, 10 Feb 2019 at 11:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sun, 10 Feb 2019 at 00:48, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > However, this is still not the end of the story, because it doesn't\n> > fix the fact that, if the view has a DO ALSO rule on it, single-row\n> > inserts behave differently from multi-row inserts. In that case, each\n> > insert becomes 2 inserts, and defaults need to be treated differently\n> > in each of the 2 queries. That's going to need a little more thought.\n> >\n>\n> Here's an updated patch to handle that case.\n>\n> In case it's not obvious, I'm not intending to try to get this into\n> next week's updates -- more time is needed to be sure of this fix.\n\nSo I did some more testing of this and I'm reasonably happy that this\nnow fixes the originally reported issue of inconsistent handling of\nDEFAULTS in multi-row VALUES lists vs single-row ones. I tested\nvarious other scenarios involving conditional/unconditional\nalso/instead rules, and I didn't find any other surprises. Attached is\nan updated patch with improved comments, and a little less code\nduplication.\n\nRegards,\nDean",
"msg_date": "Tue, 12 Feb 2019 10:33:33 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "Hi Dean,\n\nOn 2019/02/12 19:33, Dean Rasheed wrote:\n> On Sun, 10 Feb 2019 at 11:18, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>> On Sun, 10 Feb 2019 at 00:48, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>> However, this is still not the end of the story, because it doesn't\n>>> fix the fact that, if the view has a DO ALSO rule on it, single-row\n>>> inserts behave differently from multi-row inserts. In that case, each\n>>> insert becomes 2 inserts, and defaults need to be treated differently\n>>> in each of the 2 queries. That's going to need a little more thought.\n>>>\n>>\n>> Here's an updated patch to handle that case.\n>>\n>> In case it's not obvious, I'm not intending to try to get this into\n>> next week's updates -- more time is needed to be sure of this fix.\n> \n> So I did some more testing of this and I'm reasonably happy that this\n> now fixes the originally reported issue of inconsistent handling of\n> DEFAULTS in multi-row VALUES lists vs single-row ones. I tested\n> various other scenarios involving conditional/unconditional\n> also/instead rules, and I didn't find any other surprises. Attached is\n> an updated patch with improved comments, and a little less code\n> duplication.\n\nThanks for updating the patch.\n\nI can't really comment on all of the changes that that you made\nconsidering various cases, but became curious if the single-row and\nmulti-row inserts cases could share the code that determines if the\nDEFAULT item be replaced by the target-relation-specified default or NULL?\n IOW, is there some reason why we can't avoid the special handling for the\nmulti-row (RTE_VALUES) case?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 13 Feb 2019 12:02:25 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Wed, 13 Feb 2019 at 03:02, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> I can't really comment on all of the changes that that you made\n> considering various cases, but became curious if the single-row and\n> multi-row inserts cases could share the code that determines if the\n> DEFAULT item be replaced by the target-relation-specified default or NULL?\n> IOW, is there some reason why we can't avoid the special handling for the\n> multi-row (RTE_VALUES) case?\n>\n\nNo, not as far as I can see.\n\nThe tricky part for a multi-row insert is working out what to do when\nit sees a DEFAULT, and there is no column default on the target\nrelation. For an auto-updatable view, it needs to leave the DEFAULT\nuntouched, so that it can later apply the base relation's column\ndefault when it recurses. For all other kinds of relation, it needs to\nturn the DEFAULT into a NULL.\n\nFor a single-row insert, that's all much easier. If it sees a DEFAULT,\nand there is no column default, it simply omits that entry from the\ntargetlist. If it then recurses to the base relation, it will put the\ntargetlist entry back in, if the base relation has a column default.\nSo it doesn't need to know locally whether it's an auto-updatable\nview, and the logic is much simpler. The multi-row case can't easily\ndo that (add and remove columns) because it's working with a\nfixed-width table structure.\n\nActually, that's not quite the end of it. So far, this has only been\nconsidering INSERT's. I think there are more issues with UPDATE's, but\nthat's a whole other can of worms. I think I'll commit this first, and\nstart a thread on -hackers to discuss that.\n\nRegards,\nDean\n\n",
"msg_date": "Wed, 13 Feb 2019 09:04:40 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On 2019/02/13 18:04, Dean Rasheed wrote:\n> On Wed, 13 Feb 2019 at 03:02, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>\n>> I can't really comment on all of the changes that that you made\n>> considering various cases, but became curious if the single-row and\n>> multi-row inserts cases could share the code that determines if the\n>> DEFAULT item be replaced by the target-relation-specified default or NULL?\n>> IOW, is there some reason why we can't avoid the special handling for the\n>> multi-row (RTE_VALUES) case?\n>>\n> \n> No, not as far as I can see.\n> \n> The tricky part for a multi-row insert is working out what to do when\n> it sees a DEFAULT, and there is no column default on the target\n> relation. For an auto-updatable view, it needs to leave the DEFAULT\n> untouched, so that it can later apply the base relation's column\n> default when it recurses. For all other kinds of relation, it needs to\n> turn the DEFAULT into a NULL.\n> \n> For a single-row insert, that's all much easier. If it sees a DEFAULT,\n> and there is no column default, it simply omits that entry from the\n> targetlist. If it then recurses to the base relation, it will put the\n> targetlist entry back in, if the base relation has a column default.\n> So it doesn't need to know locally whether it's an auto-updatable\n> view, and the logic is much simpler. The multi-row case can't easily\n> do that (add and remove columns) because it's working with a\n> fixed-width table structure.\n\nHmm yeah, column sets must be the same in in all value-lists.\n\n> Actually, that's not quite the end of it. So far, this has only been\n> considering INSERT's. I think there are more issues with UPDATE's, but\n> that's a whole other can of worms. I think I'll commit this first, and\n> start a thread on -hackers to discuss that.\n\nSure, thanks for the explanation.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 13 Feb 2019 18:33:05 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Tue, 12 Feb 2019 at 10:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Here's an updated patch ...\n\nSo I pushed that. However, ...\n\nPlaying around with it some more, I realised that whilst this appeared\nto fix the reported problem, it exposes another issue which is down to\nthe interaction between rewriteTargetListIU() and rewriteValuesRTE()\n--- for an INSERT with a VALUES RTE, rewriteTargetListIU() computes a\nlist of target attribute numbers (attrno_list) from the targetlist in\nits original order, which rewriteValuesRTE() then relies on being the\nsame length (and in the same order) as each of the lists in the VALUES\nRTE. That's OK for the initial invocation of those functions, but it\nbreaks down when they're recursively invoked for auto-updatable views.\n\nFor example, the following test-case, based on a slight variation of\nthe new regression tests:\n\ncreate table foo (a int default 1, b int);\ncreate view foo_v as select * from foo;\nalter view foo_v alter column b set default 2;\ninsert into foo_v values (default), (default);\n\ntriggers the following Assert in rewriteValuesRTE():\n\n /* Check list lengths (we can assume all the VALUES sublists are alike) */\n Assert(list_length(attrnos) == list_length(linitial(rte->values_lists)));\n\nWhat's happening is that the initial targetlist, which contains just 1\nentry for the column a, gains another entry to set the default for\ncolumn b from the view. 
Then, when it recurses into\nrewriteTargetListIU()/rewriteValuesRTE() for the base relation, the\nsize of the targetlist (now 2) no longer matches the sizes of the\nVALUES RTE lists (1).\n\nI think that that failed Assert was unreachable prior to 41531e42d3,\nbecause the old version of rewriteValuesRTE() would always replace all\nunresolved DEFAULT items with NULLs, so when it recursed into\nrewriteValuesRTE() for the base relation, it would always bail out\nearly because there would be no DEFAULT items left, and so it would\nfail to notice the list size mismatch.\n\nMy first thought was that this could be fixed by having\nrewriteTargetListIU() compute attrno_list using only those targetlist\nentries that refer to the VALUES RTE, and thus omit any new entries\nadded due to view defaults. That doesn't work though, because that's\nnot the only way that a list size mismatch can be triggered --- it's\nalso possible that rewriteTargetListIU() will merge together\ntargetlist entries, for example if they're array element assignments\nreferring to the same column, in which case the rewritten targetlist\ncould be shorter than the VALUES RTE lists, and attempting to compute\nattrno_list from it correctly would be trickier.\n\nSo actually, I think the right way to fix this is to give up trying to\ncompute attrno_list in rewriteTargetListIU(), and have\nrewriteValuesRTE() work out the attribute assignments itself for each\ncolumn of the VALUES RTE by examining the rewritten targetlist. That\nlooks to be quite straightforward, and results in a cleaner separation\nof logic between the 2 functions, per the attached patch.\n\nI think that rewriteValuesRTE() should only ever see DEFAULT items for\ncolumns that are simple assignments to columns of the target relation,\nso it only needs to work out the target attribute numbers for TLEs\nthat contain simple Vars referring to the VALUES RTE. 
Any DEFAULT seen\nin a column not matching that would be an error, but actually I think\nsuch a thing ought to be a \"can't happen\" error because of the prior\nchecks during parse analysis, so I've used elog() to report if this\ndoes happen.\n\nIncidentally, it looks like the part of rewriteValuesRTE()'s comment\nabout subscripted and field assignment has been wrong since\na3c7a993d5, so I've attempted to clarify that. That will need to look\ndifferent pre-9.6, I think.\n\nThoughts?\n\nRegards,\nDean",
"msg_date": "Wed, 27 Feb 2019 09:37:11 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On 2019/02/27 18:37, Dean Rasheed wrote:\n> On Tue, 12 Feb 2019 at 10:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>> Here's an updated patch ...\n> \n> So I pushed that. However, ...\n> \n> Playing around with it some more, I realised that whilst this appeared\n> to fix the reported problem, it exposes another issue which is down to\n> the interaction between rewriteTargetListIU() and rewriteValuesRTE()\n> --- for an INSERT with a VALUES RTE, rewriteTargetListIU() computes a\n> list of target attribute numbers (attrno_list) from the targetlist in\n> its original order, which rewriteValuesRTE() then relies on being the\n> same length (and in the same order) as each of the lists in the VALUES\n> RTE. That's OK for the initial invocation of those functions, but it\n> breaks down when they're recursively invoked for auto-updatable views.\n>> So actually, I think the right way to fix this is to give up trying to\n> compute attrno_list in rewriteTargetListIU(), and have\n> rewriteValuesRTE() work out the attribute assignments itself for each\n> column of the VALUES RTE by examining the rewritten targetlist. That\n> looks to be quite straightforward, and results in a cleaner separation\n> of logic between the 2 functions, per the attached patch.\n\n+1. Only rewriteValuesRTE needs to use that information, so it's better\nto teach it to figure it by itself.\n\n> I think that rewriteValuesRTE() should only ever see DEFAULT items for\n> columns that are simple assignments to columns of the target relation,\n> so it only needs to work out the target attribute numbers for TLEs\n> that contain simple Vars referring to the VALUES RTE. 
Any DEFAULT seen\n> in a column not matching that would be an error, but actually I think\n> such a thing ought to be a \"can't happen\" error because of the prior\n> checks during parse analysis, so I've used elog() to report if this\n> does happen.\n\n+ if (attrno == 0)\n+ elog(ERROR, \"Cannot set value in column %d to\nDEFAULT\", i);\n\nMaybe: s/Cannot/cannot/g\n\n+ Assert(list_length(sublist) == numattrs);\n\nIsn't this Assert useless, because we're setting numattrs to\nlist_length(<first-sublist>) and transformInsertStmt ensures that all\nsublists are same length?\n\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 28 Feb 2019 16:46:57 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
},
{
"msg_contents": "On Thu, 28 Feb 2019 at 07:47, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> + if (attrno == 0)\n> + elog(ERROR, \"Cannot set value in column %d to\n> DEFAULT\", i);\n>\n> Maybe: s/Cannot/cannot/g\n>\n\nAh yes, you're right. That is the convention.\n\n\n> + Assert(list_length(sublist) == numattrs);\n>\n> Isn't this Assert useless, because we're setting numattrs to\n> list_length(<first-sublist>) and transformInsertStmt ensures that all\n> sublists are same length?\n>\n\nWell possibly I'm being over-paranoid, but given that it may have come\nvia a previous invocation of rewriteValuesRTE() that may have\ncompletely rebuilt the lists, it seemed best to be sure that it hadn't\ndone something unexpected. It's about to use that to read from the\nattrnos array, so it might read beyond the array bounds if any of the\nprior logic was faulty.\n\nThanks for looking.\n\nRegards,\nDean\n\n",
"msg_date": "Thu, 28 Feb 2019 14:07:46 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15623: Inconsistent use of default for updatable view"
}
]
[
{
"msg_contents": "Hi All,\n\nWhen \"ON SELECT\" rule is created on a table without columns, it\nsuccessfully converts a table into the view. However, when the same is\ndone using CREATE VIEW command, it fails with an error saying: \"view\nmust have at least one column\". Here is what I'm trying to say:\n\n-- create table t1 without columns\ncreate table t1();\n\n-- create table t2 without columns\ncreate table t2();\n\n-- create ON SELECT rule on t1 - this would convert t1 from table to view\ncreate rule \"_RETURN\" as on select to t1 do instead select * from t2;\n\n-- now check the definition of t1\n\\d t1\n\npostgres=# \\d+ t1\n View \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Description\n--------+------+-----------+----------+---------+---------+-------------\nView definition:\n SELECT\n FROM t2;\n\nThe output of \"\\d+ t1\" shows the definition of converted view t1 which\ndoesn't have any columns in the select query.\n\nNow, when i try creating another view with the same definition using\nCREATE VIEW command, it fails with the error -> ERROR: view must have\nat least one column. See below\n\npostgres=# create view v1 as select from t2;\nERROR: view must have at least one column\n\nOR,\n\npostgres=# create view v1 as select * from t2;\nERROR: view must have at least one column\n\nIsn't that a bug in create rule command or am i missing something here ?\n\nIf it is a bug, then, attached is the patch that fixes it.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Fri, 8 Feb 2019 12:18:32 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 12:18 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi All,\n>\n> When \"ON SELECT\" rule is created on a table without columns, it\n> successfully converts a table into the view. However, when the same is\n> done using CREATE VIEW command, it fails with an error saying: \"view\n> must have at least one column\". Here is what I'm trying to say:\n>\n> -- create table t1 without columns\n> create table t1();\n>\n> -- create table t2 without columns\n> create table t2();\n>\n> -- create ON SELECT rule on t1 - this would convert t1 from table to view\n> create rule \"_RETURN\" as on select to t1 do instead select * from t2;\n>\n> -- now check the definition of t1\n> \\d t1\n>\n> postgres=# \\d+ t1\n> View \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage | Description\n> --------+------+-----------+----------+---------+---------+-------------\n> View definition:\n> SELECT\n> FROM t2;\n>\n> The output of \"\\d+ t1\" shows the definition of converted view t1 which\n> doesn't have any columns in the select query.\n>\n> Now, when i try creating another view with the same definition using\n> CREATE VIEW command, it fails with the error -> ERROR: view must have\n> at least one column. 
See below\n>\n> postgres=# create view v1 as select from t2;\n> ERROR: view must have at least one column\n>\n> OR,\n>\n> postgres=# create view v1 as select * from t2;\n> ERROR: view must have at least one column\n>\n> Isn't that a bug in create rule command or am i missing something here ?\n>\n\nYes, it looks like a bug to me.\n\n>\n> If it is a bug, then, attached is the patch that fixes it.\n>\n\nI had a quick glance at the patch - here are a few comments:\n\n1)\n\n+ if (event_relation->rd_rel->relnatts == 0)\n\nCan't use direct relnatts - as need to consider attisdropped.\n\n2)\nI think you may like to change the error message to be in-line with\nthe other error message in the similar code area.\n\nMay be something like:\n\"could not convert table \\\"%s\\\" to a view because table does not have any\ncolumn\"\n\n\nRegards,\nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Fri, 8 Feb 2019 12:41:23 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-08 12:18:32 +0530, Ashutosh Sharma wrote:\n> When \"ON SELECT\" rule is created on a table without columns, it\n> successfully converts a table into the view. However, when the same is\n> done using CREATE VIEW command, it fails with an error saying: \"view\n> must have at least one column\". Here is what I'm trying to say:\n> \n> -- create table t1 without columns\n> create table t1();\n> \n> -- create table t2 without columns\n> create table t2();\n> \n> -- create ON SELECT rule on t1 - this would convert t1 from table to view\n> create rule \"_RETURN\" as on select to t1 do instead select * from t2;\n> \n> -- now check the definition of t1\n> \\d t1\n> \n> postgres=# \\d+ t1\n> View \"public.t1\"\n> Column | Type | Collation | Nullable | Default | Storage | Description\n> --------+------+-----------+----------+---------+---------+-------------\n> View definition:\n> SELECT\n> FROM t2;\n> \n> The output of \"\\d+ t1\" shows the definition of converted view t1 which\n> doesn't have any columns in the select query.\n> \n> Now, when i try creating another view with the same definition using\n> CREATE VIEW command, it fails with the error -> ERROR: view must have\n> at least one column. 
See below\n> \n> postgres=# create view v1 as select from t2;\n> ERROR: view must have at least one column\n> \n> OR,\n> \n> postgres=# create view v1 as select * from t2;\n> ERROR: view must have at least one column\n> \n> Isn't that a bug in create rule command or am i missing something here ?\n> \n> If it is a bug, then, attached is the patch that fixes it.\n> \n> -- \n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n\n> diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c\n> index 3496e6f..cb51955 100644\n> --- a/src/backend/rewrite/rewriteDefine.c\n> +++ b/src/backend/rewrite/rewriteDefine.c\n> @@ -473,6 +473,11 @@ DefineQueryRewrite(const char *rulename,\n> \t\t\t\t\t\t errmsg(\"could not convert table \\\"%s\\\" to a view because it has row security enabled\",\n> \t\t\t\t\t\t\t\tRelationGetRelationName(event_relation))));\n> \n> +\t\t\tif (event_relation->rd_rel->relnatts == 0)\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +\t\t\t\t\t\t errmsg(\"view must have at least one column\")));\n> +\n> \t\t\tif (relation_has_policies(event_relation))\n> \t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\nMaybe I'm missing something, but why do we want to forbid this? Given\nthat we these days allows selects without columns, I see no reason to\nrequire this for views. The view error check long predates allowing\nSELECT and CREATE TABLE without columns. I think it's existence is just\nan oversight. Tom, you did relaxed the permissive cases, any opinion?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 7 Feb 2019 23:18:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 12:48 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-02-08 12:18:32 +0530, Ashutosh Sharma wrote:\n> > When \"ON SELECT\" rule is created on a table without columns, it\n> > successfully converts a table into the view. However, when the same is\n> > done using CREATE VIEW command, it fails with an error saying: \"view\n> > must have at least one column\". Here is what I'm trying to say:\n> >\n> > -- create table t1 without columns\n> > create table t1();\n> >\n> > -- create table t2 without columns\n> > create table t2();\n> >\n> > -- create ON SELECT rule on t1 - this would convert t1 from table to view\n> > create rule \"_RETURN\" as on select to t1 do instead select * from t2;\n> >\n> > -- now check the definition of t1\n> > \\d t1\n> >\n> > postgres=# \\d+ t1\n> > View \"public.t1\"\n> > Column | Type | Collation | Nullable | Default | Storage | Description\n> > --------+------+-----------+----------+---------+---------+-------------\n> > View definition:\n> > SELECT\n> > FROM t2;\n> >\n> > The output of \"\\d+ t1\" shows the definition of converted view t1 which\n> > doesn't have any columns in the select query.\n> >\n> > Now, when i try creating another view with the same definition using\n> > CREATE VIEW command, it fails with the error -> ERROR: view must have\n> > at least one column. 
See below\n> >\n> > postgres=# create view v1 as select from t2;\n> > ERROR: view must have at least one column\n> >\n> > OR,\n> >\n> > postgres=# create view v1 as select * from t2;\n> > ERROR: view must have at least one column\n> >\n> > Isn't that a bug in create rule command or am i missing something here ?\n> >\n> > If it is a bug, then, attached is the patch that fixes it.\n> >\n> > --\n> > With Regards,\n> > Ashutosh Sharma\n> > EnterpriseDB:http://www.enterprisedb.com\n>\n> > diff --git a/src/backend/rewrite/rewriteDefine.c\n> b/src/backend/rewrite/rewriteDefine.c\n> > index 3496e6f..cb51955 100644\n> > --- a/src/backend/rewrite/rewriteDefine.c\n> > +++ b/src/backend/rewrite/rewriteDefine.c\n> > @@ -473,6 +473,11 @@ DefineQueryRewrite(const char *rulename,\n> > errmsg(\"could not convert\n> table \\\"%s\\\" to a view because it has row security enabled\",\n> >\n> RelationGetRelationName(event_relation))));\n> >\n> > + if (event_relation->rd_rel->relnatts == 0)\n> > + ereport(ERROR,\n> > +\n> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> > + errmsg(\"view must have at\n> least one column\")));\n> > +\n> > if (relation_has_policies(event_relation))\n> > ereport(ERROR,\n> >\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n>\n> Maybe I'm missing something, but why do we want to forbid this?\n\n\nBecause pg_dump - produce the output for such case as:\n\n CREATE VIEW public.foo AS\n SELECT\n FROM public.bar;\n\nwhich fails to restore because we forbid this in create view:\n\npostgres@20625=#CREATE VIEW public.foo AS\npostgres-# SELECT\npostgres-# FROM public.bar;\nERROR: view must have at least one column\npostgres@20625=#\n\nGiven\n> that we these days allows selects without columns, I see no reason to\n> require this for views. The view error check long predates allowing\n> SELECT and CREATE TABLE without columns. I think it's existence is just\n> an oversight. 
Tom, you did relaxed the permissive cases, any opinion?\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n\n-- \nRushabh Lathia",
"msg_date": "Fri, 8 Feb 2019 14:35:03 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 12:48 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-02-08 12:18:32 +0530, Ashutosh Sharma wrote:\n> > When \"ON SELECT\" rule is created on a table without columns, it\n> > successfully converts a table into the view. However, when the same is\n> > done using CREATE VIEW command, it fails with an error saying: \"view\n> > must have at least one column\". Here is what I'm trying to say:\n> >\n> > -- create table t1 without columns\n> > create table t1();\n> >\n> > -- create table t2 without columns\n> > create table t2();\n> >\n> > -- create ON SELECT rule on t1 - this would convert t1 from table to view\n> > create rule \"_RETURN\" as on select to t1 do instead select * from t2;\n> >\n> > -- now check the definition of t1\n> > \\d t1\n> >\n> > postgres=# \\d+ t1\n> > View \"public.t1\"\n> > Column | Type | Collation | Nullable | Default | Storage | Description\n> > --------+------+-----------+----------+---------+---------+-------------\n> > View definition:\n> > SELECT\n> > FROM t2;\n> >\n> > The output of \"\\d+ t1\" shows the definition of converted view t1 which\n> > doesn't have any columns in the select query.\n> >\n> > Now, when i try creating another view with the same definition using\n> > CREATE VIEW command, it fails with the error -> ERROR: view must have\n> > at least one column. 
See below\n> >\n> > postgres=# create view v1 as select from t2;\n> > ERROR: view must have at least one column\n> >\n> > OR,\n> >\n> > postgres=# create view v1 as select * from t2;\n> > ERROR: view must have at least one column\n> >\n> > Isn't that a bug in create rule command or am i missing something here ?\n> >\n> > If it is a bug, then, attached is the patch that fixes it.\n> >\n> > --\n> > With Regards,\n> > Ashutosh Sharma\n> > EnterpriseDB:http://www.enterprisedb.com\n>\n> > diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c\n> > index 3496e6f..cb51955 100644\n> > --- a/src/backend/rewrite/rewriteDefine.c\n> > +++ b/src/backend/rewrite/rewriteDefine.c\n> > @@ -473,6 +473,11 @@ DefineQueryRewrite(const char *rulename,\n> > errmsg(\"could not convert table \\\"%s\\\" to a view because it has row security enabled\",\n> > RelationGetRelationName(event_relation))));\n> >\n> > + if (event_relation->rd_rel->relnatts == 0)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> > + errmsg(\"view must have at least one column\")));\n> > +\n> > if (relation_has_policies(event_relation))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n>\n> Maybe I'm missing something, but why do we want to forbid this? Given\n> that we these days allows selects without columns, I see no reason to\n> require this for views. The view error check long predates allowing\n> SELECT and CREATE TABLE without columns. I think it's existence is just\n> an oversight. Tom, you did relaxed the permissive cases, any opinion?\n>\n\nThat's because, we don't allow creation of a view on a table without\ncolumns. So, shouldn't we do the same when converting table to a view\nthat doesn't have any column in it. 
Regarding why we can't allow\nselect on a view without columns given that select on a table without\ncolumn is possible, I don't have any answer :)\n\nI can see that, even SELECT without any targetlist or table name in\nit, works fine, see this,\n\npostgres=# select;\n--\n(1 row)\n\n",
"msg_date": "Fri, 8 Feb 2019 14:46:04 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "\n\nOn February 8, 2019 10:05:03 AM GMT+01:00, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n>On Fri, Feb 8, 2019 at 12:48 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>\n>> Hi,\n>>\n>> On 2019-02-08 12:18:32 +0530, Ashutosh Sharma wrote:\n>> > When \"ON SELECT\" rule is created on a table without columns, it\n>> > successfully converts a table into the view. However, when the same\n>is\n>> > done using CREATE VIEW command, it fails with an error saying:\n>\"view\n>> > must have at least one column\". Here is what I'm trying to say:\n>> >\n>> > -- create table t1 without columns\n>> > create table t1();\n>> >\n>> > -- create table t2 without columns\n>> > create table t2();\n>> >\n>> > -- create ON SELECT rule on t1 - this would convert t1 from table\n>to view\n>> > create rule \"_RETURN\" as on select to t1 do instead select * from\n>t2;\n>> >\n>> > -- now check the definition of t1\n>> > \\d t1\n>> >\n>> > postgres=# \\d+ t1\n>> > View \"public.t1\"\n>> > Column | Type | Collation | Nullable | Default | Storage |\n>Description\n>> >\n>--------+------+-----------+----------+---------+---------+-------------\n>> > View definition:\n>> > SELECT\n>> > FROM t2;\n>> >\n>> > The output of \"\\d+ t1\" shows the definition of converted view t1\n>which\n>> > doesn't have any columns in the select query.\n>> >\n>> > Now, when i try creating another view with the same definition\n>using\n>> > CREATE VIEW command, it fails with the error -> ERROR: view must\n>have\n>> > at least one column. 
See below\n>> >\n>> > postgres=# create view v1 as select from t2;\n>> > ERROR: view must have at least one column\n>> >\n>> > OR,\n>> >\n>> > postgres=# create view v1 as select * from t2;\n>> > ERROR: view must have at least one column\n>> >\n>> > Isn't that a bug in create rule command or am i missing something\n>here ?\n>> >\n>> > If it is a bug, then, attached is the patch that fixes it.\n>> >\n>> > --\n>> > With Regards,\n>> > Ashutosh Sharma\n>> > EnterpriseDB:http://www.enterprisedb.com\n>>\n>> > diff --git a/src/backend/rewrite/rewriteDefine.c\n>> b/src/backend/rewrite/rewriteDefine.c\n>> > index 3496e6f..cb51955 100644\n>> > --- a/src/backend/rewrite/rewriteDefine.c\n>> > +++ b/src/backend/rewrite/rewriteDefine.c\n>> > @@ -473,6 +473,11 @@ DefineQueryRewrite(const char *rulename,\n>> > errmsg(\"could not\n>convert\n>> table \\\"%s\\\" to a view because it has row security enabled\",\n>> >\n>> RelationGetRelationName(event_relation))));\n>> >\n>> > + if (event_relation->rd_rel->relnatts == 0)\n>> > + ereport(ERROR,\n>> > +\n>> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n>> > + errmsg(\"view must\n>have at\n>> least one column\")));\n>> > +\n>> > if (relation_has_policies(event_relation))\n>> > ereport(ERROR,\n>> >\n>> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n>>\n>> Maybe I'm missing something, but why do we want to forbid this?\n>\n>\n>Because pg_dump - produce the output for such case as:\n>\n> CREATE VIEW public.foo AS\n> SELECT\n> FROM public.bar;\n>\n>which fails to restore because we forbid this in create view:\n>\n>postgres@20625=#CREATE VIEW public.foo AS\n>postgres-# SELECT\n>postgres-# FROM public.bar;\n>ERROR: view must have at least one column\n>postgres@20625=#\n\nYou misunderstood my point: I'm asking why we shouldn't remove that check from views, rather than adding it to create rule.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Fri, 08 Feb 2019 10:35:03 +0100",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 3:05 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n>\n> On February 8, 2019 10:05:03 AM GMT+01:00, Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> >On Fri, Feb 8, 2019 at 12:48 PM Andres Freund <andres@anarazel.de>\n> >wrote:\n> >\n> >> Hi,\n> >>\n> >> On 2019-02-08 12:18:32 +0530, Ashutosh Sharma wrote:\n> >> > When \"ON SELECT\" rule is created on a table without columns, it\n> >> > successfully converts a table into the view. However, when the same\n> >is\n> >> > done using CREATE VIEW command, it fails with an error saying:\n> >\"view\n> >> > must have at least one column\". Here is what I'm trying to say:\n> >> >\n> >> > -- create table t1 without columns\n> >> > create table t1();\n> >> >\n> >> > -- create table t2 without columns\n> >> > create table t2();\n> >> >\n> >> > -- create ON SELECT rule on t1 - this would convert t1 from table\n> >to view\n> >> > create rule \"_RETURN\" as on select to t1 do instead select * from\n> >t2;\n> >> >\n> >> > -- now check the definition of t1\n> >> > \\d t1\n> >> >\n> >> > postgres=# \\d+ t1\n> >> > View \"public.t1\"\n> >> > Column | Type | Collation | Nullable | Default | Storage |\n> >Description\n> >> >\n> >--------+------+-----------+----------+---------+---------+-------------\n> >> > View definition:\n> >> > SELECT\n> >> > FROM t2;\n> >> >\n> >> > The output of \"\\d+ t1\" shows the definition of converted view t1\n> >which\n> >> > doesn't have any columns in the select query.\n> >> >\n> >> > Now, when i try creating another view with the same definition\n> >using\n> >> > CREATE VIEW command, it fails with the error -> ERROR: view must\n> >have\n> >> > at least one column. 
See below\n> >> >\n> >> > postgres=# create view v1 as select from t2;\n> >> > ERROR: view must have at least one column\n> >> >\n> >> > OR,\n> >> >\n> >> > postgres=# create view v1 as select * from t2;\n> >> > ERROR: view must have at least one column\n> >> >\n> >> > Isn't that a bug in create rule command or am i missing something\n> >here ?\n> >> >\n> >> > If it is a bug, then, attached is the patch that fixes it.\n> >> >\n> >> > --\n> >> > With Regards,\n> >> > Ashutosh Sharma\n> >> > EnterpriseDB:http://www.enterprisedb.com\n> >>\n> >> > diff --git a/src/backend/rewrite/rewriteDefine.c\n> >> b/src/backend/rewrite/rewriteDefine.c\n> >> > index 3496e6f..cb51955 100644\n> >> > --- a/src/backend/rewrite/rewriteDefine.c\n> >> > +++ b/src/backend/rewrite/rewriteDefine.c\n> >> > @@ -473,6 +473,11 @@ DefineQueryRewrite(const char *rulename,\n> >> > errmsg(\"could not\n> >convert\n> >> table \\\"%s\\\" to a view because it has row security enabled\",\n> >> >\n> >> RelationGetRelationName(event_relation))));\n> >> >\n> >> > + if (event_relation->rd_rel->relnatts == 0)\n> >> > + ereport(ERROR,\n> >> > +\n> >> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> >> > + errmsg(\"view must\n> >have at\n> >> least one column\")));\n> >> > +\n> >> > if (relation_has_policies(event_relation))\n> >> > ereport(ERROR,\n> >> >\n> >> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> >>\n> >> Maybe I'm missing something, but why do we want to forbid this?\n> >\n> >\n> >Because pg_dump - produce the output for such case as:\n> >\n> > CREATE VIEW public.foo AS\n> > SELECT\n> > FROM public.bar;\n> >\n> >which fails to restore because we forbid this in create view:\n> >\n> >postgres@20625=#CREATE VIEW public.foo AS\n> >postgres-# SELECT\n> >postgres-# FROM public.bar;\n> >ERROR: view must have at least one column\n> >postgres@20625=#\n>\n> You misunderstood my point: I'm asking why we shouldn't remove that check from views, rather than adding it to create rule.\n>\n\nHere is the second 
point from my previous response:\n\n\"Regarding why we can't allow select on a view without columns given\nthat select on a table without column is possible, I don't have any\nanswer :)\"\n\nI prepared the patch assuming that the current behaviour of create\nview on a table without column is fine.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 8 Feb 2019 15:37:00 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> You misunderstood my point: I'm asking why we shouldn't remove that check from views, rather than adding it to create rule.\n\n+1. This seems pretty obviously to be something we just missed when\nwe changed things to allow zero-column tables.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 09:25:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 7:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > You misunderstood my point: I'm asking why we shouldn't remove that check from views, rather than adding it to create rule.\n>\n> +1. This seems pretty obviously to be something we just missed when\n> we changed things to allow zero-column tables.\n>\n\nThanks Andres for bringing up that point and thanks Tom for the confirmation.\n\nAttached is the patch that allows us to create view on a table without\ncolumns. I've also added some test-cases for it in create_view.sql.\nPlease have a look and let me know your opinion.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Fri, 8 Feb 2019 21:57:14 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> Attached is the patch that allows us to create view on a table without\n> columns. I've also added some test-cases for it in create_view.sql.\n> Please have a look and let me know your opinion.\n\nHaven't read the patch, but a question seems in order here: should\nwe regard this as a back-patchable bug fix? The original example\nshows that it's possible to create a zero-column view in existing\nreleases, which I believe would then lead to dump/reload failures.\nSo that seems to qualify as a bug not just a missing feature.\nOn the other hand, given the lack of field complaints, maybe it's\nnot worth the trouble to back-patch. I don't have a strong\nopinion either way.\n\nBTW, has anyone checked on what the matview code paths will do?\nOr SELECT INTO?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 13:02:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > Attached is the patch that allows us to create view on a table without\n> > columns. I've also added some test-cases for it in create_view.sql.\n> > Please have a look and let me know your opinion.\n>\n> Haven't read the patch, but a question seems in order here: should\n> we regard this as a back-patchable bug fix? The original example\n> shows that it's possible to create a zero-column view in existing\n> releases, which I believe would then lead to dump/reload failures.\n> So that seems to qualify as a bug not just a missing feature.\n> On the other hand, given the lack of field complaints, maybe it's\n> not worth the trouble to back-patch. I don't have a strong\n> opinion either way.\n>\n\nIn my opinion, this looks like a bug fix that needs to be back ported,\nelse, we might encounter dump/restore failure in some cases, like the\none described in the first email.\n\n> BTW, has anyone checked on what the matview code paths will do?\n> Or SELECT INTO?\n>\n\nI just checked on that and found that both mat view and SELECT INTO\nstatement works like CREATE TABLE AS command and it doesn't really\ncare about the target list of the source table unlike normal views\nwhich would error out when the source table has no columns.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 9 Feb 2019 00:20:51 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Sat, Feb 9, 2019 at 12:20 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> On Fri, Feb 8, 2019 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > > Attached is the patch that allows us to create view on a table without\n> > > columns. I've also added some test-cases for it in create_view.sql.\n> > > Please have a look and let me know your opinion.\n> >\n> > Haven't read the patch, but a question seems in order here: should\n> > we regard this as a back-patchable bug fix? The original example\n> > shows that it's possible to create a zero-column view in existing\n> > releases, which I believe would then lead to dump/reload failures.\n> > So that seems to qualify as a bug not just a missing feature.\n> > On the other hand, given the lack of field complaints, maybe it's\n> > not worth the trouble to back-patch. I don't have a strong\n> > opinion either way.\n> >\n>\n> In my opinion, this looks like a bug fix that needs to be back ported,\n> else, we might encounter dump/restore failure in some cases, like the\n> one described in the first email.\n>\n> > BTW, has anyone checked on what the matview code paths will do?\n> > Or SELECT INTO?\n> >\n>\n> I just checked on that and found that both mat view and SELECT INTO\n> statement works like CREATE TABLE AS command and it doesn't really\n> care about the target list of the source table unlike normal views\n> which would error out when the source table has no columns.\n>\n\nAdded the regression test-cases for mat views and SELECT INTO\nstatements in the attached patch. Earlier patch just had the\ntest-cases for normal views along with the fix.\n\nAndres, Tom, Please have a look into the attached patch and let me\nknow if I'm still missing something. Thank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Mon, 11 Feb 2019 15:39:03 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-11 15:39:03 +0530, Ashutosh Sharma wrote:\n> Andres, Tom, Please have a look into the attached patch and let me\n> know if I'm still missing something. Thank you.\n\n> --- a/src/test/regress/expected/create_view.out\n> +++ b/src/test/regress/expected/create_view.out\n> @@ -1706,9 +1706,16 @@ select pg_get_ruledef(oid, true) from pg_rewrite\n> 43 AS col_b;\n> (1 row)\n> \n> +-- create view on a table without columns\n> +create table t0();\n> +create view v0 as select * from t0;\n> +select * from v0;\n> +--\n> +(0 rows)\n\nI suggest also adding a view that select zero columns, where the\nunerlying table has columns.\n\nI think it'd be good to name the view in a way that's a bit more unique,\nand leaving it in place. That way pg_dump can be tested with this too\n(and mostly would be tested via pg_upgrade's tests).\n\n\n> -- clean up all the random objects we made above\n> \\set VERBOSITY terse \\\\ -- suppress cascade details\n> DROP SCHEMA temp_view_test CASCADE;\n> NOTICE: drop cascades to 27 other objects\n> DROP SCHEMA testviewschm2 CASCADE;\n> -NOTICE: drop cascades to 62 other objects\n> +NOTICE: drop cascades to 64 other objects\n> diff --git a/src/test/regress/expected/matview.out b/src/test/regress/expected/matview.out\n> index d0121a7..f1d24e6 100644\n> --- a/src/test/regress/expected/matview.out\n> +++ b/src/test/regress/expected/matview.out\n> @@ -589,3 +589,12 @@ SELECT * FROM mvtest2;\n> ERROR: materialized view \"mvtest2\" has not been populated\n> HINT: Use the REFRESH MATERIALIZED VIEW command.\n> ROLLBACK;\n> +-- create materialized view on a table without columns\n> +create table mt0();\n> +create materialized view mv0 as select * from mt0;\n> +select * from mv0;\n> +--\n> +(0 rows)\n\nSame.\n\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 11 Feb 2019 02:21:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Thanks Andres for the quick review.\n\nOn Mon, Feb 11, 2019 at 3:52 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-02-11 15:39:03 +0530, Ashutosh Sharma wrote:\n> > Andres, Tom, Please have a look into the attached patch and let me\n> > know if I'm still missing something. Thank you.\n>\n> > --- a/src/test/regress/expected/create_view.out\n> > +++ b/src/test/regress/expected/create_view.out\n> > @@ -1706,9 +1706,16 @@ select pg_get_ruledef(oid, true) from pg_rewrite\n> > 43 AS col_b;\n> > (1 row)\n> >\n> > +-- create view on a table without columns\n> > +create table t0();\n> > +create view v0 as select * from t0;\n> > +select * from v0;\n> > +--\n> > +(0 rows)\n>\n> I suggest also adding a view that select zero columns, where the\n> unerlying table has columns.\n>\n\nDone.\n\n> I think it'd be good to name the view in a way that's a bit more unique,\n> and leaving it in place. That way pg_dump can be tested with this too\n> (and mostly would be tested via pg_upgrade's tests).\n>\n\nRenamed view to something like -> 'view_no_column' and\n'view_zero_column' and didn't drop it so that it so that it gets\ntested with pg_upgrade.\n\n>\n> > -- clean up all the random objects we made above\n> > \\set VERBOSITY terse \\\\ -- suppress cascade details\n> > DROP SCHEMA temp_view_test CASCADE;\n> > NOTICE: drop cascades to 27 other objects\n> > DROP SCHEMA testviewschm2 CASCADE;\n> > -NOTICE: drop cascades to 62 other objects\n> > +NOTICE: drop cascades to 64 other objects\n> > diff --git a/src/test/regress/expected/matview.out b/src/test/regress/expected/matview.out\n> > index d0121a7..f1d24e6 100644\n> > --- a/src/test/regress/expected/matview.out\n> > +++ b/src/test/regress/expected/matview.out\n> > @@ -589,3 +589,12 @@ SELECT * FROM mvtest2;\n> > ERROR: materialized view \"mvtest2\" has not been populated\n> > HINT: Use the REFRESH MATERIALIZED VIEW command.\n> > ROLLBACK;\n> > +-- create materialized view on a table without 
columns\n> > +create table mt0();\n> > +create materialized view mv0 as select * from mt0;\n> > +select * from mv0;\n> > +--\n> > +(0 rows)\n>\n> Same.\n>\n>\n\nDone.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Mon, 11 Feb 2019 17:50:42 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> [ allow-create-view-on-table-without-columns-v3.patch ]\n\nPushed. I revised the test cases a bit --- notably, I wanted to be\nsure we exercised pg_dump's createDummyViewAsClause for this, especially\nafter noticing that it wasn't being tested at all before :-(\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 12:40:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ON SELECT rule on a table without columns"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > [ allow-create-view-on-table-without-columns-v3.patch ]\n>\n> Pushed. I revised the test cases a bit --- notably, I wanted to be\n> sure we exercised pg_dump's createDummyViewAsClause for this, especially\n> after noticing that it wasn't being tested at all before :-(\n>\n\nOkay. Thanks for that changes in the test-cases and committing the patch.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 18 Feb 2019 11:21:10 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ON SELECT rule on a table without columns"
}
] |
[
{
"msg_contents": "By default, the fallback_application_name for a physical walreceiver is\n\"walreceiver\". This means that multiple standbys cannot be\ndistinguished easily on a primary, for example in pg_stat_activity or\nsynchronous_standby_names.\n\nI propose, if cluster_name is set, use that for\nfallback_application_name in the walreceiver. (If it's not set, it\nremains \"walreceiver\".) If someone set cluster_name to identify their\ninstance, we might as well use that by default to identify the node\nremotely as well. It's still possible to specify another\napplication_name in primary_conninfo explicitly.\n\nThen you can do something like cluster_name = 'nodeN' and\nsynchronous_standby_names = 'node1,node2,node3' without any further\nfiddling with application_name.\n\nSee attached patches.\n\nI also included a patch to set cluster_name in PostgresNode.pm\ninstances, for easier identification and a bit of minimal testing.\nBecause of the issues described in [0], this doesn't allow dropping the\nexplicit application_name assignments in tests yet, but it's part of the\npath to get there.\n\n[0]:\n<https://www.postgresql.org/message-id/33383613-690e-6f1b-d5ba-4957ff40f6ce@2ndquadrant.com>\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 8 Feb 2019 09:16:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Set fallback_application_name for a walreceiver to cluster_name"
},
{
"msg_contents": "Em sex, 8 de fev de 2019 às 05:16, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> By default, the fallback_application_name for a physical walreceiver is\n> \"walreceiver\". This means that multiple standbys cannot be\n> distinguished easily on a primary, for example in pg_stat_activity or\n> synchronous_standby_names.\n>\nAlthough standby identification could be made by client_addr in\npg_stat_activity, it could be useful in multiple clusters in the same\nhost. Since it is a fallback application name, it can be overridden by\nan application_name in primary_conninfo parameter.\n\n> I propose, if cluster_name is set, use that for\n> fallback_application_name in the walreceiver. (If it's not set, it\n> remains \"walreceiver\".) If someone set cluster_name to identify their\n> instance, we might as well use that by default to identify the node\n> remotely as well. It's still possible to specify another\n> application_name in primary_conninfo explicitly.\n>\nI tested it and both patches work as described. Passes all tests. Doc\ndescribes the proposed feature. Doc builds without errors.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Feb 2019 21:36:06 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Set fallback_application_name for a walreceiver to cluster_name"
},
{
"msg_contents": "On 2019-02-21 01:36, Euler Taveira wrote:\n>> By default, the fallback_application_name for a physical walreceiver is\n>> \"walreceiver\". This means that multiple standbys cannot be\n>> distinguished easily on a primary, for example in pg_stat_activity or\n>> synchronous_standby_names.\n>>\n> Although standby identification could be made by client_addr in\n> pg_stat_activity, it could be useful in multiple clusters in the same\n> host. Since it is a fallback application name, it can be overridden by\n> an application_name in primary_conninfo parameter.\n\nYeah, plus that doesn't help with synchronous_standby_names.\n\n>> I propose, if cluster_name is set, use that for\n>> fallback_application_name in the walreceiver. (If it's not set, it\n>> remains \"walreceiver\".) If someone set cluster_name to identify their\n>> instance, we might as well use that by default to identify the node\n>> remotely as well. It's still possible to specify another\n>> application_name in primary_conninfo explicitly.\n>>\n> I tested it and both patches work as described. Passes all tests. Doc\n> describes the proposed feature. Doc builds without errors.\n\nCommitted, thanks!\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 11:02:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Set fallback_application_name for a walreceiver to cluster_name"
}
] |
[
{
"msg_contents": "A couple months ago, I implemented prepared statements for PyGreSQL. While\nupdating our application in advance of their release with that feature, I\nnoticed that our query logs were several times larger.\n\nWith non-prepared statements, we logged to CSV:\n|message | SELECT 1\n\nWith SQL EXECUTE, we log:\n|message | statement: EXECUTE sqlex(2);\n|detail | prepare: prepare sqlex AS SELECT $1;\n\nWith PQexecPrepared, we would log:\n|message | execute qq: PREPARE qq AS SELECT $1\n|detail | parameters: $1 = '3'\n\nFor comparison, with PQexecParams, the logs I see look like this (apparently\nthe \"unnamed\" prepared statement is used behind the scenes):\n|message | execute <unnamed>: SELECT [...]\n\nIt's not clear to me why it'd be desirable for the previous PREPARE to be\nadditionally logged during every execution, instead of just its name (in\n\"message\") and params (in \"detail\"). (Actually, I had to triple check that it\nwasn't somehow executing a prepared statement which itself created a prepared\nstatement...)\n\nFor us, the performance benefit is to minimize the overhead (mostly in pygres)\nof many INSERTs into append-only tables. It's not great that a feature\nintended to improve performance is causing 2x more log volume to be written.\n\nAlso, it seems odd that for SQL EXECUTE, the PREPARE is shown in \"detail\", but\nfor the library call, it's shown in \"message\".\n\nI found:\n|commit bc24d5b97673c365f19be21f83acca3c184cf1a7\n|Author: Bruce Momjian <bruce@momjian.us>\n|Date: Tue Aug 29 02:11:30 2006 +0000\n|\n| Now bind displays prepare as detail, and execute displays prepare and\n| optionally bind. I re-added the \"statement:\" label so people will\n| understand why the line is being printed (it is log_*statement\n| behavior).\n| \n| Use single quotes for bind values, instead of double quotes, and double\n| literal single quotes in bind values (and document that). 
I also made\n| use of the DETAIL line to have much cleaner output.\n\nand:\n\n|commit c8961bf1ce0b51019db31c5572dac18b664e02f1\n|Author: Bruce Momjian <bruce@momjian.us>\n|Date: Fri Aug 4 18:53:46 2006 +0000\n|\n| Improve logging of protocol-level prepared statements.\n\nJustin\n\n",
"msg_date": "Fri, 8 Feb 2019 07:29:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "query logging of prepared statements"
},
{
"msg_contents": "On Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:\n> A couple months ago, I implemented prepared statements for PyGreSQL. While\n> updating our application in advance of their release with that feature, I\n> noticed that our query logs were several times larger.\n\nPreviously sent to -general (and quoted fully below), resending to -hackers\nwith patch.\nhttps://www.postgresql.org/message-id/20190208132953.GF29720%40telsasoft.com\n\nI propose that the prepared statement associated with an EXECUTE should only be\nincluded in log \"DETAIL\" when log_error_verbosity=VERBOSE, for both SQL EXECUTE\nand PQexecPrepared. I'd like to be able to continue logging DETAIL (including\nbind parameters), so want like to avoid setting \"TERSE\" just to avoid redundant\nmessages.\n\nWith attached patch, I'm not sure if !*portal_name && !portal->prepStmtName\nwould catch cases other than PQexecParams (?)\n\nCompare unpatched server to patched server to patched server with\nlog_error_verbosity=verbose.\n\n|$ psql postgres -xtc \"SET log_error_verbosity=default;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: PREPARE q AS SELECT 2\n|?column? | 2\n\n|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc \"SET log_error_verbosity=default;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|?column? | 2\n\n|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc \"SET log_error_verbosity=verbose;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: PREPARE q AS SELECT 2\n|?column? 
| 2\n\nFor PQexecPrepared library call:\n\n|$ xPGPORT=5678 xPGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=default; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\n|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=default; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\n|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=verbose; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\nAlso, I noticed that if the statement was prepared using SQL PREPARE (rather\nthan PQprepare), and if it used simple query with multiple commands, they're\nall included in the log, like this when executed with PQexecPrepared:\n|LOG: execute q: SET log_error_verbosity=verbose; SET client_min_messages=log; PREPARE q AS SELECT $1\n\nAnd looks like this for 
SQL EXECUTE:\n|[pryzbyj@telsasoft-db postgresql]$ psql postgres -txc \"SET client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: SET client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2\n|?column? | 2\n\nOn Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:\n> A couple months ago, I implemented prepared statements for PyGreSQL. While\n> updating our application in advance of their release with that feature, I\n> noticed that our query logs were several times larger.\n> \n> With non-prepared statements, we logged to CSV:\n> |message | SELECT 1\n> \n> With SQL EXECUTE, we log:\n> |message | statement: EXECUTE sqlex(2);\n> |detail | prepare: prepare sqlex AS SELECT $1;\n> \n> With PQexecPrepared, we would log:\n> |message | execute qq: PREPARE qq AS SELECT $1\n> |detail | parameters: $1 = '3'\n> \n> For comparison, with PQexecParams, the logs I see look like this (apparently\n> the \"unnamed\" prepared statement is used behind the scenes):\n> |message | execute <unnamed>: SELECT [...]\n> \n> It's not clear to me why it'd be desirable for the previous PREPARE to be\n> additionally logged during every execution, instead of just its name (in\n> \"message\") and params (in \"detail\"). (Actually, I had to triple check that it\n> wasn't somehow executing a prepared statement which itself created a prepared\n> statement...)\n> \n> For us, the performance benefit is to minimize the overhead (mostly in pygres)\n> of many INSERTs into append-only tables. 
It's not great that a feature\n> intended to improve performance is causing 2x more log volume to be written.\n> \n> Also, it seems odd that for SQL EXECUTE, the PREPARE is shown in \"detail\", but\n> for the library call, it's shown in \"message\".\n> \n> I found:\n> |commit bc24d5b97673c365f19be21f83acca3c184cf1a7\n> |Author: Bruce Momjian <bruce@momjian.us>\n> |Date: Tue Aug 29 02:11:30 2006 +0000\n> |\n> | Now bind displays prepare as detail, and execute displays prepare and\n> | optionally bind. I re-added the \"statement:\" label so people will\n> | understand why the line is being printed (it is log_*statement\n> | behavior).\n> | \n> | Use single quotes for bind values, instead of double quotes, and double\n> | literal single quotes in bind values (and document that). I also made\n> | use of the DETAIL line to have much cleaner output.\n> \n> and:\n> \n> |commit c8961bf1ce0b51019db31c5572dac18b664e02f1\n> |Author: Bruce Momjian <bruce@momjian.us>\n> |Date: Fri Aug 4 18:53:46 2006 +0000\n> |\n> | Improve logging of protocol-level prepared statements.\n> \n> Justin",
"msg_date": "Sat, 9 Feb 2019 19:57:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "Sigh, resending to -hackers for real.\n\nOn Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:\n> A couple months ago, I implemented prepared statements for PyGreSQL. While\n> updating our application in advance of their release with that feature, I\n> noticed that our query logs were several times larger.\n\nPreviously sent to -general (and quoted fully below), resending to -hackers\nwith patch.\nhttps://www.postgresql.org/message-id/20190208132953.GF29720%40telsasoft.com\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html\n\nI propose that the prepared statement associated with an EXECUTE should be\nincluded in log \"DETAIL\" only when log_error_verbosity=VERBOSE, for both SQL\nEXECUTE and PQexecPrepared (if at all). I'd like to be able to continue\nlogging DETAIL (including bind parameters), so want like to avoid setting\n\"TERSE\" just to avoid redundant messages. (It occurs to me that the GUC should\nprobably stick to its existing documented behavior rather than be extended,\nwhich suggests that the duplicitive portions of the logs should simply be\nremoved, rather than conditionalized. Let me know what you think).\n\nWith attached patch, I'm not sure if !*portal_name && !portal->prepStmtName\nwould catch cases other than PQexecParams (?)\n\nCompare unpatched server to patched server to patched server with\nlog_error_verbosity=verbose.\n\n|$ psql postgres -xtc \"SET log_error_verbosity=default;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: PREPARE q AS SELECT 2\n|?column? | 2\n\n|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc \"SET log_error_verbosity=default;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|?column? 
| 2\n\n|$ PGHOST=/tmp PGPORT=5678 psql postgres -xtc \"SET log_error_verbosity=verbose;SET log_statement='all'; SET client_min_messages=log\" -c \"PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|SET\n|LOG: statement: PREPARE q AS SELECT 2\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: PREPARE q AS SELECT 2\n|?column? | 2\n\nFor PQexecPrepared library call:\n\n|$ xPGPORT=5678 xPGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=default; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\n|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=default; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\n|$ PGPORT=5678 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=log; SET log_error_verbosity=verbose; SET log_statement=\\\"all\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1]); d.query_formatted('SELECT %s', [2])\"\n|LOG: statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: execute q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n|LOG: execute <unnamed>: SELECT $1\n|DETAIL: parameters: $1 = '2'\n\nAlso, I noticed that if the 
statement was prepared using SQL PREPARE (rather\nthan PQprepare), and if it used simple query with multiple commands, they're\nall included in the log, like this when executed with PQexecPrepared:\n|LOG: execute q: SET log_error_verbosity=verbose; SET client_min_messages=log; PREPARE q AS SELECT $1\n\nAnd looks like this for SQL EXECUTE:\n|[pryzbyj@telsasoft-db postgresql]$ psql postgres -txc \"SET client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2\" -c \"EXECUTE q\"\n|PREPARE\n|LOG: statement: EXECUTE q\n|DETAIL: prepare: SET client_min_messages=log;SELECT 1;PREPARE q AS SELECT 2\n|?column? | 2\n\nOn Fri, Feb 08, 2019 at 07:29:53AM -0600, Justin Pryzby wrote:\n> A couple months ago, I implemented prepared statements for PyGreSQL. While\n> updating our application in advance of their release with that feature, I\n> noticed that our query logs were several times larger.\n> \n> With non-prepared statements, we logged to CSV:\n> |message | SELECT 1\n> \n> With SQL EXECUTE, we log:\n> |message | statement: EXECUTE sqlex(2);\n> |detail | prepare: prepare sqlex AS SELECT $1;\n> \n> With PQexecPrepared, we would log:\n> |message | execute qq: PREPARE qq AS SELECT $1\n> |detail | parameters: $1 = '3'\n> \n> For comparison, with PQexecParams, the logs I see look like this (apparently\n> the \"unnamed\" prepared statement is used behind the scenes):\n> |message | execute <unnamed>: SELECT [...]\n> \n> It's not clear to me why it'd be desirable for the previous PREPARE to be\n> additionally logged during every execution, instead of just its name (in\n> \"message\") and params (in \"detail\"). (Actually, I had to triple check that it\n> wasn't somehow executing a prepared statement which itself created a prepared\n> statement...)\n> \n> For us, the performance benefit is to minimize the overhead (mostly in pygres)\n> of many INSERTs into append-only tables. 
It's not great that a feature\n> intended to improve performance is causing 2x more log volume to be written.\n> \n> Also, it seems odd that for SQL EXECUTE, the PREPARE is shown in \"detail\", but\n> for the library call, it's shown in \"message\".\n> \n> I found:\n> |commit bc24d5b97673c365f19be21f83acca3c184cf1a7\n> |Author: Bruce Momjian <bruce@momjian.us>\n> |Date: Tue Aug 29 02:11:30 2006 +0000\n> |\n> | Now bind displays prepare as detail, and execute displays prepare and\n> | optionally bind. I re-added the \"statement:\" label so people will\n> | understand why the line is being printed (it is log_*statement\n> | behavior).\n> | \n> | Use single quotes for bind values, instead of double quotes, and double\n> | literal single quotes in bind values (and document that). I also made\n> | use of the DETAIL line to have much cleaner output.\n> \n> and:\n> \n> |commit c8961bf1ce0b51019db31c5572dac18b664e02f1\n> |Author: Bruce Momjian <bruce@momjian.us>\n> |Date: Fri Aug 4 18:53:46 2006 +0000\n> |\n> | Improve logging of protocol-level prepared statements.\n> \n> Justin",
"msg_date": "Fri, 15 Feb 2019 08:57:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 08:57:04AM -0600, Justin Pryzby wrote:\n> I propose that the prepared statement associated with an EXECUTE should be\n> included in log \"DETAIL\" only when log_error_verbosity=VERBOSE, for both SQL\n> EXECUTE and PQexecPrepared (if at all). I'd like to be able to continue\n> logging DETAIL (including bind parameters), so would like to avoid setting\n> \"TERSE\" just to avoid redundant messages. (It occurs to me that the GUC should\n> probably stick to its existing documented behavior rather than be extended,\n> which suggests that the duplicative portions of the logs should simply be\n> removed, rather than conditionalized. Let me know what you think).\n\nI'm attaching a v2 patch which avoids repeated logging of PREPARE, rather than\nmaking such logs conditional on log_error_verbosity=VERBOSE, since\nlog_error_verbosity is documented to control whether these are output:\nDETAIL/HINT/QUERY/CONTEXT/SQLSTATE.\n\nFor SQL EXECUTE, excluding \"detail\" seems reasonable (perhaps for\nlog_error_verbosity<VERBOSE). But for PQexecPrepared, the\nv1 patch made log_error_verbosity also control the \"message\" output, which is\noutside the scope of its documented behavior.\n\n|message | execute qq: PREPARE qq AS SELECT $1\n|detail | parameters: $1 = '3'\n\nhttps://www.postgresql.org/docs/current/runtime-config-logging.html\n|Controls the amount of detail written in the server log for each message that\n|is logged. Valid values are TERSE, DEFAULT, and VERBOSE, each adding more\n|fields to displayed messages. TERSE excludes the logging of DETAIL, HINT,\n|QUERY, and CONTEXT error information. VERBOSE output includes the SQLSTATE\n|error code (see also Appendix A) and the source code file name, function name,\n|and line number that generated the error. 
Only superusers can change this\n|setting.\n\nAs I mentioned in my original message, it seems odd that for SQL EXECUTE, the\nPREPARE is shown in \"detail\", but for the library call, it's shown in\n\"message\". This patch resolves that inconsistency by showing it in neither.",
"msg_date": "Wed, 27 Feb 2019 12:06:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "Hello Justin,\n\nOn 27.02.2019 21:06, Justin Pryzby wrote:\n> I'm attaching a v2 patch which avoids repeated logging of PREPARE, rather than\n> making such logs conditional on log_error_verbosity=VERBOSE, since\n> log_error_verbosity is documented to control whether these are output:\n> DETAIL/HINT/QUERY/CONTEXT/SQLSTATE.\nI looked at the patch and got an interesting result with different parameters.\n\nWith log_statement='all' and log_min_duration_statement='0' I get:\n\n=# execute test_ins(3);\nLOG: statement: execute test_ins(3);\nLOG: duration: 17.283 ms\n\nBut with log_statement='none' and log_min_duration_statement='0' I get:\n\n=# execute test_ins(3);\nLOG: duration: 8.439 ms statement: execute test_ins(3);\nDETAIL: prepare: prepare test_ins (int) as\ninsert into test values ($1);\n\nIs this intended? In the second result I got the query details.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Mon, 4 Mar 2019 18:53:31 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On Mon, Mar 04, 2019 at 06:53:31PM +0300, Arthur Zakirov wrote:\n> Hello Justin,\n> \n> On 27.02.2019 21:06, Justin Pryzby wrote:\n> >I'm attaching a v2 patch which avoids repeated logging of PREPARE, rather than\n> >making such logs conditional on log_error_verbosity=VERBOSE, since\n> >log_error_verbosity is documented to control whether these are output:\n> >DETAIL/HINT/QUERY/CONTEXT/SQLSTATE.\n> I looked the patch. I got interesting result with different parameters.\n> \n> But with log_statement='none' and log_min_duration_statement='0' I get:\n> \n> =# execute test_ins(3);\n> LOG: duration: 8.439 ms statement: execute test_ins(3);\n> DETAIL: prepare: prepare test_ins (int) as\n> insert into test values ($1);\n> \n> Is it intended? In the second result I got the query details.\n\nIt wasn't intentional. Find attached v3 patch which handles that case,\nby removing the 2nd call to errdetail_execute() ; since it's otherwise unused,\nso remove that function entirely.\n\n|postgres=# execute test_ins(3);\n|2019-03-04 11:56:15.997 EST [4044] LOG: duration: 0.637 ms statement: execute test_ins(3);\n\nI also fixed a 2nd behavior using library call pqExecPrepared with\nlog_min_duration_statement=0.\n\nIt was logging:\n|LOG: duration: 0.264 ms statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: duration: 0.027 ms bind q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n|LOG: duration: 0.006 ms execute q: SELECT 1; PREPARE q AS SELECT $1\n|DETAIL: parameters: $1 = '1'\n\nBut now logs:\n\nPGPORT=5679 PGHOST=/tmp PYTHONPATH=../PyGreSQL/build/lib.linux-x86_64-2.7/ python2.7 -c \"import pg; d=pg.DB('postgres'); d.query('SET client_min_messages=error; SET log_error_verbosity=default; SET log_min_duration_statement=0; SET log_statement=\\\"none\\\"'); d.query('SELECT 1; PREPARE q AS SELECT \\$1'); d.query_prepared('q',[1])\"\n|LOG: duration: 0.479 ms statement: SELECT 1; PREPARE q AS SELECT $1\n|LOG: duration: 0.045 ms bind q\n|DETAIL: parameters: 
$1 = '1'\n|LOG: duration: 0.008 ms execute q\n|DETAIL: parameters: $1 = '1'\n\nThanks for reviewing. I'm also interested in discussion about whether this\nchange is undesirable for anyone else for some reason? For me, the existing\noutput seems duplicative and \"denormalized\". :)\n\nJustin",
"msg_date": "Mon, 4 Mar 2019 12:31:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On 04.03.2019 21:31, Justin Pryzby wrote:\n> It wasn't intentional. Find attached v3 patch which handles that case,\n> by removing the 2nd call to errdetail_execute() ; since it's otherwise unused,\n> so remove that function entirely.\n\nThank you.\n\n> Thanks for reviewing. I'm also interested in discussion about whether this\n> change is undesirable for someone else for some reason ? For me, the existing\n> output seems duplicative and \"denormalized\". :)\n\nI perfectly understand your use case. I agree, it is duplicated. But I \nthink some people may want to see it at every EXECUTE, if they don't \nwant to grep for the prepared statement body which was logged earlier.\n\nI think it would be good to make this behavior configurable. \nIn the first version of your patch you relied on the log_error_verbosity GUC. \nI'm not sure that this variable is suitable for configuring the visibility \nof the prepared statement body in logs, because it sets more general \nbehavior. Maybe it would be better to introduce some new GUC variable if \nthe community doesn't mind.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Tue, 5 Mar 2019 13:30:18 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "Hi Justin,\n\nOn 3/5/19 2:30 PM, Arthur Zakirov wrote:\n> On 04.03.2019 21:31, Justin Pryzby wrote:\n>> It wasn't intentional. Find attached v3 patch which handles that case,\n>> by removing the 2nd call to errdetail_execute() ; since it's otherwise \n>> unused,\n>> so remove that function entirely.\n> \n> Thank you.\n> \n>> Thanks for reviewing. I'm also interested in discussion about whether \n>> this\n>> change is undesirable for someone else for some reason ? For me, the \n>> existing\n>> output seems duplicative and \"denormalized\". :)\n> \n> I perfectly understand your use case. I agree, it is duplicated. But I \n> think some people may want to see it at every EXECUTE, if they don't \n> want to grep for the prepared statement body which was logged earlier.\n> \n> I think it would be good to make this behavior configurable. \n> In the first version of your patch you relied on the log_error_verbosity GUC. \n> I'm not sure that this variable is suitable for configuring the visibility \n> of the prepared statement body in logs, because it sets more general \n> behavior. Maybe it would be better to introduce some new GUC variable if \n> the community doesn't mind.\nThoughts on this?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 14:46:00 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: query logging of prepared statements"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 20, 2019 at 02:46:00PM +0400, David Steele wrote:\n> >I perfectly understand your use case. I agree, it is duplicated. But I\n> >think some people may want to see it at every EXECUTE, if they don't want\n> >to grep for the prepared statement body which was logged earlier.\n> >\n> >I think it would be good to make this behavior configurable.\n> >In the first version of your patch you relied on the log_error_verbosity GUC. I'm\n> >not sure that this variable is suitable for configuring the visibility of\n> >the prepared statement body in logs, because it sets more general behavior.\n> >Maybe it would be better to introduce some new GUC variable if the\n> >community doesn't mind.\n>\n> Thoughts on this?\n\nI'm happy to make the behavior configurable, but I'm having trouble believing\nmany people would want a GUC added for this. But I'd be interested to hear\ninput on whether it should be configurable, or whether the original \"verbose\"\nlogging is desirable to anyone.\n\nThis is mostly tangential, but since writing the original patch, I considered\nthe possibility of a logging GUC which scales better than log_* booleans:\nhttps://www.postgresql.org/message-id/20190316122422.GR6030%40telsasoft.com\nIf that idea were desirable, there could be a logging_bit to enable verbose\nlogging of prepared statements.\n\nJustin\n\n",
"msg_date": "Wed, 20 Mar 2019 08:28:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: query logging of prepared statements"
},
{
"msg_contents": "On 2019-Mar-04, Justin Pryzby wrote:\n\n> Thanks for reviewing. I'm also interested in discussion about whether this\n> change is undesirable for someone else for some reason ? For me, the existing\n> output seems duplicative and \"denormalized\". :)\n\nSome digging turned up that the function you're removing was added by\ncommit 893632be4e17. The commit message mentions output for testlibpq3,\nso I ran that against a patched server. With log_min_duration_statement=0\nI get this, which looks good:\n\n2019-04-04 14:59:15.529 -03 [31723] LOG: duration: 0.108 ms statement: SET search_path = testlibpq3\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.303 ms parse <unnamed>: SELECT * FROM test1 WHERE t = $1\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.231 ms bind <unnamed>\n2019-04-04 14:59:15.530 -03 [31723] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.016 ms execute <unnamed>\n2019-04-04 14:59:15.530 -03 [31723] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.096 ms parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.053 ms bind <unnamed>\n2019-04-04 14:59:15.530 -03 [31723] DETAIL: parameters: $1 = '2'\n2019-04-04 14:59:15.530 -03 [31723] LOG: duration: 0.007 ms execute <unnamed>\n2019-04-04 14:59:15.530 -03 [31723] DETAIL: parameters: $1 = '2'\n\nAn unpatched server emits this:\n\n2019-04-04 15:03:01.176 -03 [1165] LOG: duration: 0.163 ms statement: SET search_path = testlibpq3\n2019-04-04 15:03:01.176 -03 [1165] LOG: duration: 0.475 ms parse <unnamed>: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:03:01.177 -03 [1165] LOG: duration: 0.403 ms bind <unnamed>: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:03:01.177 -03 [1165] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:03:01.177 -03 [1165] LOG: duration: 0.028 ms execute <unnamed>: SELECT * FROM test1 WHERE t = 
$1\n2019-04-04 15:03:01.177 -03 [1165] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:03:01.177 -03 [1165] LOG: duration: 0.177 ms parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:03:01.177 -03 [1165] LOG: duration: 0.096 ms bind <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:03:01.177 -03 [1165] DETAIL: parameters: $1 = '2'\n2019-04-04 15:03:01.177 -03 [1165] LOG: duration: 0.014 ms execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:03:01.177 -03 [1165] DETAIL: parameters: $1 = '2'\n\nNote that with your patch we no longer get the statement in the \"bind\" and\n\"execute\" lines, which seems good; that was far too noisy.\n\n\nHowever, turning duration logging off and using log_statement=all, this is what\nI get:\n\n2019-04-04 14:58:42.564 -03 [31685] LOG: statement: SET search_path = testlibpq3\n2019-04-04 14:58:42.565 -03 [31685] LOG: execute <unnamed>\n2019-04-04 14:58:42.565 -03 [31685] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 14:58:42.565 -03 [31685] LOG: execute <unnamed>\n2019-04-04 14:58:42.565 -03 [31685] DETAIL: parameters: $1 = '2'\n\nwhich does not look good -- the statement is nowhere to be seen. 
The commit\nmessage quotes this as desirable output:\n\n LOG: statement: execute <unnamed>: SELECT * FROM test1 WHERE t = $1\n DETAIL: parameters: $1 = 'joe''s place'\n LOG: statement: execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n DETAIL: parameters: $1 = '2'\n\nwhich is approximately what I get with an unpatched server:\n\n2019-04-04 15:04:25.718 -03 [1235] LOG: statement: SET search_path = testlibpq3\n2019-04-04 15:04:25.719 -03 [1235] LOG: execute <unnamed>: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:04:25.719 -03 [1235] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:04:25.720 -03 [1235] LOG: execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:04:25.720 -03 [1235] DETAIL: parameters: $1 = '2'\n\nSo I think this needs a bit more work.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 15:07:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On 2019-Apr-04, Alvaro Herrera wrote:\n\n> However, turning duration logging off and using log_statement=all, this is what\n> I get:\n> \n> 2019-04-04 14:58:42.564 -03 [31685] LOG: statement: SET search_path = testlibpq3\n> 2019-04-04 14:58:42.565 -03 [31685] LOG: execute <unnamed>\n> 2019-04-04 14:58:42.565 -03 [31685] DETAIL: parameters: $1 = 'joe''s place'\n> 2019-04-04 14:58:42.565 -03 [31685] LOG: execute <unnamed>\n> 2019-04-04 14:58:42.565 -03 [31685] DETAIL: parameters: $1 = '2'\n\nWith this patch (pretty much equivalent to reinstanting the\nerrdetail_execute for that line),\n\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex dbc7b797c6e..fd73d5e9951 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -2056,7 +2056,6 @@ exec_execute_message(const char *portal_name, long max_rows)\n \t\t\t\t\t\tprepStmtName,\n \t\t\t\t\t\t*portal_name ? \"/\" : \"\",\n \t\t\t\t\t\t*portal_name ? portal_name : \"\"),\n-\t\t\t\t errhidestmt(true),\n \t\t\t\t errdetail_params(portalParams)));\n \t\twas_logged = true;\n \t}\n\nI get what seems to be pretty much what is wanted for this case:\n\n2019-04-04 15:18:16.817 -03 [4559] LOG: statement: SET search_path = testlibpq3\n2019-04-04 15:18:16.819 -03 [4559] LOG: execute <unnamed>\n2019-04-04 15:18:16.819 -03 [4559] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:18:16.819 -03 [4559] STATEMENT: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:18:16.820 -03 [4559] LOG: execute <unnamed>\n2019-04-04 15:18:16.820 -03 [4559] DETAIL: parameters: $1 = '2'\n2019-04-04 15:18:16.820 -03 [4559] STATEMENT: SELECT * FROM test1 WHERE i = $1::int4\n\nHowever, by setting both log_statement=all and\nlog_min_duration_statement=0 and that patch (I also added %l to\nlog_line_prefix), I get this:\n\n2019-04-04 15:23:45 -03 [5208-1] LOG: statement: SET search_path = testlibpq3\n2019-04-04 15:23:45 -03 [5208-2] LOG: duration: 0.441 ms\n2019-04-04 15:23:45 -03 [5208-3] 
LOG: duration: 1.127 ms parse <unnamed>: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:23:45 -03 [5208-4] LOG: duration: 0.789 ms bind <unnamed>\n2019-04-04 15:23:45 -03 [5208-5] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:23:45 -03 [5208-6] LOG: execute <unnamed>\n2019-04-04 15:23:45 -03 [5208-7] DETAIL: parameters: $1 = 'joe''s place'\n2019-04-04 15:23:45 -03 [5208-8] STATEMENT: SELECT * FROM test1 WHERE t = $1\n2019-04-04 15:23:45 -03 [5208-9] LOG: duration: 0.088 ms\n2019-04-04 15:23:45 -03 [5208-10] LOG: duration: 0.363 ms parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:23:45 -03 [5208-11] LOG: duration: 0.206 ms bind <unnamed>\n2019-04-04 15:23:45 -03 [5208-12] DETAIL: parameters: $1 = '2'\n2019-04-04 15:23:45 -03 [5208-13] LOG: execute <unnamed>\n2019-04-04 15:23:45 -03 [5208-14] DETAIL: parameters: $1 = '2'\n2019-04-04 15:23:45 -03 [5208-15] STATEMENT: SELECT * FROM test1 WHERE i = $1::int4\n2019-04-04 15:23:45 -03 [5208-16] LOG: duration: 0.053 ms\n\nNote that line 5208-8 is duplicative of 5208-3.\n\nI think we could improve on this by setting a \"logged\" flag in the\nportal; if the Parse logs the statement, then don't include the\nstatement in further lines, otherwise do.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 15:26:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On 2019-Apr-04, Alvaro Herrera wrote:\n\n> I think we could improve on this by setting a \"logged\" flag in the\n> portal; if the Parse logs the statement, then don't include the\n> statement in further lines, otherwise do.\n\nAlso: I think such a flag could help the case of a query that takes\nlong enough to execute to exceed the log_min_duration_statement, but not\nlong enough to parse. Right now, log_min_duration_statement=500 shows\n\n2019-04-04 15:59:39 -03 [6353-1] LOG: duration: 2002.298 ms execute <unnamed>\n2019-04-04 15:59:39 -03 [6353-2] DETAIL: parameters: $1 = 'joe''s place'\n\nif I change the testlibpq3 query to be \n\t\"SELECT * FROM test1 WHERE t = $1 and pg_sleep(1) is not null\",\n\nAlso, if you parse once and bind/execute many times, IMO the statement\nshould be logged exactly once. I think you could do that with the flag I\npropose.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 16:01:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 03:07:04PM -0300, Alvaro Herrera wrote:\n> which does not look good -- the statement is nowhere to be seen. The commit\n> message quotes this as desirable output:\n\nGood catch.\n\nUnnamed statements sent behind the scenes by PQexecParams weren't being logged.\n\nI specifically handled unnamed statements in my v1 patch, and tested that in\n20190215145704.GW30291@telsasoft.com, but for some reason dropped that logic in\nv2, which was intended to only remove behavior conditional on\nlog_error_verbosity.\n\nPrevious patches also never logged PQprepare with named prepared statements\n(unnamed prepared statements were handled in v1 and SQL PREPARE was handled as\na simple statement). \n\nOn Thu, Apr 04, 2019 at 03:26:30PM -0300, Alvaro Herrera wrote:\n> With this patch (pretty much equivalent to reinstanting the\n> errdetail_execute for that line),\n\nThat means the text of the prepared statement is duplicated for each execute,\nwhich is what we're trying to avoid, no?\n\nAttached patch promotes the message to LOG in exec_parse_message. Parse is a\nprotocol-layer message, and I think it's used (only) by PQprepare and\nPQexecParams.\n\ntestlibpq3 now shows:\n\n|LOG: parse <unnamed>: SELECT * FROM test1 WHERE t = $1\n|LOG: execute <unnamed>\n|DETAIL: parameters: $1 = 'joe''s place'\n\n|LOG: parse <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n|LOG: execute <unnamed>\n|DETAIL: parameters: $1 = '2'\n\nCompare unpatched v11.2, where the text of the prepared statement was shown in\neach \"execute\" rather than in the \"parse\" phase:\n\n|LOG: execute <unnamed>: SELECT * FROM test1 WHERE t = $1\n|DETAIL: parameters: $1 = 'joe''s place'\n\n|LOG: execute <unnamed>: SELECT * FROM test1 WHERE i = $1::int4\n|DETAIL: parameters: $1 = '2'\n\nJustin",
"msg_date": "Thu, 4 Apr 2019 16:26:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 16:01:26 -0300, Alvaro Herrera wrote:\n> Also, if you parse once and bind/execute many times, IMO the statement\n> should be logged exactly once. I think you could that with the flag I\n> propose.\n\nI'm not actually sure I buy this. Consider e.g. log analysis for\nworkloads with long-running connections. If most statements are just\nalready prepared statements - pretty common in higher throughput apps -\nthe query will suddenly be either far away in the logfile (thereby\nrequiring pretty expensive analysis to figure out the corresponding\nstatement) or even in a different logfile due to rotation.\n\nI'm sympathetic to the desire to reduce log volume, but I'm fearful this\nwould make log analysis much harder. Searching through many gigabytes\njust to find the query text of the statement being executed over and\nover doesn't sound great.\n\nI think deduplicating the logging between bind and execute has less of\nthat hazard.\n\n- Andres\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:16:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "Hi,\n\nThanks both for thinking about this.\n\nOn Fri, Apr 05, 2019 at 06:16:38PM -0700, Andres Freund wrote:\n> On 2019-04-04 16:01:26 -0300, Alvaro Herrera wrote:\n> > Also, if you parse once and bind/execute many times, IMO the statement\n> > should be logged exactly once. I think you could that with the flag I\n> > propose.\n\n> I think deduplicating the logging between bind and execute has less of\n> that hazard.\n\nDo you mean the bind parameters, which are only duplicated in the case of\nlog_min_duration_statement?\n\nI actually implemented that, using Alvaro's suggestion of a flag in the Portal,\nand decided that if duration is exceeded, for bind or execute, then it's likely\ndesirable to log the params, even if it's duplicative. Since I've been using\nlog_statement=all, and not log_min_duration_statement, I don't have a strong\nopinion about it.\n\nPerhaps you're right (and perhaps I should switch to\nlog_min_duration_statement). I'll tentatively plan to withdraw the patch.\n\nJustin\n\n",
"msg_date": "Sun, 7 Apr 2019 10:31:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: query logging of prepared statements"
},
{
"msg_contents": "On 2019-Apr-07, Justin Pryzby wrote:\n\n> [...] Since I've been using log_statement=all, and not\n> log_min_duration_statement, I don't have a strong opinion about it.\n\nAh, so your plan for this in production is to use the sample-based\nlogging facilities, I see! Did you get Adrien a beer already?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Apr 2019 12:09:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: query logging of prepared statements"
}
] |
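As an aside on the thread above: the per-portal "logged" flag that Álvaro proposes (log the statement text once at Parse, then only the portal name at Bind/Execute) can be sketched in isolation. The following is an illustrative Python model of the deduplication logic only; the `Portal` class and log strings are invented for the sketch and are not PostgreSQL's actual C implementation:

```python
# Illustrative model of a per-portal "logged" flag: the statement text
# is logged once (at parse), and later execute lines carry only the
# portal name.  NOT PostgreSQL source code; everything here is a sketch.

class Portal:
    def __init__(self, name, query):
        self.name = name
        self.query = query
        self.stmt_logged = False  # becomes True once the text is logged

log_lines = []

def log_parse(portal):
    # Parse logs the full statement text and sets the flag.
    log_lines.append(f"LOG: parse {portal.name}: {portal.query}")
    portal.stmt_logged = True

def log_execute(portal, params):
    # Execute repeats the statement text only if it was never logged.
    if portal.stmt_logged:
        log_lines.append(f"LOG: execute {portal.name}")
    else:
        log_lines.append(f"LOG: execute {portal.name}: {portal.query}")
        portal.stmt_logged = True
    log_lines.append(f"DETAIL: parameters: {params}")

p = Portal("<unnamed>", "SELECT * FROM test1 WHERE t = $1")
log_parse(p)
log_execute(p, "$1 = 'joe''s place'")
log_execute(p, "$1 = 'xyzzy'")

# The query text appears exactly once across all log lines.
assert sum("WHERE t = $1" in line for line in log_lines) == 1
```

This also makes Andres's objection concrete: with long-lived connections, the single logged copy of the statement may sit far away (or in a rotated file) from the execute lines that reference it, which is the trade-off the thread leaves unresolved.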
[
{
"msg_contents": "Hi list,\n\nI've attached a patch that implements SortSupport for the\ninet/cidr types. It has the effect of typically reducing\nthe time taken to sort these types by ~50-60% (as measured\nby `SELECT COUNT(DISTINCT ...)`), which will carry over to\ncommon operations like index creation, `ORDER BY`, and\n`DISTINCT`.\n\nAs with other SortSupport implementations, this one\nproposes an abbreviated key design that packs as much\nsorting-relevant information into the available datum as\npossible while remaining faithful to the existing sorting\nrules for the types. The key format depends on datum size\nand whether working with IPv4 or IPv6. In most cases that\ninvolves storing as many netmask bits as we can fit, but we\ncan do something a little more complete with IPv4 on 64-bit\n— because inet data is small and the datum is large, we\ncan store enough information for a successful key-only\ncomparison in the majority of cases. Precise details\nincluding background and bit-level diagrams are included in\nthe comments of the attached patch.\n\nI've tried to take a variety of steps to build confidence\nthat the proposed SortSupport-based sort is correct:\n\n1. It passes the existing inet regression suite (which was\n pretty complete already).\n\n2. I added a few new test cases to the suite, specifically\n trying to target edge cases like the minimums and\n maximums of each abbreviated key bit \"boundary\". The new\n cases were run against the master branch to double-check\n that they're right.\n\n3. I've added a variety of assertions throughout the\n implementation to make sure that each step is seeing\n inputs with expected parameters, and only manipulates\n the bits that it's supposed to be manipulating.\n\n4. I built large random data sets (10M rows) and sorted\n them against a development build to try and trigger the\n aforementioned assertions.\n\n5. I built an index on 10M values and ran amcheck against\n it.\n\n6. 
I wrote some unit tests to verify that the bit-level\n representation of the new abbreviated keys are indeed\n what we expect. They're available here [3]. I didn't\n include them in the patch because I couldn't find an\n easy way to produce an expected `.out` file for a 32-bit\n machine (experiments building with `setarch` didn't\n succeed). They might be overkill anyway. I can continue\n to pursue getting them working if reviewers think they'd\n be useful.\n\nMy benchmarking methodology and script is available here\n[1], and involves gathering statistics for 100\n`count(distinct ...)` queries at various data sizes. I've\nsaved the results I got on my machine here [2].\n\nHat tip to Peter Geoghegan for advising on an appropriate\nabbreviated key design and helping answer many questions\nalong the way.\n\nBrandur",
"msg_date": "Fri, 8 Feb 2019 07:48:40 -0800",
"msg_from": "Brandur Leach <brandur@mutelight.org>",
"msg_from_op": true,
"msg_subject": "Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "Attached a V2 patch: identical to V1 except rebased and\nwith a new OID selected.",
"msg_date": "Fri, 8 Feb 2019 10:19:57 -0800",
"msg_from": "Brandur Leach <brandur@mutelight.org>",
"msg_from_op": true,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On Sat, 9 Feb 2019 at 04:48, Brandur Leach <brandur@mutelight.org> wrote:\n> I've attached a patch that implements SortSupport for the\n> inet/cidr types. It has the effect of typically reducing\n> the time taken to sort these types by ~50-60% (as measured\n> by `SELECT COUNT(DISTINCT ...)` which will carry over to\n> common operations like index creation, `ORDER BY`, and\n> `DISTINCT`.\n\nHi Brandur,\n\nI had a look at this. Your V2 patch applies cleanly, and the code was\nstraightforward and well commented. I appreciate the big comment at the\ntop of network_abbrev_convert explaining how you encode the data.\n\nThe tests pass. I ran a couple of large scale tests myself and didn't find\nany problems. Sorting a million random inets in work_mem = 256MB goes from\nroughly 3670ms to 1620ms with the SortSupport, which is pretty impressive.\n(But that's in my debug build, so not a serious benchmark.)\n\nAn interesting thing about sorting IPv4 inets on 64-bit machines is that\nwhen the inets are the same, the abbreviated comparator will return 0 which\nis taken by the sorting machinery to mean \"the datums are the same up to\nthis point, so you need to call the full comparator\" -- but, in this case,\n0 means \"the datums truly are the same, no need to call the full\ncomparator\". Since the full comparator assumes its arguments to be the\noriginal (typically pass-by-reference) datums, you can't do it there.\nYou'd need to add another optional comparator to be called after the\nabbreviated one. In inet's case on a 64-bit machine, it would look at the\nabbreviated datums and if they're both in the IPv4 family, would return 0\n(knowing that the abbreviated comparator has already done the real work).\nI have no reason to believe this particular optimisation is worth anything\nmuch, though; it's outside the scope of this patch, besides.\n\nI have some comments on the comments:\n\nnetwork.c:552\n* SortSupport conversion routine. 
Converts original inet/cidr\nrepresentations\n* to abbreviated keys . The inet/cidr types are pass-by-reference, so is an\n* optimization so that sorting routines don't have to pull full values from\n* the heap to compare.\n\nLooks like you have an extra space before the \".\" on line 553. And\nabbreviated keys being an optimisation for pass-by-reference types can be\ntaken for granted, so I think the last sentence is redundant.\n\nnetwork.c::567\n* IPv4 and IPv6 are identical in this makeup, with the difference being that\n* IPv4 addresses have a maximum of 32 bits compared to IPv6's 64 bits, so in\n* IPv6 each part may be larger.\n\nIPv6's addresses are 128 bit. I'm not sure if \"maximum\" is accurate,\nor whether you should just say \"IPv4 addresses have 32 bits\".\n\nnetwork.c::571\n* inet/cdir types compare using these sorting rules. If inequality is\ndetected\n* at any step, comparison is done. If any rule is a tie, the algorithm drops\n* through to the next to break it:\n\nWhen you say \"comparison is done\" it sounds like more comparing is going to\nbe done, but what I think you mean is that comparison is finished.\n\n> [...]\n\n> My benchmarking methodology and script is available here\n> [1], and involves gathering statistics for 100\n> `count(distinct ...)` queries at various data sizes. I've\n> saved the results I got on my machine here [2].\n\nI didn't see any links for [1], [2] and [3] in your email.\n\nFinally, there's a duplicate CF entry:\nhttps://commitfest.postgresql.org/22/1990/ .\n\nSince you're updating https://commitfest.postgresql.org/22/1991/ , I\nsuggest you mark 1990 as Withdrawn to avoid confusion. If there's a way to\nremove it from the CF list, that would be even better.\n\nEdmund",
"msg_date": "Sat, 9 Feb 2019 20:12:53 +1300",
"msg_from": "Edmund Horner <ejrh00@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
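Editor's note: the review above describes how network_abbrev_convert packs an inet value's family, network bits, netmask size, and remaining subnet bits into a single fixed-size datum, so that one unsigned-integer comparison can stand in for the full comparator in most cases. The sketch below is a simplified Python model of that idea; the field widths and the name `abbrev_key_v4` are invented for illustration and do not match the patch's actual C layout.

```python
# Simplified model of an IPv4 abbreviated key on a 64-bit datum.
# Field widths are illustrative, not the patch's actual layout.
import ipaddress

def abbrev_key_v4(addr: str) -> int:
    """Pack an IPv4 inet value (e.g. '1.2.3.4/24') into a 64-bit key.

    Layout (illustrative): 1 family bit (0 = IPv4) | 32 network bits |
    6 bits of netmask size | 25 low-order subnet (host) bits.
    """
    net = ipaddress.ip_interface(addr)
    bits = net.network.prefixlen                 # netmask size, 0..32
    ip = int(net.ip)                             # raw 32-bit address
    netmask = int(net.network.netmask)
    network = ip & netmask                       # masked network part
    subnet = ip & ~netmask & 0xFFFFFFFF          # host ("subnet") bits
    return (0 << 63) | (network << 31) | (bits << 25) | (subnet & 0x1FFFFFF)

# Sorting by the key reproduces the documented inet ordering rules:
# network first, then netmask size, then the remaining host bits.
samples = ["1.2.3.4/24", "1.2.4.0/24", "1.2.3.0/25", "1.2.3.0/24"]
by_key = sorted(samples, key=abbrev_key_v4)
# by_key == ["1.2.3.0/24", "1.2.3.4/24", "1.2.3.0/25", "1.2.4.0/24"]
```

Because all four fields fit in the key for IPv4, an equal key really does mean equal values, which is the basis of Edmund's observation above about the abbreviated comparator's 0 return being authoritative on 64-bit machines.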
{
"msg_contents": "Hi Brandur,\n\n\nOn 2019-02-09 20:12:53 +1300, Edmund Horner wrote:\n> I had a look at this. Your V2 patch applies cleanly, and the code was\n> straightforward and well commented. I appreciate the big comment at the\n> top of network_abbrev_convert explaining how you encode the data.\n\nI've marked the CF entry as waiting-on-author, due to this review.\n\nBrandur, unfortunately this patch has only been submitted to the last\ncommitfest for v12. Our policy is that all nontrivial patches should be\nsubmitted to at least one earlier commitfest. Therefore I unfortunately\nthink that v12 is not a realistic target, sorry :(\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 15 Feb 2019 18:13:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On 2/16/19 4:13 AM, Andres Freund wrote:\n\n> On 2019-02-09 20:12:53 +1300, Edmund Horner wrote:\n>> I had a look at this. Your V2 patch applies cleanly, and the code was\n>> straightforward and well commented. I appreciate the big comment at the\n>> top of network_abbrev_convert explaining how you encode the data.\n> \n> I've marked the CF entry as waiting-on-author, due to this review.\n> \n> Brandur, unfortunately this patch has only been submitted to the last\n> commitfest for v12. Our policy is that all nontrivial patches should be\n> submitted to at least one earlier commitfest. Therefore I unfortunately\n> think that v12 is not a realistic target, sorry :(\n\nI agree, and since there has been no response from the author I have \npushed the target version to PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Tue, 5 Mar 2019 14:36:44 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 11:13 PM Edmund Horner <ejrh00@gmail.com> wrote:\n> I have some comments on the comments:\n\nSeems reasonable to me.\n\nWhere are we on this? I'd like to get the patch committed soon.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Jul 2019 19:54:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 10:20 AM Brandur Leach <brandur@mutelight.org> wrote:\n> Attached a V2 patch: identical to V1 except rebased and\n> with a new OID selected.\n\nAttached is a revised version that I came up with, based on your v2.\n\nI found this part of your approach confusing:\n\n> + /*\n> + * Number of bits in subnet. e.g. An IPv4 that's /24 is 32 - 24 = 8.\n> + *\n> + * However, only some of the bits may have made it into the fixed sized\n> + * datum, so take the smallest number between bits in the subnet and bits\n> + * in the datum which are not part of the network.\n> + */\n> + datum_subnet_size = Min(ip_maxbits(authoritative) - ip_bits(authoritative),\n> + SIZEOF_DATUM * BITS_PER_BYTE - ip_bits(authoritative));\n\nThe way that you put a Min() on the subnet size potentially constrains\nthe size of the bitmask used for the network component of the\nabbreviated key (the component that comes immediately after the\nipfamily status bit). Why not just let the bitmask be a bitmask,\nwithout bringing SIZEOF_DATUM into it? Doing it that way allowed for a\nmore streamlined approach, with significantly fewer special cases. I'm\nnot sure whether or not your approach had bugs, but I didn't like the\nway you sometimes did a straight \"network = ipaddr_datum\" assignment\nwithout masking.\n\nI really liked your diagrams, but much of the text that went with them\neither seemed redundant (it described established rules about how the\nunderlying types sort), or seemed to discuss things that were better\ndiscussed next to the relevant network_abbrev_convert() code.\n\nThoughts?\n-- \nPeter Geoghegan",
"msg_date": "Fri, 26 Jul 2019 18:58:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 6:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I found this part of your approach confusing:\n>\n> > + /*\n> > + * Number of bits in subnet. e.g. An IPv4 that's /24 is 32 - 24 = 8.\n> > + *\n> > + * However, only some of the bits may have made it into the fixed sized\n> > + * datum, so take the smallest number between bits in the subnet and bits\n> > + * in the datum which are not part of the network.\n> > + */\n> > + datum_subnet_size = Min(ip_maxbits(authoritative) - ip_bits(authoritative),\n> > + SIZEOF_DATUM * BITS_PER_BYTE - ip_bits(authoritative));\n>\n> The way that you put a Min() on the subnet size potentially constrains\n> the size of the bitmask used for the network component of the\n> abbreviated key (the component that comes immediately after the\n> ipfamily status bit). Why not just let the bitmask be a bitmask,\n> without bringing SIZEOF_DATUM into it? Doing it that way allowed for a\n> more streamlined approach, with significantly fewer special cases. I'm\n> not sure whether or not your approach had bugs, but I didn't like the\n> way you sometimes did a straight \"network = ipaddr_datum\" assignment\n> without masking.\n\nI guess that the idea here was to prevent masking on ipv6 addresses,\nthough not on ipv4 addresses. Obviously we're only dealing with a\nprefix with ipv6 addresses, whereas we usually have the whole raw\nipaddr with ipv4. Not sure if I'm doing the right thing there in v3,\neven though the tests pass. In any case, this will need to be a lot\nclearer in the final version.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 26 Jul 2019 19:25:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
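Editor's note: the masking concern Peter raises above can be shown concretely. This is a minimal sketch with invented names (`mask_network` stands in for the masking the abbreviated-key code must apply; it is not a function from the patch): if the host bits are not cleared, two addresses in the same network would get different "network" fields in their keys, and the key ordering would diverge from the real comparison rules.

```python
# Why the network component of an abbreviated key must be masked.
def mask_network(ipaddr: int, ip_bits: int, ip_maxbits: int = 32) -> int:
    """Keep only the ip_bits most significant bits of ipaddr."""
    if ip_bits == 0:
        return 0                      # every bit is a subnet (host) bit
    mask = ((1 << ip_bits) - 1) << (ip_maxbits - ip_bits)
    return ipaddr & mask

a = 0x01020304                        # 1.2.3.4
b = 0x01020305                        # 1.2.3.5
# Same /24 network once masked, even though the raw addresses differ:
assert mask_network(a, 24) == mask_network(b, 24) == 0x01020300
```

The `ip_bits == 0` branch mirrors the special case discussed later in the thread, where the whole address consists of subnet bits and there is no network component at all.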
{
"msg_contents": "Thanks the follow ups on this one Edmund/Peter!\n\nI've attached a new V4 variant of the patch based on\nPeter's V3, mostly containing comment amendments and a few\nother minor stylistic fixes.\n\n> An interesting thing about sorting IPv4 inets on 64-bit machines is that\nwhen the inets are the same, the abbreviated comparitor will return 0 which\nis taken by the sorting machinery to mean \"the datums are the same up to\nthis point, so you need to call the full comparitor\" — but, in this case, 0\nmeans \"the datums truly are the same, no need to call the full\ncomparitor\". Since the full comparitor assumes its arguments to be the\noriginal (typically pass-by-reference) datums, you can't do it there.\nYou'd need to add another optional comparitor to be called after the\nabbreviated one. In inet's case on a 64-bit machine, it would look at the\nabbreviated datums and if they're both in the IPv4 family, would return 0\n(knowing that the abbreviated comparitor has already done the real work).\nI have no reason to believe this particular optimisation is worth anything\nmuch, though; it's outside the scope of this patch, besides.\n\nEdmund: thanks a lot for the really quick review turn\naround, and apologies for not following up sooner!\n\nAgreed that this change is out-of-scope, but it could be an\ninteresting improvement. You'd have similar potential speed\nimprovements for other SortSupport data types like uuid,\nstrings (short ones), and macaddr. Low cardinality data\nsets would probably benefit the most.\n\n> When you say \"comparison is done\" it sounds like more comparing is going\nto be done, but what I think you mean is that comparison is finished.\n\nPeter's version of my patch ended up stripping out and/or\nchanging some of my original comments, so most of them are\nfixed by virtue of that. 
And agreed about the ambiguity in\nwording above — I changed \"done\" to \"finished\".\n\n> I didn't see any links for [1], [2] and [3] in your email.\n\nGood catch. Here are the original footnote links:\n\n [1] https://github.com/brandur/inet-sortsupport-test\n [2]\nhttps://github.com/brandur/inet-sortsupport-test/blob/master/results.md\n [3]\nhttps://github.com/brandur/postgres/compare/master...brandur-inet-sortsupport-unit#diff-a28824d1339d3bb74bb0297c60140dd1\n\n> The way that you put a Min() on the subnet size potentially constrains\n> the size of the bitmask used for the network component of the\n> abbreviated key (the component that comes immediately after the\n> ipfamily status bit). Why not just let the bitmask be a bitmask,\n> without bringing SIZEOF_DATUM into it? Doing it that way allowed for a\n> more streamlined approach, with significantly fewer special cases.\n\nPeter: thanks a lot for the very thorough look and revised\nversion! Generally agreed that fewer special cases is good,\nbut I was also trying to make sure that we didn't\ncompromise the code's understandability by optimizing for\nfewer special cases above everything else (especially for\nthis sort of thing where tricky bit manipulation is\ninvolved).\n\nBut I traced through your variant and it looks fine to me\n(still looks correct, and readability is still good). I've\npulled most of your changes into V4.\n\n> I'm not sure whether or not your approach had bugs, but I\n> didn't like the way you sometimes did a straight \"network\n> = ipaddr_datum\" assignment without masking.\n\nFor what it's worth, I believe this did work, even if it\ndid depend on being within that one branch of code. 
Agreed\nthough that avoiding it (as in the new version) is more\nhygienic.\n\n> I really liked your diagrams, but much of the text that went with them\n> either seemed redundant (it described established rules about how the\n> underlying types sort), or seemed to discuss things that were better\n> discussed next to the relevant network_abbrev_convert() code.\n\nThanks! I kept most of your changes, but resurrected some\nof my original introductory text. The fine points of the\ncode's implementation are intricate enough that I think\nhaving some background included is useful to new entrants;\nspecifically:\n\n1. Norms for naming the different \"parts\" (network, size,\n subnet) of an inet/cidr value aren't broadly\n well-established, so defining what they are before\n showing diagrams is helpful.\n\n2. Having examples of each part (`1.2.3.0`, `/24`,\n `0.0.0.4`) helps mentally cement them.\n\n3. I know that it's in the PG documentation, but the rules\n for sorting inet/cidr are not very intuitive. Spending a\n few lines re-iterating them so that they don't have to\n be cross-referenced elsewhere is worth the space.\n\nAnyway, once again appreciate the extreme attention to\ndetail on these reviews — this level of rigor would be a\nvery rare find in projects outside of the Postgres\ncommunity!\n\nBrandur",
"msg_date": "Sun, 28 Jul 2019 13:14:02 -0700",
"msg_from": "Brandur Leach <brandur@mutelight.org>",
"msg_from_op": true,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "And a slightly amended version of the last patch with a bug\nfixed where IPv4 abbreviated keys were were not being\ninitialized correctly on big-endian machines.",
"msg_date": "Mon, 29 Jul 2019 20:17:19 -0700",
"msg_from": "Brandur Leach <brandur@mutelight.org>",
"msg_from_op": true,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
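Editor's note: the big-endian bug fixed above comes down to a general property of abbreviated keys: they are compared as native unsigned integers, so the address bytes (which arrive in big-endian "network order") must be loaded so that integer order matches bytewise address order. The Python sketch below illustrates why the load direction matters; it is not the patch's actual C code, which handles the byte-swap on little-endian machines itself.

```python
# Abbreviated keys are compared as unsigned integers, so address bytes
# must be interpreted big-endian for integer order to match address order.
a = bytes([1, 2, 3, 4])     # 1.2.3.4
b = bytes([1, 2, 3, 5])     # 1.2.3.5

# Big-endian load preserves address order under integer comparison:
assert (int.from_bytes(a, "big") < int.from_bytes(b, "big")) == (a < b)

# A little-endian load can invert the order:
c = bytes([1, 0, 0, 0])     # 1.0.0.0
d = bytes([0, 0, 0, 2])     # 0.0.0.2
assert int.from_bytes(c, "big") > int.from_bytes(d, "big")        # correct
assert int.from_bytes(c, "little") < int.from_bytes(d, "little")  # inverted
```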
{
"msg_contents": "On Fri, Jul 26, 2019 at 7:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I guess that the idea here was to prevent masking on ipv6 addresses,\n> though not on ipv4 addresses. Obviously we're only dealing with a\n> prefix with ipv6 addresses, whereas we usually have the whole raw\n> ipaddr with ipv4. Not sure if I'm doing the right thing there in v3,\n> even though the tests pass. In any case, this will need to be a lot\n> clearer in the final version.\n\nThis turned out to be borked for certain IPv6 cases, as suspected.\nAttached is a revised v6, which fixes the issue by adding the explicit\nhandling needed when ipaddr_datum is just a prefix of the full ipaddr\nfrom the authoritative representation. Also made sure that the tests\nwill catch issues like this. Separately, it occurred to me that it's\nprobably not okay to do straight type punning of the ipaddr unsigned\nchar array to a Datum on alignment-picky platforms. Using a memcpy()\nseems like the right approach, which is what we do in the latest\nrevision.\n\nI accepted almost all of Brandur's comment revisions from v5 for v6.\n\nI'm probably going to commit this tomorrow morning Pacific time. Do\nyou have any final input on the testing, Brandur? I would like to hear\nyour thoughts on the possibility of edge cases that still don't have\ncoverage. The tests will break if the new \"if (ip_bits(authoritative)\n== 0)\" branch is removed, but only at one exact point. I'm pretty sure\nthat there are no remaining subtleties like that one, but two heads\nare better than one.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 31 Jul 2019 10:49:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "Thanks Peter. V6 is pretty uncontroversial by me — the new\nconditional ladder broken cleanly into cases of (1) all\nsubnet, (2) network/subnet mix, and (3) all network is a\nlittle more verbose, but all in all makes things easier to\nreason about.\n\n> Do you have any final input on the testing, Brandur? I\n> would like to hear your thoughts on the possibility of\n> edge cases that still don't have coverage. The tests will\n> break if the new \"if (ip_bits(authoritative) == 0)\"\n> branch is removed, but only at one exact point. I'm\n> pretty sure that there are no remaining subtleties like\n> that one, but two heads are better than one.\n\n(Discussed a little offline already, but) the new, more\nexhaustive test suite combined with your approach of\nsynthesizing many random values and comparing before and\nafter sorting seems sufficient to me. The approach was\nmeant to suss out any remaining edge cases, and seems to\nhave done its job.\n\nBrandur\n\nOn Wed, Jul 31, 2019 at 10:49 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Fri, Jul 26, 2019 at 7:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I guess that the idea here was to prevent masking on ipv6 addresses,\n> > though not on ipv4 addresses. Obviously we're only dealing with a\n> > prefix with ipv6 addresses, whereas we usually have the whole raw\n> > ipaddr with ipv4. Not sure if I'm doing the right thing there in v3,\n> > even though the tests pass. In any case, this will need to be a lot\n> > clearer in the final version.\n>\n> This turned out to be borked for certain IPv6 cases, as suspected.\n> Attached is a revised v6, which fixes the issue by adding the explicit\n> handling needed when ipaddr_datum is just a prefix of the full ipaddr\n> from the authoritative representation. Also made sure that the tests\n> will catch issues like this. 
Separately, it occurred to me that it's\n> probably not okay to do straight type punning of the ipaddr unsigned\n> char array to a Datum on alignment-picky platforms. Using a memcpy()\n> seems like the right approach, which is what we do in the latest\n> revision.\n>\n> I accepted almost all of Brandur's comment revisions from v5 for v6.\n>\n> I'm probably going to commit this tomorrow morning Pacific time. Do\n> you have any final input on the testing, Brandur? I would like to hear\n> your thoughts on the possibility of edge cases that still don't have\n> coverage. The tests will break if the new \"if (ip_bits(authoritative)\n> == 0)\" branch is removed, but only at one exact point. I'm pretty sure\n> that there are no remaining subtleties like that one, but two heads\n> are better than one.\n>\n> --\n> Peter Geoghegan\n>\n",
"msg_date": "Thu, 1 Aug 2019 08:34:06 -0700",
"msg_from": "Brandur Leach <brandur@mutelight.org>",
"msg_from_op": true,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 8:34 AM Brandur Leach <brandur@mutelight.org> wrote:\n> Thanks Peter. V6 is pretty uncontroversial by me — the new\n> conditional ladder broken cleanly into cases of (1) all\n> subnet, (2) network/subnet mix, and (3) all network is a\n> little more verbose, but all in all makes things easier to\n> reason about.\n\nPushed.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 1 Aug 2019 09:41:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Patch for SortSupport implementation on inet/cdir"
}
] |
[
{
"msg_contents": "Why don't we provide a small reserved OID range that can be used by\npatch authors temporarily, with the expectation that they'll be\nreplaced by \"real\" OIDs at the point the patch gets committed? This\nwould be similar the situation with catversion bumps -- we don't\nexpect patches that will eventually need them to have them.\n\nIt's considered good practice to choose an OID that's at the beginning\nof the range shown by the unused_oids script, so naturally there is a\ngood chance that any patch that adds a system catalog entry will bit\nrot prematurely. This seems totally unnecessary to me. You could even\nhave a replace_oids script under this system. That would replace the\nknown-temporary OIDs with mapped contiguous real values at the time of\ncommit (maybe it would just print out which permanent OIDs to use in\nplace of the temporary ones, and leave the rest up to the committer).\nI don't do Perl, so I'm not volunteering for this.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 8 Feb 2019 09:59:42 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Why don't we provide a small reserved OID range that can be used by\n> patch authors temporarily, with the expectation that they'll be\n> replaced by \"real\" OIDs at the point the patch gets committed? This\n> would be similar the situation with catversion bumps -- we don't\n> expect patches that will eventually need them to have them.\n\nQuite a few people have used OIDs up around 8000 or 9000 for this purpose;\nI doubt we need a formally reserved range for it. The main problem with\ndoing it is the hazard that the patch'll get committed just like that,\nsuddenly breaking things for everyone else doing likewise.\n\n(I would argue, in fact, that the reason we have any preassigned OIDs\nabove perhaps 6000 is that exactly this has happened before.)\n\nA script such as you suggest might be a good way to reduce the temptation\nto get lazy at the last minute. Now that the catalog data is pretty\nmachine-readable, I suspect it wouldn't be very hard --- though I'm\nnot volunteering either. I'm envisioning something simple like \"renumber\nall OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\nability to skip any already-used OIDs in the target range.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 13:14:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> A script such as you suggest might be a good way to reduce the temptation\n> to get lazy at the last minute. Now that the catalog data is pretty\n> machine-readable, I suspect it wouldn't be very hard --- though I'm\n> not volunteering either. I'm envisioning something simple like \"renumber\n> all OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\n> ability to skip any already-used OIDs in the target range.\n\nI imagined that the machine-readable catalog data would allow us to\nassign non-numeric identifiers to this OID range. Perhaps there'd be a\ntextual symbol with a number in the range of 0-20 at the end. Those\nwould stick out like a sore thumb, making it highly unlikely that\nanybody would forget about it at the last minute.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 8 Feb 2019 10:19:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Feb 8, 2019 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A script such as you suggest might be a good way to reduce the temptation\n>> to get lazy at the last minute. Now that the catalog data is pretty\n>> machine-readable, I suspect it wouldn't be very hard --- though I'm\n>> not volunteering either. I'm envisioning something simple like \"renumber\n>> all OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\n>> ability to skip any already-used OIDs in the target range.\n\n> I imagined that the machine-readable catalog data would allow us to\n> assign non-numeric identifiers to this OID range. Perhaps there'd be a\n> textual symbol with a number in the range of 0-20 at the end. Those\n> would stick out like a sore thumb, making it highly unlikely that\n> anybody would forget about it at the last minute.\n\nUm. That would not be just an add-on script but something that\ngenbki.pl would have to accept. I'm not excited about that; it would\ncomplicate what's already complex, and if it works enough for test\npurposes then it wouldn't really stop a committer who wasn't paying\nattention from committing the patch un-revised.\n\nTo the extent that this works at all, OIDs in the 9000 range ought\nto be enough of a flag already, I think.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 13:29:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 10:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um. That would not be just an add-on script but something that\n> genbki.pl would have to accept. I'm not excited about that; it would\n> complicate what's already complex, and if it works enough for test\n> purposes then it wouldn't really stop a committer who wasn't paying\n> attention from committing the patch un-revised.\n>\n> To the extent that this works at all, OIDs in the 9000 range ought\n> to be enough of a flag already, I think.\n\nI tend to agree that this isn't enough of a problem to justify making\ngenbki.pl significantly more complicated.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 8 Feb 2019 10:35:14 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 11:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> To the extent that this works at all, OIDs in the 9000 range ought\n> to be enough of a flag already, I think.\n\nA \"flag\" that isn't documented anywhere outside of a mailing list\ndiscussion and that isn't checked by any code anywhere is not much of\na flag, IMHO.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Sat, 9 Feb 2019 14:40:52 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On 2/8/19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> A script such as you suggest might be a good way to reduce the temptation\n> to get lazy at the last minute. Now that the catalog data is pretty\n> machine-readable, I suspect it wouldn't be very hard --- though I'm\n> not volunteering either. I'm envisioning something simple like \"renumber\n> all OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\n> ability to skip any already-used OIDs in the target range.\n\nThis might be something that can be done inside reformat_dat_files.pl.\nIt's a little outside it's scope, but better than the alternatives.\nAnd we already have a function in Catalog.pm to get the currently used\noids. I'll volunteer to look into it but I don't know when that will\nbe.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 11 Feb 2019 12:44:39 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "I wrote:\n\n> On 2/8/19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A script such as you suggest might be a good way to reduce the temptation\n>> to get lazy at the last minute. Now that the catalog data is pretty\n>> machine-readable, I suspect it wouldn't be very hard --- though I'm\n>> not volunteering either. I'm envisioning something simple like \"renumber\n>> all OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\n>> ability to skip any already-used OIDs in the target range.\n>\n> This might be something that can be done inside reformat_dat_files.pl.\n> It's a little outside it's scope, but better than the alternatives.\n\nAlong those lines, here's a draft patch to do just that. It handles\narray type oids as well. Run it like this:\n\nperl reformat_dat_file.pl --map-from 9000 --map-to 2000 *.dat\n\nThere is some attempt at documentation. So far it doesn't map by\ndefault, but that could be changed if we agreed on the convention of\n9000 or whatever.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 14 Feb 2019 17:01:35 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "I wrote:\n\n> Along those lines, here's a draft patch to do just that. It handles\n> array type oids as well. Run it like this:\n>\n> perl reformat_dat_file.pl --map-from 9000 --map-to 2000 *.dat\n>\n> There is some attempt at documentation. So far it doesn't map by\n> default, but that could be changed if we agreed on the convention of\n> 9000 or whatever.\n\nIn case we don't want to lose track of this, I added it to the March\ncommitfest with a target of v13. (I didn't see a way to add it to the\nJuly commitfest)\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 23 Feb 2019 09:13:21 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On 2019-02-08 19:14, Tom Lane wrote:\n> Quite a few people have used OIDs up around 8000 or 9000 for this purpose;\n> I doubt we need a formally reserved range for it. The main problem with\n> doing it is the hazard that the patch'll get committed just like that,\n> suddenly breaking things for everyone else doing likewise.\n\nFor that reason, I'm not in favor of this. Forgetting to update the\ncatversion is already common enough (for me). Adding another step\nbetween having a seemingly final patch and being able to actually commit\nit doesn't seem attractive. Moreover, these \"final adjustments\" would\ntend to require a full rebuild and retest, adding even more overhead.\n\nOID collision doesn't seem to be a significant problem (for me).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 10:59:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-02-08 19:14, Tom Lane wrote:\n>> Quite a few people have used OIDs up around 8000 or 9000 for this purpose;\n>> I doubt we need a formally reserved range for it. The main problem with\n>> doing it is the hazard that the patch'll get committed just like that,\n>> suddenly breaking things for everyone else doing likewise.\n\n> For that reason, I'm not in favor of this. Forgetting to update the\n> catversion is already common enough (for me). Adding another step\n> between having a seemingly final patch and being able to actually commit\n> it doesn't seem attractive. Moreover, these \"final adjustments\" would\n> tend to require a full rebuild and retest, adding even more overhead.\n\n> OID collision doesn't seem to be a significant problem (for me).\n\nUm, I beg to differ. It's not at all unusual for pending patches to\nbit-rot for no reason other than suddenly getting an OID conflict.\nI don't have to look far for a current example:\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/498955351\n\nWe do need a couple of pieces of new infrastructure to make this idea\nconveniently workable. One is a tool to allow automatic OID renumbering\ninstead of having to do it by hand; Naylor has a draft for that upthread.\n\nPerhaps it'd be useful for genbki.pl to spit out a warning (NOT an\nerror) if it sees OIDs in the reserved range. I'm not sure that that'd\nreally be worth the trouble though, since one could easily forget\nabout it while reviewing/testing just before commit, and it'd just be\nuseless noise up until it was time to commit.\n\nAnother issue, as Robert pointed out, is that this does need to be\na formal convention not something undocumented. Naylor's patch adds\na mention of it in bki.sgml, but I wonder if anyplace else should\ntalk about it.\n\nI concede your point that a prudent committer would do a rebuild and\nretest rather than just trusting the tool. 
But really, how much\nextra work is that? If you've spent any time optimizing your workflow,\na full rebuild and check-world should be under five minutes on any\nhardware anyone would be using for development today.\n\nAnd, yeah, we probably will make mistakes like this, just like we\nsometimes forget the catversion bump. As long as we have a tool\nfor OID renumbering, I don't think that's the end of the world.\nFixing it after the fact isn't going to be a big deal, any more\nthan it is for catversion.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 16:27:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 1:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > OID collision doesn't seem to be a significant problem (for me).\n>\n> Um, I beg to differ. It's not at all unusual for pending patches to\n> bit-rot for no reason other than suddenly getting an OID conflict.\n> I don't have to look far for a current example:\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/498955351\n\nOID conflicts are not that big of a deal when building a patch\nlocally, because almost everyone knows what the exact problem is\nimmediately, and because you probably have more than a passing\ninterest in the patch to even do that much. However, the continuous\nintegration stuff has created an expectation that your patch shouldn't\nbe left to bitrot for long. Silly mechanical bitrot now seems like a\nmuch bigger problem than it was before these developments. It unfairly\nputs reviewers off engaging.\n\nPatch authors shouldn't be left with any excuse for leaving their\npatch to bitrot for long. And, more casual patch reviewers shouldn't\nhave any excuse for not downloading a patch and applying it locally,\nso that they can spend a spare 10 minutes kicking the tires.\n\n> Perhaps it'd be useful for genbki.pl to spit out a warning (NOT an\n> error) if it sees OIDs in the reserved range. I'm not sure that that'd\n> really be worth the trouble though, since one could easily forget\n> about it while reviewing/testing just before commit, and it'd just be\n> useless noise up until it was time to commit.\n\nMy sense is that we should err on the side of being informative.\n\n> Another issue, as Robert pointed out, is that this does need to be\n> a formal convention not something undocumented. 
Naylor's patch adds\n> a mention of it in bki.sgml, but I wonder if anyplace else should\n> talk about it.\n\nWhy not have unused_oids reference the convention as a \"tip\"?\n\n> I concede your point that a prudent committer would do a rebuild and\n> retest rather than just trusting the tool. But really, how much\n> extra work is that? If you've spent any time optimizing your workflow,\n> a full rebuild and check-world should be under five minutes on any\n> hardware anyone would be using for development today.\n\nIf you use the \"check-world parallel\" recipe on the committing\nchecklist Wiki page, and if you use ccache, ~2 minutes is attainable\nfor optimized builds (though the recipe doesn't work on all release\nbranches). I don't think that a committer should be committing\nanything if they're not willing to do this much. It's not just prudent\n-- it is the *bare minimum* when committing a patch that creates\nsystem catalog entries.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 27 Feb 2019 13:50:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "I wrote:\n> We do need a couple of pieces of new infrastructure to make this idea\n> conveniently workable. One is a tool to allow automatic OID renumbering\n> instead of having to do it by hand; Naylor has a draft for that upthread.\n\nOh: arguably, something else we'd need to do to ensure that OID\nrenumbering is trouble-free is to institute a strict rule that OID\nreferences in the *.dat files must be symbolic. We had not bothered\nto convert every single reference type before, reasoning that some\nof them were too little-used to be worth the trouble; but someday\nthat'll rise up to bite us, if semi-automated renumbering becomes\na thing.\n\nIt looks to me like the following OID columns remain unconverted:\n\npg_class.reltype\npg_database.dattablespace\npg_ts_config.cfgparser\npg_ts_config_map.mapcfg, mapdict\npg_ts_dict.dicttemplate\npg_type.typcollation\npg_type.typrelid\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 16:59:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Feb 27, 2019 at 1:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> OID collision doesn't seem to be a significant problem (for me).\n\n>> Um, I beg to differ. It's not at all unusual for pending patches to\n>> bit-rot for no reason other than suddenly getting an OID conflict.\n>> I don't have to look far for a current example:\n>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/498955351\n\n> Patch authors shouldn't be left with any excuse for leaving their\n> patch to bitrot for long. And, more casual patch reviewers shouldn't\n> have any excuse for not downloading a patch and applying it locally,\n> so that they can spend a spare 10 minutes kicking the tires.\n\nYeah, that latter point is really the killer argument. We don't want\nto make people spend valuable review time on fixing uninteresting OID\nconflicts. It's even more annoying that several people might have to\nduplicate the same work, if they're testing a patch independently.\n\nGiven a convention that under-development patches use OIDs in the 9K\nrange, the only time anybody would have to resolve OID conflicts for\ntesting would be if they were trying to test the combination of two\nor more patches. Even then, an OID-renumbering script would make it\npretty painless: apply patch 1, renumber its OIDs to someplace else,\napply patch 2, repeat as needed.\n\n> Why not have unused_oids reference the convention as a \"tip\"?\n\nHmm, could be helpful.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 17:09:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On 2019-02-27 22:27, Tom Lane wrote:\n>> OID collision doesn't seem to be a significant problem (for me).\n> \n> Um, I beg to differ. It's not at all unusual for pending patches to\n> bit-rot for no reason other than suddenly getting an OID conflict.\n> I don't have to look far for a current example:\n\nI'm not saying it doesn't happen, but that it's not a significant\nproblem overall.\n\nThe chances of a patch (a) allocating a new OID, (b) a second patch\nallocating a new OID, (c) both being in flight at the same time, (d)\nactually picking the same OID, are small. I guess the overall time lost\nto this issue is perhaps 2 hours per year. On the other hand, with\nabout 2000 commits to master per year, if this renumbering business only\nadds 2 seconds of overhead to committing, we're coming out behind.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 23:38:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On 2019-02-27 22:50, Peter Geoghegan wrote:\n> However, the continuous\n> integration stuff has created an expectation that your patch shouldn't\n> be left to bitrot for long. Silly mechanical bitrot now seems like a\n> much bigger problem than it was before these developments. It unfairly\n> puts reviewers off engaging.\n\nIf this is the problem (although I think we'd find that OID collisions\nare rather rare compared to other gratuitous cfbot failures), why not\nhave the cfbot build with a flag that ignores OID collisions?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 23:44:30 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 2:39 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The chances of a patch (a) allocating a new OID, (b) a second patch\n> allocating a new OID, (c) both being in flight at the same time, (d)\n> actually picking the same OID, are small.\n\nBut...they are. Most patches don't create new system catalog entries\nat all. Of those that do, the conventions around assigning new OIDs\nmake it fairly likely that problems will emerge.\n\n> I guess the overall time lost\n> to this issue is perhaps 2 hours per year. On the other hand, with\n> about 2000 commits to master per year, if this renumbering business only\n> adds 2 seconds of overhead to committing, we're coming out behind.\n\nThe time spent on the final commit is not the cost we're concerned\nabout, though. It isn't necessary to do that more than once, whereas\nall but the most trivial of patches receive multiple rounds of review\nand revision.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 27 Feb 2019 14:44:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 2:44 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> If this is the problem (although I think we'd find that OID collisions\n> are rather rare compared to other gratuitous cfbot failures), why not\n> have the cfbot build with a flag that ignores OID collisions?\n\nHow would that work?\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 27 Feb 2019 14:45:14 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Feb 27, 2019 at 2:44 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> If this is the problem (although I think we'd find that OID collisions\n>> are rather rare compared to other gratuitous cfbot failures), why not\n>> have the cfbot build with a flag that ignores OID collisions?\n\n> How would that work?\n\nIt could work for conflicting OIDs in different system catalogs (so that\nthe \"conflict\" is an artifact of our assignment rules rather than an\nintrinsic problem). But I think the majority of new hand-assigned OIDs\nare in pg_proc, so that this kind of hack would not help as much as one\nmight wish.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 17:57:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-02-27 22:27, Tom Lane wrote:\n>>> OID collision doesn't seem to be a significant problem (for me).\n\n>> Um, I beg to differ. It's not at all unusual for pending patches to\n>> bit-rot for no reason other than suddenly getting an OID conflict.\n>> I don't have to look far for a current example:\n\n> I'm not saying it doesn't happen, but that it's not a significant\n> problem overall.\n\nI do not think you are correct. It may not be a big problem across\nall our incoming patches, but that's only because most of them don't\nhave anything to do with hand-assigned OIDs. Among those that do,\nI think there is a significant problem.\n\nTo try to quantify this a bit, I looked through v12-cycle and pending\npatches that touch the catalog data.\n\nWe've only committed 12 patches adding new hand-assigned OIDs since v11\nwas branched off. (I suspect that's lower than in a typical cycle,\nbut have not attempted to quantify things further back.) Of those,\nonly two seem to have needed OID adjustments after initial posting,\nbut that's mostly because most of them were committer-originated patches\nthat got pushed within a week or two. That's certainly not the typical\nwait time for a patch submitted by anybody else. 
Also, a lot of these\npatches recycled OIDs that'd recently been freed by patches such as the\nabstime-ectomy, which means that the amount of OID conflict created for\npending patches is probably *really* low in this cycle-so-far, compared\nto our historical norms.\n\nOf what's in the queue to be reviewed right now, there are just\n20 (out of 150-plus) patches that touch the catalog/*.dat files.\nI got this number by groveling through the cfbot's reports of\npatch applications, to see which patches touched those files.\nIt might omit some patches that the cfbot failed to make sense of.\nAlso, I'm pretty sure that a few of these patches don't actually\nassign any new OIDs but just change existing entries, or create\nonly entries with auto-assigned OIDs. I did not try to separate\nthose out, however, since the point here is to estimate for how\nmany patches a committer would even need to think about this.\n\nOf those twenty patches, three have unresolved OID conflicts\nright now:\n\nmultivariate MCV lists and histograms\ncommontype polymorphics\nlog10/hyper functions\n\nAnother one has recently had to resolve an OID conflict:\n\nSortSupport implementation on inet/cdir\t\n\nwhich is notable considering that that thread is less than three weeks\nold. (The log10 and commontype threads aren't really ancient either.)\n\nI spent some extra effort looking at the patches that both create more\nthan a few new OIDs and have been around for awhile:\n\nGeneric type subscripting\nKNN for btree\nCustom compression methods\nSQL/JSON: functions\nSQL/JSON: jsonpath\nGenerated columns\nBRIN bloom indexes\n\nThe first four of those have all had to reassign OIDs during their\nlifetime. jsonpath has avoided doing so by choosing fairly high-numbered\nOIDs (6K range) to begin with; which I trust you will agree is a solution\nthat doesn't scale for long. 
I'm not entirely sure that the last two\nhaven't had to renumber OIDs; I ran out of energy before poking through\ntheir history in detail.\n\nIn short, this situation may look fine from the perspective of a committer\nwith a relatively short timeline to commit, but it's pretty darn awful for\neverybody else. The only way to avoid a ~ 50% failure rate is to choose\nOIDs above 6K, and once everybody starts doing it like that, things are\ngoing to get very unpleasant very quickly.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 22:41:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 10:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In short, this situation may look fine from the perspective of a committer\n> with a relatively short timeline to commit, but it's pretty darn awful for\n> everybody else. The only way to avoid a ~ 50% failure rate is to choose\n> OIDs above 6K, and once everybody starts doing it like that, things are\n> going to get very unpleasant very quickly.\n\nThe root problem here from the perspective of a non-committer is not\nthat they might have to renumber OIDs a few times over the year or two\nit takes to get their patch merged, but rather that it takes a year or\ntwo to get their patch merged. That's not to say that I have no\nsympathy with people in that situation or don't want to make their\nlives easier, but I'm not really convinced that burdening committers\nwith additional manual steps is the right way to get patches merged\nfaster. This seems like a big piece of new mechanism being invented\nto solve an occasional annoyance. Your statistics are not convincing\nat all: you're arguing that this is a big problem because 2-3% of\npending patches currently have an issue here, and some others have in\nthe past, but that's a really small percentage, and the time spent\ndoing OID renumbering must be a tiny percentage of the total time\nanyone spends hacking on PostgreSQL.\n\nI think that the problem here is that we have a very limited range of\nOIDs (10k) which can be used for this purpose, and the number of OIDs that\nare used in that space is now a significant fraction of the total\n(>4.5k), and the problem is further complicated by the desire to keep\nthe OIDs assigned near the low end of the available numbering space\nand/or near to other OIDs used for similar purposes. The sheer fact\nthat the address space is nearly half-used means that conflicts are\nlikely even if people choose OIDs at random, and when people choose\nOIDs non-randomly -- lowest, round numbers, near to other OIDs -- the\nchances of conflicts just go up.\n\nWe could fix that problem by caring less about keeping all the numbers\ngapless and increasing the size of the reserved space to say 100k, but\njust as a thought, what if we stopped assigning manual OIDs for new\ncatalog entries altogether, except for once at the end of each release\ncycle? Make a way that people can add an entry to pg_proc.dat or\nwhatever without fixing an OID, and let the build scripts generate\none. As many patches as happen during a release cycle will add new\nsuch entries and they'll just all get some OID or other. Then, at the\nend of the release cycle, we'll run a script that finds all of those\ncatalog entries and rewrites the .dat files, adding a permanent OID\nassignment to each one, so that those OIDs will then be fixed for all\nfuture releases (unless we drop the entries or explicitly change\nsomething).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 08:59:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This seems like a big piece of new mechanism being invented\n> to solve an occasional annoyance. Your statistics are not convincing\n> at all: you're arguing that this is a big problem because 2-3% of\n> pending patches currently have an issue here, and some others have in\n> the past, but that's a really small percentage, and the time spent\n> doing OID renumbering must be a tiny percentage of the total time\n> anyone spends hacking on PostgreSQL.\n\nTBH, I find this position utterly baffling. It's true that only a\nsmall percentage of patches have an issue here, because only a small\npercentage of patches dabble in manually-assigned OIDs at all. But\n*among those that do*, there is a huge problem. I had not actually\nrealized how bad it is until I gathered those stats, but it's bad.\n\nI don't understand the objection to inventing a mechanism that will\nhelp those patches and has no impact whatever when working on patches\nthat don't involve manually-assigned OIDs.\n\nAnd, yeah, I'd like us not to have patches hanging around for years\neither, but that's a reality that's not going away.\n\n> We could fix that problem by caring less about keeping all the numbers\n> gapless and increasing the size of the reserved space to say 100k,\n\nWe already had this discussion. Moving FirstNormalObjectId is infeasible\nwithout forcing a dump/reload, which I don't think anyone wants to do.\n\n> but\n> just as a thought, what if we stopped assigning manual OIDs for new\n> catalog entries altogether, except for once at the end of each release\n> cycle?\n\nAnd that's another idea without any basis in reality. What are you\ngoing to do instead? What mechanism will you use to track these\nOIDs so you can clean up later? Who's going to write the code that\nwill support this? Not me. I think the proposal that is on the\ntable is superior.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 10:27:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 10:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This seems like a big piece of new mechanism being invented\n> > to solve an occasional annoyance. Your statistics are not convincing\n> > at all: you're arguing that this is a big problem because 2-3% of\n> > pending patches currently have an issue here, and some others have in\n> > the past, but that's a really small percentage, and the time spent\n> > doing OID renumbering must be a tiny percentage of the total time\n> > anyone spends hacking on PostgreSQL.\n>\n> TBH, I find this position utterly baffling. It's true that only a\n> small percentage of patches have an issue here, because only a small\n> percentage of patches dabble in manually-assigned OIDs at all. But\n> *among those that do*, there is a huge problem. I had not actually\n> realized how bad it is until I gathered those stats, but it's bad.\n>\n> I don't understand the objection to inventing a mechanism that will\n> help those patches and has no impact whatever when working on patches\n> that don't involve manually-assigned OIDs.\n>\n> And, yeah, I'd like us not to have patches hanging around for years\n> either, but that's a reality that's not going away.\n\nI don't think this is the worst proposal ever. However, I also think\nthat it's not unreasonable to raise the issue that writing OR\nreviewing OR committing a patch already involves adhering to a thicket\nof undocumented rules. When somebody fails to adhere to one of those\nrules, they get ignored or publicly shamed. Now you want to add yet\nanother step to the process - really two. If you want to submit a\npatch that requires new catalog entries, you must know that you're\nsupposed to put those OIDs in this new range that we're going to set\naside for such things, and if you want to commit one, you must know\nthat you're supposed to renumber those OID assignments into some other\nrange. And people are going to screw it up - submitters are going to\nfail to know about this new policy (which will probably be documented\nnowhere, just like all the other ones) - and committers are going to\nfail to remember to renumber things. So, I suspect that for every\nunit of work it saves somebody, it's probably going to generate about\none unit of extra work for somebody else.\n\nA lot of projects have a much less painful process for getting patches\nintegrated than we do. I don't know how those projects maintain\nadequate code quality, but I do know that making it easy to get a\npatch accepted makes people more likely to contribute patches, and\nincreases overall development velocity. It is not even vaguely\nunreasonable to worry about whether making this more complicated is\ngoing to hurt more than it helps, and I don't know why you think\notherwise.\n\n> > but\n> > just as a thought, what if we stopped assigning manual OIDs for new\n> > catalog entries altogether, except for once at the end of each release\n> > cycle?\n>\n> And that's another idea without any basis in reality. What are you\n> going to do instead? What mechanism will you use to track these\n> OIDs so you can clean up later?\n\nRight now every entry in pg_proc.dat includes an OID assignment. What\nI'm proposing is that we would also allow entries that did not have\none, and the build process would assign one while processing the .dat\nfiles. Then later, somebody could use a script that went through and\nrewrote the .dat file to add OID assignments to any entries that\nlacked them. Since the topic of having tools for automated rewrite of\nthose files has been discussed at length, and since we already have a\nscript called reformat_dat_file.pl in the tree which contains\ncomments indicating that it could be modified for bulk editing, said\nscript having been committed BY YOU, I don't understand why you think\nthat bulk editing is infeasible.\n\n> Who's going to write the code that\n> will support this? Not me. I think the proposal that is on the\n> table is superior.\n\nOK. Well, I think that doing nothing is superior to this proposal,\nfor reasons similar to what Peter Eisentraut has already articulated.\nAnd I think rather than blasting forward with your own preferred\nalternative in the face of disagreement, you should be willing to\ndiscuss other possible options. But if you're not willing to do that,\nI can't make you.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 10:58:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 7:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't think this is the worst proposal ever. However, I also think\n> that it's not unreasonable to raise the issue that writing OR\n> reviewing OR committing a patch already involves adhering to a thicket\n> of undocumented rules. When somebody fails to adhere to one of those\n> rules, they get ignored or publicly shamed. Now you want to add yet\n> another step to the process - really two.\n\nThere does seem to be a real problem with undocumented processes. For\nexample, I must confess that it came as news to me that we already had\na reserved OID range. However, I don't think that there is that much\nof an issue with adding new mechanisms like this, provided it makes it\neasy to do the right thing and hard to do the wrong thing. What Tom\nhas proposed so far is not something that self-evidently meets that\nstandard, but it's also not something that self-evidently fails to\nmeet that standard.\n\nI have attempted to institute some general guidelines for what the\nthicket of rules are by creating the \"committing checklist\" page. This\nis necessarily imperfect, because the rules are in many cases open to\ninterpretation, often for good practical reasons. I don't have any\nsympathy for committers that find it hard to remember to do a\ncatversion bump with any kind of regularity. That complexity seems\ninherent, not incidental, since it's often convenient to ignore\ncatalog incompatibilities during development.\n\n> So, I suspect that for every\n> unit of work it saves somebody, it's probably going to generate about\n> one unit of extra work for somebody else.\n\nMaybe so. I think that you're jumping to conclusions, though.\n\n> A lot of projects have a much less painful process for getting patches\n> integrated than we do. 
I don't know how those projects maintain\n> adequate code quality, but I do know that making it easy to get a\n> patch accepted makes people more likely to contribute patches, and\n> increases overall development velocity. It is not even vaguely\n> unreasonable to worry about whether making this more complicated is\n> going to hurt more than it helps, and I don't know why you think\n> otherwise.\n\nBut you seem to want to make the mechanism itself even more\ncomplicated, not less complicated (based on your remarks about making\nOID assignment happen during the build). In order to make the use of\nthe mechanism easier. That seems worth considering, but ISTM that this\nis talking at cross purposes. There are far simpler ways of making it\nunlikely that a committer is going to miss this step. There is also a\nsimple way of noticing that they do quickly (e.g. a simple buildfarm\ntest).\n\n> Right now every entry in pg_proc.dat includes an OID assignment. What\n> I'm proposing is that we would also allow entries that did not have\n> one, and the build process would assign one while processing the .dat\n> files. Then later, somebody could use a script that went through and\n> rewrote the .dat file to add OID assignments to any entries that\n> lacked them. Since the topic of having tools for automated rewrite of\n> those files has been discussed at length, and since we already have a\n> script called reformat_dat_file.pl in the tree which contains\n> comments indicating that it could be modified for bulk editing, said\n> script having been committed BY YOU, I don't understand why you think\n> that bulk editing is infeasible.\n\nI'm also curious to hear what Tom thinks about this.\n\n> OK. 
Well, I think that doing nothing is superior to this proposal,\n> for reasons similar to what Peter Eisentraut has already articulated.\n> And I think rather than blasting forward with your own preferred\n> alternative in the face of disagreement, you should be willing to\n> discuss other possible options. But if you're not willing to do that,\n> I can't make you.\n\nPeter seemed to not want to do this on the grounds that it isn't\nnecessary at all, whereas you think that it doesn't go far enough. If\nthere is a consensus against what Tom has said, it's a cacophonous one\nthat cannot really be said to be in favor of anything.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 28 Feb 2019 14:36:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Feb 28, 2019 at 7:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> OK. Well, I think that doing nothing is superior to this proposal,\n>> for reasons similar to what Peter Eisentraut has already articulated.\n>> And I think rather than blasting forward with your own preferred\n>> alternative in the face of disagreement, you should be willing to\n>> discuss other possible options. But if you're not willing to do that,\n>> I can't make you.\n\n> Peter seemed to not want to do this on the grounds that it isn't\n> necessary at all, whereas you think that it doesn't go far enough. If\n> there is a consensus against what Tom has said, it's a cacophonous one\n> that cannot really be said to be in favor of anything.\n\nThe only thing that's really clear is that some senior committers don't\nwant to be bothered because they don't think there's a problem here that\njustifies any additional expenditure of their time. Perhaps they are\nright, because I'd expected some comments from non-committer developers\nconfirming that they see a problem, and the silence is deafening.\n\nI'm inclined to commit some form of Naylor's tool improvement anyway,\nbecause I have use for it; I remember times when I've renumbered OIDs\nmanually in patches, and it wasn't much fun. But I can't force a\nprocess change if there's not consensus for it among the committers.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 18:09:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n>>> just as a thought, what if we stopped assigning manual OIDs for new\n>>> catalog entries altogether, except for once at the end of each release\n>>> cycle?\n\nActually ... that leads to an idea that wouldn't add any per-commit\noverhead, or really much change at all to existing processes. Given\nthe existence of a reliable OID-renumbering tool, we could:\n\n1. Encourage people to develop new patches using chosen-at-random\nhigh OIDs, in the 7K-9K range. They do this already, it'd just\nbe encouraged instead of discouraged.\n\n2. Commit patches as received.\n\n3. Once each devel cycle, after feature freeze, somebody uses the\nrenumbering tool to shove all the new OIDs down to lower numbers,\nfreeing the high-OID range for the next devel cycle. We'd have\nto remember to do that, but it could be added to the RELEASE_CHANGES\nchecklist.\n\nIn this scheme, OID collisions are a problem for in-progress patches\nonly if two patches are unlucky enough to choose the same random\nhigh OIDs during the same devel cycle. That's unlikely, or at least\na good bit less likely than collisions are today. If/when it does\nhappen we'd have a couple of alternatives for ameliorating the problem\n--- either the not-yet-committed patch could use the renumbering tool\non their own OIDs, or we could do an off-schedule run of step 3 to get\nthe already-committed OIDs out of their way.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 18:40:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 3:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The only thing that's really clear is that some senior committers don't\n> want to be bothered because they don't think there's a problem here that\n> justifies any additional expenditure of their time. Perhaps they are\n> right, because I'd expected some comments from non-committer developers\n> confirming that they see a problem, and the silence is deafening.\n\nI don't think that you can take that as too strong a signal. The\nincentives are different for non-committers.\n\n> I'm inclined to commit some form of Naylor's tool improvement anyway,\n> because I have use for it; I remember times when I've renumbered OIDs\n> manually in patches, and it wasn't much fun. But I can't force a\n> process change if there's not consensus for it among the committers.\n\nI think that that's a reasonable thing to do, provided there is\nobvious feedback that makes it highly unlikely that the committer will\nmake an error at the last moment. I have a hard time coming up with a\nsuggestion that won't be considered annoying by at least one person,\nthough.\n\nWould it be awful if there was a #warning directive that kicked in\nwhen the temporary OID range is in use? It should be possible to do\nthat without breaking -Werror builds, which I believe Robert uses (I\nam reminded of the Flex bug that we used to have to work around). It's\nnot like there are that many patches that need to assign OIDs to new\ncatalog entries. I would suggest that we put the warning in the\nregression tests if I didn't know that that could be missed by the use\nof parallel variants, where the output flies by. 
There is no precedent\nfor using #warning for something like that, but offhand it seems like\nthe only thing that would work consistently.\n\nI don't really mind having to do slightly more work when the issue\ncrops up, especially if that means less work for everyone involved in\naggregate, which is the cost that I'm concerned about the most.\nHowever, an undocumented or under-documented process that requires a\nfixed amount of extra mental effort when committing *anything* is\nanother matter.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 28 Feb 2019 15:41:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "\n\nOn 3/1/19 12:41 AM, Peter Geoghegan wrote:\n> On Thu, Feb 28, 2019 at 3:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The only thing that's really clear is that some senior committers don't\n>> want to be bothered because they don't think there's a problem here that\n>> justifies any additional expenditure of their time. Perhaps they are\n>> right, because I'd expected some comments from non-committer developers\n>> confirming that they see a problem, and the silence is deafening.\n> \n> I don't think that you can take that as too strong a signal. The\n> incentives are different for non-committers.\n> \n>> I'm inclined to commit some form of Naylor's tool improvement anyway,\n>> because I have use for it; I remember times when I've renumbered OIDs\n>> manually in patches, and it wasn't much fun. But I can't force a\n>> process change if there's not consensus for it among the committers.\n> \n> I think that that's a reasonable thing to do, provided there is\n> obvious feedback that makes it highly unlikely that the committer will\n> make an error at the last moment. I have a hard time coming up with a\n> suggestion that won't be considered annoying by at least one person,\n> though.\n> \n\nFWIW I personally would not mind if such tool / process was added. But I\nhave a related question - do we have some sort of list of such processes\nthat I could check? That is, a list of stuff that is expected to be done\nby a committer before a commit?\n\nI do recall we have [1], but perhaps we have something else.\n\nhttps://wiki.postgresql.org/wiki/Committing_checklist\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 1 Mar 2019 00:57:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 3:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> just as a thought, what if we stopped assigning manual OIDs for new\n> >>> catalog entries altogether, except for once at the end of each release\n> >>> cycle?\n>\n> Actually ... that leads to an idea that wouldn't add any per-commit\n> overhead, or really much change at all to existing processes. Given\n> the existence of a reliable OID-renumbering tool, we could:\n\n> In this scheme, OID collisions are a problem for in-progress patches\n> only if two patches are unlucky enough to choose the same random\n> high OIDs during the same devel cycle. That's unlikely, or at least\n> a good bit less likely than collisions are today.\n\nThat sounds like a reasonable compromise. Perhaps the unused_oids\nscript could give specific guidance on using a randomly determined\nsmall range of contiguous OIDs that fall within the current range for\nthat devel cycle. That would prevent collisions caused by the natural\nhuman tendency to prefer a round number. Having contiguous OIDs for\nthe same patch seems worth preserving.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 28 Feb 2019 15:57:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 6:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 1. Encourage people to develop new patches using chosen-at-random\n> high OIDs, in the 7K-9K range. They do this already, it'd just\n> be encouraged instead of discouraged.\n>\n> 2. Commit patches as received.\n>\n> 3. Once each devel cycle, after feature freeze, somebody uses the\n> renumbering tool to shove all the new OIDs down to lower numbers,\n> freeing the high-OID range for the next devel cycle. We'd have\n> to remember to do that, but it could be added to the RELEASE_CHANGES\n> checklist.\n\nSure, that sounds nice. It seems like it might be slightly less\nconvenient for non-committers than what I was proposing, but still\nmore convenient than what they're doing right now. And it's also more\nconvenient for committers, because they're not being asked to manually\nfiddle patches at the last moment, something that I at least find\nrather error-prone. It also, and I think this is really good, moves\nin the direction of fewer things for both patch authors and patch\ncommitters to worry about doing wrong. Instead of throwing rocks at\npeople whose OID assignments are \"wrong,\" we just accept what people\ndo and adjust it later if it makes sense to do so.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 1 Mar 2019 14:36:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 5:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I have attempted to institute some general guidelines for what the\n> thicket of rules are by creating the \"committing checklist\" page. This\n> is necessarily imperfect, because the rules are in many cases open to\n> interpretation, often for good practical reasons. I don't have any\n> sympathy for committers that find it hard to remember to do a\n> catversion bump with any kind of regularity. That complexity seems\n> inherent, not incidental, since it's often convenient to ignore\n> catalog incompatibilities during development.\n\nBumping catversion is something of a special case, because one does it\nso often that one gets used to remembering it. The rules are usually\nnot too hard to remember, although they are trickier when you don't\ndirectly change anything in src/include/catalog but just change the\ndefinition of some node that may be serialized in a catalog someplace.\nIt would be neat if there were a tool you could run to somehow tell\nyou whether catversion needs to be changed for a given patch.\n\nBut you know there are a log of other version numbers floating around,\nlike XLOG_PAGE_MAGIC or the pg_dump archive version, and it is not\nreally that easy to know -- as a new contributor or sometimes even as\nan experienced one -- whether your work requires any changes to that\nstuff, or even that that stuff *exists*. 
Indeed, XLOG_PAGE_MAGIC is a\nparticularly annoying case, both because the constant name doesn't\ncontain VERSION and because the comment just says /* can be used as\nWAL version indicator */ which does not exactly make it clear that if\nyou fail to bump it when you touch the WAL format you will Make People\nUnhappy.\n\nIndeed, Simon got complaints a number of years ago (2010, it looks\nlike) when he had the temerity to change the magic number to some\nother unrelated value instead of just incrementing it by one.\nAlthough I think that the criticism was to a certain extent\nwell-founded -- why deviate from previous practice? -- there is at the\nsame time something a little crazy about somebody getting excited\nabout the particular value that has been chosen for a number that is\ndescribed in the very name of the constant as a MAGIC number. And\nespecially because there is absolutely zip in the way of code comments\nor a README that explain to you how to do it \"right.\"\n\n> > So, I suspect that for every\n> > unit of work it saves somebody, it's probably going to generate about\n> > one unit of extra work for somebody else.\n>\n> Maybe so. I think that you're jumping to conclusions, though.\n\nI did say \"I suspect...\" which was intended as a concession that I\ndon't know for sure.\n\n> But you seem to want to make the mechanism itself even more\n> complicated, not less complicated (based on your remarks about making\n> OID assignment happen during the build). In order to make the use of\n> the mechanism easier. That seems worth considering, but ISTM that this\n> is talking at cross purposes. There are far simpler ways of making it\n> unlikely that a committer is going to miss this step. There is also a\n> simple way of noticing that they do quickly (e.g. a simple buildfarm\n> test).\n\nWell, perhaps I'm proposing some additional code, but I don't think of\nthat as making the mechanism more complicated. 
I want to make it\nsimpler for patch submitters and reviewers and committers to not make\nmistakes that they have to run around and fix. If there are fewer\nkinds of things that qualify as mistakes, as in Tom's latest proposal,\nthen we are moving in the right direction IMO regardless of anything\nelse.\n\n> > OK. Well, I think that doing nothing is superior to this proposal,\n> > for reasons similar to what Peter Eisentraut has already articulated.\n> > And I think rather than blasting forward with your own preferred\n> > alternative in the face of disagreement, you should be willing to\n> > discuss other possible options. But if you're not willing to do that,\n> > I can't make you.\n>\n> Peter seemed to not want to do this on the grounds that it isn't\n> necessary at all, whereas you think that it doesn't go far enough. If\n> there is a consensus against what Tom has said, it's a cacophonous one\n> that cannot really be said to be in favor of anything.\n\nI think Peter and I are more agreeing than we are at the opposite ends\nof a spectrum, but more importantly, I think it is worth having a\ndiscussion first about what people like and dislike, and what goals\nthey have, and then only if necessary, counting the votes afterwards.\nI don't like having the feeling that because I have a different view\nof something and want to write an email about that, I am somehow an\nimpediment to progress. I think if we reduce discussions to\nyou're-for-it-or-you're-against-it, that's not that helpful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 1 Mar 2019 14:56:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Fri, Mar 1, 2019 at 11:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It would be neat if there were a tool you could run to somehow tell\n> you whether catversion needs to be changed for a given patch.\n\nThat seems infeasible because of stored rules. A lot of things bleed\ninto that. We could certainly do better at documenting this on the\n\"committing checklist\" page, though.\n\n> Indeed, Simon got complaints a number of years ago (2010, it looks\n> like) when he had the temerity to change the magic number to some\n> other unrelated value instead of just incrementing it by one.\n\nOff with his head!\n\n> Although I think that the criticism was to a certain extent\n> well-founded -- why deviate from previous practice? -- there is at the\n> same time something a little crazy about somebody getting excited\n> about the particular value that has been chosen for a number that is\n> described in the very name of the constant as a MAGIC number. And\n> especially because there is absolutely zip in the way of code comments\n> or a README that explain to you how to do it \"right.\"\n\nI have learned to avoid ambiguity more than anything else, because\nambiguity causes patches to flounder indefinitely, whereas it's\nusually not that hard to fix something that's broken. I agree -\nanything that adds ambiguity rather than taking it away is a big\nproblem.\n\n> Well, perhaps I'm proposing some additional code, but I don't think of\n> that as making the mechanism more complicated. I want to make it\n> simpler for patch submitters and reviewers and committers to not make\n> mistakes that they have to run around and fix.\n\nRight. So do I. I just don't think that it's that bad to ask the final\ncommitter to do something once, rather than getting everyone else\n(including committers) to do it multiple times. 
If we can avoid even\nthis burden, and totally centralize the management of the OID space,\nthen so much the better.\n\n> If there are fewer\n> kinds of things that qualify as mistakes, as in Tom's latest proposal,\n> then we are moving in the right direction IMO regardless of anything\n> else.\n\nI'm glad that we now have a plan that is a clear step forward.\n\n> I think Peter and I are more agreeing than we are at the opposite ends\n> of a spectrum, but more importantly, I think it is worth having a\n> discussion first about what people like and dislike, and what goals\n> they have, and then only if necessary, counting the votes afterwards.\n\nI agree that that's totally worthwhile.\n\n> I don't like having the feeling that because I have a different view\n> of something and want to write an email about that, I am somehow an\n> impediment to progress. I think if we reduce discussions to\n> you're-for-it-or-your-against-it, that's not that helpful.\n\nThat was not my intention. The way that you brought the issue of the\ndifficulty of being a contributor in general into it was unhelpful,\nthough. It didn't seem useful or fair to link Tom's position to a big,\nwell known controversy.\n\nWe now have a solution that everyone is happy with, or can at least\nlive with, which suggests to me that Tom wasn't being intransigent or\ninsensitive to the concerns of contributors.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 1 Mar 2019 12:42:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Mar 1, 2019 at 11:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> It would be neat if there were a tool you could run to somehow tell\n>> you whether catversion needs to be changed for a given patch.\n\n> That seems infeasible because of stored rules. A lot of things bleed\n> into that. We could certainly do better at documenting this on the\n> \"committing checklist\" page, though.\n\nA first approximation to that is \"did you touch readfuncs.c\", though\nthat rule will give a false positive if you only changed Plan nodes.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 18:05:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n>> On 2/8/19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> A script such as you suggest might be a good way to reduce the temptation\n>>> to get lazy at the last minute. Now that the catalog data is pretty\n>>> machine-readable, I suspect it wouldn't be very hard --- though I'm\n>>> not volunteering either. I'm envisioning something simple like \"renumber\n>>> all OIDs in range mmmm-nnnn into range xxxx-yyyy\", perhaps with the\n>>> ability to skip any already-used OIDs in the target range.\n\n>> This might be something that can be done inside reformat_dat_files.pl.\n>> It's a little outside it's scope, but better than the alternatives.\n\n> Along those lines, here's a draft patch to do just that. It handles\n> array type oids as well. Run it like this:\n\n> perl reformat_dat_file.pl --map-from 9000 --map-to 2000 *.dat\n\nI took a quick look at this. I went ahead and pushed the parts that\nwere just code cleanup in reformat_dat_file.pl, since that seemed\npretty uncontroversial. As far as the rest of it goes:\n\n* I'm really not terribly happy with sticking this functionality into\nreformat_dat_file.pl. First, there's an issue of discoverability:\nit's not obvious that a script named that way would have such an\nability. Second, it clutters the script in a way that seems to me\nto hinder its usefulness as a basis for one-off hacks. So I'd really\nrather have a separate script named something like \"renumber_oids.pl\",\neven if there's a good deal of code duplication between it and\nreformat_dat_file.pl.\n\n* In my vision of what this might be good for, I think it's important\nthat it be possible to specify a range of input OIDs to renumber, not\njust \"everything above N\". 
I agree the output range only needs a\nstarting OID.\n\nBTW, I changed the CF entry's target back to v12; I don't see a\nreason not to get this done this month, and indeed kind of wish\nit was available right now ;-)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Mar 2019 12:14:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 1:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I took a quick look at this. I went ahead and pushed the parts that\n> were just code cleanup in reformat_dat_file.pl, since that seemed\n> pretty uncontroversial. As far as the rest of it goes:\n\nOkay, thanks.\n\n> * I'm really not terribly happy with sticking this functionality into\n> reformat_dat_file.pl. First, there's an issue of discoverability:\n> it's not obvious that a script named that way would have such an\n> ability. Second, it clutters the script in a way that seems to me\n> to hinder its usefulness as a basis for one-off hacks. So I'd really\n> rather have a separate script named something like \"renumber_oids.pl\",\n> even if there's a good deal of code duplication between it and\n> reformat_dat_file.pl.\n\n> * In my vision of what this might be good for, I think it's important\n> that it be possible to specify a range of input OIDs to renumber, not\n> just \"everything above N\". I agree the output range only needs a\n> starting OID.\n\nNow it looks like:\n\nperl renumber_oids.pl --first-mapped-oid 8000 --last-mapped-oid 8999\n --first-target-oid 2000 *.dat\n\nTo prevent a maintenance headache, I didn't copy any of the formatting\nlogic over. You'll also have to run reformat_dat_files.pl afterwards\nto restore that. It seems to work, but I haven't tested thoroughly.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 11 Mar 2019 01:11:18 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> Now it looks like:\n> perl renumber_oids.pl --first-mapped-oid 8000 --last-mapped-oid 8999\n> --first-target-oid 2000 *.dat\n> To prevent a maintenance headache, I didn't copy any of the formatting\n> logic over. You'll also have to run reformat_dat_files.pl afterwards\n> to restore that. It seems to work, but I haven't tested thoroughly.\n\nI didn't like the use of Data::Dumper, because it made it quite impossible\nto check what the script had done by eyeball. After some thought\nI concluded that we could probably just apply the changes via\nsearch-and-replace, which is pretty ugly and low-tech but it leads to\neasily diffable results, whether or not the initial state is exactly\nwhat reformat_dat_files would produce.\n\nI also changed things so that the OID mapping is computed before we start\nchanging any files, because as it stood the objects would get renumbered\nin a pretty random order; and I renamed one of the switches so they all\nhave unique one-letter abbreviations.\n\nExperimenting with this, I realized that it couldn't renumber OIDs that\nare defined in .h files rather than .dat files, which is a serious\ndeficiency, but given the search-and-replace implementation it's not too\nhard to fix up the .h files as well. So I did that, and removed the\nexpectation that the target files would be listed on the command line;\nthat seems more likely to be a foot-gun than to do anything useful.\n\nI've successfully done check-world after renumbering every OID above\n4000 to somewhere else. I also tried renumbering everything below\n4000, which unsurprisingly blew up because there are various catalog\ncolumns we haven't fixed to use symbolic OIDs. (The one that initdb\nimmediately trips over is pg_database.dattablespace.) 
I'm not sure\nif it's worth the trouble to make that totally clean, but I suspect\nwe ought to at least mop up text-search references to be symbolic.\nThat's material for a separate patch though.\n\nThis seems committable from my end --- any further comments?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 11 Mar 2019 17:36:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "I wrote:\n> I've successfully done check-world after renumbering every OID above\n> 4000 to somewhere else. I also tried renumbering everything below\n> 4000, which unsurprisingly blew up because there are various catalog\n> columns we haven't fixed to use symbolic OIDs. (The one that initdb\n> immediately trips over is pg_database.dattablespace.) I'm not sure\n> if it's worth the trouble to make that totally clean, but I suspect\n> we ought to at least mop up text-search references to be symbolic.\n> That's material for a separate patch though.\n\nSo I couldn't resist poking at that, and after a couple hours' work\nI have the attached patch, which removes all remaining hard-wired\nOID references in the .dat files.\n\nUsing this, I renumbered all the OIDs in include/catalog, and behold\nthings pretty much worked. I got through check-world after hacking\nup these points:\n\n* Unsurprisingly, there are lots of regression tests that have object\nOIDs hard-wired in queries and/or expected output.\n\n* initdb.c has a couple of places that know that template1 has OID 1.\n\n* information_schema.sql has several SQL-language functions that\nhard-wire the OIDs of assorted built-in types.\n\nI'm not particularly fussed about the first two points, but the\nlast is a bit worrisome. It's not too hard to imagine somebody\nadding knowledge of their new type to those functions, and the\ncode getting broken by a renumbering pass, and us not noticing\nif the point isn't stressed by a regression test (which mostly\nthose functions aren't).\n\nWe could imagine fixing those functions along the lines of\n\n CASE WHEN $2 = -1 /* default typmod */\n THEN null\n- WHEN $1 IN (1042, 1043) /* char, varchar */\n+ WHEN $1 IN ('pg_catalog.bpchar'::pg_catalog.regtype,\n+ 'pg_catalog.varchar'::pg_catalog.regtype)\n THEN $2 - 4\n\nwhich would add some parsing overhead, but I'm not sure if anyone\nwould notice that.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 12 Mar 2019 00:56:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 5:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This seems committable from my end --- any further comments?\n\nI gave it a read and it looks good to me, but I haven't tried to run it.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 12 Mar 2019 15:22:24 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> On Tue, Mar 12, 2019 at 5:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This seems committable from my end --- any further comments?\n\n> I gave it a read and it looks good to me, but I haven't tried to run it.\n\nThanks for checking. I've pushed both patches now.\n\nI noticed while looking at the pg_class data that someone had stuck in a\nhack to make genbki.pl substitute for \"PGHEAPAM\", which AFAICS is just\nfollowing the bad old precedent of PGNSP and PGUID. I got rid of that\nin favor of using the already-existing BKI_LOOKUP(pg_am) mechanism.\nMaybe someday we should try to get rid of PGNSP and PGUID too, although\nthere are stumbling blocks in the way of both:\n\n* PGNSP is also substituted for in the bodies of some SQL procedures.\n\n* Replacing PGUID with the actual name of the bootstrap superuser is a\nbit problematic because that name isn't necessarily \"postgres\". We\ncould probably make it work, but I'm not convinced it'd be any less\nconfusing than the existing special-case behavior is.\n\nAnyway I think we're basically done here. There's some additional\ncleanup that could possibly be done, like removing the hard-wired\nreferences to OID 1 in initdb.c. But I'm having a hard time convincing\nmyself that it's worth the trouble, except maybe for the question of\ninformation_schema.sql's hard-wired type OIDs. Even there, it's\ncertainly possible for a patch to use a regtype constant even if\nthe existing code doesn't.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Mar 2019 12:50:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we have a small reserved OID range for patch revisions?"
}
]
[
{
"msg_contents": "Current sequence of operations for drop database (dropdb())\n1. Start Transaction\n2. Make catalog changes\n3. Drop database buffers\n4. Forget database fsync requests\n5. Checkpoint\n6. Delete database directory\n7. Commit Transaction\n\nProblem\nThis sequence is unsafe from couple of fronts. Like if drop database,\naborts (means system can crash/shutdown can happen) right after buffers are\ndropped step 3 or step 4. The database will still exist and fully\naccessible but will loose the data from the dirty buffers. This seems very\nbad.\n\nOperation can abort after step 5 as well in which can the entries remain in\ncatalog but the database is not accessible. Which is bad as well but not as\nsevere as above case mentioned, where it exists but some stuff goes\nmagically missing.\n\nRepo:\n```\nCREATE DATABASE test;\n\\c test\nCREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a int);\n\\c postgres\nDROP DATABASE test; <<====== kill the session after DropDatabaseBuffers()\n(make sure to issue checkpoint before killing the session)\n```\n\nProposed ways to fix\n1. CommitTransactionCommand() right after step 2. This makes it fully safe\nas the catalog will have the database dropped. Files may still exist on\ndisk in some cases which is okay. This also makes it consistent with the\napproach used in movedb().\n\n2. Alternative way to make it safer is perform Checkpoint (step 5) just\nbefore dropping database buffers, to avoid the unsafe nature. Caveats of\nthis solution is:\n- Performs IO for data which in success case anyways will get deleted\n- Still doesn't cover the case where catalog has the database entry but\nfiles are removed from disk\n\n3. One more fancier approach is to use pending delete mechanism used by\nrelation drops, to perform these non-catalog related activities at commit.\nEasily, the pending delete structure can be added boolean to convey\ndatabase directory dropping instead of file. 
Given drop database can't be\nperformed inside a transaction, it doesn't need to be done this way, but this\nmakes it one consistent approach used to deal with on-disk removal.\n\nWe are passing along a patch with fix 1, as it seems most promising to us. But we\nwould love to hear thoughts on the other solutions mentioned.\n===================================\ndiff --git a/src/backend/commands/dbcommands.c\nb/src/backend/commands/dbcommands.c\nindex d207cd899f..af1b1e0896 100644\n--- a/src/backend/commands/dbcommands.c\n+++ b/src/backend/commands/dbcommands.c\n@@ -917,6 +917,9 @@ dropdb(const char *dbname, bool missing_ok)\n */\n dropDatabaseDependencies(db_id);\n\n+ CommitTransactionCommand();\n+ StartTransactionCommand();\n+\n /*\n * Drop db-specific replication slots.\n */\n===================================\n\nThanks,\nAshwin and Alex",
"msg_date": "Fri, 8 Feb 2019 16:36:13 -0800",
"msg_from": "Alexandra Wang <lewang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Make drop database safer"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-08 16:36:13 -0800, Alexandra Wang wrote:\n> Current sequence of operations for drop database (dropdb())\n> 1. Start Transaction\n> 2. Make catalog changes\n> 3. Drop database buffers\n> 4. Forget database fsync requests\n> 5. Checkpoint\n> 6. Delete database directory\n> 7. Commit Transaction\n> \n> Problem\n> This sequence is unsafe from couple of fronts. Like if drop database,\n> aborts (means system can crash/shutdown can happen) right after buffers are\n> dropped step 3 or step 4. The database will still exist and fully\n> accessible but will loose the data from the dirty buffers. This seems very\n> bad.\n> \n> Operation can abort after step 5 as well in which can the entries remain in\n> catalog but the database is not accessible. Which is bad as well but not as\n> severe as above case mentioned, where it exists but some stuff goes\n> magically missing.\n> \n> Repo:\n> ```\n> CREATE DATABASE test;\n> \\c test\n> CREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a int);\n> \\c postgres\n> DROP DATABASE test; <<====== kill the session after DropDatabaseBuffers()\n> (make sure to issue checkpoint before killing the session)\n> ```\n> \n> Proposed ways to fix\n> 1. CommitTransactionCommand() right after step 2. This makes it fully safe\n> as the catalog will have the database dropped. Files may still exist on\n> disk in some cases which is okay. This also makes it consistent with the\n> approach used in movedb().\n\nTo me this seems bad. The current failure mode obviously isn't good, but\nthe data obviously isn't valuable, and just losing track of an entire\ndatabase worth of data seems worse.\n\n\n> 2. Alternative way to make it safer is perform Checkpoint (step 5) just\n> before dropping database buffers, to avoid the unsafe nature. 
Caveats of\n> this solution is:\n> - Performs IO for data which in success case anyways will get deleted\n> - Still doesn't cover the case where catalog has the database entry but\n> files are removed from disk\n\nThat seems like an unacceptable slowdown.\n\n\n> 3. One more fancier approach is to use pending delete mechanism used by\n> relation drops, to perform these non-catalog related activities at commit.\n> Easily, the pending delete structure can be added boolean to convey\n> database directory dropping instead of file. Given drop database can't be\n> performed inside transaction, not needed to be done this way, but this\n> makes it one consistent approach used to deal with on-disk removal.\n\nISTM we'd need to do something like this.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sat, 9 Feb 2019 04:51:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Make drop database safer"
},
{
"msg_contents": "Thanks for the response and inputs.\n\nOn Sat, Feb 9, 2019 at 4:51 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-02-08 16:36:13 -0800, Alexandra Wang wrote:\n> > Current sequence of operations for drop database (dropdb())\n> > 1. Start Transaction\n> > 2. Make catalog changes\n> > 3. Drop database buffers\n> > 4. Forget database fsync requests\n> > 5. Checkpoint\n> > 6. Delete database directory\n> > 7. Commit Transaction\n> >\n> > Problem\n> > This sequence is unsafe from couple of fronts. Like if drop database,\n> > aborts (means system can crash/shutdown can happen) right after buffers\n> are\n> > dropped step 3 or step 4. The database will still exist and fully\n> > accessible but will loose the data from the dirty buffers. This seems\n> very\n> > bad.\n> >\n> > Operation can abort after step 5 as well in which can the entries remain\n> in\n> > catalog but the database is not accessible. Which is bad as well but not\n> as\n> > severe as above case mentioned, where it exists but some stuff goes\n> > magically missing.\n> >\n> > Repo:\n> > ```\n> > CREATE DATABASE test;\n> > \\c test\n> > CREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a int);\n> > \\c postgres\n> > DROP DATABASE test; <<====== kill the session after DropDatabaseBuffers()\n> > (make sure to issue checkpoint before killing the session)\n> > ```\n> >\n> > Proposed ways to fix\n> > 1. CommitTransactionCommand() right after step 2. This makes it fully\n> safe\n> > as the catalog will have the database dropped. Files may still exist on\n> > disk in some cases which is okay. This also makes it consistent with the\n> > approach used in movedb().\n>\n> To me this seems bad. 
The current failure mode obviously isn't good, but\n> the data obviously isn't valuable, and just loosing track of an entire\n> database worth of data seems worse.\n>\n\nSo, based on that response, it seems not losing track of the files associated\nwith the database is the design choice we wish to achieve. Hence the catalog having an\nentry while the data directory has been deleted is fine behavior to have and doesn't\nneed to be solved.\n\n> 2. Alternative way to make it safer is perform Checkpoint (step 5) just\n\n> > before dropping database buffers, to avoid the unsafe nature. Caveats of\n> > this solution is:\n> > - Performs IO for data which in success case anyways will get deleted\n> > - Still doesn't cover the case where catalog has the database entry but\n> > files are removed from disk\n>\n> That seems like an unacceptable slowdown.\n>\n\nGiven that dropping a database should be an infrequent operation, and the only additional IO\ncost is for the buffers of that database itself (as a Checkpoint is anyway\nperformed in a later step), is it really an unacceptable slowdown compared to the\nsafety it brings?\n\n\n>\n> > 3. One more fancier approach is to use pending delete mechanism used by\n> > relation drops, to perform these non-catalog related activities at\n> commit.\n> > Easily, the pending delete structure can be added boolean to convey\n> > database directory dropping instead of file. Given drop database can't be\n> > performed inside transaction, not needed to be done this way, but this\n> > makes it one consistent approach used to deal with on-disk removal.\n>\n> ISTM we'd need to do something like this.\n>\n\nGiven the above design choice to retain the link to database files till they are\nactually deleted, we are not seeing why the pending delete approach is any better than\napproach 1. This approach will allow us to track the database oid in the commit\ntransaction xlog record, but any checkpoint after it still loses the\nreference to the database. 
Which is the same case as in approach 1, where a separate\nxlog record XLOG_DBASE_DROP is written just after committing the\ntransaction.\nWhen we proposed approach 3, we thought it is functionally the same as approach 1\nand just differs in implementation. But your preference for this approach, while\nstating approach 1 is bad, reads as if the pending deletes approach is\nfunctionally different; we would like to hear more about how.\n\nConsidering the design choice we must meet, it seems approach 2, moving the\nCheckpoint from step 5 to before step 3, would give us the safety desired and\nretain the desired link to the database till we actually delete the files\nfor it.\n\nThanks,\nAshwin and Alex",
"msg_date": "Mon, 11 Feb 2019 15:55:30 -0800",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Make drop database safer"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 3:55 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> Thanks for the response and inputs.\n>\n> On Sat, Feb 9, 2019 at 4:51 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2019-02-08 16:36:13 -0800, Alexandra Wang wrote:\n>> > Current sequence of operations for drop database (dropdb())\n>> > 1. Start Transaction\n>> > 2. Make catalog changes\n>> > 3. Drop database buffers\n>> > 4. Forget database fsync requests\n>> > 5. Checkpoint\n>> > 6. Delete database directory\n>> > 7. Commit Transaction\n>> >\n>> > Problem\n>> > This sequence is unsafe from couple of fronts. Like if drop database,\n>> > aborts (means system can crash/shutdown can happen) right after buffers\n>> are\n>> > dropped step 3 or step 4. The database will still exist and fully\n>> > accessible but will loose the data from the dirty buffers. This seems\n>> very\n>> > bad.\n>> >\n>> > Operation can abort after step 5 as well in which can the entries\n>> remain in\n>> > catalog but the database is not accessible. Which is bad as well but\n>> not as\n>> > severe as above case mentioned, where it exists but some stuff goes\n>> > magically missing.\n>> >\n>> > Repo:\n>> > ```\n>> > CREATE DATABASE test;\n>> > \\c test\n>> > CREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a int);\n>> > \\c postgres\n>> > DROP DATABASE test; <<====== kill the session after\n>> DropDatabaseBuffers()\n>> > (make sure to issue checkpoint before killing the session)\n>> > ```\n>> >\n>> > Proposed ways to fix\n>> > 1. CommitTransactionCommand() right after step 2. This makes it fully\n>> safe\n>> > as the catalog will have the database dropped. Files may still exist on\n>> > disk in some cases which is okay. This also makes it consistent with the\n>> > approach used in movedb().\n>>\n>> To me this seems bad. 
The current failure mode obviously isn't good, but\n>> the data obviously isn't valuable, and just loosing track of an entire\n>> database worth of data seems worse.\n>>\n>\n> So, based on that response seems not loosing track to the files associated\n> with the database is design choice we wish to achieve. Hence catalog having\n> entry but data directory being deleted is fine behavior to have and doesn't\n> need to be solved.\n>\n> > 2. Alternative way to make it safer is perform Checkpoint (step 5) just\n>\n>> > before dropping database buffers, to avoid the unsafe nature. Caveats of\n>> > this solution is:\n>> > - Performs IO for data which in success case anyways will get deleted\n>> > - Still doesn't cover the case where catalog has the database entry but\n>> > files are removed from disk\n>>\n>> That seems like an unacceptable slowdown.\n>>\n>\n> Given dropping database should be infrequent operation and only addition\n> IO cost is for buffers for that database itself as Checkpoint is anyways\n> performed in later step, is it really unacceptable slowdown, compared to\n> safety it brings ?\n>\n>\n>>\n>> > 3. One more fancier approach is to use pending delete mechanism used by\n>> > relation drops, to perform these non-catalog related activities at\n>> commit.\n>> > Easily, the pending delete structure can be added boolean to convey\n>> > database directory dropping instead of file. Given drop database can't\n>> be\n>> > performed inside transaction, not needed to be done this way, but this\n>> > makes it one consistent approach used to deal with on-disk removal.\n>>\n>> ISTM we'd need to do something like this.\n>>\n>\n> Given the above design choice to retain link to database files till\n> actually deleted, not seeing why pending delete approach any better than\n> approach 1. This approach will allow us to track the database oid in commit\n> transaction xlog record but any checkpoint post the same still looses the\n> reference to the database. 
Which is same case in approach 1 where separate\n> xlog record XLOG_DBASE_DROP is written just after committing the\n> transaction.\n> When we proposed approach 3, we thought its functionally same as approach\n> 1 just differs in implementation. But your preference to this approach and\n> stating approach 1 as bad, reads as pending deletes approach is\n> functionally different, we would like to hear more how?\n>\n> Considering the design choice we must meet, seems approach 2, moving\n> Checkpoint from step 5 before step 3 would give us the safety desired and\n> retain the desired link to the database till we actually delete the files\n> for it.\n>\n\nAwaiting response on this thread, highly appreciate the inputs.",
"msg_date": "Wed, 6 Mar 2019 00:58:23 -0800",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Make drop database safer"
},
{
"msg_contents": "\n\nOn 2/12/19 12:55 AM, Ashwin Agrawal wrote:\n> \n> Thanks for the response and inputs.\n> \n> On Sat, Feb 9, 2019 at 4:51 AM Andres Freund <andres@anarazel.de \n> <mailto:andres@anarazel.de>> wrote:\n> \n> Hi,\n> \n> On 2019-02-08 16:36:13 -0800, Alexandra Wang wrote:\n> > Current sequence of operations for drop database (dropdb())\n> > 1. Start Transaction\n> > 2. Make catalog changes\n> > 3. Drop database buffers\n> > 4. Forget database fsync requests\n> > 5. Checkpoint\n> > 6. Delete database directory\n> > 7. Commit Transaction\n> >\n> > Problem\n> > This sequence is unsafe from couple of fronts. Like if drop database,\n> > aborts (means system can crash/shutdown can happen) right after\n> buffers are\n> > dropped step 3 or step 4. The database will still exist and fully\n> > accessible but will loose the data from the dirty buffers. This\n> seems very\n> > bad.\n> >\n> > Operation can abort after step 5 as well in which can the entries\n> remain in\n> > catalog but the database is not accessible. Which is bad as well\n> but not as\n> > severe as above case mentioned, where it exists but some stuff goes\n> > magically missing.\n> >\n> > Repo:\n> > ```\n> > CREATE DATABASE test;\n> > \\c test\n> > CREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a\n> int);\n> > \\c postgres\n> > DROP DATABASE test; <<====== kill the session after\n> DropDatabaseBuffers()\n> > (make sure to issue checkpoint before killing the session)\n> > ```\n> >\n> > Proposed ways to fix\n> > 1. CommitTransactionCommand() right after step 2. This makes it\n> fully safe\n> > as the catalog will have the database dropped. Files may still\n> exist on\n> > disk in some cases which is okay. This also makes it consistent\n> with the\n> > approach used in movedb().\n> \n> To me this seems bad. 
The current failure mode obviously isn't good, but\n> the data obviously isn't valuable, and just loosing track of an entire\n> database worth of data seems worse.\n> \n> \n> So, based on that response seems not loosing track to the files \n> associated with the database is design choice we wish to achieve. Hence \n> catalog having entry but data directory being deleted is fine behavior \n> to have and doesn't need to be solved.\n> \n\nWhat about adding 'is dropped' flag to pg_database, set it to true at \nthe beginning of DROP DATABASE and commit? And ensure no one can connect \nto such database, making DROP DATABASE the only allowed operation?\n\nISTM we could then continue doing the same thing we do today, without \nany extra checkpoints etc.\n\n> > 2. Alternative way to make it safer is perform Checkpoint (step 5) just\n> \n> > before dropping database buffers, to avoid the unsafe nature.\n> Caveats of\n> > this solution is:\n> > - Performs IO for data which in success case anyways will get deleted\n> > - Still doesn't cover the case where catalog has the database\n> entry but\n> > files are removed from disk\n> \n> That seems like an unacceptable slowdown.\n> \n> \n> Given dropping database should be infrequent operation and only addition \n> IO cost is for buffers for that database itself as Checkpoint is anyways \n> performed in later step, is it really unacceptable slowdown, compared to \n> safety it brings ?\n> \n\nThat's probably true, although I do know quite a few systems that create \nand drop databases fairly often. And the implied explicit checkpoints \nare quite painful, so I'd vote not to make this worse.\n\nFWIW I don't recall why exactly we need the checkpoints, except perhaps \nto ensure the file copies see the most recent data (in CREATE DATABASE) \nand evict stuff for the to-be-dropped database from shared buffers. I \nwonder if we could do that without a checkpoint somehow ...\n\n> \n> > 3. 
One more fancier approach is to use pending delete mechanism\n> used by\n> > relation drops, to perform these non-catalog related activities\n> at commit.\n> > Easily, the pending delete structure can be added boolean to convey\n> > database directory dropping instead of file. Given drop database\n> can't be\n> > performed inside transaction, not needed to be done this way, but\n> this\n> > makes it one consistent approach used to deal with on-disk removal.\n> \n> ISTM we'd need to do something like this.\n> \n> \n> Given the above design choice to retain link to database files till \n> actually deleted, not seeing why pending delete approach any better than \n> approach 1. This approach will allow us to track the database oid in \n> commit transaction xlog record but any checkpoint post the same still \n> looses the reference to the database. Which is same case in approach 1 \n> where separate xlog record XLOG_DBASE_DROP is written just after \n> committing the transaction.\n> When we proposed approach 3, we thought its functionally same as \n> approach 1 just differs in implementation. But your preference to this \n> approach and stating approach 1 as bad, reads as pending deletes \n> approach is functionally different, we would like to hear more how?\n> \n\nHmmm, I don't see how this is an improvement over option #1 either.\n\n> Considering the design choice we must meet, seems approach 2, moving \n> Checkpoint from step 5 before step 3 would give us the safety desired \n> and retain the desired link to the database till we actually delete the \n> files for it.\n> \n\nUmmm? That essentially means this order:\n\n1. Start Transaction\n2. Make catalog changes\n5. Checkpoint\n3. Drop database buffers\n4. Forget database fsync requests\n6. Delete database directory\n7. Commit Transaction\n\nI don't see how that actually fixes any of the issues? 
Can you explain?\n\nNot to mention we might end up doing quite a bit of I/O to checkpoint \nbuffers from the database that is going to disappear shortly ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 16:56:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Make drop database safer"
},
{
"msg_contents": "On Wed, Mar 6, 2019 at 7:56 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n>\n> On 2/12/19 12:55 AM, Ashwin Agrawal wrote:\n> >\n> > Thanks for the response and inputs.\n> >\n> > On Sat, Feb 9, 2019 at 4:51 AM Andres Freund <andres@anarazel.de\n> > <mailto:andres@anarazel.de>> wrote:\n> >\n> > Hi,\n> >\n> > On 2019-02-08 16:36:13 -0800, Alexandra Wang wrote:\n> > > Current sequence of operations for drop database (dropdb())\n> > > 1. Start Transaction\n> > > 2. Make catalog changes\n> > > 3. Drop database buffers\n> > > 4. Forget database fsync requests\n> > > 5. Checkpoint\n> > > 6. Delete database directory\n> > > 7. Commit Transaction\n> > >\n> > > Problem\n> > > This sequence is unsafe from couple of fronts. Like if drop\n> database,\n> > > aborts (means system can crash/shutdown can happen) right after\n> > buffers are\n> > > dropped step 3 or step 4. The database will still exist and fully\n> > > accessible but will loose the data from the dirty buffers. This\n> > seems very\n> > > bad.\n> > >\n> > > Operation can abort after step 5 as well in which can the entries\n> > remain in\n> > > catalog but the database is not accessible. Which is bad as well\n> > but not as\n> > > severe as above case mentioned, where it exists but some stuff\n> goes\n> > > magically missing.\n> > >\n> > > Repo:\n> > > ```\n> > > CREATE DATABASE test;\n> > > \\c test\n> > > CREATE TABLE t1(a int); CREATE TABLE t2(a int); CREATE TABLE t3(a\n> > int);\n> > > \\c postgres\n> > > DROP DATABASE test; <<====== kill the session after\n> > DropDatabaseBuffers()\n> > > (make sure to issue checkpoint before killing the session)\n> > > ```\n> > >\n> > > Proposed ways to fix\n> > > 1. CommitTransactionCommand() right after step 2. This makes it\n> > fully safe\n> > > as the catalog will have the database dropped. Files may still\n> > exist on\n> > > disk in some cases which is okay. 
This also makes it consistent\n> > with the\n> > > approach used in movedb().\n> >\n> > To me this seems bad. The current failure mode obviously isn't good,\n> but\n> > the data obviously isn't valuable, and just loosing track of an\n> entire\n> > database worth of data seems worse.\n> >\n> >\n> > So, based on that response seems not loosing track to the files\n> > associated with the database is design choice we wish to achieve. Hence\n> > catalog having entry but data directory being deleted is fine behavior\n> > to have and doesn't need to be solved.\n> >\n>\n> What about adding 'is dropped' flag to pg_database, set it to true at\n> the beginning of DROP DATABASE and commit? And ensure no one can connect\n> to such database, making DROP DATABASE the only allowed operation?\n>\n> ISTM we could then continue doing the same thing we do today, without\n> any extra checkpoints etc.\n>\n\nSure, adding 'is dropped' column flag to pg_database and committing that\nupdate before dropping database buffers can give us the safety and also\nallows to keep the link. But seems very heavy duty way to gain the desired\nbehavior with extra column in pg_database catalog table specifically just\nto protect against this failure window. If this solution gets at least one\nmore vote as okay route to take, I am fine implementing it.\n\nI am surprised though that keeping the link to database worth of data and\nnot losing track of it is preferred for dropdb() but not cared for in\nmovedb(). In movedb(), transaction commits first and then old database\ndirectory is deleted, which was the first solution proposed for dropdb().\n\nFWIW I don't recall why exactly we need the checkpoints, except perhaps\n> to ensure the file copies see the most recent data (in CREATE DATABASE)\n> and evict stuff for the to-be-dropped database from shared bufers. I\n> wonder if we could do that without a checkpoint somehow ...\n>\n\nCheckpoint during CREATE DATABASE is understandable. 
But yes during\ndropdb() seems unnecessary. Only rational seems for windows based on\ncomment in code \"On Windows, this also ensures that background procs don't\nhold any open files, which would cause rmdir() to fail.\" I think we can\navoid the checkpoint for all other platforms in dropdb() except windows.\nEven for windows if have way to easily ensure no background procs have open\nfiles, without checkpoint then can be avoided even for it.\n\n> Considering the design choice we must meet, seems approach 2, moving\n> > Checkpoint from step 5 before step 3 would give us the safety desired\n> > and retain the desired link to the database till we actually delete the\n> > files for it.\n> >\n>\n> Ummm? That essentially means this order:\n>\n> 1. Start Transaction\n> 2. Make catalog changes\n> 5. Checkpoint\n> 3. Drop database buffers\n> 4. Forget database fsync requests\n> 6. Delete database directory\n> 7. Commit Transaction\n>\n> I don't see how that actually fixes any of the issues? Can you explain?\n>\n\nSince checkpoint is performed, all the dirty buffers make to the disk and\navoid loosing the data for this database if command fails after this and\ndatabase doesn't get dropped, problem we started this thread with. 
Drop\ndatabase buffers step after it will just be removing read-only buffers.\nForget database fsync requests step ideally can be removed as not needed\nwith that sequence.\n\n\n> Not to mention we might end up doing quite a bit of I/O to checkpoint\n> buffers from the database that is going to disappear shortly ...\n>\n\nYes, that's the downside it comes with.",
"msg_date": "Wed, 6 Mar 2019 11:45:21 -0800",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Make drop database safer"
}
] |
[
{
"msg_contents": "... are committed at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5996cfc4665735a7e6e8d473bd66e8b11e320bbb\n\nPlease send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 20:19:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First-draft release notes for next week's releases"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Please send comments/corrections by Sunday.\n\nNote that \"Fix deadlock in GIN vacuum introduced by 218f51584d5\"\n(which is commit fd83c83d on the master branch) is not that far from\nbeing a complete revert of a v10 feature (this appears as \"Reduce page\nlocking during vacuuming of GIN indexes\" in the v10 release notes).\nPerhaps that's something that needs to be pointed out directly, as\nhappened with the recheck_on_update issue in the last point\nrelease. We may even need to revise the v10 release notes.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 8 Feb 2019 17:57:01 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Note that \"Fix deadlock in GIN vacuum introduced by 218f51584d5\"\n> (which is commit fd83c83d on the master branch) is not that far from\n> being a complete revert of a v10 feature (this appears as \"Reduce page\n> locking during vacuuming of GIN indexes\" in the v10 release notes).\n\nYeah, I saw that in the commit message, but didn't really think that\nthe release note entry needed to explain it that way. Could be\nargued differently though.\n\n> We may even need to revise the v10 release notes.\n\nPerhaps just remove that item from the 10.0 notes?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 21:00:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "On Fri, Feb 8, 2019 at 6:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I saw that in the commit message, but didn't really think that\n> the release note entry needed to explain it that way. Could be\n> argued differently though.\n\nI'm pretty confident that somebody is going to miss this\nfunctionality, if this account of how the patch helped Yandex is\nanything to go by:\n\nhttps://www.postgresql.org/message-id/7B44397E-5E0A-462F-8148-1C444640FA0B%40simply.name\n\n> > We may even need to revise the v10 release notes.\n>\n> Perhaps just remove that item from the 10.0 notes?\n\nThe wording could be changed to reflect the current reality within\nGIN. It's still useful that posting trees are only locked when there\nare pages to be deleted.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 8 Feb 2019 18:10:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "On Sat, Feb 9, 2019 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> ... are committed at\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5996cfc4665735a7e6e8d473bd66e8b11e320bbb\n>\n\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n+Branch: master [0464fdf07] 2019-01-21 20:08:52 -0300\n+Branch: REL_11_STABLE [123cc697a] 2019-01-21 19:59:07 -0300\n+-->\n+ <para>\n+ Create or delete foreign key enforcement triggers correctly when\n+ attaching or detaching a partition in a a partitioned table that\n+ has a foreign-key constraint (Amit Langote, Álvaro Herrera)\n+ </para>\n+ </listitem>\n\nIt seems like there is a typo in the above sentence. /a a\npartitioned/a partitioned\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 9 Feb 2019 09:38:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Feb 9, 2019 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> + Create or delete foreign key enforcement triggers correctly when\n> + attaching or detaching a partition in a a partitioned table that\n> + has a foreign-key constraint (Amit Langote, Álvaro Herrera)\n\n> It seems like there is a typo in the above sentence. /a a\n> partitioned/a partitioned\n\nOoops, obviously my eyes had glazed over by the time I went back to\nproofread. Thanks for catching that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 08 Feb 2019 23:17:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "El 9/2/19 a las 4:19, Tom Lane escribió:\n> Please send comments/corrections by Sunday.\n\n+ tuple deletion WAL record (Stas Kelvish)\n\n-- a typo in his surname, should be Kelvich.\n\n-- \nAlexander Kuzmenkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 9 Feb 2019 14:26:03 +0300",
"msg_from": "Alexander Kuzmenkov <a.kuzmenkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Feb 8, 2019 at 6:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, I saw that in the commit message, but didn't really think that\n>> the release note entry needed to explain it that way. Could be\n>> argued differently though.\n\n> I'm pretty confident that somebody is going to miss this\n> functionality, if this account of how the patch helped Yandex is\n> anything to go by:\n> https://www.postgresql.org/message-id/7B44397E-5E0A-462F-8148-1C444640FA0B%40simply.name\n\nUgh. Well, hopefully somebody will find a less buggy solution\nin the future.\n\n>>> We may even need to revise the v10 release notes.\n\n>> Perhaps just remove that item from the 10.0 notes?\n\n> The wording could be changed to reflect the current reality within\n> GIN. It's still useful that posting trees are only locked when there\n> are pages to be deleted.\n\nThe v10 release notes just say\n\n Reduce page locking during vacuuming of <acronym>GIN</acronym> indexes\n (Andrey Borodin)\n\nso it doesn't seem like there's any difference at that level of detail.\nBut I'll expand the new release note, say\n\n This change partially reverts a performance improvement, introduced\n in version 10, that attempted to reduce the number of index pages\n locked during deletion of a GIN posting tree page. That's now been\n found to lead to deadlocks, so we've removed it pending closer\n analysis.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 10 Feb 2019 13:05:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "On Sun, Feb 10, 2019 at 10:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Fri, Feb 8, 2019 at 6:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The v10 release notes just say\n>\n> Reduce page locking during vacuuming of <acronym>GIN</acronym> indexes\n> (Andrey Borodin)\n>\n> so it doesn't seem like there's any difference at that level of detail.\n> But I'll expand the new release note, say\n>\n> This change partially reverts a performance improvement, introduced\n> in version 10, that attempted to reduce the number of index pages\n> locked during deletion of a GIN posting tree page. That's now been\n> found to lead to deadlocks, so we've removed it pending closer\n> analysis.\n\nThat plan seems sensible to me. The wording looks good, too.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sun, 10 Feb 2019 10:24:18 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
},
{
"msg_contents": "On Sat, Feb 09, 2019 at 02:26:03PM +0300, Alexander Kuzmenkov wrote:\n> El 9/2/19 a las 4:19, Tom Lane escribió:\n> > Please send comments/corrections by Sunday.\n> \n> + tuple deletion WAL record (Stas Kelvish)\n> \n> -- a typo in his surname, should be Kelvich.\n\nYou are right, the commit message is the origin of the mistake. My\napologies.\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 15:24:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's releases"
}
] |
[
{
"msg_contents": "I've wondered for some time whether we couldn't make a useful\nreduction in the run time of the PG regression tests by looking\nfor scripts that run significantly longer than others in their\nparallel groups, and making an effort to trim the runtimes of\nthose particular scripts.\n\nThe first step in that of course is to get some data, so attached\nis a patch to pg_regress to cause it to print the runtime of each\nscript. This produces results like, say,\n\nparallel group (17 tests): circle line timetz path lseg point macaddr macaddr8 time interval inet tstypes date box timestamp timestamptz polygon\n point ... ok (35 ms)\n lseg ... ok (31 ms)\n line ... ok (23 ms)\n box ... ok (135 ms)\n path ... ok (24 ms)\n polygon ... ok (1256 ms)\n circle ... ok (20 ms)\n date ... ok (69 ms)\n time ... ok (40 ms)\n timetz ... ok (22 ms)\n timestamp ... ok (378 ms)\n timestamptz ... ok (378 ms)\n interval ... ok (50 ms)\n inet ... ok (56 ms)\n macaddr ... ok (33 ms)\n macaddr8 ... ok (37 ms)\n tstypes ... ok (62 ms)\n\nor on a rather slower machine,\n\nparallel group (8 tests): hash_part reloptions partition_info identity partition_join partition_aggregate partition_prune indexing\n identity ... ok (3807 ms)\n partition_join ... ok (10433 ms)\n partition_prune ... ok (19370 ms)\n reloptions ... ok (1166 ms)\n hash_part ... ok (628 ms)\n indexing ... ok (22070 ms)\n partition_aggregate ... ok (12731 ms)\n partition_info ... ok (1373 ms)\ntest event_trigger ... ok (1953 ms)\ntest fast_default ... ok (2689 ms)\ntest stats ... ok (1173 ms)\n\nDoes anyone else feel that this is interesting/useful data?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 09 Feb 2019 22:50:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On Sat, Feb 9, 2019 at 7:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've wondered for some time whether we couldn't make a useful\n> reduction in the run time of the PG regression tests by looking\n> for scripts that run significantly longer than others in their\n> parallel groups, and making an effort to trim the runtimes of\n> those particular scripts.\n\n> Does anyone else feel that this is interesting/useful data?\n\nIt definitely seems useful to me.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sat, 9 Feb 2019 19:54:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Hi,\n\nOn February 10, 2019 9:20:18 AM GMT+05:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>I've wondered for some time whether we couldn't make a useful\n>reduction in the run time of the PG regression tests by looking\n>for scripts that run significantly longer than others in their\n>parallel groups, and making an effort to trim the runtimes of\n>those particular scripts.\n>\n>The first step in that of course is to get some data, so attached\n>is a patch to pg_regress to cause it to print the runtime of each\n>script. This produces results like, say,\n>\n>parallel group (17 tests): circle line timetz path lseg point macaddr\n>macaddr8 time interval inet tstypes date box timestamp timestamptz\n>polygon\n> point ... ok (35 ms)\n> lseg ... ok (31 ms)\n> line ... ok (23 ms)\n> box ... ok (135 ms)\n> path ... ok (24 ms)\n> polygon ... ok (1256 ms)\n> circle ... ok (20 ms)\n> date ... ok (69 ms)\n> time ... ok (40 ms)\n> timetz ... ok (22 ms)\n> timestamp ... ok (378 ms)\n> timestamptz ... ok (378 ms)\n> interval ... ok (50 ms)\n> inet ... ok (56 ms)\n> macaddr ... ok (33 ms)\n> macaddr8 ... ok (37 ms)\n> tstypes ... ok (62 ms)\n>\n>or on a rather slower machine,\n>\n>parallel group (8 tests): hash_part reloptions partition_info identity\n>partition_join partition_aggregate partition_prune indexing\n> identity ... ok (3807 ms)\n> partition_join ... ok (10433 ms)\n> partition_prune ... ok (19370 ms)\n> reloptions ... ok (1166 ms)\n> hash_part ... ok (628 ms)\n> indexing ... ok (22070 ms)\n> partition_aggregate ... ok (12731 ms)\n> partition_info ... ok (1373 ms)\n>test event_trigger ... ok (1953 ms)\n>test fast_default ... ok (2689 ms)\n>test stats ... ok (1173 ms)\n>\n>Does anyone else feel that this is interesting/useful data?\n\nYes, it does. I've locally been playing with parallelizing isolationtester's schedule, and it's quite useful for coming up with a schedule that's optimized.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n",
"msg_date": "Sun, 10 Feb 2019 09:44:29 +0530",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "> On 10 Feb 2019, at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Does anyone else feel that this is interesting/useful data?\n\nAbsolutely, +1 on this. In Greenplum we print the runtime of the script and\nthe runtime of the diff, both of which have provided useful feedback on where\nto best spend optimization efforts (the diff time of course being a lot less\ninteresting in upstream postgres due to gpdb having its own diff tool to\nhandle segment variability).\n\ncheers ./daniel\n",
"msg_date": "Sun, 10 Feb 2019 22:21:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 10 Feb 2019, at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Does anyone else feel that this is interesting/useful data?\n\n> Absolutely, +1 on this. In Greenplum we print the runtime of the script and\n> the runtime of the diff, both of which have provided useful feedback on where\n> to best spend optimization efforts (the diff time of course being a lot less\n> interesting in upstream postgres due to gpdb having it’s own diff tool to\n> handle segment variability).\n\nSeems like I'm far from the first to think of this --- I wonder why\nnobody submitted a patch before?\n\nAnyway, pushed.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 10 Feb 2019 16:55:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 10/02/2019 22:55, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 10 Feb 2019, at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Does anyone else feel that this is interesting/useful data?\n> \n>> Absolutely, +1 on this. In Greenplum we print the runtime of the script and\n>> the runtime of the diff, both of which have provided useful feedback on where\n>> to best spend optimization efforts (the diff time of course being a lot less\n>> interesting in upstream postgres due to gpdb having it’s own diff tool to\n>> handle segment variability).\n> \n> Seems like I'm far from the first to think of this --- I wonder why\n> nobody submitted a patch before?\n\nNow that I see this in action, it makes the actual test results harder\nto identify flying by. I understand the desire to collect this timing\ndata, but that is a special use case and not relevant to the normal use\nof the test suite, which is to see whether the test passes. Can we make\nthis optional please?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 11 Feb 2019 09:44:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Now that I see this in action, it makes the actual test results harder\n> to identify flying by. I understand the desire to collect this timing\n> data, but that is a special use case and not relevant to the normal use\n> of the test suite, which is to see whether the test passes. Can we make\n> this optional please?\n\nWell, I want the buildfarm to produce this info, so it's hard to see\nhow to get that without the timings being included by default. I take\nyour point that it makes the display look a bit cluttered, though.\nWould it help to put more whitespace between the status and the timing?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 11 Feb 2019 09:30:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019/02/11 23:30, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Now that I see this in action, it makes the actual test results harder\n>> to identify flying by. I understand the desire to collect this timing\n>> data, but that is a special use case and not relevant to the normal use\n>> of the test suite, which is to see whether the test passes. Can we make\n>> this optional please?\n> \n> Well, I want the buildfarm to produce this info, so it's hard to see\n> how to get that without the timings being included by default. I take\n> your point that it makes the display look a bit cluttered, though.\n> Would it help to put more whitespace between the status and the timing?\n\n+1. Maybe, not as much whitespace as we get today between the test name\nand \"... ok\", but at least more than just a single space.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 12 Feb 2019 10:29:40 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:29:40AM +0900, Amit Langote wrote:\n> On 2019/02/11 23:30, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Now that I see this in action, it makes the actual test results harder\n>>> to identify flying by. I understand the desire to collect this timing\n>>> data, but that is a special use case and not relevant to the normal use\n>>> of the test suite, which is to see whether the test passes. Can we make\n>>> this optional please?\n>> \n>> Well, I want the buildfarm to produce this info, so it's hard to see\n>> how to get that without the timings being included by default. I take\n>> your point that it makes the display look a bit cluttered, though.\n>> Would it help to put more whitespace between the status and the timing?\n> \n> +1. Maybe, not as much whitespace as we get today between the test name\n> and \"... ok\", but at least more than just a single space.\n\nSure, but do we need feedback immediately? I am just catching up on\nthat, and I too find a bit annoying that this is not controlled by a\nswitch which is disabled by default. It seems to me that this points\nout to another issue that there is no actual way to pass down custom\noptions to pg_regress other than PG_REGRESS_DIFF_OPTS to control the\ndiff output format. So we may also want something like\nPG_REGRESS_OPTS.\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 11:09:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-02-11 15:30, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Now that I see this in action, it makes the actual test results harder\n>> to identify flying by. I understand the desire to collect this timing\n>> data, but that is a special use case and not relevant to the normal use\n>> of the test suite, which is to see whether the test passes. Can we make\n>> this optional please?\n> \n> Well, I want the buildfarm to produce this info, so it's hard to see\n> how to get that without the timings being included by default. I take\n> your point that it makes the display look a bit cluttered, though.\n\nCan we enable it through the buildfarm client?\n\nOr we could write it into a separate log file.\n\n> Would it help to put more whitespace between the status and the timing?\n\nprove --timer produces this:\n\n[14:21:32] t/001_basic.pl ............ ok 9154 ms ( 0.00 usr 0.01 sys + 2.28 cusr 3.40 csys = 5.69 CPU)\n[14:21:41] t/002_databases.pl ........ ok 11294 ms ( 0.00 usr 0.00 sys + 2.16 cusr 3.84 csys = 6.00 CPU)\n[14:21:52] t/003_extrafiles.pl ....... ok 7736 ms ( 0.00 usr 0.00 sys + 1.89 cusr 2.91 csys = 4.80 CPU)\n[14:22:00] t/004_pg_xlog_symlink.pl .. ok 9035 ms ( 0.00 usr 0.00 sys + 2.03 cusr 3.02 csys = 5.05 CPU)\n[14:22:09] t/005_same_timeline.pl .... ok 8048 ms ( 0.00 usr 0.00 sys + 0.92 cusr 1.29 csys = 2.21 CPU)\n\nwhich seems quite readable. So maybe something like this:\n\n identity ... ok 238 ms\n partition_join ... ok 429 ms\n partition_prune ... ok 786 ms\n reloptions ... ok 94 ms\n hash_part ... ok 78 ms\n indexing ... ok 1298 ms\n partition_aggregate ... ok 727 ms\n partition_info ... ok 110 ms\ntest event_trigger ... ok 128 ms\ntest fast_default ... ok 173 ms\ntest stats ... 
ok 637 ms\n\nwhich would be\n\n- status(_(\" (%.0f ms)\"), INSTR_TIME_GET_MILLISEC(stoptimes[i]));\n+ status(_(\" %8.0f ms\"), INSTR_TIME_GET_MILLISEC(stoptimes[i]));\n\n(times two).\n\nIf we're going to keep this, should we enable the prove --timer option as well?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 14:32:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-02-15 14:32, Peter Eisentraut wrote:\n> identity ... ok 238 ms\n> partition_join ... ok 429 ms\n> partition_prune ... ok 786 ms\n> reloptions ... ok 94 ms\n> hash_part ... ok 78 ms\n> indexing ... ok 1298 ms\n> partition_aggregate ... ok 727 ms\n> partition_info ... ok 110 ms\n> test event_trigger ... ok 128 ms\n> test fast_default ... ok 173 ms\n> test stats ... ok 637 ms\n\nWe should also strive to align \"FAILED\" properly. This is currently\nquite unreadable:\n\n int4 ... ok (128 ms)\n int8 ... FAILED (153 ms)\n oid ... ok (163 ms)\n float4 ... ok (231 ms)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 15:13:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-02-15 14:32, Peter Eisentraut wrote:\n>> test event_trigger ... ok 128 ms\n>> test fast_default ... ok 173 ms\n>> test stats ... ok 637 ms\n\nThat looks reasonable, although on machines where test runtimes run\ninto the tens of seconds, there's not going to be nearly as much\nwhitespace as this example suggests.\n\n> We should also strive to align \"FAILED\" properly.\n\nHmm. The reasonable ways to accomplish that look to be either\n(a) pad \"ok\" to the width of \"FAILED\", or (b) rely on emitting a tab.\nI don't much like either, especially from the localization angle.\nOne should also note that FAILED often comes along with additional\nverbiage, such as \"(ignored)\" or a note about process exit status;\nso I think making such cases line up totally neatly is a lost cause\nanyway.\n\nHow do you feel about letting it do this:\n\n int4 ... ok 128 ms\n int8 ... FAILED 153 ms\n oid ... ok 163 ms\n float4 ... ok 231 ms\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Feb 2019 09:54:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2/15/19, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> We should also strive to align \"FAILED\" properly. This is currently\n> quite unreadable:\n>\n> int4 ... ok (128 ms)\n> int8 ... FAILED (153 ms)\n> oid ... ok (163 ms)\n> float4 ... ok (231 ms)\n\nIf I may play devil's advocate, who cares how long it takes a test to\nfail? If it's not difficult, leaving the time out for failures would\nmake them stand out more.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 17:03:13 +0100",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> If we're going to keep this, should we enable the prove --timer option as well?\n\nAs far as that goes: I've found that in some of my older Perl\ninstallations, prove doesn't recognize the --timer switch.\nSo turning that on would require a configuration probe of some\nsort, which seems like more trouble than it's worth.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Feb 2019 11:13:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> On 2/15/19, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> We should also strive to align \"FAILED\" properly. This is currently\n>> quite unreadable:\n>> \n>> int4 ... ok (128 ms)\n>> int8 ... FAILED (153 ms)\n>> oid ... ok (163 ms)\n>> float4 ... ok (231 ms)\n\n> If I may play devil's advocate, who cares how long it takes a test to\n> fail? If it's not difficult, leaving the time out for failures would\n> make them stand out more.\n\nActually, I'd supposed that that might be useful info, sometimes.\nFor example it might help you guess whether a timeout had elapsed.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Feb 2019 11:24:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-02-15 15:54, Tom Lane wrote:\n>> We should also strive to align \"FAILED\" properly.\n> Hmm. The reasonable ways to accomplish that look to be either\n> (a) pad \"ok\" to the width of \"FAILED\", or (b) rely on emitting a tab.\n> I don't much like either, especially from the localization angle.\n> One should also note that FAILED often comes along with additional\n> verbiage, such as \"(ignored)\" or a note about process exit status;\n> so I think making such cases line up totally neatly is a lost cause\n> anyway.\n\nYeah, not strictly required, but someone might want to play around with\nit a bit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 13:59:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-Feb-18, Peter Eisentraut wrote:\n\n> On 2019-02-15 15:54, Tom Lane wrote:\n> >> We should also strive to align \"FAILED\" properly.\n> > Hmm. The reasonable ways to accomplish that look to be either\n> > (a) pad \"ok\" to the width of \"FAILED\", or (b) rely on emitting a tab.\n> > I don't much like either, especially from the localization angle.\n> > One should also note that FAILED often comes along with additional\n> > verbiage, such as \"(ignored)\" or a note about process exit status;\n> > so I think making such cases line up totally neatly is a lost cause\n> > anyway.\n> \n> Yeah, not strictly required, but someone might want to play around with\n> it a bit.\n\nFWIW I don't think we localize pg_regress output currently, so that\nargument seems moot ... But I think we can get away with constant four\nspaces for now.\n\nIf we wanted to get really fancy, for interactive use we could colorize\nthe output. (I wonder if there's a way to get browsers to colorize\ntext/plain output somehow instead of printing the ansi codes).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 11:30:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Feb-18, Peter Eisentraut wrote:\n>>> We should also strive to align \"FAILED\" properly.\n\n>> Yeah, not strictly required, but someone might want to play around with\n>> it a bit.\n\n> FWIW I don't think we localize pg_regress output currently, so that\n> argument seems moot ... But I think we can get away with constant four\n> spaces for now.\n\nI pushed Peter's suggestion for %8.0f; let's live with that for a little\nand see if it's still annoying.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 11:18:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Re: Tom Lane 2019-02-18 <28360.1550506699@sss.pgh.pa.us>\n> >>> We should also strive to align \"FAILED\" properly.\n> \n> >> Yeah, not strictly required, but someone might want to play around with\n> >> it a bit.\n> \n> > FWIW I don't think we localize pg_regress output currently, so that\n> > argument seems moot ... But I think we can get away with constant four\n> > spaces for now.\n> \n> I pushed Peter's suggestion for %8.0f; let's live with that for a little\n> and see if it's still annoying.\n\nThe ryu changes make postgresql-unit fail quite loudly:\n\n$ cat regression.out\ntest extension ... ok 359 ms\ntest tables ... FAILED 164 ms\ntest unit ... FAILED 867 ms\ntest binary ... ok 20 ms\ntest unicode ... ok 18 ms\ntest prefix ... FAILED 43 ms\ntest units ... FAILED 207 ms\ntest time ... FAILED 99 ms\ntest temperature ... FAILED 22 ms\n...\n\nThe misalignment annoyed me enough (especially the false alignment\nbetween \"ms\" on the first row and \"164\" on the next row) to look into\nit. Aligned it looks like this:\n\ntest extension ... ok 399 ms\ntest tables ... FAILED 190 ms\ntest unit ... FAILED 569 ms\ntest binary ... ok 14 ms\ntest unicode ... ok 15 ms\ntest prefix ... FAILED 44 ms\ntest units ... FAILED 208 ms\ntest time ... FAILED 99 ms\ntest temperature ... FAILED 21 ms\n...\n\nIt doesn't break translations because it prints the extra spaces\nseparately.\n\nIn run_single_test() (which this output is from), it simply aligns the\noutput with FAILED. In run_schedule(), there is the 3rd output string\n\"failed (ignored)\" which is considerably longer. I aligned the output\nwith that, but also made the timestamp field shorter so it's not too\nmuch to the right.\n\nChristoph",
"msg_date": "Thu, 21 Feb 2019 10:37:02 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-02-21 10:37, Christoph Berg wrote:\n> diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c\n> index a18a6f6c45..8080626e94 100644\n> --- a/src/test/regress/pg_regress.c\n> +++ b/src/test/regress/pg_regress.c\n> @@ -1794,12 +1794,14 @@ run_schedule(const char *schedule, test_function tfunc)\n> \t\t\t\telse\n> \t\t\t\t{\n> \t\t\t\t\tstatus(_(\"FAILED\"));\n> +\t\t\t\t\tstatus(\" \"); /* align with failed (ignored) */\n> \t\t\t\t\tfail_count++;\n> \t\t\t\t}\n\nSo an issue here is that in theory \"FAILED\" etc. are marked for\ntranslation but your spacers do not take that into account. Personally,\nI have no ambition to translate pg_regress, so we could remove all that.\n But it should be done consistently in either case.\n\nI also think we shouldn't worry about the \"failed (ignored)\" case. That\nnever happens, and I don't want to mess up the spacing we have now for\nthat. I'd consider removing support for it altogether.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 13:21:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-03-08 <3eb194cf-b878-1f63-8623-6d6add0ed0b7@2ndquadrant.com>\n> On 2019-02-21 10:37, Christoph Berg wrote:\n> > diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c\n> > index a18a6f6c45..8080626e94 100644\n> > --- a/src/test/regress/pg_regress.c\n> > +++ b/src/test/regress/pg_regress.c\n> > @@ -1794,12 +1794,14 @@ run_schedule(const char *schedule, test_function tfunc)\n> > \t\t\t\telse\n> > \t\t\t\t{\n> > \t\t\t\t\tstatus(_(\"FAILED\"));\n> > +\t\t\t\t\tstatus(\" \"); /* align with failed (ignored) */\n> > \t\t\t\t\tfail_count++;\n> > \t\t\t\t}\n> \n> So an issue here is that in theory \"FAILED\" etc. are marked for\n> translation but your spacers do not take that into account. Personally,\n> I have no ambition to translate pg_regress, so we could remove all that.\n> But it should be done consistently in either case.\n\nOh, right. So the way to go would be to use _(\"FAILED \"), and\nask translators to use the same length.\n\n> I also think we shouldn't worry about the \"failed (ignored)\" case. That\n> never happens, and I don't want to mess up the spacing we have now for\n> that. I'd consider removing support for it altogether.\n\nYou mean removing that case from pg_regress, or removing the alignment\n\"support\"?\n\nChristoph\n\n",
"msg_date": "Fri, 8 Mar 2019 14:02:34 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-Mar-08, Christoph Berg wrote:\n\n> Re: Peter Eisentraut 2019-03-08 <3eb194cf-b878-1f63-8623-6d6add0ed0b7@2ndquadrant.com>\n> > On 2019-02-21 10:37, Christoph Berg wrote:\n> > > diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c\n> > > index a18a6f6c45..8080626e94 100644\n> > > --- a/src/test/regress/pg_regress.c\n> > > +++ b/src/test/regress/pg_regress.c\n> > > @@ -1794,12 +1794,14 @@ run_schedule(const char *schedule, test_function tfunc)\n> > > \t\t\t\telse\n> > > \t\t\t\t{\n> > > \t\t\t\t\tstatus(_(\"FAILED\"));\n> > > +\t\t\t\t\tstatus(\" \"); /* align with failed (ignored) */\n> > > \t\t\t\t\tfail_count++;\n> > > \t\t\t\t}\n> > \n> > So an issue here is that in theory \"FAILED\" etc. are marked for\n> > translation but your spacers do not take that into account. Personally,\n> > I have no ambition to translate pg_regress, so we could remove all that.\n> > But it should be done consistently in either case.\n> \n> Oh, right. So the way to go would be to use _(\"FAILED \"), and\n> ask translators to use the same length.\n\nNote there's no translation for pg_regress. All these _() markers are\ncurrently dead code. It seems hard to become motivated to translate\nthat kind of program. I don't think it has much value, myself.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 10:12:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Hi Christophe,\n\nOn 3/8/19 5:12 PM, Alvaro Herrera wrote:\n> On 2019-Mar-08, Christoph Berg wrote:\n> \n>> Re: Peter Eisentraut 2019-03-08 <3eb194cf-b878-1f63-8623-6d6add0ed0b7@2ndquadrant.com>\n>>> On 2019-02-21 10:37, Christoph Berg wrote:\n>>>> diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c\n>>>> index a18a6f6c45..8080626e94 100644\n>>>> --- a/src/test/regress/pg_regress.c\n>>>> +++ b/src/test/regress/pg_regress.c\n>>>> @@ -1794,12 +1794,14 @@ run_schedule(const char *schedule, test_function tfunc)\n>>>> \t\t\t\telse\n>>>> \t\t\t\t{\n>>>> \t\t\t\t\tstatus(_(\"FAILED\"));\n>>>> +\t\t\t\t\tstatus(\" \"); /* align with failed (ignored) */\n>>>> \t\t\t\t\tfail_count++;\n>>>> \t\t\t\t}\n>>>\n>>> So an issue here is that in theory \"FAILED\" etc. are marked for\n>>> translation but your spacers do not take that into account. Personally,\n>>> I have no ambition to translate pg_regress, so we could remove all that.\n>>> But it should be done consistently in either case.\n>>\n>> Oh, right. So the way to go would be to use _(\"FAILED \"), and\n>> ask translators to use the same length.\n> \n> Note there's no translation for pg_regress. All these _() markers are\n> currently dead code. It seems hard to become motivated to translate\n> that kind of program. I don't think it has much value, myself.\n\nThis patch has been \"Waiting on Author\" since March 8th. Do you know \nwhen you'll have a new version ready?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 14:15:15 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "Re: David Steele 2019-03-20 <8a85bece-b18f-0433-acf3-d106b31f0271@pgmasters.net>\n> > > Oh, right. So the way to go would be to use _(\"FAILED \"), and\n> > > ask translators to use the same length.\n> > \n> > Note there's no translation for pg_regress. All these _() markers are\n> > currently dead code. It seems hard to become motivated to translate\n> > that kind of program. I don't think it has much value, myself.\n> \n> This patch has been \"Waiting on Author\" since March 8th. Do you know when\n> you'll have a new version ready?\n\nHere is a new revision that blank-pads \"ok\" to the length of \"FAILED\".\n\nChristoph",
"msg_date": "Thu, 21 Mar 2019 12:51:00 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Re: Reporting script runtimes in pg_regress"
},
{
"msg_contents": "On 2019-03-21 12:51, Christoph Berg wrote:\n> Re: David Steele 2019-03-20 <8a85bece-b18f-0433-acf3-d106b31f0271@pgmasters.net>\n>>>> Oh, right. So the way to go would be to use _(\"FAILED \"), and\n>>>> ask translators to use the same length.\n>>>\n>>> Note there's no translation for pg_regress. All these _() markers are\n>>> currently dead code. It seems hard to become motivated to translate\n>>> that kind of program. I don't think it has much value, myself.\n>>\n>> This patch has been \"Waiting on Author\" since March 8th. Do you know when\n>> you'll have a new version ready?\n> \n> Here is a new revision that blank-pads \"ok\" to the length of \"FAILED\".\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Mar 2019 10:06:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Reporting script runtimes in pg_regress"
}
] |
[
{
"msg_contents": "Apparently, whoever went through indxpath.c to substitute nkeycolumns\nfor ncolumns was not paying attention. As far as I can tell, the\n*only* place in there where it's correct to reference ncolumns is in\ncheck_index_only, where we determine which columns can be extracted from\nan index-only scan. A lot of the other places are obviously wrong, eg in\nrelation_has_unique_index_for:\n\n for (c = 0; c < ind->ncolumns; c++)\n ...\n if (!list_member_oid(rinfo->mergeopfamilies, ind->opfamily[c]))\n\nEven if it were plausible that an INCLUDE column is something to consider\nwhen deciding whether the index enforces uniqueness, this code accesses\nbeyond the documented end of the opfamily[] array :-(\n\nThe fact that the regression tests haven't caught this doesn't give\nme a warm feeling about how thoroughly the included-columns logic has\nbeen tested. It's really easy to make it fall over, for instance\n\nregression=# explain select * from tenk1 where (thousand, tenthous) < (10,100);\n QUERY PLAN \n \n--------------------------------------------------------------------------------\n-----\n Bitmap Heap Scan on tenk1 (cost=5.11..233.86 rows=107 width=244)\n Recheck Cond: (ROW(thousand, tenthous) < ROW(10, 100))\n -> Bitmap Index Scan on tenk1_thous_tenthous (cost=0.00..5.09 rows=107 widt\nh=0)\n Index Cond: (ROW(thousand, tenthous) < ROW(10, 100))\n(4 rows)\n\nregression=# drop index tenk1_thous_tenthous;\nDROP INDEX\nregression=# create index on tenk1(thousand) include (tenthous);\nCREATE INDEX\nregression=# explain select * from tenk1 where (thousand, tenthous) < (10,100);\nERROR: operator 97 is not a member of opfamily 2139062142\n\nI've got mixed feelings about whether to try to fix this before\ntomorrow's wraps. The attached patch seems correct and passes\ncheck-world, but there's sure not a lot of margin for error now.\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 10 Feb 2019 20:18:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "indxpath.c's references to IndexOptInfo.ncolumns are all wrong, no?"
},
{
"msg_contents": "On Sun, Feb 10, 2019 at 5:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Apparently, whoever went through indxpath.c to substitute nkeycolumns\n> for ncolumns was not paying attention. As far as I can tell, the\n> *only* place in there where it's correct to reference ncolumns is in\n> check_index_only, where we determine which columns can be extracted from\n> an index-only scan.\n\nUgh. Yeah, it's rather inconsistent.\n\n> I've got mixed feelings about whether to try to fix this before\n> tomorrow's wraps. The attached patch seems correct and passes\n> check-world, but there's sure not a lot of margin for error now.\n> Thoughts?\n\nI think that it should be fixed in the next point release if at all\npossible. The bug is a simple omission. I have a hard time imagining\nhow your patch could possibly destabilize things, since nkeycolumns is\nalready used in numerous other places in indxpath.c.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sun, 10 Feb 2019 17:35:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: indxpath.c's references to IndexOptInfo.ncolumns are all wrong,\n no?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Feb 10, 2019 at 5:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Apparently, whoever went through indxpath.c to substitute nkeycolumns\n>> for ncolumns was not paying attention. As far as I can tell, the\n>> *only* place in there where it's correct to reference ncolumns is in\n>> check_index_only, where we determine which columns can be extracted from\n>> an index-only scan.\n\n> Ugh. Yeah, it's rather inconsistent.\n\nLooking closer, it seems like most of these are accidentally protected\nby the fact that match_clause_to_index stops at nkeycolumns, so that\nthe indexclauses lists for later columns can never become nonempty;\nthey're merely wasting cycles by iterating over later columns, rather\nthan doing anything seriously wrong. It would be possible to get\nmatch_pathkeys_to_index to trigger the assertion in\nmatch_clause_to_ordering_op if GIST supported included columns,\nbut it doesn't. And I think relation_has_unique_index_for can be\nfooled into reporting that an index doesn't prove uniqueness, when\nit does. But the only one of these that's really got obviously bad\nconsequences is the one in expand_indexqual_rowcompare, which triggers\nthe failure I showed before. It's also the most obviously wrong code:\n\n /*\n * The Var side can match any column of the index.\n */\n for (i = 0; i < index->nkeycolumns; i++)\n {\n if (...)\n break;\n }\n if (i >= index->ncolumns)\n break; /* no match found */\n\nEven without understanding the bigger picture, any C programmer ought to\nfind that loop logic pretty fishy. (I'm a bit surprised Coverity hasn't\nwhined about it.)\n\nMaybe the right compromise is to fix expand_indexqual_rowcompare\nnow and leave the rest for post-wrap.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 10 Feb 2019 21:10:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: indxpath.c's references to IndexOptInfo.ncolumns are all wrong,\n no?"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15629\nLogged by: Shouyu Luo\nEmail address: syluo1990@hotmail.com\nPostgreSQL version: 11.0\nOperating system: Ubuntu\nDescription: \n\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html\r\n\r\n5.10.5. Partitioning and Constraint Exclusion\r\nConstraint exclusion is only applied during query planning; unlike partition\npruning, it cannot be applied during query execution.\r\n\r\nIs it supposed to be \"unlike partition pruning, it can be applied during\nquery execution\"?",
"msg_date": "Mon, 11 Feb 2019 03:26:13 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Mon, 11 Feb 2019 at 20:49, PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> https://www.postgresql.org/docs/current/ddl-partitioning.html\n>\n> 5.10.5. Partitioning and Constraint Exclusion\n> Constraint exclusion is only applied during query planning; unlike partition\n> pruning, it cannot be applied during query execution.\n>\n> Is it supposed to be \"unlike partition pruning, it can be applied during\n> query execution\"?\n\nThat's a bit confusing. \"it\" looks like must have been intended to\nreference constraint exclusion, but since partition pruning was\nmentioned afterwards, then it makes more sense to apply it to that.\n\nMaybe it would be more clear to write:\n\nConstraint exclusion is only applied during query planning; unlike\npartition pruning which can also be applied during query execution.\n\nSmall patch doing it that way is attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 11 Feb 2019 21:25:24 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 5:25 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> On Mon, 11 Feb 2019 at 20:49, PG Bug reporting form\n> <noreply@postgresql.org> wrote:\n> > https://www.postgresql.org/docs/current/ddl-partitioning.html\n> >\n> > 5.10.5. Partitioning and Constraint Exclusion\n> > Constraint exclusion is only applied during query planning; unlike partition\n> > pruning, it cannot be applied during query execution.\n> >\n> > Is it supposed to be \"unlike partition pruning, it can be applied during\n> > query execution\"?\n>\n> That's a bit confusing. \"it\" looks like must have been intended to\n> reference constraint exclusion, but since partition pruning was\n> mentioned afterwards, then it makes more sense to apply it to that.\n>\n> Maybe it would be more clear to write:\n>\n> Constraint exclusion is only applied during query planning; unlike\n> partition pruning which can also be applied during query execution.\n>\n> Small patch doing it that way is attached.\n\n+1\n\nMaybe, the semicolon should be replaced by a comma?\n\nThanks,\nAmit\n\n",
"msg_date": "Mon, 11 Feb 2019 22:18:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On 02/11/19 08:18, Amit Langote wrote:\n> Maybe, the semicolon should be replaced by a comma?\n\nI agree, and here is a version of the patch with that change.\n\nRegards,\n-Chap",
"msg_date": "Mon, 11 Feb 2019 20:21:35 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThis is a simple documentation change for which I do not see any controversy. (As a native speaker, I didn't find the original wording unclear, but if this change widens the audience for which it is clear, that's worthwhile.)\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 12 Feb 2019 01:24:22 +0000",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "hmm, cf app has not seen <5C621F9F.3020307@anastigmatix.net> yet ... changing status back until that happens.\n\nThe new status of this patch is: Needs review\n",
"msg_date": "Tue, 12 Feb 2019 01:27:56 +0000",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Tue, 12 Feb 2019 at 14:21, Chapman Flack <chap@anastigmatix.net> wrote:\n>\n> On 02/11/19 08:18, Amit Langote wrote:\n> > Maybe, the semicolon should be replaced by a comma?\n>\n> I agree, and here is a version of the patch with that change.\n\nAgreed about the comma. Thanks for updating the patch.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 12 Feb 2019 15:09:30 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 03:09:30PM +1300, David Rowley wrote:\n> Agreed about the comma. Thanks for updating the patch.\n\nCool for me, so pushed down to v11.\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 12:04:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Tue, 12 Feb 2019 at 16:04, Michael Paquier <michael@paquier.xyz> wrote:\n> Cool for me, so pushed down to v11.\n\nThanks for pushing.\n\n(belated due to being on leave)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 13:04:22 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: BUG #15629: Typo in Documentation"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 01:04:22PM +1300, David Rowley wrote:\n> Thanks for pushing.\n> \n> (belated due to being on leave)\n\nNo problem. Hope you had a good rest.\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 09:26:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Re: BUG #15629: Typo in Documentation"
}
] |
[
{
"msg_contents": "Hello,\n\n step lsto: SET lock_timeout = 5000; SET statement_timeout = 6000;\n step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>\n step update: <... completed>\n-ERROR: canceling statement due to lock timeout\n+ERROR: canceling statement due to statement timeout\n\nNo matter how slow the machine is, how can you manage to get statement\ntimeout to fire first? Given the coding that prefers lock timeouts if\nthere is a tie (unlikely), but otherwise processes them in registered\ntime order (assuming the clock only travels forward), well, I must be\nmissing something...\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Mon, 11 Feb 2019 14:50:43 +1100",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "anole's failed timeouts test"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> Hello,\n> step lsto: SET lock_timeout = 5000; SET statement_timeout = 6000;\n> step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>\n> step update: <... completed>\n> -ERROR: canceling statement due to lock timeout\n> +ERROR: canceling statement due to statement timeout\n\n> No matter how slow the machine is, how can you manage to get statement\n> timeout to fire first?\n\nThe statement timer starts running first; the lock timer only starts\nto run when we begin to wait for a lock. So if the session goes to\nsleep for > 1 second in between those events, this is unsurprising.\n\nThere are a bunch of tests in timeouts.spec that are unreasonably\nslow because the timeouts have been whacked until even very slow/\noverloaded machines will pass the tests. Maybe we need to tweak\nthis one too.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 10 Feb 2019 23:08:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: anole's failed timeouts test"
}
] |
[
{
"msg_contents": "nightjar just did this:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nightjar&dt=2019-02-11%2004%3A33%3A07\n\nThe critical bit seems to be that the publisher side of the\n010_truncate.pl test failed like so:\n\n2019-02-10 23:55:58.765 EST [40771] sub3 LOG: statement: BEGIN READ ONLY ISOLATION LEVEL REPEATABLE READ\n2019-02-10 23:55:58.765 EST [40771] sub3 LOG: received replication command: CREATE_REPLICATION_SLOT \"sub3_16414_sync_16394\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n2019-02-10 23:55:58.798 EST [40728] sub1 PANIC: could not open file \"pg_logical/snapshots/0-160B578.snap\": No such file or directory\n2019-02-10 23:55:58.800 EST [40771] sub3 LOG: logical decoding found consistent point at 0/160B578\n2019-02-10 23:55:58.800 EST [40771] sub3 DETAIL: There are no running transactions.\n\nI'm not sure what to make of that, but I notice that nightjar has\nfailed subscriptionCheck seven times since mid-December, and every one\nof those shows this same PANIC. Meanwhile, no other buildfarm member\nhas produced such a failure. It smells like a race condition with\na rather tight window, but that's just a guess.\n\nSo: (1) what's causing the failure? (2) could we respond with\nsomething less than take-down-the-whole-database when a failure\nhappens in this area?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 11 Feb 2019 01:31:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 7:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 2019-02-10 23:55:58.798 EST [40728] sub1 PANIC: could not open file \"pg_logical/snapshots/0-160B578.snap\": No such file or directory\n\n<pokes at totally unfamiliar code>\n\nThey get atomically renamed into place, which seems kosher even if\nsnapshots for the same LSN are created concurrently by different\nbackends (and tracing syscalls confirms that that does occasionally\nhappen). It's hard to believe that nightjar's rename() ceased to be\natomic a couple of months ago. It looks like the only way for files\nto get unlinked after that is by CheckPointSnapBuild() deciding they\nare too old.\n\nHmm. Could this be relevant, and cause a well timed checkpoint to\nunlink files too soon?\n\n2019-02-12 21:52:58.304 EST [22922] WARNING: out of logical\nreplication worker slots\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Thu, 14 Feb 2019 00:55:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> On Mon, Feb 11, 2019 at 7:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 2019-02-10 23:55:58.798 EST [40728] sub1 PANIC: could not open file \"pg_logical/snapshots/0-160B578.snap\": No such file or directory\n\n> <pokes at totally unfamiliar code>\n\n> They get atomically renamed into place, which seems kosher even if\n> snapshots for the same LSN are created concurrently by different\n> backends (and tracing syscalls confirms that that does occasionally\n> happen). It's hard to believe that nightjar's rename() ceased to be\n> atomic a couple of months ago. It looks like the only way for files\n> to get unlinked after that is by CheckPointSnapBuild() deciding they\n> are too old.\n\n> Hmm. Could this be relevant, and cause a well timed checkpoint to\n> unlink files too soon?\n> 2019-02-12 21:52:58.304 EST [22922] WARNING: out of logical\n> replication worker slots\n\nI've managed to reproduce this locally, and obtained this PANIC:\n\nlog/010_truncate_publisher.log:2019-02-13 11:29:04.759 EST [9973] sub3 PANIC: could not open file \"pg_logical/snapshots/0-16067B0.snap\": No such file or directory\n\nwith this stack trace:\n\n#0 0x0000000801ab610c in kill () from /lib/libc.so.7\n#1 0x0000000801ab4d3b in abort () from /lib/libc.so.7\n#2 0x000000000089202e in errfinish (dummy=Variable \"dummy\" is not available.\n) at elog.c:552\n#3 0x000000000075d339 in fsync_fname_ext (\n fname=0x7fffffffba20 \"pg_logical/snapshots/0-16067B0.snap\", isdir=Variable \"isdir\" is not available.\n)\n at fd.c:3372\n#4 0x0000000000730c75 in SnapBuildSerialize (builder=0x80243c118, \n lsn=23095216) at snapbuild.c:1669\n#5 0x0000000000731297 in SnapBuildProcessRunningXacts (builder=0x80243c118, \n lsn=23095216, running=0x8024424f0) at snapbuild.c:1110\n#6 0x0000000000722eac in LogicalDecodingProcessRecord (ctx=0x802414118, \n record=0x8024143d8) at decode.c:318\n#7 0x0000000000736f30 in XLogSendLogical () at 
walsender.c:2826\n#8 0x00000000007389c7 in WalSndLoop (send_data=0x736ed0 <XLogSendLogical>)\n at walsender.c:2184\n#9 0x0000000000738c3d in StartLogicalReplication (cmd=Variable \"cmd\" is not available.\n) at walsender.c:1118\n#10 0x0000000000739895 in exec_replication_command (\n cmd_string=0x80240e118 \"START_REPLICATION SLOT \\\"sub3\\\" LOGICAL 0/0 (proto_version '1', publication_names '\\\"pub3\\\"')\") at walsender.c:1536\n#11 0x000000000078b272 in PostgresMain (argc=-14464, argv=Variable \"argv\" is not available.\n) at postgres.c:4252\n#12 0x00000000007113fa in PostmasterMain (argc=-14256, argv=0x7fffffffcc60)\n at postmaster.c:4382\n\nSo the problem seems to boil down to \"somebody removed the snapshot\nfile between the time we renamed it into place and the time we tried\nto fsync it\".\n\nI do find messages like the one you mention, but they are on the\nsubscriber not the publisher side, so I'm not sure if this matches\nthe scenario you have in mind?\n\nlog/010_truncate_subscriber.log:2019-02-13 11:29:02.343 EST [9970] WARNING: out of logical replication worker slots\nlog/010_truncate_subscriber.log:2019-02-13 11:29:02.344 EST [9970] WARNING: out of logical replication worker slots\nlog/010_truncate_subscriber.log:2019-02-13 11:29:03.401 EST [9970] WARNING: out of logical replication worker slots\n\nAnyway, I think we might be able to fix this along the lines of\n\n CloseTransientFile(fd);\n\n+ /* ensure snapshot file is down to stable storage */\n+ fsync_fname(tmppath, false);\n fsync_fname(\"pg_logical/snapshots\", true);\n\n /*\n * We may overwrite the work from some other backend, but that's ok, our\n * snapshot is valid as well, we'll just have done some superfluous work.\n */\n if (rename(tmppath, path) != 0)\n {\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n tmppath, path)));\n }\n\n- /* make sure we persist */\n- fsync_fname(path, false);\n+ /* ensure we persist the file rename 
*/\n fsync_fname(\"pg_logical/snapshots\", true);\n\nThe existing code here seems simply wacky/unsafe to me regardless\nof this race condition: couldn't it potentially result in a corrupt\nsnapshot file appearing with a valid name, if the system crashes\nafter persisting the rename but before it's pushed the data out?\n\nI also wonder why bother with the directory sync just before the\nrename.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 11:57:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-13 11:57:32 -0500, Tom Lane wrote:\n> I've managed to reproduce this locally, and obtained this PANIC:\n\nCool. How exactly?\n\nNice catch.\n\n> Anyway, I think we might be able to fix this along the lines of\n> \n> CloseTransientFile(fd);\n> \n> + /* ensure snapshot file is down to stable storage */\n> + fsync_fname(tmppath, false);\n> fsync_fname(\"pg_logical/snapshots\", true);\n> \n> /*\n> * We may overwrite the work from some other backend, but that's ok, our\n> * snapshot is valid as well, we'll just have done some superfluous work.\n> */\n> if (rename(tmppath, path) != 0)\n> {\n> ereport(ERROR,\n> (errcode_for_file_access(),\n> errmsg(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n> tmppath, path)));\n> }\n> \n> - /* make sure we persist */\n> - fsync_fname(path, false);\n> + /* ensure we persist the file rename */\n> fsync_fname(\"pg_logical/snapshots\", true);\n\nHm, but that's not the same? On some filesystems one needs the directory\nfsync, on some the file fsync, and I think both in some cases. ISTM we\nshould just open the file before the rename, and then use fsync() on the\nfilehandle rather than the filename. Then a concurrent rename ought not\nto hurt us?\n\n\n> The existing code here seems simply wacky/unsafe to me regardless\n> of this race condition: couldn't it potentially result in a corrupt\n> snapshot file appearing with a valid name, if the system crashes\n> after persisting the rename but before it's pushed the data out?\n\nWhat do you mean precisely with \"before it's pushed the data out\"?\n\n\n> I also wonder why bother with the directory sync just before the\n> rename.\n\nBecause on some FS/OS combinations the size of the renamed-into-place\nfile isn't guaranteed to be durable unless the directory was\nfsynced. Otherwise there's the possibility to have incomplete data if we\ncrash after renaming, but before fsyncing.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 09:11:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-13 11:57:32 -0500, Tom Lane wrote:\n>> I've managed to reproduce this locally, and obtained this PANIC:\n\n> Cool. How exactly?\n\nAndrew told me that nightjar is actually running in a qemu VM,\nso I set up freebsd 9.0 in a qemu VM, and boom. It took a bit\nof fiddling with qemu parameters, but for such a timing-sensitive\nproblem, that's not surprising.\n\n>> Anyway, I think we might be able to fix this along the lines of\n>> [ fsync the data before renaming not after ]\n\n> Hm, but that's not the same? On some filesystems one needs the directory\n> fsync, on some the file fsync, and I think both in some cases.\n\nNow that I look at it, there's a pg_fsync() just above this, so\nI wonder why we need a second fsync on the file at all. fsync'ing\nthe directory is needed to ensure the directory entry is on disk;\nbut the file data should be out already, or else the kernel is\nsimply failing to honor fsync.\n\n>> The existing code here seems simply wacky/unsafe to me regardless\n>> of this race condition: couldn't it potentially result in a corrupt\n>> snapshot file appearing with a valid name, if the system crashes\n>> after persisting the rename but before it's pushed the data out?\n\n> What do you mean precisely with \"before it's pushed the data out\"?\n\nGiven the previous pg_fsync, this isn't an issue.\n\n>> I also wonder why bother with the directory sync just before the\n>> rename.\n\n> Because on some FS/OS combinations the size of the renamed-into-place\n> file isn't guaranteed to be durable unless the directory was\n> fsynced.\n\nBleah. But in any case, the rename should not create a situation\nin which we need to fsync the file data again.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 12:37:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-13 12:37:35 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-02-13 11:57:32 -0500, Tom Lane wrote:\n> >> I've managed to reproduce this locally, and obtained this PANIC:\n> \n> > Cool. How exactly?\n> \n> Andrew told me that nightjar is actually running in a qemu VM,\n> so I set up freebsd 9.0 in a qemu VM, and boom. It took a bit\n> of fiddling with qemu parameters, but for such a timing-sensitive\n> problem, that's not surprising.\n\nAh.\n\n\n> >> I also wonder why bother with the directory sync just before the\n> >> rename.\n> \n> > Because on some FS/OS combinations the size of the renamed-into-place\n> > file isn't guaranteed to be durable unless the directory was\n> > fsynced.\n> \n> Bleah. But in any case, the rename should not create a situation\n> in which we need to fsync the file data again.\n\nWell, it's not super well defined which of either you need to make the\nrename durable, and it appears to differ between OSs. Any argument\nagainst fixing it up like I suggested, by using an fd from before the\nrename?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 09:41:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-13 12:37:35 -0500, Tom Lane wrote:\n>> Bleah. But in any case, the rename should not create a situation\n>> in which we need to fsync the file data again.\n\n> Well, it's not super well defined which of either you need to make the\n> rename durable, and it appears to differ between OSs. Any argument\n> against fixing it up like I suggested, by using an fd from before the\n> rename?\n\nI'm unimpressed. You're speculating about the filesystem having random\ndeviations from POSIX behavior, and using that weak argument to justify a\ntotally untested technique having its own obvious portability hazards.\nWho's to say that an fsync on a file opened before a rename is going to do\nanything good after the rename? (On, eg, NFS there are obvious reasons\nwhy it might not.)\n\nAlso, I wondered why this is coming out as a PANIC. I thought originally\nthat somebody must be causing this code to run in a critical section,\nbut it looks like the real issue is just that fsync_fname() uses\ndata_sync_elevel, which is\n\nint\ndata_sync_elevel(int elevel)\n{\n\treturn data_sync_retry ? elevel : PANIC;\n}\n\nI really really don't want us doing questionably-necessary fsyncs with a\nPANIC as the result. Perhaps more to the point, the way this was coded,\nthe PANIC applies to open() failures in fsync_fname_ext() not just fsync()\nfailures; that's painting with too broad a brush isn't it?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 12:59:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-13 12:59:19 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-02-13 12:37:35 -0500, Tom Lane wrote:\n> >> Bleah. But in any case, the rename should not create a situation\n> >> in which we need to fsync the file data again.\n> \n> > Well, it's not super well defined which of either you need to make the\n> > rename durable, and it appears to differ between OSs. Any argument\n> > against fixing it up like I suggested, by using an fd from before the\n> > rename?\n> \n> I'm unimpressed. You're speculating about the filesystem having random\n> deviations from POSIX behavior, and using that weak argument to justify a\n> totally untested technique having its own obvious portability hazards.\n\nUhm, we've reproduced failures due to the lack of such fsyncs at some\npoint. And not some fringe OS, but ext4 (albeit with data=writeback).\n\nI don't think POSIX has yet figured out what they actually think is\nrequired:\nhttp://austingroupbugs.net/view.php?id=672\n\nI guess we could just ignore ENOENT in this case, that ought to be just\nas safe as using the old fd.\n\n\n> Also, I wondered why this is coming out as a PANIC. I thought originally\n> that somebody must be causing this code to run in a critical section,\n> but it looks like the real issue is just that fsync_fname() uses\n> data_sync_elevel, which is\n> \n> int\n> data_sync_elevel(int elevel)\n> {\n> \treturn data_sync_retry ? elevel : PANIC;\n> }\n> \n> I really really don't want us doing questionably-necessary fsyncs with a\n> PANIC as the result.\n\nWell, given the 'failed fsync throws dirty data away' issue, I don't\nquite see what we can do otherwise. But:\n\n\n> Perhaps more to the point, the way this was coded, the PANIC applies\n> to open() failures in fsync_fname_ext() not just fsync() failures;\n> that's painting with too broad a brush isn't it?\n\nThat indeed seems wrong. Thomas? 
I'm not quite sure how to best fix\nthis though - I guess we could rename fsync_fname_ext's elevel parameter\nto fsync_failure_elevel? It's not visible outside fd.c, so that'd not be\ntoo bad?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 10:12:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-13 12:59:19 -0500, Tom Lane wrote:\n>> Perhaps more to the point, the way this was coded, the PANIC applies\n>> to open() failures in fsync_fname_ext() not just fsync() failures;\n>> that's painting with too broad a brush isn't it?\n\n> That indeed seems wrong. Thomas? I'm not quite sure how to best fix\n> this though - I guess we could rename fsync_fname_ext's elevel parameter\n> to fsync_failure_elevel? It's not visible outside fd.c, so that'd not be\n> too bad?\n\nPerhaps fsync_fname() should pass the elevel through as-is, and\nthen fsync_fname_ext() apply the data_sync_elevel() promotion only\nto the fsync() call not the open()? I'm not sure.\n\nThe bigger picture here is that there are probably some call sites where\nPANIC on open() failure is appropriate too. So having fsync_fname[_ext]\ndeciding what to do on its own is likely to be inadequate.\n\nIf we fix this by allowing ENOENT to not be an error in this particular\ncall case, that's going to involve an fsync_fname_ext API change anyway...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 13:24:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-13 13:24:03 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-02-13 12:59:19 -0500, Tom Lane wrote:\n> >> Perhaps more to the point, the way this was coded, the PANIC applies\n> >> to open() failures in fsync_fname_ext() not just fsync() failures;\n> >> that's painting with too broad a brush isn't it?\n> \n> > That indeed seems wrong. Thomas? I'm not quite sure how to best fix\n> > this though - I guess we could rename fsync_fname_ext's elevel parameter\n> > to fsync_failure_elevel? It's not visible outside fd.c, so that'd not be\n> > too bad?\n> \n> Perhaps fsync_fname() should pass the elevel through as-is, and\n> then fsync_fname_ext() apply the data_sync_elevel() promotion only\n> to the fsync() call not the open()? I'm not sure.\n\nYea, that's probably not a bad plan. It'd address your:\n\n> The bigger picture here is that there are probably some call sites where\n> PANIC on open() failure is appropriate too. So having fsync_fname[_ext]\n> deciding what to do on its own is likely to be inadequate.\n\nSeems to me we ought to do this regardless of the bug discussed\nhere. But we'd need to be careful that we'd take the \"more severe\" error\nbetween the passed in elevel and data_sync_elevel(). Otherwise we'd end\nup downgrading errors...\n\n\n> If we fix this by allowing ENOENT to not be an error in this particular\n> call case, that's going to involve an fsync_fname_ext API change anyway...\n\nI was kinda pondering just open coding it. I am not yet convinced that\nmy idea of just using an open FD isn't the least bad approach for the\nissue at hand. What precisely is the NFS issue you're concerned about?\n\nRight now fsync_fname_ext isn't exposed outside fd.c...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 10:33:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I was kinda pondering just open coding it. I am not yet convinced that\n> my idea of just using an open FD isn't the least bad approach for the\n> issue at hand. What precisely is the NFS issue you're concerned about?\n\nI'm not sure that fsync-on-FD after the rename will work, considering that\nthe issue here is that somebody might've unlinked the file altogether\nbefore we get to doing the fsync. I don't have a hard time believing that\nthat might result in a failure report on NFS or similar. Yeah, it's\nhypothetical, but the argument that we need a repeat fsync at all seems\nequally hypothetical.\n\n> Right now fsync_fname_ext isn't exposed outside fd.c...\n\nMmm. That makes it easier to consider changing its API.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 14:11:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 8:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I was kinda pondering just open coding it. I am not yet convinced that\n> > my idea of just using an open FD isn't the least bad approach for the\n> > issue at hand. What precisely is the NFS issue you're concerned about?\n>\n> I'm not sure that fsync-on-FD after the rename will work, considering that\n> the issue here is that somebody might've unlinked the file altogether\n> before we get to doing the fsync. I don't have a hard time believing that\n> that might result in a failure report on NFS or similar. Yeah, it's\n> hypothetical, but the argument that we need a repeat fsync at all seems\n> equally hypothetical.\n>\n> > Right now fsync_fname_ext isn't exposed outside fd.c...\n>\n> Mmm. That makes it easier to consider changing its API.\n\nJust to make sure I understand: it's OK for the file not to be there\nwhen we try to fsync it by name, because a concurrent checkpoint can\nremove it, having determined that we don't need it anymore? 
In other\nwords, we really needed either missing_ok=true semantics, or to use\nthe fd we already had instead of the name?\n\nI found 3 examples of this failing with an ERROR (though not turning\nthe BF red, so nobody noticed) before the PANIC patch went in:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=nightjar&dt=2018-09-10%2020%3A54%3A21&stg=subscription-check\n2018-09-10 17:20:09.247 EDT [23287] sub1 ERROR: could not open file\n\"pg_logical/snapshots/0-161D778.snap\": No such file or directory\n2018-09-10 17:20:09.247 EDT [23285] ERROR: could not receive data\nfrom WAL stream: ERROR: could not open file\n\"pg_logical/snapshots/0-161D778.snap\": No such file or directory\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=nightjar&dt=2018-08-31%2023%3A25%3A41&stg=subscription-check\n2018-08-31 19:52:06.634 EDT [52724] sub1 ERROR: could not open file\n\"pg_logical/snapshots/0-161D718.snap\": No such file or directory\n2018-08-31 19:52:06.634 EDT [52721] ERROR: could not receive data\nfrom WAL stream: ERROR: could not open file\n\"pg_logical/snapshots/0-161D718.snap\": No such file or directory\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=nightjar&dt=2018-08-22%2021%3A49%3A18&stg=subscription-check\n2018-08-22 18:10:29.422 EDT [44208] sub1 ERROR: could not open file\n\"pg_logical/snapshots/0-161D718.snap\": No such file or directory\n2018-08-22 18:10:29.422 EDT [44206] ERROR: could not receive data\nfrom WAL stream: ERROR: could not open file\n\"pg_logical/snapshots/0-161D718.snap\": No such file or directory\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Thu, 14 Feb 2019 09:52:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Thomas Munro <thomas.munro@enterprisedb.com> writes:\n> I found 3 examples of this failing with an ERROR (though not turning\n> the BF red, so nobody noticed) before the PANIC patch went in:\n\nYeah, I suspected that had happened before with less-obvious consequences.\nNow that we know where the problem is, you could probably make it highly\nreproducible by inserting a sleep of a few msec between the rename and the\nsecond fsync.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 16:42:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 09:52:33 +1300, Thomas Munro wrote:\n> On Thu, Feb 14, 2019 at 8:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I was kinda pondering just open coding it. I am not yet convinced that\n> > > my idea of just using an open FD isn't the least bad approach for the\n> > > issue at hand. What precisely is the NFS issue you're concerned about?\n> >\n> > I'm not sure that fsync-on-FD after the rename will work, considering that\n> > the issue here is that somebody might've unlinked the file altogether\n> > before we get to doing the fsync. I don't have a hard time believing that\n> > that might result in a failure report on NFS or similar. Yeah, it's\n> > hypothetical, but the argument that we need a repeat fsync at all seems\n> > equally hypothetical.\n> >\n> > > Right now fsync_fname_ext isn't exposed outside fd.c...\n> >\n> > Mmm. That makes it easier to consider changing its API.\n> \n> Just to make sure I understand: it's OK for the file not to be there\n> when we try to fsync it by name, because a concurrent checkpoint can\n> remove it, having determined that we don't need it anymore? In other\n> words, we really needed either missing_ok=true semantics, or to use\n> the fd we already had instead of the name?\n\nI'm not yet sure that that's actually something that's supposed to\nhappen, I got to spend some time analysing how this actually\nhappens. Normally the contents of the slot should actually prevent it\nfrom being removed (as they're newer than\nReplicationSlotsComputeLogicalRestartLSN()). I kind of wonder if that's\na bug in the drop logic in newer releases.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 13:51:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "\nOn 2/13/19 1:12 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-02-13 12:59:19 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2019-02-13 12:37:35 -0500, Tom Lane wrote:\n>>>> Bleah. But in any case, the rename should not create a situation\n>>>> in which we need to fsync the file data again.\n>>> Well, it's not super well defined which of either you need to make the\n>>> rename durable, and it appears to differ between OSs. Any argument\n>>> against fixing it up like I suggested, by using an fd from before the\n>>> rename?\n>> I'm unimpressed. You're speculating about the filesystem having random\n>> deviations from POSIX behavior, and using that weak argument to justify a\n>> totally untested technique having its own obvious portability hazards.\n> Uhm, we've reproduced failures due to the lack of such fsyncs at some\n> point. And not some fringe OS, but ext4 (albeit with data=writeback).\n>\n> I don't think POSIX has yet figured out what they actually think is\n> required:\n> http://austingroupbugs.net/view.php?id=672\n>\n> I guess we could just ignore ENOENT in this case, that ought to be just\n> as safe as using the old fd.\n>\n>\n>> Also, I wondered why this is coming out as a PANIC. I thought originally\n>> that somebody must be causing this code to run in a critical section,\n>> but it looks like the real issue is just that fsync_fname() uses\n>> data_sync_elevel, which is\n>>\n>> int\n>> data_sync_elevel(int elevel)\n>> {\n>> \treturn data_sync_retry ? elevel : PANIC;\n>> }\n>>\n>> I really really don't want us doing questionably-necessary fsyncs with a\n>> PANIC as the result.\n> Well, given the 'failed fsync throws dirty data away' issue, I don't\n> quite see what we can do otherwise. 
But:\n>\n>\n>> Perhaps more to the point, the way this was coded, the PANIC applies\n>> to open() failures in fsync_fname_ext() not just fsync() failures;\n>> that's painting with too broad a brush isn't it?\n> That indeed seems wrong. Thomas? I'm not quite sure how to best fix\n> this though - I guess we could rename fsync_fname_ext's elevel parameter\n> to fsync_failure_elevel? It's not visible outside fd.c, so that'd not be\n> too bad?\n>\n\nThread seems to have gone quiet ...\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 10 Mar 2019 14:40:15 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-14 09:52:33 +1300, Thomas Munro wrote:\n>> Just to make sure I understand: it's OK for the file not to be there\n>> when we try to fsync it by name, because a concurrent checkpoint can\n>> remove it, having determined that we don't need it anymore? In other\n>> words, we really needed either missing_ok=true semantics, or to use\n>> the fd we already had instead of the name?\n\n> I'm not yet sure that that's actually something that's supposed to\n> happen, I got to spend some time analysing how this actually\n> happens. Normally the contents of the slot should actually prevent it\n> from being removed (as they're newer than\n> ReplicationSlotsComputeLogicalRestartLSN()). I kind of wonder if that's\n> a bug in the drop logic in newer releases.\n\nMy animal dromedary just reproduced this failure, which we've previously\nonly seen on nightjar.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dromedary&dt=2019-06-26%2023%3A57%3A45\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Jun 2019 21:10:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "I wrote:\n> My animal dromedary just reproduced this failure, which we've previously\n> only seen on nightjar.\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dromedary&dt=2019-06-26%2023%3A57%3A45\n\nTwice:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dromedary&dt=2019-06-28%2006%3A50%3A41\n\nSince this is a live (if old) system, not some weird qemu emulation,\nwe can no longer pretend that it might not be reachable in the field.\nI've added an open item.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Jun 2019 11:17:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 01:51:47PM -0800, Andres Freund wrote:\n> I'm not yet sure that that's actually something that's supposed to\n> happen, I got to spend some time analysing how this actually\n> happens. Normally the contents of the slot should actually prevent it\n> from being removed (as they're newer than\n> ReplicationSlotsComputeLogicalRestartLSN()). I kind of wonder if that's\n> a bug in the drop logic in newer releases.\n\nIn the same context, could it be a consequence of 9915de6c which has\nintroduced a conditional variable to control slot operations? This\ncould have exposed more easily a pre-existing race condition.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 17:04:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 05:04:35PM +0900, Michael Paquier wrote:\n>On Wed, Feb 13, 2019 at 01:51:47PM -0800, Andres Freund wrote:\n>> I'm not yet sure that that's actually something that's supposed to\n>> happen, I got to spend some time analysing how this actually\n>> happens. Normally the contents of the slot should actually prevent it\n>> from being removed (as they're newer than\n>> ReplicationSlotsComputeLogicalRestartLSN()). I kind of wonder if that's\n>> a bug in the drop logic in newer releases.\n>\n>In the same context, could it be a consequence of 9915de6c which has\n>introduced a conditional variable to control slot operations? This\n>could have exposed more easily a pre-existing race condition.\n>--\n\nThis is one of the remaining open items, and we don't seem to be moving\nforward with it :-(\n\nI'm willing to take a stab at it, but to do that I need a way to\nreproduce it. Tom, you mentioned you've managed to reproduce it in a\nqemu instance, but that it took some fiddling with qemu parameters or\nsomething. Can you share what exactly was necessary?\n\nAn observation about the issue - while we started to notice this after\nDecember, that's mostly because the PANIC patch went in shortly before.\nWe've however seen the issue before, as Thomas Munro mentioned in [1].\n\nThose reports are from August, so it's quite possible something in the\nfirst CF upset the code. And there's only a single commit in 2018-07\nthat seems related to logical decoding / snapshots [2], i.e.
f49a80c:\n\ncommit f49a80c481f74fa81407dce8e51dea6956cb64f8\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Tue Jun 26 16:38:34 2018 -0400\n\n Fix \"base\" snapshot handling in logical decoding\n\n ...\n\nThe other reason to suspect this is related is that the fix also made it\nto REL_11_STABLE at that time, and if you check the buildfarm data [3],\nyou'll see 11 fails on nightjar too, from time to time.\n\nThis means it's not a 12+ only issue, it's a live issue on 11. I don't\nknow if f49a80c is the culprit, or if it simply uncovered a pre-existing\nbug (e.g. due to timing).\n\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D0wB7vgztC5sg2nmJ-H3bnrBT5GQfhUzP%2BFfq-WT3g8VA%40mail.gmail.com\n\n[2] https://commitfest.postgresql.org/18/1650/\n\n[3] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=nightjar&br=REL_11_STABLE\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 26 Aug 2019 15:29:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I'm willing to take a stab at it, but to do that I need a way to\n> reproduce it. Tom, you mentioned you've managed to reproduce it in a\n> qemu instance, but that it took some fiddling with qemu parmeters or\n> something. Can you share what exactly was necessary?\n\nI don't recall exactly what I did anymore, and it was pretty fiddly\nanyway. Upthread I suggested\n\n>> Now that we know where the problem is, you could probably make it highly\n>> reproducible by inserting a sleep of a few msec between the rename and the\n>> second fsync.\n\nso why not try that first?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2019 11:01:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
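Tom's suggestion amounts to widening the race window between the two filesystem operations in the serialization path. The sequence under suspicion can be sketched in shell terms (this is only a schematic of the C code path with an injected delay, not the real implementation; the file names are made up):

```shell
# Schematic of the suspected window: rename into place, then fsync the
# result; an injected sleep widens any race with concurrent removal.
dir=$(mktemp -d)
printf 'data\n' > "$dir/snap.tmp"
mv "$dir/snap.tmp" "$dir/snap"   # rename into place
sleep 1                          # injected delay to widen the window
test -e "$dir/snap"              # the PANIC fires when this check fails
cat "$dir/snap"                  # prints: data
rm -r "$dir"
```

On a correct filesystem the target can never be missing here, which is why the sleep alone does not reproduce the failure everywhere.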
{
"msg_contents": "On Mon, Aug 26, 2019 at 11:01:20AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I'm willing to take a stab at it, but to do that I need a way to\n>> reproduce it. Tom, you mentioned you've managed to reproduce it in a\n>> qemu instance, but that it took some fiddling with qemu parmeters or\n>> something. Can you share what exactly was necessary?\n>\n>I don't recall exactly what I did anymore, and it was pretty fiddly\n>anyway. Upthread I suggested\n>\n>>> Now that we know where the problem is, you could probably make it highly\n>>> reproducible by inserting a sleep of a few msec between the rename and the\n>>> second fsync.\n>\n>so why not try that first?\n>\n\nAh, right. I'll give that a try.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 26 Aug 2019 22:23:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Mon, Aug 26, 2019 at 9:29 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> This is one of the remaining open items, and we don't seem to be moving\n> forward with it :-(\n\nWhy exactly is this an open item, anyway?\n\nI don't find any discussion on the thread which makes a clear argument\nthat this problem originated with v12. If it didn't, it's still a bug\nand it still ought to be fixed at some point, but it's not a\nrelease-blocker.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Sep 2019 11:54:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Aug 26, 2019 at 9:29 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> This is one of the remaining open items, and we don't seem to be moving\n>> forward with it :-(\n\n> Why exactly is this an open item, anyway?\n\nThe reason it's still here is that Andres expressed a concern that\nthere might be more than meets the eye in this. What meets the eye\nis that PANICing on file-not-found is not appropriate here, but Andres\nseemed to think that the file not being present might reflect an\nactual bug not just an expectable race condition [1].\n\nPersonally I'd be happy just to treat it as an expectable case and\nfix the code to not PANIC on file-not-found.\n\nIn either case, it probably belongs in the \"older bugs\" section;\nnightjar is showing the same failure on v11 from time to time.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20190213215147.cjbymfojf6xndr4t%40alap3.anarazel.de\n\n\n",
"msg_date": "Tue, 17 Sep 2019 12:39:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 12:39:33PM -0400, Tom Lane wrote:\n>Robert Haas <robertmhaas@gmail.com> writes:\n>> On Mon, Aug 26, 2019 at 9:29 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>>> This is one of the remaining open items, and we don't seem to be moving\n>>> forward with it :-(\n>\n>> Why exactly is this an open item, anyway?\n>\n>The reason it's still here is that Andres expressed a concern that\n>there might be more than meets the eye in this. What meets the eye\n>is that PANICing on file-not-found is not appropriate here, but Andres\n>seemed to think that the file not being present might reflect an\n>actual bug not just an expectable race condition [1].\n>\n>Personally I'd be happy just to treat it as an expectable case and\n>fix the code to not PANIC on file-not-found.\n>\n\nFWIW I agree with Andres that there probably is an actual bug. The file\nshould not just disappear like this, it's clearly unexpected so the\nPANIC does not seem entirely inappropriate.\n\nI've tried reproducing the issue on my local systems, with the extra\nsleeps between fsyncs and so on, but I haven't managed to trigger it so\nfar :-(\n\n>In either case, it probably belongs in the \"older bugs\" section;\n>nightjar is showing the same failure on v11 from time to time.\n>\n\nYes, it should be moved to the older section - it's clearly a v11 bug.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 17 Sep 2019 21:45:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 09:45:10PM +0200, Tomas Vondra wrote:\n> FWIW I agree with Andres that there probably is an actual bug. The file\n> should not just disappear like this, it's clearly unexpected so the\n> PANIC does not seem entirely inappropriate.\n\nAgreed.\n\n> I've tried reproducing the issue on my local systems, with the extra\n> sleeps between fsyncs and so on, but I haven't managed to trigger it so\n> far :-(\n\nOn my side, I have let this thing run for a couple of hours with a\npatched version to include a sleep between the rename and the sync but\nI could not reproduce it either:\n#!/bin/bash\nattempt=0\nwhile true; do\n\tattempt=$((attempt+1))\n\techo \"Attempt $attempt\"\n\tcd $HOME/postgres/src/test/recovery/\n\tPROVE_TESTS=t/006_logical_decoding.pl make check > /dev/null 2>&1\n\tERRNUM=$?\n\tif [ $ERRNUM != 0 ]; then\n\t\techo \"Failed at attempt $attempt\"\n\t\texit $ERRNUM\n\tfi\ndone\n> Yes, it should be moved to the older section - it's clearly a v11 bug.\n\nAnd agreed.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 09:58:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hello Michael,\n\nOn Wed, Sep 18, 2019 at 6:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On my side, I have let this thing run for a couple of hours with a\n> patched version to include a sleep between the rename and the sync but\n> I could not reproduce it either:\n> #!/bin/bash\n> attempt=0\n> while true; do\n> attempt=$((attempt+1))\n> echo \"Attempt $attempt\"\n> cd $HOME/postgres/src/test/recovery/\n> PROVE_TESTS=t/006_logical_decoding.pl make check > /dev/null 2>&1\n> ERRNUM=$?\n> if [ $ERRNUM != 0 ]; then\n> echo \"Failed at attempt $attempt\"\n> exit $ERRNUM\n> fi\n> done\nI think the failing test is src/test/subscription/t/010_truncate.pl.\nI've tried to reproduce the same failure using your script in OS X\n10.14 and Ubuntu 18.04.2 (Linux version 5.0.0-23-generic), but\ncouldn't reproduce the same.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:25:14 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 04:25:14PM +0530, Kuntal Ghosh wrote:\n>Hello Michael,\n>\n>On Wed, Sep 18, 2019 at 6:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On my side, I have let this thing run for a couple of hours with a\n>> patched version to include a sleep between the rename and the sync but\n>> I could not reproduce it either:\n>> #!/bin/bash\n>> attempt=0\n>> while true; do\n>> attempt=$((attempt+1))\n>> echo \"Attempt $attempt\"\n>> cd $HOME/postgres/src/test/recovery/\n>> PROVE_TESTS=t/006_logical_decoding.pl make check > /dev/null 2>&1\n>> ERRNUM=$?\n>> if [ $ERRNUM != 0 ]; then\n>> echo \"Failed at attempt $attempt\"\n>> exit $ERRNUM\n>> fi\n>> done\n>I think the failing test is src/test/subscription/t/010_truncate.pl.\n>I've tried to reproduce the same failure using your script in OS X\n>10.14 and Ubuntu 18.04.2 (Linux version 5.0.0-23-generic), but\n>couldn't reproduce the same.\n>\n\nI kinda suspect it might be just a coincidence that it fails during that\nparticular test. What likely plays a role here is a checkpoint timing\n(AFAICS that's the thing removing the file). On most systems the tests\ncomplete before any checkpoint is triggered, hence no issue.\n\nMaybe aggressively triggering checkpoints on the running cluter from\nanother session would do the trick ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 18 Sep 2019 23:58:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 11:58:08PM +0200, Tomas Vondra wrote:\n> I kinda suspect it might be just a coincidence that it fails during that\n> particular test. What likely plays a role here is a checkpoint timing\n> (AFAICS that's the thing removing the file). On most systems the tests\n> complete before any checkpoint is triggered, hence no issue.\n> \n> Maybe aggressively triggering checkpoints on the running cluter from\n> another session would do the trick ...\n\nNow that I recall, another thing I forgot to mention on this thread is\nthat I patched guc.c to reduce the minimum of checkpoint_timeout to\n1s.\n--\nMichael",
"msg_date": "Thu, 19 Sep 2019 13:23:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hello hackers,\n\nIt seems there is a pattern how the error is occurring in different\nsystems. Following are the relevant log snippets:\n\nnightjar:\nsub3 LOG: received replication command: CREATE_REPLICATION_SLOT\n\"sub3_16414_sync_16394\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\nsub3 LOG: logical decoding found consistent point at 0/160B578\nsub1 PANIC: could not open file\n\"pg_logical/snapshots/0-160B578.snap\": No such file or directory\n\ndromedary scenario 1:\nsub3_16414_sync_16399 LOG: received replication command:\nCREATE_REPLICATION_SLOT \"sub3_16414_sync_16399\" TEMPORARY LOGICAL\npgoutput USE_SNAPSHOT\nsub3_16414_sync_16399 LOG: logical decoding found consistent point at 0/15EA694\nsub2 PANIC: could not open file\n\"pg_logical/snapshots/0-15EA694.snap\": No such file or directory\n\n\ndromedary scenario 2:\nsub3_16414_sync_16399 LOG: received replication command:\nCREATE_REPLICATION_SLOT \"sub3_16414_sync_16399\" TEMPORARY LOGICAL\npgoutput USE_SNAPSHOT\nsub3_16414_sync_16399 LOG: logical decoding found consistent point at 0/15EA694\nsub1 PANIC: could not open file\n\"pg_logical/snapshots/0-15EA694.snap\": No such file or directory\n\nWhile subscription 3 is created, it eventually reaches to a consistent\nsnapshot point and prints the WAL location corresponding to it. It\nseems sub1/sub2 immediately fails to serialize the snapshot to the\n.snap file having the same WAL location.\n\nIs this helpful?\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:20:15 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 05:20:15PM +0530, Kuntal Ghosh wrote:\n> While subscription 3 is created, it eventually reaches to a consistent\n> snapshot point and prints the WAL location corresponding to it. It\n> seems sub1/sub2 immediately fails to serialize the snapshot to the\n> .snap file having the same WAL location.\n> \n> Is this helpful?\n\nIt looks like you are pointing to something here.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 10:01:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 01:23:05PM +0900, Michael Paquier wrote:\n>On Wed, Sep 18, 2019 at 11:58:08PM +0200, Tomas Vondra wrote:\n>> I kinda suspect it might be just a coincidence that it fails during that\n>> particular test. What likely plays a role here is a checkpoint timing\n>> (AFAICS that's the thing removing the file). On most systems the tests\n>> complete before any checkpoint is triggered, hence no issue.\n>>\n>> Maybe aggressively triggering checkpoints on the running cluter from\n>> another session would do the trick ...\n>\n>Now that I recall, another thing I forgot to mention on this thread is\n>that I patched guc.c to reduce the minimum of checkpoint_timeout to\n>1s.\n\nBut even with that change you haven't managed to reproduce the issue,\nright? Or am I misunderstanding?\n\nregarss\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 20 Sep 2019 17:30:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-19 17:20:15 +0530, Kuntal Ghosh wrote:\n> It seems there is a pattern how the error is occurring in different\n> systems. Following are the relevant log snippets:\n> \n> nightjar:\n> sub3 LOG: received replication command: CREATE_REPLICATION_SLOT\n> \"sub3_16414_sync_16394\" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT\n> sub3 LOG: logical decoding found consistent point at 0/160B578\n> sub1 PANIC: could not open file\n> \"pg_logical/snapshots/0-160B578.snap\": No such file or directory\n> \n> dromedary scenario 1:\n> sub3_16414_sync_16399 LOG: received replication command:\n> CREATE_REPLICATION_SLOT \"sub3_16414_sync_16399\" TEMPORARY LOGICAL\n> pgoutput USE_SNAPSHOT\n> sub3_16414_sync_16399 LOG: logical decoding found consistent point at 0/15EA694\n> sub2 PANIC: could not open file\n> \"pg_logical/snapshots/0-15EA694.snap\": No such file or directory\n> \n> \n> dromedary scenario 2:\n> sub3_16414_sync_16399 LOG: received replication command:\n> CREATE_REPLICATION_SLOT \"sub3_16414_sync_16399\" TEMPORARY LOGICAL\n> pgoutput USE_SNAPSHOT\n> sub3_16414_sync_16399 LOG: logical decoding found consistent point at 0/15EA694\n> sub1 PANIC: could not open file\n> \"pg_logical/snapshots/0-15EA694.snap\": No such file or directory\n> \n> While subscription 3 is created, it eventually reaches to a consistent\n> snapshot point and prints the WAL location corresponding to it. It\n> seems sub1/sub2 immediately fails to serialize the snapshot to the\n> .snap file having the same WAL location.\n\nSince now a number of people (I tried as well), failed to reproduce this\nlocally, I propose that we increase the log-level during this test on\nmaster. And perhaps expand the set of debugging information. With the\nhope that the additional information on the cases encountered on the bf\nhelps us build a reproducer or, even better, diagnose the issue\ndirectly. If people agree, I'll come up with a patch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Sep 2019 10:08:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Since now a number of people (I tried as well), failed to reproduce this\n> locally, I propose that we increase the log-level during this test on\n> master. And perhaps expand the set of debugging information. With the\n> hope that the additional information on the cases encountered on the bf\n> helps us build a reproducer or, even better, diagnose the issue\n> directly. If people agree, I'll come up with a patch.\n\nI recreated my freebsd-9-under-qemu setup and I can still reproduce\nthe problem, though not with high reliability (order of 1 time in 10).\nAnything particular you want logged?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 16:25:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-20 16:25:21 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Since now a number of people (I tried as well), failed to reproduce this\n> > locally, I propose that we increase the log-level during this test on\n> > master. And perhaps expand the set of debugging information. With the\n> > hope that the additional information on the cases encountered on the bf\n> > helps us build a reproducer or, even better, diagnose the issue\n> > directly. If people agree, I'll come up with a patch.\n> \n> I recreated my freebsd-9-under-qemu setup and I can still reproduce\n> the problem, though not with high reliability (order of 1 time in 10).\n> Anything particular you want logged?\n\nA DEBUG2 log would help a fair bit, because it'd log some information\nabout what changes the \"horizons\" determining when data may be removed.\n\nPerhaps with the additional elogs attached? I lowered some messages to\nDEBUG2 so we don't have to suffer the noise of the ipc.c DEBUG3\nmessages.\n\nIf I use a TEMP_CONFIG setting log_min_messages=DEBUG2 with the patches\napplied, the subscription tests still pass.\n\nI hope they still fail on your setup, even though the increased logging\nvolume probably changes timing somewhat.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 20 Sep 2019 14:26:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
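The TEMP_CONFIG approach Andres describes can be sketched as a small shell wrapper. The temp file path and the extra `checkpoint_timeout` value here are assumptions for illustration, not what was actually used:

```shell
# Sketch: build an extra-config file for the test run, as discussed
# above. The settings shown are illustrative assumptions.
conf=$(mktemp)
cat > "$conf" <<'EOF'
log_min_messages = DEBUG2
checkpoint_timeout = 30s
EOF
grep -c '=' "$conf"   # prints: 2
# Then (commented, needs a source tree):
# TEMP_CONFIG="$conf" make -C src/test/subscription check
```

The buildfarm and TAP tests pick up TEMP_CONFIG as extra postgresql.conf content, so no source change is needed just to raise the log level.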
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-09-20 16:25:21 -0400, Tom Lane wrote:\n>> I recreated my freebsd-9-under-qemu setup and I can still reproduce\n>> the problem, though not with high reliability (order of 1 time in 10).\n>> Anything particular you want logged?\n\n> A DEBUG2 log would help a fair bit, because it'd log some information\n> about what changes the \"horizons\" determining when data may be removed.\n\nActually, what I did was as attached [1], and I am getting traces like\n[2]. The problem seems to occur only when there are two or three\nprocesses concurrently creating the same snapshot file. It's not\nobvious from the debug trace, but the snapshot file *does* exist\nafter the music stops.\n\nIt is very hard to look at this trace and conclude anything other\nthan \"rename(2) is broken, it's not atomic\". Nothing in our code\nhas deleted the file: no checkpoint has started, nor do we see\nthe DEBUG1 output that CheckPointSnapBuild ought to produce.\nBut fsync_fname momentarily can't see it (and then later another\nprocess does see it).\n\nIt is now apparent why we're only seeing this on specific ancient\nplatforms. I looked around for info about rename(2) not being\natomic, and I found this info about FreeBSD:\n\nhttps://bugs.freebsd.org/bugzilla/show_bug.cgi?id=94849\n\nThe reported symptom there isn't quite the same, so probably there\nis another issue, but there is plenty of reason to be suspicious\nthat UFS rename(2) is buggy in this release. As for dromedary's\nancient version of macOS, Apple is exceedinly untransparent about\ntheir bugs, but I found\n\nhttp://www.weirdnet.nl/apple/rename.html\n\nIn short, what we got here is OS bugs that have probably been\nresolved years ago.\n\nThe question is what to do next. Should we just retire these\nspecific buildfarm critters, or do we want to push ahead with\ngetting rid of the PANIC here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 17:49:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
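The property the code relies on, and which these old kernels apparently violate, is that rename(2) atomically replaces the target: a concurrent observer should see either the old file or the new one, never a missing path. A minimal shell illustration of the expected behaviour (all paths are made up):

```shell
# Expected rename(2) semantics: after mv, the target always exists and
# holds exactly one of the two versions, never neither.
dir=$(mktemp -d)
printf 'old\n' > "$dir/target"
printf 'new\n' > "$dir/target.tmp"
mv "$dir/target.tmp" "$dir/target"   # atomic replace on a correct OS
test -f "$dir/target"                # must never fail, even mid-rename
cat "$dir/target"                    # prints: new
rm -r "$dir"
```

The buildfarm failures correspond to the `test -f` step (fsync_fname's open) transiently failing, which POSIX does not permit.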
{
"msg_contents": "Sigh, forgot about attaching the attachments ...\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 20 Sep 2019 17:51:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-20 17:49:27 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-09-20 16:25:21 -0400, Tom Lane wrote:\n> >> I recreated my freebsd-9-under-qemu setup and I can still reproduce\n> >> the problem, though not with high reliability (order of 1 time in 10).\n> >> Anything particular you want logged?\n> \n> > A DEBUG2 log would help a fair bit, because it'd log some information\n> > about what changes the \"horizons\" determining when data may be removed.\n> \n> Actually, what I did was as attached [1], and I am getting traces like\n> [2]. The problem seems to occur only when there are two or three\n> processes concurrently creating the same snapshot file. It's not\n> obvious from the debug trace, but the snapshot file *does* exist\n> after the music stops.\n> \n> It is very hard to look at this trace and conclude anything other\n> than \"rename(2) is broken, it's not atomic\". Nothing in our code\n> has deleted the file: no checkpoint has started, nor do we see\n> the DEBUG1 output that CheckPointSnapBuild ought to produce.\n> But fsync_fname momentarily can't see it (and then later another\n> process does see it).\n\nYikes. No wondering most of us weren't able to reproduce the\nproblem. And that staring at our code didn't point to a bug.\n\nNice catch.\n\n\n> In short, what we got here is OS bugs that have probably been\n> resolved years ago.\n> \n> The question is what to do next. Should we just retire these\n> specific buildfarm critters, or do we want to push ahead with\n> getting rid of the PANIC here?\n\nHm. Given that the fsync failing is actually an issue, I'm somewhat\ndisinclined to remove the PANIC. It's not like only raising an ERROR\nactually solves anything, except making the problem even harder to\ndiagnose? 
Or that we otherwise are ok, with renames not being atomic?\n\nSo I'd be tentatively in favor of either upgrading, replacing the\nfilesystem (perhaps ZFS isn't buggy in the same way?), or retiring\nthose animals.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Sep 2019 15:03:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "On 2019-Sep-20, Tom Lane wrote:\n\n> Actually, what I did was as attached [1], and I am getting traces like\n> [2]. The problem seems to occur only when there are two or three\n> processes concurrently creating the same snapshot file. It's not\n> obvious from the debug trace, but the snapshot file *does* exist\n> after the music stops.\n\nUh .. I didn't think it was possible that we would build the same\nsnapshot file more than once. Isn't that a waste of time anyway? Maybe\nwe can fix the symptom by just not doing that in the first place?\nI don't have a strategy to do that, but seems worth considering before\nretiring the bf animals.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Sep 2019 19:06:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Hi, \n\nOn September 20, 2019 3:06:20 PM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>On 2019-Sep-20, Tom Lane wrote:\n>\n>> Actually, what I did was as attached [1], and I am getting traces\n>like\n>> [2]. The problem seems to occur only when there are two or three\n>> processes concurrently creating the same snapshot file. It's not\n>> obvious from the debug trace, but the snapshot file *does* exist\n>> after the music stops.\n>\n>Uh .. I didn't think it was possible that we would build the same\n>snapshot file more than once. Isn't that a waste of time anyway? \n>Maybe\n>we can fix the symptom by just not doing that in the first place?\n>I don't have a strategy to do that, but seems worth considering before\n>retiring the bf animals.\n\nWe try to avoid it, but the check is racy. Check comments in SnapBuildSerialize. We could introduce locking etc to avoid that, but that seems overkill, given that were really just dealing with a broken os.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Fri, 20 Sep 2019 15:11:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Uh .. I didn't think it was possible that we would build the same\n> snapshot file more than once. Isn't that a waste of time anyway? Maybe\n> we can fix the symptom by just not doing that in the first place?\n> I don't have a strategy to do that, but seems worth considering before\n> retiring the bf animals.\n\nThe comment near the head of SnapBuildSerialize says\n\n * there is an obvious race condition here between the time we stat(2) the\n * file and us writing the file. But we rename the file into place\n * atomically and all files created need to contain the same data anyway,\n * so this is perfectly fine, although a bit of a resource waste. Locking\n * seems like pointless complication.\n\nwhich seems like a reasonable argument. Also, this is hardly the only\nplace where we expect rename(2) to be atomic. So I tend to agree with\nAndres that we should consider OSes with such a bug to be unsupported.\n\nDromedary is running the last release of macOS that supports 32-bit\nhardware, so if we decide to kick that to the curb, I'd either shut\ndown the box or put some newer Linux or BSD variant on it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 18:17:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
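The serialize pattern the SnapBuildSerialize comment describes, reduced to a shell sketch (this mirrors only the shape of the logic, not the actual C code, and the snapshot file name is made up): stat the target, and if it is missing, write a per-process temp file and rename it into place. Two racing writers are harmless as long as the rename is atomic and both write identical data.

```shell
# Hedged sketch of the stat-then-rename pattern discussed above;
# "0-160B578.snap" is a hypothetical snapshot file name.
dir=$(mktemp -d)
target="$dir/0-160B578.snap"
serialize() {
    [ -e "$target" ] && return 0        # someone else already wrote it
    tmp="$target.tmp.$$"
    printf 'snapshot-data\n' > "$tmp"   # all writers produce identical data
    mv "$tmp" "$target"                 # atomic rename into place
}
serialize
serialize                               # second call is a no-op
cat "$target"                           # prints: snapshot-data
rm -r "$dir"
```

The stat check is only an optimization; correctness rests entirely on the rename being atomic, which is exactly the assumption the old kernels break.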
{
"msg_contents": "On Fri, Sep 20, 2019 at 05:30:48PM +0200, Tomas Vondra wrote:\n> But even with that change you haven't managed to reproduce the issue,\n> right? Or am I misunderstanding?\n\nNo, I was not able to see it on my laptop running Debian.\n--\nMichael",
"msg_date": "Sat, 21 Sep 2019 11:16:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "\nOn 9/20/19 6:17 PM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> Uh .. I didn't think it was possible that we would build the same\n>> snapshot file more than once. Isn't that a waste of time anyway? Maybe\n>> we can fix the symptom by just not doing that in the first place?\n>> I don't have a strategy to do that, but seems worth considering before\n>> retiring the bf animals.\n> The comment near the head of SnapBuildSerialize says\n>\n> * there is an obvious race condition here between the time we stat(2) the\n> * file and us writing the file. But we rename the file into place\n> * atomically and all files created need to contain the same data anyway,\n> * so this is perfectly fine, although a bit of a resource waste. Locking\n> * seems like pointless complication.\n>\n> which seems like a reasonable argument. Also, this is hardly the only\n> place where we expect rename(2) to be atomic. So I tend to agree with\n> Andres that we should consider OSes with such a bug to be unsupported.\n>\n> Dromedary is running the last release of macOS that supports 32-bit\n> hardware, so if we decide to kick that to the curb, I'd either shut\n> down the box or put some newer Linux or BSD variant on it.\n>\n> \t\t\t\n\n\n\nWell, nightjar is on FBSD 9.0 which is oldish. I can replace it before\nlong with an 11-stable instance if that's appropriate.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 23 Sep 2019 16:12:45 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 9/20/19 6:17 PM, Tom Lane wrote:\n>> Dromedary is running the last release of macOS that supports 32-bit\n>> hardware, so if we decide to kick that to the curb, I'd either shut\n>> down the box or put some newer Linux or BSD variant on it.\n\n> Well, nightjar is on FBSD 9.0 which is oldish. I can replace it before\n> long with an 11-stable instance if that's appropriate.\n\nFYI, I've installed FreeBSD 12.0/i386 on that machine and it's\nnow running buildfarm member florican, using clang with -msse2\n(a configuration we had no buildfarm coverage of before, AFAIK).\n\nI can still boot the macOS installation if anyone is interested\nin specific tests in that environment, but I don't intend to run\ndromedary on a regular basis anymore.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Sep 2019 21:40:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: subscriptionCheck failures on nightjar"
}
] |
[
{
"msg_contents": "What is the scope of logical replication if I cannot make recovery from\npg_basebackup?\n\nin this example:\nhttps://gist.github.com/vadv/e55fca418d6a14da71f01a95da493fae I get\nlogically unsynchronized data at the subscriber and publisher, but I'm not\ntold anything about it in the log.\n\nDo I understand correctly that logical replication and recovery from\npg_basebackup are incompatible things?\n\nWhat is the scope of logical replication if I cannot make recovery from pg_basebackup?in this example: https://gist.github.com/vadv/e55fca418d6a14da71f01a95da493fae I get logically unsynchronized data at the subscriber and publisher, but I'm not told anything about it in the log. Do I understand correctly that logical replication and recovery from pg_basebackup are incompatible things?",
"msg_date": "Mon, 11 Feb 2019 17:39:49 +0300",
"msg_from": "Dmitry Vasiliev <dmitry.vasiliev@coins.ph>",
"msg_from_op": true,
"msg_subject": "Logical replication and restore from pg_basebackup"
},
{
"msg_contents": "At the start of the logical subscriber is not informed that it is connected\nto the logical replication slot with a non-consistent state.\nWhether I understood correctly that, postgresql deceives the user and data\nin logical replication cannot be trusted.\n\n2019-02-11 17:22:20.103 MSK [71156] LOG: database system was shut down at\n2019-02-11 17:22:19 MSK\n2019-02-11 17:22:20.108 MSK [71154] LOG: database system is ready to\naccept connections\n2019-02-11 17:22:20.243 MSK [71171] LOG: logical replication apply worker\nfor subscription \"sub_pgbench\" has started\n2019-02-11 17:22:20.260 MSK [71174] LOG: logical replication table\nsynchronization worker for subscription \"sub_pgbench\", table\n\"pgbench_accounts\" has started\n2019-02-11 17:22:23.422 MSK [71174] LOG: logical replication table\nsynchronization worker for subscription \"sub_pgbench\", table\n\"pgbench_accounts\" has finished\n2019-02-11 17:22:27.807 MSK [71269] LOG: database system was interrupted;\nlast known up at 2019-02-11 17:22:24 MSK\n2019-02-11 17:22:27.824 MSK [71269] LOG: recovered replication state of\nnode 1 to 0/0\n2019-02-11 17:22:27.825 MSK [71269] LOG: redo starts at 0/C000028\n2019-02-11 17:22:27.825 MSK [71269] LOG: consistent recovery state reached\nat 0/C0000F8\n2019-02-11 17:22:27.826 MSK [71269] LOG: redo done at 0/C0000F8\n2019-02-11 17:22:27.883 MSK [71267] LOG: database system is ready to\naccept connections\n2019-02-11 17:22:27.893 MSK [71276] LOG: logical replication apply worker\nfor subscription \"sub_pgbench\" has started\n\nOn Mon, Feb 11, 2019 at 5:39 PM Dmitry Vasiliev <dmitry.vasiliev@coins.ph>\nwrote:\n\n> What is the scope of logical replication if I cannot make recovery from\n> pg_basebackup?\n>\n> in this example:\n> https://gist.github.com/vadv/e55fca418d6a14da71f01a95da493fae I get\n> logically unsynchronized data at the subscriber and publisher, but I'm not\n> told anything about it in the log.\n>\n> Do I understand correctly that logical 
replication and recovery from\n> pg_basebackup are incompatible things?\n>",
"msg_date": "Mon, 11 Feb 2019 18:36:00 +0300",
"msg_from": "Dmitry Vasiliev <dmitry.vasiliev@coins.ph>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication and restore from pg_basebackup"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 05:39:49PM +0300, Dmitry Vasiliev wrote:\n> Do I understand correctly that logical replication and recovery from\n> pg_basebackup are incompatible things?\n\nWhen using physical streaming replication, it is mandatory to have\nnodes with a system ID matching, meaning that all nodes have been\ncreated from the same source instance. With logical replication,\nnodes are separate entities from this point of view, hence you may be\nable to make a logical copy from a base backup, however this is not\nreally necessary as changes are exchanged in a logical shape.\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 08:46:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication and restore from pg_basebackup"
},
{
"msg_contents": "Hi Dmitry,\n\nOn 11.02.2019 17:39, Dmitry Vasiliev wrote:\n> What is the scope of logical replication if I cannot make recovery \n> from pg_basebackup?\n\n\nNo, you can, but there are some things to keep in mind:\n\n1) I could be wrong, but usage of pgbench in such a test seems to be a \nbad idea, since it drops and creates tables from the scratch, when -i is \npassed. However, if I recall it correctly, pub/sub slots use OIDs of \nrelations, so I expect that you should get only initial sync data on \nreplica and last pgbench results on master.\n\n2) Next, 'srsubstate' check works only for initial sync. After that you \nshould poll master's replication slot lsn for 'pg_current_wal_lsn() <= \nreplay_lsn'.\n\nPlease, find attached a slightly modified version of your test (and gist \n[1]), which works just fine. You should replace %username% with your \ncurrent username, since I did not run it as postgres user.\n\n[1] https://gist.github.com/ololobus/a8a11f11eb67dfa1b6a95bff5e8f0096\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 12 Feb 2019 15:22:04 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication and restore from pg_basebackup"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nGrigory noticed that one of our utilities has very slow performance when \nxlogreader reads zlib archives. We found out that xlogreader sometimes \nreads a WAL file block twice.\n\nzlib has slow performance when you read an archive not in sequential \norder. I think reading a block twice in same position isn't sequential, \nbecause gzread() moves current position forward and next call gzseek() \nto the same position moves it back.\n\nIt seems that the attached patch solves the issue. I think when reqLen \n== state->readLen the requested block already is in the xlogreader's buffer.\n\nWhat do you think?\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 11 Feb 2019 19:25:39 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "Hm, looks like it could speed up PostgreSQL recovery, but is it safe?\n\n\nOn 02/11/2019 07:25 PM, Arthur Zakirov wrote:\n> Hello hackers,\n>\n> Grigory noticed that one of our utilities has very slow performance \n> when xlogreader reads zlib archives. We found out that xlogreader \n> sometimes reads a WAL file block twice.\n>\n> zlib has slow performance when you read an archive not in sequential \n> order. I think reading a block twice in same position isn't \n> sequential, because gzread() moves current position forward and next \n> call gzseek() to the same position moves it back.\n>\n> It seems that the attached patch solves the issue. I think when reqLen \n> == state->readLen the requested block already is in the xlogreader's \n> buffer.\n>\n> What do you think?\n>\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 11 Feb 2019 19:32:59 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On Mon, Feb 11, 2019 at 07:32:59PM +0300, Grigory Smolkin wrote:\n> Hm, looks like it could speed up PostgreSQL recovery, but is it\n> safe?\n\n(Please avoid top-posting.)\n\n> On 02/11/2019 07:25 PM, Arthur Zakirov wrote:\n>> Grigory noticed that one of our utilities has very slow performance when\n>> xlogreader reads zlib archives. We found out that xlogreader sometimes\n>> reads a WAL file block twice.\n>> \n>> What do you think?\n\nI think that such things, even if they look simple, need a careful\nlookup, and I have not looked at the proposal yet. Could you add it\nto the next commit fest so as we don't lose track of it?\nhttps://commitfest.postgresql.org/22/\n--\nMichael",
"msg_date": "Tue, 12 Feb 2019 13:23:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On 12.02.2019 07:23, Michael Paquier wrote:\n>> On 02/11/2019 07:25 PM, Arthur Zakirov wrote:\n>>> Grigory noticed that one of our utilities has very slow performance when\n>>> xlogreader reads zlib archives. We found out that xlogreader sometimes\n>>> reads a WAL file block twice.\n>>>\n>>> What do you think?\n> \n> I think that such things, even if they look simple, need a careful\n> lookup, and I have not looked at the proposal yet. Could you add it\n> to the next commit fest so as we don't lose track of it?\n> https://commitfest.postgresql.org/22/\n\nOf course. Agree, it may be a non trivial case. Added as a bug fix:\nhttps://commitfest.postgresql.org/22/1994/\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Tue, 12 Feb 2019 11:44:14 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "\n\nOn 11.02.2019 21:25, Arthur Zakirov wrote:\n> Hello hackers,\n> \n> Grigory noticed that one of our utilities has very slow performance when \n> xlogreader reads zlib archives. We found out that xlogreader sometimes \n> reads a WAL file block twice.\n> \n> zlib has slow performance when you read an archive not in sequential \n> order. I think reading a block twice in same position isn't sequential, \n> because gzread() moves current position forward and next call gzseek() \n> to the same position moves it back.\n> \n> It seems that the attached patch solves the issue. I think when reqLen \n> == state->readLen the requested block already is in the xlogreader's \n> buffer.\n> \n> What do you think?\nI looked at the history of the code changes:\n\n---------------------------------------------------------------\n7fcbf6a405f (Alvaro Herrera 2013-01-16 16:12:53 -0300 539) \nreqLen < state->readLen)\n\n1bb2558046c (Heikki Linnakangas 2010-01-27 15:27:51 +0000 9349) \n targetPageOff == readOff && targetRecOff < readLen)\n\neaef111396e (Tom Lane 2006-04-03 23:35:05 +0000 3842)\nlen = XLOG_BLCKSZ - RecPtr->xrecoff % XLOG_BLCKSZ;\n4d14fe0048c (Tom Lane 2001-03-13 01:17:06 +0000 3843)\nif (total_len > len) \n---------------------------------------------------------------\n\nIn the original code of Tom Lane, condition (total_len > len) caused a \npage reread from disk. As I understand it, this is equivalent to your \nproposal.\nTh code line in commit 1bb2558046c seems tantamount to the corresponding \nline in commit 7fcbf6a405f but have another semantics: the targetPageOff \nvalue can't be more or equal XLOG_BLCKSZ, but the reqLen value can be. \nIt may be a reason of appearance of possible mistake, introduced by \ncommit 7fcbf6a405f.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Tue, 12 Feb 2019 22:47:32 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On 12.02.2019 20:47, Andrey Lepikhov wrote:\n> I looked at the history of the code changes:\n> \n> ---------------------------------------------------------------\n> 7fcbf6a405f (Alvaro Herrera 2013-01-16 16:12:53 -0300 539) reqLen < \n> state->readLen)\n> \n> 1bb2558046c (Heikki Linnakangas 2010-01-27 15:27:51 +0000 9349) \n> targetPageOff == readOff && targetRecOff < readLen)\n> \n> eaef111396e (Tom Lane 2006-04-03 23:35:05 +0000 3842)\n> len = XLOG_BLCKSZ - RecPtr->xrecoff % XLOG_BLCKSZ;\n> 4d14fe0048c (Tom Lane 2001-03-13 01:17:06 +0000 3843)\n> if (total_len > len) \n> ---------------------------------------------------------------\n> \n> In the original code of Tom Lane, condition (total_len > len) caused a \n> page reread from disk. As I understand it, this is equivalent to your \n> proposal.\n> Th code line in commit 1bb2558046c seems tantamount to the corresponding \n> line in commit 7fcbf6a405f but have another semantics: the targetPageOff \n> value can't be more or equal XLOG_BLCKSZ, but the reqLen value can be. \n> It may be a reason of appearance of possible mistake, introduced by \n> commit 7fcbf6a405f.\n\nThank you for your research. Indeed, it makes sense now.\n\nIn my case after reading a page both `reqLen` and `state->readLen` equal \nto XLOG_BLCKSZ. This leads to a page reread, since `pageptr` is the same \nas the previous read. But `targetRecOff` is different in the second case \nbecause we want to read next record, which probably doesn't fit into the \npage wholly (that's why `reqLen` is equal to XLOG_BLCKSZ).\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Wed, 13 Feb 2019 14:17:41 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 11:44:14AM +0300, Arthur Zakirov wrote:\n> Of course. Agree, it may be a non trivial case. Added as a bug fix:\n> https://commitfest.postgresql.org/22/1994/\n\nI have been looking at the patch, and I agree that the current coding\nis a bit crazy. If the wanted data has already been read, it makes\nlittle sense to require reading it again if the size requested by the\ncaller of ReadPageInternal() exactly equals what has been read\nalready, and that's what the code is doing.\n\nNow I don't actually agree that this qualifies as a bug fix. As\nthings stand, a page may finish by being more than once if what has\nbeen read previously equals what is requested, however this does not\nprevent the code to work correctly. The performance gain is also\nheavily dependent on the callback reading a page and the way the WAL\nreader is used. How do you actually read WAL pages in your own\nplugin with compressed data? It begins by reading a full page once,\nthen it moves on to a per-record read after making sure that the page\nhas been read?\n--\nMichael",
"msg_date": "Thu, 14 Feb 2019 15:51:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On 14.02.2019 09:51, Michael Paquier wrote:\n> Now I don't actually agree that this qualifies as a bug fix. As\n> things stand, a page may finish by being more than once if what has\n> been read previously equals what is requested, however this does not\n> prevent the code to work correctly. The performance gain is also\n> heavily dependent on the callback reading a page and the way the WAL\n> reader is used. How do you actually read WAL pages in your own\n> plugin with compressed data? It begins by reading a full page once,\n> then it moves on to a per-record read after making sure that the page\n> has been read?\n\nYes, an application reads WAL pages wholly at a time. It is done within \nSimpleXLogPageRead() (it is a read_page callback passed to \nXLogReaderAllocate()). It returns XLOG_BLCKSZ.\n\nHere is the part of the code, not sure that it will be useful though:\n\nSimpleXLogPageRead(...)\n{\n ...\n targetPageOff = targetPagePtr % private_data->xlog_seg_size;\n ...\n if (gzseek(private_data->gz_xlogfile, (z_off_t) targetPageOff,\n SEEK_SET) == -1)\n ...\n if (gzread(private_data->gz_xlogfile, readBuf, XLOG_BLCKSZ) !=\n XLOG_BLCKSZ)\n ...\n return XLOG_BLCKSZ;\n}\n\nSo we read whole page with size XLOG_BLCKSZ. The full code:\nhttps://github.com/postgrespro/pg_probackup/blob/c052651b8c8864733bcabbc2660c387b792229d8/src/parsexlog.c#L1074\n\nHere is the little optimization I made. Mainly I just add a buffer to \nstore previous read page:\nhttps://github.com/postgrespro/pg_probackup/blob/c052651b8c8864733bcabbc2660c387b792229d8/src/parsexlog.c#L1046\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Thu, 14 Feb 2019 11:20:56 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 11:20:56AM +0300, Arthur Zakirov wrote:\n> So we read whole page with size XLOG_BLCKSZ. The full code:\n> https://github.com/postgrespro/pg_probackup/blob/c052651b8c8864733bcabbc2660c387b792229d8/src/parsexlog.c#L1074\n> \n> Here is the little optimization I made. Mainly I just add a buffer to store\n> previous read page:\n> https://github.com/postgrespro/pg_probackup/blob/c052651b8c8864733bcabbc2660c387b792229d8/src/parsexlog.c#L1046\n\nThanks, I see what you have done. I cannot comment if your shortcut\nis actually fully correct based on my knowledge of this code, but\nthings cannot be in the best conditions without having the WAL reader\nhandle properly the limits. So I am planning to commit what you\npropose after an extra pass on it in the next couple of days or so.\n--\nMichael",
"msg_date": "Fri, 15 Feb 2019 08:06:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 08:06:16AM +0900, Michael Paquier wrote:\n> Thanks, I see what you have done. I cannot comment if your shortcut\n> is actually fully correct based on my knowledge of this code, but\n> things cannot be in the best conditions without having the WAL reader\n> handle properly the limits. So I am planning to commit what you\n> propose after an extra pass on it in the next couple of days or so.\n\nAnd done, after doing and extra pass, doing more testing using by own\nplugins, pg_waldump and more fancy stuff with a primary/standby and\npgbench.\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 10:09:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
},
{
"msg_contents": "On 18.02.2019 04:09, Michael Paquier wrote:\n> And done, after doing and extra pass, doing more testing using by own\n> plugins, pg_waldump and more fancy stuff with a primary/standby and\n> pgbench.\n\nThank you Michael.\n\n-- \nArthur Zakirov\nPostgres Professional: http://www.postgrespro.com\nRussian Postgres Company\n\n",
"msg_date": "Mon, 18 Feb 2019 11:23:46 +0300",
"msg_from": "Arthur Zakirov <a.zakirov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] xlogreader: do not read a file block twice"
}
] |
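The optimization discussed in the thread above — remembering the previously read WAL page so that a repeated request for the same offset does not gzseek() backwards and gzread() again — can be sketched outside of PostgreSQL. The following is a minimal illustration only, with hypothetical names; the expensive positioned read is simulated with a counter instead of real zlib calls.

```c
#include <assert.h>
#include <string.h>

#define XLOG_BLCKSZ 8192

/*
 * Cache of the last page read.  Re-reading a block at the same position
 * is what defeats zlib's sequential-read fast path, so serve repeated
 * requests from this buffer instead.
 */
static char cached_page[XLOG_BLCKSZ];
static long cached_off = -1;
static int physical_reads;      /* counts simulated gzseek+gzread calls */
static char buf[XLOG_BLCKSZ];   /* a caller's buffer, for the usage below */

static void slow_read(long off, char *dst)
{
    physical_reads++;           /* stands in for gzseek() + gzread() */
    memset(dst, (int) (off & 0xff), XLOG_BLCKSZ);
}

/* Shaped like a read_page callback: fills readBuf, returns bytes read. */
static int read_page(long target_off, char *readBuf)
{
    if (target_off != cached_off)
    {
        slow_read(target_off, cached_page);
        cached_off = target_off;
    }
    memcpy(readBuf, cached_page, XLOG_BLCKSZ);
    return XLOG_BLCKSZ;
}
```

Requesting the same page twice performs only one physical read, which mirrors the buffer Arthur added around pg_probackup's SimpleXLogPageRead().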
[
{
"msg_contents": "Hello,\n\nThe attached patch speeds up transaction completion when any prior transaction accessed many relations in the same session.\n\nThe transaction records its own acquired lock information in the LOCALLOCK structure (a pair of lock object and lock mode). It stores LOCALLOCKs in a hash table in its local memory. The hash table automatically expands when the transaction acquires many relations. The hash table doesn't shrink. When the transaction commits or aborts, it scans the hash table to find LOCALLOCKs to release locks.\n\nThe problem is that once some transaction accesses many relations, even subsequent transactions in the same session that only access a few relations take unreasonably long time to complete, because it has to scan the expanded hash table.\n\nThe attached patch links LOCALLOCKS to PGPROC, so that releasing locks should only scan the list instead of the hash table. The hash table is kept because some functions want to find a particular LOCALLOCK quickly based on its hash value.\n\nThis problem was uncovered while evaluating partitioning performance. When the application PREPAREs a statement once and then EXECUTE-COMMIT repeatedly, the server creates a generic plan on the 6th EXECUTE. Unfortunately, creation of the generic plan of UPDATE/DELETE currently accesses all partitions of the target table (this itself needs improvement), expanding the LOCALLOCK hash table. As a result, 7th and later EXECUTEs get slower.\n\nImai-san confirmed performance improvement with this patch:\n\nhttps://commitfest.postgresql.org/22/1993/\n\n\nRegards\nTakayuki Tsunakawa",
"msg_date": "Tue, 12 Feb 2019 06:33:00 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
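The idea in the message above — releasing locks by walking a list of just the locks actually held, instead of scanning a hash table bloated by an earlier transaction — can be sketched as follows. This is an illustrative stand-in, not the patch itself: the "hash table" is a plain array indexed directly by tag, a singly linked list replaces the dlist, and the list header is a static variable rather than a PGPROC field.

```c
#include <assert.h>
#include <stddef.h>

typedef struct LocalLock
{
    int tag;                    /* simplified lock identity */
    long long nLocks;
    struct LocalLock *next;     /* link among locks held this xact */
} LocalLock;

#define TABLE_CAP 4096          /* stand-in for an expanded dynahash */
static LocalLock table[TABLE_CAP];
static LocalLock *held_head;    /* list of held locks (patch: in PGPROC) */

static LocalLock *acquire(int tag)
{
    LocalLock *ll = &table[tag];    /* sketch: tag doubles as the slot */

    if (ll->nLocks++ == 0)
    {
        ll->tag = tag;
        ll->next = held_head;       /* first acquisition: add to list */
        held_head = ll;
    }
    return ll;
}

/* Release everything in O(locks held), independent of TABLE_CAP. */
static int release_all(void)
{
    int released = 0;
    LocalLock *ll = held_head;

    while (ll != NULL)
    {
        LocalLock *next = ll->next;
        ll->nLocks = 0;
        ll->next = NULL;
        released++;
        ll = next;
    }
    held_head = NULL;
    return released;
}
```

A transaction touching two relations releases exactly two entries at commit, no matter how large the table grew in earlier transactions — the point of the proposal.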
{
"msg_contents": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com> writes:\n> The attached patch speeds up transaction completion when any prior transaction accessed many relations in the same session.\n\nHm. Putting a list header for a purely-local data structure into shared\nmemory seems quite ugly. Isn't there a better place to keep that?\n\nDo we really want a dlist here at all? I'm concerned that bloating\nLOCALLOCK will cost us when there are many locks involved. This patch\nincreases the size of LOCALLOCK by 25% if I counted right, which does\nnot seem like a negligible penalty.\n\nMy own thought about how to improve this situation was just to destroy\nand recreate LockMethodLocalHash at transaction end (or start)\nif its size exceeded $some-value. Leaving it permanently bloated seems\nlike possibly a bad idea, even if we get rid of all the hash_seq_searches\non it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 18:42:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 19 Feb 2019 at 12:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My own thought about how to improve this situation was just to destroy\n> and recreate LockMethodLocalHash at transaction end (or start)\n> if its size exceeded $some-value. Leaving it permanently bloated seems\n> like possibly a bad idea, even if we get rid of all the hash_seq_searches\n> on it.\n\nThat seems like a good idea. Although, it would be good to know that\nit didn't add too much overhead dropping and recreating the table when\nevery transaction happened to obtain more locks than $some-value. If\nit did, then maybe we could track the average locks per of recent\ntransactions and just ditch the table after the locks are released if\nthe locks held by the last transaction exceeded the average *\n1.something. No need to go near shared memory to do that.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 12:52:08 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 12:52:08 +1300, David Rowley wrote:\n> On Tue, 19 Feb 2019 at 12:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > My own thought about how to improve this situation was just to destroy\n> > and recreate LockMethodLocalHash at transaction end (or start)\n> > if its size exceeded $some-value. Leaving it permanently bloated seems\n> > like possibly a bad idea, even if we get rid of all the hash_seq_searches\n> > on it.\n> \n> That seems like a good idea. Although, it would be good to know that\n> it didn't add too much overhead dropping and recreating the table when\n> every transaction happened to obtain more locks than $some-value. If\n> it did, then maybe we could track the average locks per of recent\n> transactions and just ditch the table after the locks are released if\n> the locks held by the last transaction exceeded the average *\n> 1.something. No need to go near shared memory to do that.\n\nIsn't a large portion of benefits in this patch going to be mooted by\nthe locking improvements discussed in the other threads? I.e. there's\nhopefully not going to be a ton of cases with low overhead where we\nacquire a lot of locks and release them very soon after. Sure, for DDL\netc we will, but I can't see this mattering from a performance POV?\n\nI'm not against doing something like Tom proposes, but heuristics with\nmagic constants like this tend to age purely / are hard to tune well\nacross systems.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 15:56:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Tue, 19 Feb 2019 at 12:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My own thought about how to improve this situation was just to destroy\n>> and recreate LockMethodLocalHash at transaction end (or start)\n>> if its size exceeded $some-value. Leaving it permanently bloated seems\n>> like possibly a bad idea, even if we get rid of all the hash_seq_searches\n>> on it.\n\n> That seems like a good idea. Although, it would be good to know that\n> it didn't add too much overhead dropping and recreating the table when\n> every transaction happened to obtain more locks than $some-value. If\n> it did, then maybe we could track the average locks per of recent\n> transactions and just ditch the table after the locks are released if\n> the locks held by the last transaction exceeded the average *\n> 1.something. No need to go near shared memory to do that.\n\nYeah, I'd deliberately avoided saying how we'd choose $some-value ;-).\nMaking it adaptive might not be a bad plan.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 18:57:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Isn't a large portion of benefits in this patch going to be mooted by\n> the locking improvements discussed in the other threads? I.e. there's\n> hopefully not going to be a ton of cases with low overhead where we\n> acquire a lot of locks and release them very soon after. Sure, for DDL\n> etc we will, but I can't see this mattering from a performance POV?\n\nMmm ... AIUI, the patches currently proposed can only help for what\nDavid called \"point lookup\" queries. There are still going to be\nqueries that scan a large proportion of a partition tree, so if you've\ngot tons of partitions, you'll be concerned about this sort of thing.\n\n> I'm not against doing something like Tom proposes, but heuristics with\n> magic constants like this tend to age purely / are hard to tune well\n> across systems.\n\nI didn't say it had to be a constant ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 19:01:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 19 Feb 2019 at 12:56, Andres Freund <andres@anarazel.de> wrote:\n> Isn't a large portion of benefits in this patch going to be mooted by\n> the locking improvements discussed in the other threads? I.e. there's\n> hopefully not going to be a ton of cases with low overhead where we\n> acquire a lot of locks and release them very soon after. Sure, for DDL\n> etc we will, but I can't see this mattering from a performance POV?\n\nI think this patch was born from Amit's partition planner improvement\npatch. If not that one, which other threads did you have in mind?\n\nA problem exists where, if using a PREPAREd statement to plan a query\nto a partitioned table containing many partitions that a generic plan\nwill never be favoured over a custom plan since the generic plan might\nnot be able to prune partitions like the custom plan can. The actual\nproblem is around that we do need to at some point generate a generic\nplan in order to know it's more costly and that requires locking\npossibly every partition. When plan_cache_mode = auto, this is done\non the 6th execution of the statement. After Amit's partition planner\nchanges [1], the custom plan will only lock partitions that are not\npruned, so the 6th execution of the statement bloats the local lock\ntable.\n\n[1] https://commitfest.postgresql.org/22/1778/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 13:05:44 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 19:01:06 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Isn't a large portion of benefits in this patch going to be mooted by\n> > the locking improvements discussed in the other threads? I.e. there's\n> > hopefully not going to be a ton of cases with low overhead where we\n> > acquire a lot of locks and release them very soon after. Sure, for DDL\n> > etc we will, but I can't see this mattering from a performance POV?\n> \n> Mmm ... AIUI, the patches currently proposed can only help for what\n> David called \"point lookup\" queries. There are still going to be\n> queries that scan a large proportion of a partition tree, so if you've\n> got tons of partitions, you'll be concerned about this sort of thing.\n\nAgreed - but it seems not unlikely that for those the rest of the\nplanner / executor overhead will entirely swamp any improvement we could\nmake here. If I understand correctly the benchmarks here were made with\n\"point\" update and select queries, although the reference in the first\npost in this thread is a bit vague.\n\n\n> > I'm not against doing something like Tom proposes, but heuristics with\n> > magic constants like this tend to age purely / are hard to tune well\n> > across systems.\n> \n> I didn't say it had to be a constant ...\n\nDo you have good idea?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 16:06:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-18 19:01:06 -0500, Tom Lane wrote:\n>> Mmm ... AIUI, the patches currently proposed can only help for what\n>> David called \"point lookup\" queries. There are still going to be\n>> queries that scan a large proportion of a partition tree, so if you've\n>> got tons of partitions, you'll be concerned about this sort of thing.\n\n> Agreed - but it seems not unlikely that for those the rest of the\n> planner / executor overhead will entirely swamp any improvement we could\n> make here. If I understand correctly the benchmarks here were made with\n> \"point\" update and select queries, although the reference in the first\n> post in this thread is a bit vague.\n\nI think what Maumau-san is on about here is that not only does your\n$giant-query take a long time, but it has a permanent negative effect\non all subsequent transactions in the session. That seems worth\ndoing something about.\n\n>> I didn't say it had to be a constant ...\n\n> Do you have good idea?\n\nI think David's on the right track --- keep some kind of moving average of\nthe LOCALLOCK table size for each transaction, and nuke it if it exceeds\nsome multiple of the recent average. Not sure offhand about how to get\nthe data cheaply --- it might not be sufficient to look at transaction\nend, if we release LOCALLOCK entries before that (but do we?)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 19:13:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
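As a loose illustration of the adaptive idea Tom and David are discussing (none of this is committed code), a moving average of per-transaction lock counts could gate the hash-table rebuild. The 1.5 multiplier, the 1/8 smoothing weight, and the starting estimate below are arbitrary placeholders, exactly the kind of magic constant Andres cautions is hard to tune across systems.

```c
#include <assert.h>

/*
 * Track a moving average of locks held per transaction; signal a
 * LockMethodLocalHash rebuild when the last transaction exceeded a
 * multiple of it.  Hypothetical sketch only.
 */
static double avg_locks = 16.0; /* assumed starting estimate */

static int should_rebuild(int locks_this_xact)
{
    int rebuild = (double) locks_this_xact > avg_locks * 1.5;

    /* exponential moving average, weight 1/8 on the newest observation */
    avg_locks += ((double) locks_this_xact - avg_locks) / 8.0;
    return rebuild;
}
```

A single bloated transaction triggers a rebuild, after which the inflated average decays back toward normal over subsequent small transactions, so steady heavy-lock workloads stop triggering rebuilds.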
{
"msg_contents": "Hi,\n\nOn 2019-02-18 18:42:32 -0500, Tom Lane wrote:\n> \"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com> writes:\n> > The attached patch speeds up transaction completion when any prior transaction accessed many relations in the same session.\n> \n> Hm. Putting a list header for a purely-local data structure into shared\n> memory seems quite ugly. Isn't there a better place to keep that?\n\nYea, I think it'd be just as fine to store that in a static\nvariable (best defined directly besides LockMethodLocalHash).\n\n(Btw, I'd be entirely unsurprised if moving away from a dynahash for\nLockMethodLocalHash would be beneficial)\n\n\n> Do we really want a dlist here at all? I'm concerned that bloating\n> LOCALLOCK will cost us when there are many locks involved. This patch\n> increases the size of LOCALLOCK by 25% if I counted right, which does\n> not seem like a negligible penalty.\n\nIt's currently\n\nstruct LOCALLOCK {\n LOCALLOCKTAG tag; /* 0 20 */\n\n /* XXX 4 bytes hole, try to pack */\n\n LOCK * lock; /* 24 8 */\n PROCLOCK * proclock; /* 32 8 */\n uint32 hashcode; /* 40 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n int64 nLocks; /* 48 8 */\n _Bool holdsStrongLockCount; /* 56 1 */\n _Bool lockCleared; /* 57 1 */\n\n /* XXX 2 bytes hole, try to pack */\n\n int numLockOwners; /* 60 4 */\n /* --- cacheline 1 boundary (64 bytes) --- */\n int maxLockOwners; /* 64 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n LOCALLOCKOWNER * lockOwners; /* 72 8 */\n\n /* size: 80, cachelines: 2, members: 10 */\n /* sum members: 66, holes: 4, sum holes: 14 */\n /* last cacheline: 16 bytes */\n};\n\nseems we could trivially squeeze most of the bytes for a dlist node out\nof padding.\n\n\n> My own thought about how to improve this situation was just to destroy\n> and recreate LockMethodLocalHash at transaction end (or start)\n> if its size exceeded $some-value. 
Leaving it permanently bloated seems\n> like possibly a bad idea, even if we get rid of all the hash_seq_searches\n> on it.\n\nOTOH, that'll force constant incremental resizing of the hashtable, for\nworkloads that regularly need a lot of locks. And I'd assume in most\ncases if one transaction needs a lot of locks it's quite likely that\nfuture ones will need a lot of locks, too.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 16:16:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 19:13:31 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-02-18 19:01:06 -0500, Tom Lane wrote:\n> >> Mmm ... AIUI, the patches currently proposed can only help for what\n> >> David called \"point lookup\" queries. There are still going to be\n> >> queries that scan a large proportion of a partition tree, so if you've\n> >> got tons of partitions, you'll be concerned about this sort of thing.\n> \n> > Agreed - but it seems not unlikely that for those the rest of the\n> > planner / executor overhead will entirely swamp any improvement we could\n> > make here. If I understand correctly the benchmarks here were made with\n> > \"point\" update and select queries, although the reference in the first\n> > post in this thread is a bit vague.\n> \n> I think what Maumau-san is on about here is that not only does your\n> $giant-query take a long time, but it has a permanent negative effect\n> on all subsequent transactions in the session. That seems worth\n> doing something about.\n\nAh, yes, that makes sense. I'm inclined to think however that the\noriginal approach presented in this thread is better than the\nreset-the-whole-hashtable approach. Because:\n\n\n> I think David's on the right track --- keep some kind of moving average of\n> the LOCALLOCK table size for each transaction, and nuke it if it exceeds\n> some multiple of the recent average. Not sure offhand about how to get\n> the data cheaply --- it might not be sufficient to look at transaction\n> end, if we release LOCALLOCK entries before that (but do we?)\n\nSeems too complicated for my taste. And it doesn't solve the issue of\nhaving some transactions with few locks (say because the plan can be\nnicely pruned) interspersed with transactions where a lot of locks are\nacquired.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 16:20:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-18 18:42:32 -0500, Tom Lane wrote:\n>> Do we really want a dlist here at all? I'm concerned that bloating\n>> LOCALLOCK will cost us when there are many locks involved. This patch\n>> increases the size of LOCALLOCK by 25% if I counted right, which does\n>> not seem like a negligible penalty.\n\n> It's currently [ 80 bytes with several padding holes ]\n> seems we could trivially squeeze most of the bytes for a dlist node out\n> of padding.\n\nYeah, but if we want to rearrange the members into an illogical order\nto save some space, we should do that independently of this patch ---\nand then the overhead of this patch would be even worse than 25%.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 19:24:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 19:24:54 -0500, Tom Lane wrote:\n> Yeah, but if we want to rearrange the members into an illogical order\n> to save some space, we should do that independently of this patch ---\n\nSure, we should do that. I don't buy the \"illogical\" bit, just moving\nhashcode up to after tag isn't more or less logical, and saves most of\nthe padding, and moving the booleans to the end isn't better/worse\neither.\n\nYou always bring up that argument. While I agree that sometimes the most\noptimal ordering can be less natural, I think most of the time it vastly\noverstates how intelligent the original ordering was. Often new elements\nwere either just added iteratively without consideration for padding, or\nthe attention to padding was paid in 32bit times.\n\nI don't find\n\nstruct LOCALLOCK {\n LOCALLOCKTAG tag; /* 0 20 */\n uint32 hashcode; /* 20 4 */\n LOCK * lock; /* 24 8 */\n PROCLOCK * proclock; /* 32 8 */\n int64 nLocks; /* 40 8 */\n int numLockOwners; /* 48 4 */\n int maxLockOwners; /* 52 4 */\n LOCALLOCKOWNER * lockOwners; /* 56 8 */\n /* --- cacheline 1 boundary (64 bytes) --- */\n _Bool holdsStrongLockCount; /* 64 1 */\n _Bool lockCleared; /* 65 1 */\n\n /* size: 72, cachelines: 2, members: 10 */\n /* padding: 6 */\n /* last cacheline: 8 bytes */\n};\n\nless clear than\n\nstruct LOCALLOCK {\n LOCALLOCKTAG tag; /* 0 20 */\n\n /* XXX 4 bytes hole, try to pack */\n\n LOCK * lock; /* 24 8 */\n PROCLOCK * proclock; /* 32 8 */\n uint32 hashcode; /* 40 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n int64 nLocks; /* 48 8 */\n _Bool holdsStrongLockCount; /* 56 1 */\n _Bool lockCleared; /* 57 1 */\n\n /* XXX 2 bytes hole, try to pack */\n\n int numLockOwners; /* 60 4 */\n /* --- cacheline 1 boundary (64 bytes) --- */\n int maxLockOwners; /* 64 4 */\n\n /* XXX 4 bytes hole, try to pack */\n\n LOCALLOCKOWNER * lockOwners; /* 72 8 */\n\n /* size: 80, cachelines: 2, members: 10 */\n /* sum members: 66, holes: 4, sum holes: 14 */\n /* last 
cacheline: 16 bytes */\n};\n\nbut it's smaller (although there's plenty trailing space).\n\n\n> and then the overhead of this patch would be even worse than 25%.\n\nIDK, we, including you, very often make largely independent improvements\nto make the cost of something else more palatable. Why's that not OK\nhere? Especially because we're not comparing to an alternative where no\ncost is added, keeping track of e.g. a running average of the hashtable\nsize isn't free either; nor does it help in the intermittent cases.\n\n- Andres\n\n",
"msg_date": "Mon, 18 Feb 2019 16:41:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-18 19:24:54 -0500, Tom Lane wrote:\n>> Yeah, but if we want to rearrange the members into an illogical order\n>> to save some space, we should do that independently of this patch ---\n\n> Sure, we should do that. I don't buy the \"illogical\" bit, just moving\n> hashcode up to after tag isn't more or less logical, and saves most of\n> the padding, and moving the booleans to the end isn't better/worse\n> either.\n\nI hadn't looked at the details closely, but if we can squeeze out the\npadding space without any loss of intelligibility, sure let's do so.\nI still say that's independent of whether to adopt this patch though.\n\n> but it's smaller (althoug there's plenty trailing space).\n\nI think you're supposing that these things are independently palloc'd, but\nthey aren't --- dynahash lays them out in arrays without palloc padding.\n\n> IDK, we, including you, very often make largely independent improvements\n> to make the cost of something else more palpable. Why's that not OK\n> here?\n\nWhen we do that, we aren't normally talking about overheads as high as\n25% (even more, if it's measured as I think it ought to be). What I'm\nconcerned about is that the patch is being advocated for cases where\nthere are lots of LOCALLOCK entries --- which is exactly where the\nspace overhead is going to hurt the most.\n\n> Especially because we're not comparing to an alternative where no\n> cost is added, keeping track of e.g. a running average of the hashtable\n> size isn't free either; nor does it help in the intermittent cases.\n\nWhat I was hoping for --- though perhaps it's not achievable --- was\nstatistical overhead amounting to just a few more instructions per\ntransaction. Adding dlist linking would add more instructions per\nhashtable entry/removal, which seems like it'd be a substantially\nbigger time penalty. 
As for the intermittent-usage issue, that largely\ndepends on the details of the when-to-reset heuristic, which we don't\nhave a concrete design for yet. But I could certainly imagine it waiting\nfor a few transactions before deciding to chomp.\n\nAnyway, I'm not trying to veto the patch in this form, just suggesting\nthat there are alternatives worth thinking about.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 20:29:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 20:29:29 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > but it's smaller (althoug there's plenty trailing space).\n> \n> I think you're supposing that these things are independently palloc'd, but\n> they aren't --- dynahash lays them out in arrays without palloc padding.\n\nI don't think that matters, given that the trailing six bytes are\nincluded in sizeof() (and have to, to guarantee suitable alignment in\narrays etc).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 19:24:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 19 Feb 2019 at 00:20, Andres Freund <andres@anarazel.de> wrote:\n\n\n> On 2019-02-18 19:13:31 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-02-18 19:01:06 -0500, Tom Lane wrote:\n> > >> Mmm ... AIUI, the patches currently proposed can only help for what\n> > >> David called \"point lookup\" queries. There are still going to be\n> > >> queries that scan a large proportion of a partition tree, so if you've\n> > >> got tons of partitions, you'll be concerned about this sort of thing.\n>\n\n\n> > I think what Maumau-san is on about here is that not only does your\n> > $giant-query take a long time, but it has a permanent negative effect\n> > on all subsequent transactions in the session. That seems worth\n> > doing something about.\n>\n> Ah, yes, that makes sense. I'm inclined to think however that the\n> original approach presented in this thread is better than the\n> reset-the-whole-hashtable approach.\n>\n\nIf it was just many-tables then blowing away the hash table would work fine.\n\nThe main issue seems to be with partitioning, not with the general case of\nmany-tables. For that case, it seems like reset hashtable is too much.\n\nCan we use our knowledge of the structure of locks, i.e. partition locks\nare all children of the partitioned table, to do a better job?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 08:50:31 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "\n\nOn 2/12/19 7:33 AM, Tsunakawa, Takayuki wrote:\n> ...\n> \n> This problem was uncovered while evaluating partitioning performance.\n> When the application PREPAREs a statement once and then\n> EXECUTE-COMMIT repeatedly, the server creates a generic plan on the\n> 6th EXECUTE. Unfortunately, creation of the generic plan of\n> UPDATE/DELETE currently accesses all partitions of the target table\n> (this itself needs improvement), expanding the LOCALLOCK hash table.\n> As a result, 7th and later EXECUTEs get slower.\n> \n> Imai-san confirmed performance improvement with this patch:\n> \n> https://commitfest.postgresql.org/22/1993/\n> \n\nCan you quantify the effects? That is, how much slower/faster does it get?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 12:57:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n> On 2/12/19 7:33 AM, Tsunakawa, Takayuki wrote:\n> > Imai-san confirmed performance improvement with this patch:\n> >\n> > https://commitfest.postgresql.org/22/1993/\n> >\n> \n> Can you quantify the effects? That is, how much slower/faster does it get?\n\nUgh, sorry, I wrote a wrong URL. The correct page is:\n\nhttps://www.postgresql.org/message-id/0F97FA9ABBDBE54F91744A9B37151A512787EC%40g01jpexmbkw24\n\nThe quoted figures are:\n\n[v20 + faster-locallock-scan.patch]\nauto: 9,069 TPS\ncustom: 9,015 TPS\n\n[v20]\nauto: 8,037 TPS\ncustom: 8,933 TPS\n\n\nIn the original problematic case, plan_cache_mode = auto (default), we can see about 13% improvement.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 20 Feb 2019 05:42:41 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\r\n> Hm. Putting a list header for a purely-local data structure into shared\r\n> memory seems quite ugly. Isn't there a better place to keep that?\r\n\r\nAgreed. I put it in the global variable.\r\n\r\n\r\n> Do we really want a dlist here at all? I'm concerned that bloating\r\n> LOCALLOCK will cost us when there are many locks involved. This patch\r\n> increases the size of LOCALLOCK by 25% if I counted right, which does\r\n> not seem like a negligible penalty.\r\n\r\nTo delete the LOCALLOCK in RemoveLocalLock(), we need a dlist. slist requires the list iterator to be passed from callers.\r\n\r\n\r\nFrom: Andres Freund [mailto:andres@anarazel.de]\r\n> Sure, we should do that. I don't buy the \"illogical\" bit, just moving\r\n> hashcode up to after tag isn't more or less logical, and saves most of\r\n> the padding, and moving the booleans to the end isn't better/worse\r\n> either.\r\n> \r\n> I don't find\r\n\r\nThanks, I've done it. \r\n\r\n\r\nFrom: Simon Riggs [mailto:simon@2ndquadrant.com]\r\n> Can we use our knowledge of the structure of locks, i.e. partition locks\r\n> are all children of the partitioned table, to do a better job?\r\n\r\nI couldn't come up with an idea.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Wed, 20 Feb 2019 06:20:37 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-02-20 07:20, Tsunakawa, Takayuki wrote:\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>> Hm. Putting a list header for a purely-local data structure into shared\n>> memory seems quite ugly. Isn't there a better place to keep that?\n> \n> Agreed. I put it in the global variable.\n\nI think there is agreement on the principles of this patch. Perhaps it\ncould be polished a bit.\n\nYour changes in LOCALLOCK still refer to PGPROC, from your first version\nof the patch.\n\nI think the reordering of struct members could be done as a separate\npreliminary patch.\n\nSome more documentation in the comment before dlist_head LocalLocks to\nexplain this whole mechanism would be nice.\n\nYou posted a link to some performance numbers, but I didn't see the test\nsetup explained there. I'd like to get some more information on the\nimpact of this. Is there an effect with 100 tables, or do you need 100000?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 20:48:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi Peter, Imai-san,\r\n\r\nFrom: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\r\n> Your changes in LOCALLOCK still refer to PGPROC, from your first version\r\n> of the patch.\r\n> \r\n> I think the reordering of struct members could be done as a separate\r\n> preliminary patch.\r\n> \r\n> Some more documentation in the comment before dlist_head LocalLocks to\r\n> explain this whole mechanism would be nice.\r\n\r\nFixed.\r\n\r\n\r\n> You posted a link to some performance numbers, but I didn't see the test\r\n> setup explained there. I'd like to get some more information on this\r\n> impact of this. Is there an effect with 100 tables, or do you need 100000?\r\n\r\nImai-san, can you tell us the test setup?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Tue, 19 Mar 2019 07:53:27 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi Tsunakawa-san, Peter\r\n\r\nOn Tue, Mar 19, 2019 at 7:53 AM, Tsunakawa, Takayuki wrote:\r\n> From: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\r\n> > You posted a link to some performance numbers, but I didn't see the\r\n> > test setup explained there. I'd like to get some more information on\r\n> > this impact of this. Is there an effect with 100 tables, or do you\r\n> need 100000?\r\n> \r\n> Imai-san, can you tell us the test setup?\r\n\r\nMaybe I used this test setup[1].\r\n\r\nI tested again with those settings for prepared transactions.\r\nI used Tsunakawa-san's patch for locallock[2] (which couldn't be applied to current master so I fixed it) and Amit's v32 patch for speeding up planner[3]. \r\n\r\n[settings]\r\nplan_cache_mode = 'auto' or 'force_custom_plan'\r\nmax_parallel_workers = 0\r\nmax_parallel_workers_per_gather = 0\r\nmax_locks_per_transaction = 4096\r\n\r\n[partitioning table definitions(with 4096 partitions)]\r\ncreate table rt (a int, b int, c int) partition by range (a);\r\n\r\n\\o /dev/null\r\nselect 'create table rt' || x::text || ' partition of rt for values from (' ||\r\n (x)::text || ') to (' || (x+1)::text || ');' from generate_series(1, 4096) x;\r\n\\gexec\r\n\\o\r\n\r\n[select4096.sql]\r\n\\set a random(1, 4096)\r\nselect a from rt where a = :a;\r\n\r\n[pgbench(with 4096 partitions)]\r\npgbench -n -f select4096.sql -T 60 -M prepared\r\n\r\n[results]\r\n master locallock v32 v32+locallock\r\n ------ --------- --- -------------\r\nauto 21.9 22.9 6,834 7,355\r\ncustom 19.7 20.0 7,415 7,252\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/0F97FA9ABBDBE54F91744A9B37151A51256276%40g01jpexmbkw24\r\n[2] https://www.postgresql.org/message-id/0A3221C70F24FB45833433255569204D1FBDFA00%40G01JPEXMBYT05\r\n[3] https://www.postgresql.org/message-id/9feacaf6-ddb3-96dd-5b98-df5e927b1439%40lab.ntt.co.jp\r\n\r\n--\r\nYoshikazu Imai\r\n\r\n",
"msg_date": "Tue, 19 Mar 2019 09:20:22 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Tsunakawa, Takayuki [mailto:tsunakawa.takay@jp.fujitsu.com]\r\n> Fixed.\r\n\r\nRebased on HEAD.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Tue, 19 Mar 2019 09:21:42 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:\n> From: Tsunakawa, Takayuki [mailto:tsunakawa.takay@jp.fujitsu.com]\n>> Fixed.\n> \n> Rebased on HEAD.\n\nI have committed the first patch that reorganizes the struct. I'll have\nto spend some time evaluating the performance impact of the second\npatch, but it seems OK in principle. Performance tests from others welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Mar 2019 16:38:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-03-19 16:38, Peter Eisentraut wrote:\n> On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:\n>> From: Tsunakawa, Takayuki [mailto:tsunakawa.takay@jp.fujitsu.com]\n>>> Fixed.\n>>\n>> Rebased on HEAD.\n> \n> I have committed the first patch that reorganizes the struct. I'll have\n> to spend some time evaluating the performance impact of the second\n> patch, but it seems OK in principle. Performance tests from others welcome.\n\nI did a bit of performance testing, both a plain pgbench and the\nsuggested test case with 4096 partitions. I can't detect any\nperformance improvements. In fact, within the noise, it tends to be\njust a bit on the slower side.\n\nSo I'd like to kick it back to the patch submitter now and ask for more\njustification and performance analysis.\n\nPerhaps \"speeding up planning with partitions\" needs to be accepted first?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Mar 2019 11:44:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I did a bit of performance testing, both a plain pgbench and the\n> suggested test case with 4096 partitions. I can't detect any\n> performance improvements. In fact, within the noise, it tends to be\n> just a bit on the slower side.\n>\n> So I'd like to kick it back to the patch submitter now and ask for more\n> justification and performance analysis.\n>\n> Perhaps \"speeding up planning with partitions\" needs to be accepted first?\n\nYeah, I think it likely will require that patch to be able to measure\nthe gains from this patch.\n\nIf planning a SELECT to a partitioned table with a large number of\npartitions using PREPAREd statements, when we attempt the generic plan\non the 6th execution, it does cause the local lock table to expand to\nfit all the locks for each partition. This does cause the\nLockReleaseAll() to become slow due to the hash_seq_search having to\nskip over many empty buckets. 
Since generating a custom plan for a\npartitioned table with many partitions is still slow in master, then I\nvery much imagine you'll struggle to see the gains brought by this\npatch.\n\nI did a quick benchmark too and couldn't measure anything:\n\ncreate table hp (a int) partition by hash (a);\nselect 'create table hp'||x|| ' partition of hp for values with\n(modulus 4096, remainder ' || x || ');' from generate_series(0,4095)\nx;\n\nbench.sql\n\\set p_a 13315\nselect * from hp where a = :p_a;\n\nMaster:\n$ pgbench -M prepared -n -T 60 -f bench.sql postgres\ntps = 31.844468 (excluding connections establishing)\ntps = 32.950154 (excluding connections establishing)\ntps = 31.534761 (excluding connections establishing)\n\nPatched:\n$ pgbench -M prepared -n -T 60 -f bench.sql postgres\ntps = 30.099133 (excluding connections establishing)\ntps = 32.157328 (excluding connections establishing)\ntps = 32.329884 (excluding connections establishing)\n\nThe situation won't be any better with plan_cache_mode =\nforce_generic_plan either. In this case, we'll only plan once but\nwe'll also have to obtain and release a lock for each partition for\neach execution of the prepared statement. LockReleaseAll() is going to\nbe slow in that case because it actually has to release a lot of\nlocks.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 26 Mar 2019 00:48:30 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut\r\n> <peter.eisentraut@2ndquadrant.com> wrote:\r\n> > Perhaps \"speeding up planning with partitions\" needs to be accepted first?\r\n> \r\n> Yeah, I think it likely will require that patch to be able to measure\r\n> the gains from this patch.\r\n> \r\n> If planning a SELECT to a partitioned table with a large number of\r\n> partitions using PREPAREd statements, when we attempt the generic plan\r\n> on the 6th execution, it does cause the local lock table to expand to\r\n> fit all the locks for each partition. This does cause the\r\n> LockReleaseAll() to become slow due to the hash_seq_search having to\r\n> skip over many empty buckets. Since generating a custom plan for a\r\n> partitioned table with many partitions is still slow in master, then I\r\n> very much imagine you'll struggle to see the gains brought by this\r\n> patch.\r\n\r\nThank you David for explaining. Although I may not understand the effect of \"speeding up planning with partitions\" patch, this patch takes effect even without it. That is, perform the following in the same session:\r\n\r\n1. SELECT count(*) FROM table; on a table with many partitions. That bloats the LocalLockHash.\r\n2. PREPARE a point query, e.g., SELECT * FROM table WHERE pkey = $1;\r\n3. EXECUTE the PREPAREd query repeatedly, with each EXECUTE in a separate transaction. Without the patch, each transaction's LockReleaseAll() has to scan the bloated large hash table.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Tue, 26 Mar 2019 08:21:20 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Tsunakawa-san,\n\nOn 2019/03/26 17:21, Tsunakawa, Takayuki wrote:\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n>> On Mon, 25 Mar 2019 at 23:44, Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> Perhaps \"speeding up planning with partitions\" needs to be accepted first?\n>>\n>> Yeah, I think it likely will require that patch to be able to measure\n>> the gains from this patch.\n>>\n>> If planning a SELECT to a partitioned table with a large number of\n>> partitions using PREPAREd statements, when we attempt the generic plan\n>> on the 6th execution, it does cause the local lock table to expand to\n>> fit all the locks for each partition. This does cause the\n>> LockReleaseAll() to become slow due to the hash_seq_search having to\n>> skip over many empty buckets. Since generating a custom plan for a\n>> partitioned table with many partitions is still slow in master, then I\n>> very much imagine you'll struggle to see the gains brought by this\n>> patch.\n> \n> Thank you David for explaining. Although I may not understand the effect of \"speeding up planning with partitions\" patch, this patch takes effect even without it. That is, perform the following in the same session:\n> \n> 1. SELECT count(*) FROM table; on a table with many partitions. That bloats the LocalLockHash.\n> 2. PREPARE a point query, e.g., SELECT * FROM table WHERE pkey = $1;\n> 3. EXECUTE the PREPAREd query repeatedly, with each EXECUTE in a separate transaction. Without the patch, each transaction's LockReleaseAll() has to scan the bloated large hash table.\n\nMy understanding of what David wrote is that the slowness of bloated hash\ntable is hard to notice, because planning itself is pretty slow. With the\n\"speeding up planning with partitions\" patch, planning becomes quite fast,\nso the bloated hash table overhead and so your patch's benefit is easier\nto notice. 
This patch is clearly helpful, but it's just hard to notice it\nwhen the other big bottleneck is standing in the way.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 26 Mar 2019 17:42:33 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 26 Mar 2019 at 21:23, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n> Thank you David for explaining. Although I may not understand the effect of \"speeding up planning with partitions\" patch, this patch takes effect even without it. That is, perform the following in the same session:\n>\n> 1. SELECT count(*) FROM table; on a table with many partitions. That bloats the LocalLockHash.\n> 2. PREPARE a point query, e.g., SELECT * FROM table WHERE pkey = $1;\n> 3. EXECUTE the PREPAREd query repeatedly, with each EXECUTE in a separate transaction. Without the patch, each transaction's LockReleaseAll() has to scan the bloated large hash table.\n\nOh. I think I see what you're saying. Really the table in #2 would\nhave to be some completely different table that's not partitioned. I\nthink in that case it should make a difference.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 26 Mar 2019 21:55:01 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 26 Mar 2019 at 21:55, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>\n> On Tue, 26 Mar 2019 at 21:23, Tsunakawa, Takayuki\n> <tsunakawa.takay@jp.fujitsu.com> wrote:\n> > Thank you David for explaining. Although I may not understand the effect of \"speeding up planning with partitions\" patch, this patch takes effect even without it. That is, perform the following in the same session:\n> >\n> > 1. SELECT count(*) FROM table; on a table with many partitions. That bloats the LocalLockHash.\n> > 2. PREPARE a point query, e.g., SELECT * FROM table WHERE pkey = $1;\n> > 3. EXECUTE the PREPAREd query repeatedly, with each EXECUTE in a separate transaction. Without the patch, each transaction's LockReleaseAll() has to scan the bloated large hash table.\n>\n> Oh. I think I see what you're saying. Really the table in #2 would\n> have to be some completely different table that's not partitioned. I\n> think in that case it should make a difference.\n\nHere's a benchmark doing that using pgbench's script weight feature.\n\nI've set this up so the query that hits the partitioned table runs\nonce for every 10k times the other script runs. I picked that number\nso the lock table was expanded fairly early on in the benchmark.\n\nsetup:\ncreate table t1 (a int primary key);\ncreate table hp (a int) partition by hash (a);\nselect 'create table hp'||x|| ' partition of hp for values with\n(modulus 4096, remainder ' || x || ');' from generate_series(0,4095)\nx;\n\\gexec\n\nhp.sql\nselect count(*) from hp;\n\nt1.sql\n\\set p 1\nselect a from t1 where a = :p;\n\nMaster = c8c885b7a5\n\nMaster:\n$ pgbench -T 60 -M prepared -n -f hp.sql@1 -f t1.sql@10000 postgres\nSQL script 2: t1.sql\n - 1057306 transactions (100.0% of total, tps = 17621.756473)\n - 1081905 transactions (100.0% of total, tps = 18021.449914)\n - 1122420 transactions (100.0% of total, tps = 18690.804699)\n\nMaster + 0002-speed-up-LOCALLOCK-scan.patch\n\n$ pgbench -T 60 -M prepared -n -f hp.sql@1 -f t1.sql@10000 postgres\nSQL script 2: t1.sql\n - 1277014 transactions (100.0% of total, tps = 21283.551615)\n - 1184052 transactions (100.0% of total, tps = 19734.185872)\n - 1188523 transactions (100.0% of total, tps = 19785.835662)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 26 Mar 2019 23:47:07 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\r\n> My understanding of what David wrote is that the slowness of bloated hash\r\n> table is hard to notice, because planning itself is pretty slow. With the\r\n> \"speeding up planning with partitions\" patch, planning becomes quite fast,\r\n> so the bloated hash table overhead and so your patch's benefit is easier\r\n> to notice. This patch is clearly helpful, but it's just hard to notice\r\n> it\r\n> when the other big bottleneck is standing in the way.\r\n\r\nAh, I see. I failed to recognize the simple fact that without your patch, EXECUTE on a table with many partitions is slow due to the custom planning time proportional to the number of partitions. Thanks for waking up my sleeping head!\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Wed, 27 Mar 2019 00:21:22 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> Here a benchmark doing that using pgbench's script weight feature.\r\n\r\nWow, I didn't know that pgbench has evolved to have such a convenient feature. Thanks for telling me how to utilize it in testing. PostgreSQL is cool!\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Wed, 27 Mar 2019 00:26:26 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi Peter,\r\n\r\nFrom: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\r\n> I did a bit of performance testing, both a plain pgbench and the\r\n> suggested test case with 4096 partitions. I can't detect any\r\n> performance improvements. In fact, within the noise, it tends to be\r\n> just a bit on the slower side.\r\n> \r\n> So I'd like to kick it back to the patch submitter now and ask for more\r\n> justification and performance analysis.\r\n> \r\n> Perhaps \"speeding up planning with partitions\" needs to be accepted first?\r\n\r\nDavid kindly showed how to demonstrate the performance improvement on March 26, so I changed the status to needs review. I'd appreciate it if you could continue the final check.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Thu, 4 Apr 2019 04:37:59 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019/04/04 13:37, Tsunakawa, Takayuki wrote:\n> Hi Peter,\n> \n> From: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\n>> I did a bit of performance testing, both a plain pgbench and the\n>> suggested test case with 4096 partitions. I can't detect any\n>> performance improvements. In fact, within the noise, it tends to be\n>> just a bit on the slower side.\n>>\n>> So I'd like to kick it back to the patch submitter now and ask for more\n>> justification and performance analysis.\n>>\n>> Perhaps \"speeding up planning with partitions\" needs to be accepted first?\n> \n> David kindly showed how to demonstrate the performance improvement on March 26, so I changed the status to needs review. I'd appreciate it if you could continue the final check.\n\nAlso, since the \"speed up partition planning\" patch went in (428b260f8),\nit might be possible to see the performance boost even with the\npartitioning example you cited upthread.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Thu, 4 Apr 2019 13:58:12 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-04-04 06:58, Amit Langote wrote:\n> Also, since the \"speed up partition planning\" patch went in (428b260f8),\n> it might be possible to see the performance boost even with the\n> partitioning example you cited upthread.\n\nI can't detect any performance improvement with the patch applied to\ncurrent master, using the test case from Yoshikazu Imai (2019-03-19).\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 22:42:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi Peter, Imai-san,\r\n\r\nFrom: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\r\n> I can't detect any performance improvement with the patch applied to\r\n> current master, using the test case from Yoshikazu Imai (2019-03-19).\r\n\r\nThat's strange... Peter, Imai-san, can you compare your test procedures?\r\n\r\nPeter, can you check and see the performance improvement with David's method on March 26 instead?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 5 Apr 2019 00:04:44 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019/04/05 5:42, Peter Eisentraut wrote:\n> On 2019-04-04 06:58, Amit Langote wrote:\n>> Also, since the \"speed up partition planning\" patch went in (428b260f8),\n>> it might be possible to see the performance boost even with the\n>> partitioning example you cited upthread.\n> \n> I can't detect any performance improvement with the patch applied to\n> current master, using the test case from Yoshikazu Imai (2019-03-19).\n\nI was able to detect it as follows.\n\n* partitioned table setup:\n\n$ cat ht.sql\ndrop table ht cascade;\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values with\n(modulus 8192, remainder ' || (x)::text || ');' from generate_series(0,\n8191) x;\n\\gexec\n\n* pgbench script:\n\n$ cat select.sql\n\\set param random(1, 8192)\nselect * from ht where a = :param\n\n* pgbench (5 minute run with -M prepared)\n\npgbench -n -M prepared -T 300 -f select.sql\n\n* tps:\n\nplan_cache_mode = auto\n\n HEAD: 1915 tps\nPatched: 2394 tps\n\nplan_cache_mode = custom (non-problematic: generic plan is never created)\n\n HEAD: 2402 tps\nPatched: 2393 tps\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 5 Apr 2019 10:30:32 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 1:31 AM, Amit Langote wrote:\r\n> On 2019/04/05 5:42, Peter Eisentraut wrote:\r\n> > On 2019-04-04 06:58, Amit Langote wrote:\r\n> >> Also, since the \"speed up partition planning\" patch went in\r\n> >> (428b260f8), it might be possible to see the performance boost even\r\n> >> with the partitioning example you cited upthread.\r\n> >\r\n> > I can't detect any performance improvement with the patch applied to\r\n> > current master, using the test case from Yoshikazu Imai (2019-03-19).\r\n> \r\n> I was able to detect it as follows.\r\n> \r\n> * partitioned table setup:\r\n> \r\n> $ cat ht.sql\r\n> drop table ht cascade;\r\n> create table ht (a int primary key, b int, c int) partition by hash (a);\r\n> select 'create table ht' || x::text || ' partition of ht for values with\r\n> (modulus 8192, remainder ' || (x)::text || ');' from generate_series(0,\r\n> 8191) x;\r\n> \\gexec\r\n> \r\n> * pgbench script:\r\n> \r\n> $ cat select.sql\r\n> \\set param random(1, 8192)\r\n> select * from ht where a = :param\r\n> \r\n> * pgbench (5 minute run with -M prepared)\r\n> \r\n> pgbench -n -M prepared -T 300 -f select.sql\r\n> \r\n> * tps:\r\n> \r\n> plan_cache_mode = auto\r\n> \r\n> HEAD: 1915 tps\r\n> Patched: 2394 tps\r\n> \r\n> plan_cache_mode = custom (non-problematic: generic plan is never created)\r\n> \r\n> HEAD: 2402 tps\r\n> Patched: 2393 tps\r\n\r\nAmit-san, thanks for testing this.\r\n\r\nI also re-ran my tests(3/19) with HEAD(413ccaa) and HEAD(413ccaa) + patched, and I can still detect the performance difference with plan_cache_mode = auto.\r\n\r\nThanks\r\n--\r\nYoshikazu Imai \r\n\r\n",
"msg_date": "Fri, 5 Apr 2019 01:38:26 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi Amit-san, Imai-snan,\r\n\r\nFrom: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\r\n> I was able to detect it as follows.\r\n> plan_cache_mode = auto\r\n> \r\n> HEAD: 1915 tps\r\n> Patched: 2394 tps\r\n> \r\n> plan_cache_mode = custom (non-problematic: generic plan is never created)\r\n> \r\n> HEAD: 2402 tps\r\n> Patched: 2393 tps\r\n\r\nThanks a lot for very quick confirmation. I'm relieved to still see the good results.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 5 Apr 2019 01:52:36 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 0:05 AM, Tsunakawa, Takayuki wrote:\r\n> From: Peter Eisentraut [mailto:peter.eisentraut@2ndquadrant.com]\r\n> > I can't detect any performance improvement with the patch applied to\r\n> > current master, using the test case from Yoshikazu Imai (2019-03-19).\r\n> \r\n> That's strange... Peter, Imai-san, can you compare your test procedures?\r\n\r\nJust for make sure, I described my test procedures in detail.\r\n\r\nI install and setup HEAD and patched as follows.\r\n\r\n[HEAD(413ccaa)]\r\n(git pull)\r\n./configure --prefix=/usr/local/pgsql-dev --enable-depend\r\nmake clean\r\nmake\r\n\r\nmake install\r\n\r\nsu postgres\r\nexport PATH=/usr/local/pgsql-dev/bin:$PATH\r\ninitdb -D /var/lib/pgsql/data-dev\r\nvi /var/lib/pgsql/data-dev/postgresql.conf\r\n====\r\nport = 44201\r\nplan_cache_mode = 'auto' or 'force_custom_plan'\r\nmax_parallel_workers = 0\r\nmax_parallel_workers_per_gather = 0\r\nmax_locks_per_transaction = 4096\r\n====\r\npg_ctl -D /var/lib/pgsql/data-dev start\r\n\r\n\r\n\r\n[HEAD(413ccaa) + patch]\r\n(git pull)\r\npatch -u -p1 < 0002.patch\r\n./configure --prefix=/usr/local/pgsql-locallock --enable-depend\r\nmake clean\r\nmake\r\n\r\nmake install\r\n\r\nsu postgres\r\nexport PATH=/usr/local/pgsql-locallock/bin:$PATH\r\ninitdb -D /var/lib/pgsql/data-locallock\r\nvi /var/lib/pgsql/data-locallock/postgresql.conf\r\n====\r\nport = 44301\r\nplan_cache_mode = 'auto' or 'force_custom_plan'\r\nmax_parallel_workers = 0\r\nmax_parallel_workers_per_gather = 0\r\nmax_locks_per_transaction = 4096\r\n====\r\npg_ctl -D /var/lib/pgsql/data-locallock start\r\n\r\n\r\nAnd I tested as follows.\r\n\r\n(creating partitioned table for port 44201)\r\n(creating partitioned table for port 44301)\r\n(creating select4096.sql)\r\nfor i in `seq 1 5`; do\r\n pgbench -n -f select4096.sql -T 60 -M prepared -p 44201 | grep including;\r\n pgbench -n -f select4096.sql -T 60 -M prepared -p 44301 | grep including;\r\ndone\r\ntps = 8146.039546 
(including connections establishing)\r\ntps = 9021.340872 (including connections establishing)\r\ntps = 8011.186017 (including connections establishing)\r\ntps = 8926.191054 (including connections establishing)\r\ntps = 8006.769690 (including connections establishing)\r\ntps = 9028.716806 (including connections establishing)\r\ntps = 8057.709961 (including connections establishing)\r\ntps = 9017.713714 (including connections establishing)\r\ntps = 7956.332863 (including connections establishing)\r\ntps = 9126.650533 (including connections establishing)\r\n\r\n\r\nThanks\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Fri, 5 Apr 2019 01:54:23 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-03-19 10:21, Tsunakawa, Takayuki wrote:\n> From: Tsunakawa, Takayuki [mailto:tsunakawa.takay@jp.fujitsu.com]\n>> Fixed.\n> \n> Rebased on HEAD.\n\nDo you need the dlist_foreach_modify() calls? You are not actually\nmodifying the list structure.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 12:56:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I can't detect any performance improvement with the patch applied to\n> current master, using the test case from Yoshikazu Imai (2019-03-19).\n\nFWIW, I tried this patch against current HEAD (959d00e9d).\nUsing the test case described by Amit at\n<be25cadf-982e-3f01-88b4-443a6667e16a@lab.ntt.co.jp>\nI do measure an undeniable speedup, close to 35%.\n\nHowever ... I submit that that's a mighty extreme test case.\n(I had to increase max_locks_per_transaction to get it to run\nat all.) We should not be using that sort of edge case to drive\nperformance optimization choices.\n\nIf I reduce the number of partitions in Amit's example from 8192\nto something more real-world, like 128, I do still measure a\nperformance gain, but it's ~ 1.5% which is below what I'd consider\na reproducible win. I'm accustomed to seeing changes up to 2%\nin narrow benchmarks like this one, even when \"nothing changes\"\nexcept unrelated code.\n\nTrying a standard pgbench test case (pgbench -M prepared -S with\none client and an -s 10 database), it seems that the patch is about\n0.5% slower than HEAD. Again, that's below the noise threshold,\nbut it's not promising for the net effects of this patch on workloads\nthat aren't specifically about large and prunable partition sets.\n\nI'm also fairly concerned about the effects of the patch on\nsizeof(LOCALLOCK) --- on a 64-bit machine it goes from 72 to 88\nbytes, a 22% increase. 
That's a lot if you're considering cases\nwith many locks.\n\nOn the whole I don't think there's an adequate case for committing\nthis patch.\n\nI'd also point out that this is hardly the only place where we've\nseen hash_seq_search on nearly-empty hash tables become a bottleneck.\nSo I'm not thrilled about attacking that with one-table-at-time patches.\nI'd rather see us do something to let hash_seq_search win across\nthe board.\n\nI spent some time wondering whether we could adjust the data structure\nso that all the live entries in a hashtable are linked into one chain,\nbut I don't quite see how to do it without adding another list link to\nstruct HASHELEMENT, which seems pretty expensive.\n\nI'll sketch the idea I had, just in case it triggers a better idea\nin someone else. Assuming we are willing to add another pointer to\nHASHELEMENT, use the two pointers to create a doubly-linked circular\nlist that includes all live entries in the hashtable, with a list\nheader in the hashtable's control area. (Maybe we'd use a dlist for\nthis, but it's not essential.) Require this list to be organized so\nthat all entries that belong to the same hash bucket are consecutive in\nthe list, and make each non-null hash bucket header point to the first\nentry in the list for its bucket. To allow normal searches to detect\nwhen they've run through their bucket, add a flag to HASHELEMENT that\nis set only in entries that are the first, or perhaps last, of their\nbucket (so you'd detect end-of-bucket by checking the flag instead of\ntesting for a null pointer). Adding a bool field is free due to\nalignment considerations, at least on 64-bit machines. 
Given this,\nI think normal hash operations are more-or-less the same cost as\nbefore, while hash_seq_search just has to follow the circular list.\n\nI tried to figure out how to do the same thing with a singly-linked\ninstead of doubly-linked list, but it doesn't quite work: if you need\nto remove the first element of a bucket, you have no cheap way to find\nits predecessor in the overall list (which belongs to some other\nbucket, but you don't know which one). Maybe we could just mark such\nentries dead (there's plenty of room for another flag bit) and plan\nto clean them up later? But it's not clear how to ensure that they'd\nget cleaned up in any sort of timely fashion.\n\nAnother issue is that probably none of this works for the partitioned\nhash tables we use for some of the larger shared-memory hashes. But\nI'm not sure we care about hash_seq_search for those, so maybe we just\nsay those are a different data structure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Apr 2019 23:03:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 23:03:11 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I can't detect any performance improvement with the patch applied to\n> > current master, using the test case from Yoshikazu Imai (2019-03-19).\n> \n> FWIW, I tried this patch against current HEAD (959d00e9d).\n> Using the test case described by Amit at\n> <be25cadf-982e-3f01-88b4-443a6667e16a@lab.ntt.co.jp>\n> I do measure an undeniable speedup, close to 35%.\n> \n> However ... I submit that that's a mighty extreme test case.\n> (I had to increase max_locks_per_transaction to get it to run\n> at all.) We should not be using that sort of edge case to drive\n> performance optimization choices.\n> \n> If I reduce the number of partitions in Amit's example from 8192\n> to something more real-world, like 128, I do still measure a\n> performance gain, but it's ~ 1.5% which is below what I'd consider\n> a reproducible win. I'm accustomed to seeing changes up to 2%\n> in narrow benchmarks like this one, even when \"nothing changes\"\n> except unrelated code.\n\nI'm not sure it's actually that narrow these days. With all the\npartitioning improvements happening, the numbers of locks commonly held\nare going to rise. And while 8192 partitions is maybe on the more\nextreme side, it's a workload with only a single table, and plenty\nworkloads touch more than a single partitioned table.\n\n\n> Trying a standard pgbench test case (pgbench -M prepared -S with\n> one client and an -s 10 database), it seems that the patch is about\n> 0.5% slower than HEAD. Again, that's below the noise threshold,\n> but it's not promising for the net effects of this patch on workloads\n> that aren't specifically about large and prunable partition sets.\n\nYea, that's concerning.\n\n\n> I'm also fairly concerned about the effects of the patch on\n> sizeof(LOCALLOCK) --- on a 64-bit machine it goes from 72 to 88\n> bytes, a 22% increase. 
That's a lot if you're considering cases\n> with many locks.\n\nI'm not sure I'm quite that concerned. For one, a good bit of that space\nwas up for grabs until the recent reordering of LOCALLOCK and nobody\ncomplained. But more importantly, I think commonly the amount of locks\naround is fairly constrained, isn't it? We can't really have that many\nconcurrently held locks, due to the shared memory space, and the size of\na LOCALLOCK isn't that big compared to say relcache entries. We also\nprobably fairly easily could win some space back - e.g. make nLocks 32\nbits.\n\nI wonder if one approach to solve this wouldn't be to just make the\nhashtable drastically smaller. Right now we'll often have lots of\nempty entries that are 72 bytes + dynahash overhead. That's plenty of\nmemory that needs to be skipped over. I wonder if we instead should\nhave an array of held locallocks, and a hashtable with {hashcode,\noffset_in_array} + custom comparator for lookups. That'd mean we could\neither scan the array of locallocks at release (which'd need to skip\nover entries that have already been released), or we could scan the much\nsmaller hashtable sequentially.\n\nI don't think the above idea is quite there, and I'm tired, but I\nthought it might still be worth bringing up.\n\n\n> I spent some time wondering whether we could adjust the data structure\n> so that all the live entries in a hashtable are linked into one chain,\n> but I don't quite see how to do it without adding another list link to\n> struct HASHELEMENT, which seems pretty expensive.\n\nYea :(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Apr 2019 22:10:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if one approach to solve this wouldn't be to just make the\n> hashtable drastically smaller. Right now we'll often have have lots\n> empty entries that are 72 bytes + dynahash overhead. That's plenty of\n> memory that needs to be skipped over. I wonder if we instead should\n> have an array of held locallocks, and a hashtable with {hashcode,\n> offset_in_array} + custom comparator for lookups.\n\nWell, that's not going to work all that well for retail lock releases;\nyou'll end up with holes in the array, maybe a lot of them.\n\nHowever, it led me to think of another way we might approach the general\nhashtable problem: right now, we are not making any use of the fact that\nthe hashtable's entries are laid out in big slabs (arrays). What if we\ntried to ensure that the live entries are allocated fairly compactly in\nthose arrays, and then implemented hash_seq_search as a scan over the\narrays, ignoring the hash bucket structure per se?\n\nWe'd need a way to reliably tell a live entry from a free entry, but\nagain, there's plenty of space for a flag bit or two.\n\nThis might perform poorly if you allocated a bunch of entries,\nfreed most-but-not-all, and then wanted to seqscan the remainder;\nyou'd end up with the same problem I complained of above that\nyou're iterating over an array that's mostly uninteresting.\nIn principle we could keep count of the live vs free entries and\ndynamically decide to scan via the hash bucket structure instead of\nsearching the storage array when the array is too sparse; but that\nmight be overly complicated.\n\nI haven't tried to work this out in detail, it's just a late\nnight brainstorm. But, again, I'd much rather solve this in\ndynahash.c than by layering some kind of hack on top of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 01:39:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-04-06 05:03, Tom Lane wrote:\n> Trying a standard pgbench test case (pgbench -M prepared -S with\n> one client and an -s 10 database), it seems that the patch is about\n> 0.5% slower than HEAD. Again, that's below the noise threshold,\n> but it's not promising for the net effects of this patch on workloads\n> that aren't specifically about large and prunable partition sets.\n\nIn my testing, I've also noticed that it seems to be slightly on the\nslower side for these simple tests.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Apr 2019 10:07:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Sat, 6 Apr 2019 at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'd also point out that this is hardly the only place where we've\n> seen hash_seq_search on nearly-empty hash tables become a bottleneck.\n> So I'm not thrilled about attacking that with one-table-at-time patches.\n> I'd rather see us do something to let hash_seq_search win across\n> the board.\n\nRewinding back to mid-Feb:\n\nYou wrote:\n> My own thought about how to improve this situation was just to destroy\n> and recreate LockMethodLocalHash at transaction end (or start)\n> if its size exceeded $some-value. Leaving it permanently bloated seems\n> like possibly a bad idea, even if we get rid of all the hash_seq_searches\n> on it.\n\nWhich I thought was an okay idea. I think the one advantage that\nwould have over making hash_seq_search() faster for large and mostly\nempty tables is that over-sized hash tables are just not very cache\nefficient, and if we don't need it to be that large then we should\nprobably consider making it smaller again.\n\nI've had a go at implementing this and using Amit's benchmark the\nperformance looks pretty good. 
I can't detect any slowdown for the\ngeneral case.\n\nmaster:\n\nplan_cache_mode = auto:\n\n$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 9373.698212 (excluding connections establishing)\ntps = 9356.993148 (excluding connections establishing)\ntps = 9367.579806 (excluding connections establishing)\n\nplan_cache_mode = force_custom_plan:\n\n$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 12863.758185 (excluding connections establishing)\ntps = 12787.766054 (excluding connections establishing)\ntps = 12817.878940 (excluding connections establishing)\n\nshrink_bloated_locallocktable.patch:\n\nplan_cache_mode = auto:\n\n$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 12756.021211 (excluding connections establishing)\ntps = 12800.939518 (excluding connections establishing)\ntps = 12804.501977 (excluding connections establishing)\n\nplan_cache_mode = force_custom_plan:\n\n$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 12763.448836 (excluding connections establishing)\ntps = 12901.673271 (excluding connections establishing)\ntps = 12856.512745 (excluding connections establishing)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 8 Apr 2019 01:55:31 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sat, 6 Apr 2019 at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My own thought about how to improve this situation was just to destroy\n>> and recreate LockMethodLocalHash at transaction end (or start)\n>> if its size exceeded $some-value. Leaving it permanently bloated seems\n>> like possibly a bad idea, even if we get rid of all the hash_seq_searches\n>> on it.\n\n> Which I thought was an okay idea. I think the one advantage that\n> would have over making hash_seq_search() faster for large and mostly\n> empty tables is that over-sized hash tables are just not very cache\n> efficient, and if we don't need it to be that large then we should\n> probably consider making it smaller again.\n\n> I've had a go at implementing this and using Amit's benchmark the\n> performance looks pretty good. I can't detect any slowdown for the\n> general case.\n\nI like the concept ... but the particular implementation, not so much.\nIt seems way overcomplicated. In the first place, why should we\nadd code to copy entries? Just don't do it except when the table\nis empty. In the second, I think we could probably have a far\ncheaper test for how big the table is --- maybe we'd need to\nexpose some function in dynahash.c, but the right way here is just\nto see how many buckets there are. I don't like adding statistics\ncounting for this, because it's got basically nothing to do with\nwhat the actual problem is. (If you acquire and release one lock,\nand do that over and over, you don't have a bloat problem no\nmatter how many times you do it.)\n\nLockMethodLocalHash is special in that it predictably goes to empty\nat the end of every transaction, so that de-bloating at that point\nis a workable strategy. 
I think we'd probably need something more\nrobust if we were trying to fix this generally for all hash tables.\nBut if we're going to go with the one-off hack approach, we should\ncertainly try to keep that hack as simple as possible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 10:20:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 02:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I like the concept ... but the particular implementation, not so much.\n> It seems way overcomplicated. In the first place, why should we\n> add code to copy entries? Just don't do it except when the table\n> is empty. In the second, I think we could probably have a far\n> cheaper test for how big the table is --- maybe we'd need to\n> expose some function in dynahash.c, but the right way here is just\n> to see how many buckets there are. I don't like adding statistics\n> counting for this, because it's got basically nothing to do with\n> what the actual problem is. (If you acquire and release one lock,\n> and do that over and over, you don't have a bloat problem no\n> matter how many times you do it.)\n\nhash_get_num_entries() looks cheap enough to me. Can you explain why\nyou think that's too expensive?\n\n> LockMethodLocalHash is special in that it predictably goes to empty\n> at the end of every transaction, so that de-bloating at that point\n> is a workable strategy. I think we'd probably need something more\n> robust if we were trying to fix this generally for all hash tables.\n> But if we're going to go with the one-off hack approach, we should\n> certainly try to keep that hack as simple as possible.\n\nAs cheap as possible sounds good, but I'm confused at why you think\nthe table will always be empty at the end of transaction. It's my\nunderstanding and I see from debugging that session level locks remain\nin there. If I don't copy those into the new table they'll be lost.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 02:36:05 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 02:36, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> > LockMethodLocalHash is special in that it predictably goes to empty\n> > at the end of every transaction, so that de-bloating at that point\n> > is a workable strategy. I think we'd probably need something more\n> > robust if we were trying to fix this generally for all hash tables.\n> > But if we're going to go with the one-off hack approach, we should\n> > certainly try to keep that hack as simple as possible.\n>\n> As cheap as possible sounds good, but I'm confused at why you think\n> the table will always be empty at the end of transaction. It's my\n> understanding and I see from debugging that session level locks remain\n> in there. If I don't copy those into the new table they'll be lost.\n\nOr we could just skip the table recreation if there are no\nsession-levels. That would require calling hash_get_num_entries() on\nthe table again and just recreating the table if there are 0 locks.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 02:41:12 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 8 Apr 2019 at 02:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I like the concept ... but the particular implementation, not so much.\n>> It seems way overcomplicated. In the first place, why should we\n>> add code to copy entries? Just don't do it except when the table\n>> is empty. In the second, I think we could probably have a far\n>> cheaper test for how big the table is --- maybe we'd need to\n>> expose some function in dynahash.c, but the right way here is just\n>> to see how many buckets there are. I don't like adding statistics\n>> counting for this, because it's got basically nothing to do with\n>> what the actual problem is. (If you acquire and release one lock,\n>> and do that over and over, you don't have a bloat problem no\n>> matter how many times you do it.)\n\n> hash_get_num_entries() looks cheap enough to me. Can you explain why\n> you think that's too expensive?\n\nWhat I objected to cost-wise was counting the number of lock\nacquisitions/releases, which seems entirely beside the point.\n\nWe *should* be using hash_get_num_entries(), but only to verify\nthat the table is empty before resetting it. The additional bit\nthat is needed is to see whether the number of buckets is large\nenough to justify calling the table bloated.\n\n> As cheap as possible sounds good, but I'm confused at why you think\n> the table will always be empty at the end of transaction.\n\nIt's conceivable that it won't be, which is why we need a test.\nI'm simply arguing that if it is not, we can just postpone de-bloating\ntill it is. Session-level locks are so rarely used that there's no\nneed to sweat about that case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 10:59:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 02:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > hash_get_num_entries() looks cheap enough to me. Can you explain why\n> > you think that's too expensive?\n>\n> What I objected to cost-wise was counting the number of lock\n> acquisitions/releases, which seems entirely beside the point.\n>\n> We *should* be using hash_get_num_entries(), but only to verify\n> that the table is empty before resetting it. The additional bit\n> that is needed is to see whether the number of buckets is large\n> enough to justify calling the table bloated.\n\nThe reason I thought it was a good idea to track some history there\nwas to stop the lock table constantly being shrunk back to the default\nsize every time a simple single table query was executed. For example,\na workload repeatably doing:\n\nSELECT * FROM table_with_lots_of_partitions;\nSELECT * FROM non_partitioned_table;\n\nI was worried that obtaining locks on the partitioned table would\nbecome a little slower because it would have to expand the hash table\neach time the query is executed.\n\n> > As cheap as possible sounds good, but I'm confused at why you think\n> > the table will always be empty at the end of transaction.\n>\n> It's conceivable that it won't be, which is why we need a test.\n> I'm simply arguing that if it is not, we can just postpone de-bloating\n> till it is. Session-level locks are so rarely used that there's no\n> need to sweat about that case.\n\nThat seems fair. It would certainly simplify the patch.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 03:07:37 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 8 Apr 2019 at 02:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We *should* be using hash_get_num_entries(), but only to verify\n>> that the table is empty before resetting it. The additional bit\n>> that is needed is to see whether the number of buckets is large\n>> enough to justify calling the table bloated.\n\n> The reason I thought it was a good idea to track some history there\n> was to stop the lock table constantly being shrunk back to the default\n> size every time a simple single table query was executed.\n\nI think that's probably gilding the lily, considering that this whole\nissue is pretty new. There's no evidence that expanding the local\nlock table is a significant drag on queries that need a lot of locks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 11:20:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 03:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > The reason I thought it was a good idea to track some history there\n> > was to stop the lock table constantly being shrunk back to the default\n> > size every time a simple single table query was executed.\n>\n> I think that's probably gilding the lily, considering that this whole\n> issue is pretty new. There's no evidence that expanding the local\n> lock table is a significant drag on queries that need a lot of locks.\n\nOkay. Here's another version with all the average locks code removed\nthat only recreates the table when it's completely empty.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 8 Apr 2019 03:40:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 03:40:52 +1200, David Rowley wrote:\n> On Mon, 8 Apr 2019 at 03:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <david.rowley@2ndquadrant.com> writes:\n> > > The reason I thought it was a good idea to track some history there\n> > > was to stop the lock table constantly being shrunk back to the default\n> > > size every time a simple single table query was executed.\n> >\n> > I think that's probably gilding the lily, considering that this whole\n> > issue is pretty new. There's no evidence that expanding the local\n> > lock table is a significant drag on queries that need a lot of locks.\n> \n> Okay. Here's another version with all the average locks code removed\n> that only recreates the table when it's completely empty.\n\nCould you benchmark your adversarial case?\n\n- Andres\n\n\n",
"msg_date": "Sun, 7 Apr 2019 08:47:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 03:47, Andres Freund <andres@anarazel.de> wrote:\n> Could you benchmark your adversarial case?\n\nWhich case?\n\nI imagine the worst case for v2 is a query that just constantly asks\nfor over 16 locks. Perhaps a prepared plan, so not to add planner\noverhead.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 03:53:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Okay. Here's another version with all the average locks code removed\n> that only recreates the table when it's completely empty.\n\nUm ... I don't see where you're destroying the old hash?\n\nAlso, I entirely dislike wiring in assumptions about hash_seq_search's\nprivate state structure here. I think it's worth having an explicit\nentry point in dynahash.c to get the current number of buckets.\n\nAlso, I would not define \"significantly bloated\" as \"the table has\ngrown at all\". I think the threshold ought to be at least ~100\nbuckets, if we're starting at 16.\n\nProbably we ought to try to gather some evidence to inform the\nchoice of cutoff here. Maybe instrument the regression tests to\nsee how big the table typically gets?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Apr 2019 12:09:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 04:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um ... I don't see where you're destroying the old hash?\n\nIn CreateLocalLockHash.\n\n> Also, I entirely dislike wiring in assumptions about hash_seq_search's\n> private state structure here. I think it's worth having an explicit\n> entry point in dynahash.c to get the current number of buckets.\n\nOkay. Added hash_get_max_bucket()\n\n> Also, I would not define \"significantly bloated\" as \"the table has\n> grown at all\". I think the threshold ought to be at least ~100\n> buckets, if we're starting at 16.\n\nI wouldn't either. I don't think the comment says that. It says there\ncan be slowdowns when its significantly bloated, and then goes on to\nsay that we just resize when it's bigger than standard.\n\n> Probably we ought to try to gather some evidence to inform the\n> choice of cutoff here. Maybe instrument the regression tests to\n> see how big the table typically gets?\n\nIn partition_prune.sql I see use of a bucket as high as 285 on my machine with:\n\ndrop table lp, coll_pruning, rlp, mc3p, mc2p, boolpart, rp,\ncoll_pruning_multi, like_op_noprune, lparted_by_int2, rparted_by_int2;\n\nI've not added any sort of cut-off though as I benchmarked it and\nsurprisingly I don't see any slowdown with the worst case. 
So I'm\nthinking there might not be any point.\n\nalter system set plan_cache_mode = 'force_generic_plan';\ncreate table hp (a int primary key) partition by hash (a);\nselect 'create table hp' || x::text || ' partition of hp for values\nwith (modulus 32, remainder ' || (x)::text || ');' from\ngenerate_series(0,31) x;\n\\gexec\n\nselect.sql\n\\set p 1\nselect * from hp where a = :p\n\nMaster\n$ pgbench -n -M prepared -f select.sql -T 60 postgres\ntps = 11834.764309 (excluding connections establishing)\ntps = 12279.212223 (excluding connections establishing)\ntps = 12007.263547 (excluding connections establishing)\n\nPatched:\n$ pgbench -n -M prepared -f select.sql -T 60 postgres\ntps = 13380.684817 (excluding connections establishing)\ntps = 12790.999279 (excluding connections establishing)\ntps = 12568.892558 (excluding connections establishing)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 8 Apr 2019 04:48:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> On the whole I don't think there's an adequate case for committing\n> this patch.\n\nFrom: Andres Freund [mailto:andres@anarazel.de]\n> On 2019-04-05 23:03:11 -0400, Tom Lane wrote:\n> > If I reduce the number of partitions in Amit's example from 8192\n> > to something more real-world, like 128, I do still measure a\n> > performance gain, but it's ~ 1.5% which is below what I'd consider\n> > a reproducible win. I'm accustomed to seeing changes up to 2%\n> > in narrow benchmarks like this one, even when \"nothing changes\"\n> > except unrelated code.\n> \n> I'm not sure it's actually that narrow these days. With all the\n> partitioning improvements happening, the numbers of locks commonly held\n> are going to rise. And while 8192 partitions is maybe on the more\n> extreme side, it's a workload with only a single table, and plenty\n> workloads touch more than a single partitioned table.\n\nI would feel happy if I could say such a many-partitions use case is narrow or impractical and ignore it, but it's not narrow. Two of our customers are actually requesting such usage: one uses 5,500 partitions and is trying to migrate from a commercial database on Linux, and the other requires 200,000 partitions to migrate from a legacy database on a mainframe. At first, I thought such many partitions indicate a bad application design, but it sounded valid (or at least I can't insist that's bad). PostgreSQL is now expected to handle such huge workloads.\n\n\nFrom: Andres Freund [mailto:andres@anarazel.de]\n> I'm not sure I'm quite that concerned. For one, a good bit of that space\n> was up for grabs until the recent reordering of LOCALLOCK and nobody\n> complained. But more importantly, I think commonly the amount of locks\n> around is fairly constrained, isn't it? 
We can't really have that many\n> concurrently held locks, due to the shared memory space, and the size of\n> a LOCALLOCK isn't that big compared to say relcache entries. We also\n> probably fairly easily could win some space back - e.g. make nLocks 32\n> bits.\n\n+1\n\n\n\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> I'd also point out that this is hardly the only place where we've\n> seen hash_seq_search on nearly-empty hash tables become a bottleneck.\n> So I'm not thrilled about attacking that with one-table-at-time patches.\n> I'd rather see us do something to let hash_seq_search win across\n> the board.\n> \n> I spent some time wondering whether we could adjust the data structure\n> so that all the live entries in a hashtable are linked into one chain,\n> but I don't quite see how to do it without adding another list link to\n> struct HASHELEMENT, which seems pretty expensive.\n\nI think the linked list of LOCALLOCK approach is natural, simple, and good. In the Jim Gray's classic book \"Transaction processing: concepts and techniques\", we can find the following sentence in \"8.4.5 Lock Manager Internal Logic.\" The sample implementation code in the book uses a similar linked list to remember and release a transaction's acquired locks.\n\n\"All the locks of a transaction are kept in a list so they can be quickly found and released at commit or rollback.\"\n\nAnd handling this issue with the LOCALLOCK linked list is more natural than with the hash table resize. We just want to quickly find all grabbed locks, so we use a linked list. A hash table is a mechanism to find a particular item quickly. So it was merely wrong to use the hash table to iterate all grabbed locks. Also, the hash table got big because some operation in the session needed it, and some subsequent operations in the same session may need it again. So we wouldn't be relieved with shrinking the hash table.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 02:28:12 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:\n> I think the linked list of LOCALLOCK approach is natural, simple, and\n> good.\n\nDid you see that people measured slowdowns?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 19:40:12 -0700",
"msg_from": "'Andres Freund' <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: 'Andres Freund' [mailto:andres@anarazel.de]\n> On 2019-04-08 02:28:12 +0000, Tsunakawa, Takayuki wrote:\n> > I think the linked list of LOCALLOCK approach is natural, simple, and\n> > good.\n> \n> Did you see that people measured slowdowns?\n\nYeah, 0.5% decrease with pgbench -M prepared -S (select-only), which feels like a somewhat extreme test case. And that might be within noise as was mentioned.\n\nIf we want to remove even the noise, we may have to think of removing the LocalLockHash completely. But it doesn't seem feasible...\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 02:54:16 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 14:54, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: 'Andres Freund' [mailto:andres@anarazel.de]\n> > Did you see that people measured slowdowns?\n>\n> Yeah, 0.5% decrease with pgbench -M prepared -S (select-only), which feels like a somewhat extreme test case. And that might be within noise as was mentioned.\n>\n> If we want to remove even the noise, we may have to think of removing the LocalLockHash completely. But it doesn't seem feasible...\n\nIt would be good to get your view on the\nshrink_bloated_locallocktable_v3.patch I worked on last night. I was\nunable to measure any overhead to solving the problem that way.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:58:43 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> It would be good to get your view on the\r\n> shrink_bloated_locallocktable_v3.patch I worked on last night. I was\r\n> unable to measure any overhead to solving the problem that way.\r\n\r\nThanks, it looks super simple and good. I understood the idea behind your patch is:\r\n\r\n* Transactions that touch many partitions and/or tables are a special event and not normal, and the hash table bloat is an unlucky accident. So it's reasonable to revert the bloated hash table back to the original size.\r\n\r\n* Repeated transactions that acquire many locks have to enlarge the hash table every time. However, the overhead of hash table expansion should be hidden behind other various processing (acquiring/releasing locks, reading/writing the relations, accessing the catalogs of those relations)\r\n\r\n\r\nTBH, I think the linked list approach feels more intuitive because the resulting code looks what it wants to do (efficiently iterate over acquired locks) and is based on the classic book. But your approach seems to relieve people. So I'm OK with your patch.\r\n\r\nI'm registering you as another author and me as a reviewer, and marking this ready for committer.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 8 Apr 2019 03:46:59 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-04-08 05:46, Tsunakawa, Takayuki wrote:\n> I'm registering you as another author and me as a reviewer, and marking this ready for committer.\n\nMoved to next commit fest.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 14:10:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 8 Apr 2019 at 04:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Also, I would not define \"significantly bloated\" as \"the table has\n> grown at all\". I think the threshold ought to be at least ~100\n> buckets, if we're starting at 16.\n\nI've revised the patch to add a new constant named\nLOCKMETHODLOCALHASH_SHRINK_SIZE. I've set this to 64 for now. Once the\nhash table grows over that size we shrink it back down to\nLOCKMETHODLOCALHASH_INIT_SIZE, which I've kept at 16.\n\nI'm not opposed to setting it to 128. For this particular benchmark,\nit won't make any difference as it's only going to affect something\nthat does not quite use 128 locks and has to work with a slightly\nbloated local lock table. I think hitting 64 locks in a transaction is\na good indication that it's not a simple transaction so users are\nprobably unlikely to notice the small slowdown from the hash table\nreinitialisation.\n\nSince quite a bit has changed around partition planning lately, I've\ntaken a fresh set of benchmarks on today's master. I'm using something\nvery close to Amit's benchmark from upthread. 
I just changed the query\nso we hit the same partition each time instead of a random one.\n\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values\nwith (modulus 8192, remainder ' || (x)::text || ');' from\ngenerate_series(0,8191) x;\n\\gexec\n\nselect.sql:\n\\set p 1\nselect * from ht where a = :p\n\nmaster @ a193cbec119 + shrink_bloated_locallocktable_v4.patch:\n\nplan_cache_mode = 'auto';\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 14101.226982 (excluding connections establishing)\ntps = 14034.250962 (excluding connections establishing)\ntps = 14107.937755 (excluding connections establishing)\n\nplan_cache_mode = 'force_custom_plan';\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 14240.366770 (excluding connections establishing)\ntps = 14272.244886 (excluding connections establishing)\ntps = 14130.684315 (excluding connections establishing)\n\nmaster @ a193cbec119:\n\nplan_cache_mode = 'auto';\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 10467.027666 (excluding connections establishing)\ntps = 10333.700917 (excluding connections establishing)\ntps = 10633.084426 (excluding connections establishing)\n\nplan_cache_mode = 'force_custom_plan';\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -M prepared -T 60 -f select.sql postgres\ntps = 13938.083272 (excluding connections establishing)\ntps = 14143.241802 (excluding connections establishing)\ntps = 14097.406758 (excluding connections establishing)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 15 Jun 2019 15:28:05 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> I've revised the patch to add a new constant named\r\n> LOCKMETHODLOCALHASH_SHRINK_SIZE. I've set this to 64 for now. Once the hash\r\n\r\nThank you, and good performance. The patch passed make check.\r\n\r\nI'm OK with the current patch, but I have a few comments. Please take them as you see fit (I wouldn't mind if you don't.)\r\n\r\n\r\n(1)\r\n+#define LOCKMETHODLOCALHASH_SHRINK_SIZE 64\r\n\r\nHow about LOCKMETHODLOCALHASH_SHRINK_THRESHOLD, because this determines the threshold value to trigger shrinkage? Code in PostgreSQL seems to use the term threshold.\r\n\r\n\r\n(2)\r\n+/* Complain if the above are not set to something sane */\r\n+#if LOCKMETHODLOCALHASH_SHRINK_SIZE < LOCKMETHODLOCALHASH_INIT_SIZE\r\n+#error \"invalid LOCKMETHODLOCALHASH_SHRINK_SIZE\"\r\n+#endif\r\n\r\nI don't think these are necessary, because these are fixed and not configurable. FYI, src/include/utils/memutils.h doesn't have #error to test these macros.\r\n\r\n#define ALLOCSET_DEFAULT_MINSIZE 0\r\n#define ALLOCSET_DEFAULT_INITSIZE (8 * 1024)\r\n#define ALLOCSET_DEFAULT_MAXSIZE (8 * 1024 * 1024)\r\n\r\n\r\n(3)\r\n+\tif (hash_get_num_entries(LockMethodLocalHash) == 0 &&\r\n+\t\thash_get_max_bucket(LockMethodLocalHash) >\r\n+\t\tLOCKMETHODLOCALHASH_SHRINK_SIZE)\r\n+\t\tCreateLocalLockHash();\r\n\r\nI get an impression that Create just creates something where there's nothing. How about Init or Recreate?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 17 Jun 2019 03:04:51 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 17 Jun 2019 at 15:05, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n> (1)\n> +#define LOCKMETHODLOCALHASH_SHRINK_SIZE 64\n>\n> How about LOCKMETHODLOCALHASH_SHRINK_THRESHOLD, because this determines the threshold value to trigger shrinkage? Code in PostgreSQL seems to use the term threshold.\n\nThat's probably better. I've renamed it to that.\n\n> (2)\n> +/* Complain if the above are not set to something sane */\n> +#if LOCKMETHODLOCALHASH_SHRINK_SIZE < LOCKMETHODLOCALHASH_INIT_SIZE\n> +#error \"invalid LOCKMETHODLOCALHASH_SHRINK_SIZE\"\n> +#endif\n>\n> I don't think these are necessary, because these are fixed and not configurable. FYI, src/include/utils/memutils.h doesn't have #error to test these macros.\n\nYeah. I was thinking it was overkill when I wrote it, but somehow\ncouldn't bring myself to remove it. Done now.\n\n> (3)\n> + if (hash_get_num_entries(LockMethodLocalHash) == 0 &&\n> + hash_get_max_bucket(LockMethodLocalHash) >\n> + LOCKMETHODLOCALHASH_SHRINK_SIZE)\n> + CreateLocalLockHash();\n>\n> I get an impression that Create just creates something where there's nothing. How about Init or Recreate?\n\nRenamed to InitLocalLoclHash()\n\nv5 is attached.\n\nThank you for the review.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 27 Jun 2019 11:00:56 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\nv5 is attached.\r\n\r\n\r\nThank you, looks good. I find it ready for committer (I noticed the status is already set so.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Thu, 27 Jun 2019 00:58:51 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, 27 Jun 2019 at 12:59, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n> Thank you, looks good. I find it ready for committer (I noticed the status is already set so.)\n\nThanks for looking.\n\nI've just been looking at this again and I thought I'd better check\nthe performance of the worst case for the patch, where the hash table\nis rebuilt each query.\n\nTo do this I first created a single column 70 partition partitioned\ntable (\"p\") and left it empty.\n\nI then checked the performance of:\n\nSELECT * FROM p;\n\nHaving 70 partitions means that the lock table's max bucket goes over\nthe LOCKMETHODLOCALHASH_SHRINK_THRESHOLD which is set to 64 and\nresults in the table being rebuilt each time the query is run.\n\nThe performance was as follows:\n\n70 partitions: LOCKMETHODLOCALHASH_SHRINK_THRESHOLD = 64\n\nmaster + shrink_bloated_locallocktable_v5.patch:\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -T 60 -f select1.sql -M prepared postgres\ntps = 8427.053378 (excluding connections establishing)\ntps = 8583.251821 (excluding connections establishing)\ntps = 8569.587268 (excluding connections establishing)\ntps = 8552.988483 (excluding connections establishing)\ntps = 8527.735108 (excluding connections establishing)\n\nmaster (93907478):\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -T 60 -f select1.sql -M prepared postgres\ntps = 8712.919411 (excluding connections establishing)\ntps = 8760.190372 (excluding connections establishing)\ntps = 8755.069470 (excluding connections establishing)\ntps = 8747.389735 (excluding connections establishing)\ntps = 8758.275202 (excluding connections establishing)\n\npatched is 2.45% slower\n\n\nIf I increase the partition count to 140 and put the\nLOCKMETHODLOCALHASH_SHRINK_THRESHOLD up to 128, then the performance\nis as follows:\n\nmaster + shrink_bloated_locallocktable_v5.patch:\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -T 60 -f select1.sql -M 
prepared postgres\ntps = 2548.917856 (excluding connections establishing)\ntps = 2561.283564 (excluding connections establishing)\ntps = 2549.669870 (excluding connections establishing)\ntps = 2421.971864 (excluding connections establishing)\ntps = 2428.983660 (excluding connections establishing)\n\nMaster (93907478):\n\nubuntu@ip-10-0-0-201:~$ pgbench -n -T 60 -f select1.sql -M prepared postgres\ntps = 2605.407529 (excluding connections establishing)\ntps = 2600.691426 (excluding connections establishing)\ntps = 2594.123983 (excluding connections establishing)\ntps = 2455.745644 (excluding connections establishing)\ntps = 2450.061483 (excluding connections establishing)\n\npatched is 1.54% slower\n\nI'd rather not put the LOCKMETHODLOCALHASH_SHRINK_THRESHOLD up any\nhigher than 128 since it can detract from the improvement we're trying\nto make with this patch.\n\nNow, this case of querying a partitioned table that happens to be\ncompletely empty seems a bit unrealistic. Something more realistic\nmight be index scanning all partitions to find a value that only\nexists in a single partition. Assuming the partitions actually have\nsome records, then that's going to be a more expensive query, so the\noverhead of rebuilding the table will be less noticeable.\n\nA previous version of the patch has already had some heuristics to try\nto only rebuild the hash table when it's likely beneficial. I'd rather\nnot go exploring in that area again.\n\nIs anyone particularly concerned about the worst-case slowdown here\nbeing about 1.54%? The best case, and arguably a more realistic case\nabove showed a 34% speedup for the best case.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jul 2019 14:53:50 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, 18 Jul 2019 at 14:53, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> Is anyone particularly concerned about the worst-case slowdown here\n> being about 1.54%? The best case, and arguably a more realistic case\n> above showed a 34% speedup for the best case.\n\nI took a bit more time to test the performance on this. I thought I\nmight have been a bit unfair on the patch by giving it completely\nempty tables to look at. It just seems too unrealistic to have a large\nnumber of completely empty partitions. I decided to come up with a\nmore realistic case where there are 140 partitions but we're\nperforming an index scan to find a record that can exist in only 1 of\nthe 140 partitions.\n\ncreate table hp (a int, b int) partition by hash(a);\nselect 'create table hp'||x||' partition of hp for values with\n(modulus 140, remainder ' || x || ');' from generate_series(0,139)x;\ncreate index on hp (b);\ninsert into hp select x,x from generate_series(1, 140000) x;\nanalyze hp;\n\nselect3.sql: select * from hp where b = 1\n\nMaster:\n\n$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2124.591367 (excluding connections establishing)\ntps = 2158.754837 (excluding connections establishing)\ntps = 2146.348465 (excluding connections establishing)\ntps = 2148.995800 (excluding connections establishing)\ntps = 2154.223600 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2002.480729 (excluding connections establishing)\ntps = 1997.272944 (excluding connections establishing)\ntps = 1992.527079 (excluding connections establishing)\ntps = 1995.789367 (excluding connections establishing)\ntps = 2001.195760 (excluding connections establishing)\n\nso it turned out it's even slower, and not by a small amount either!\nPatched is 6.93% slower on average with this case :-(\n\nI went back to the drawing board on this and I've added some code that\ncounts the number of times we've seen the 
table to be oversized and\njust shrinks the table back down on the 1000th time. 6.93% / 1000 is\nnot all that much. Of course, not all the extra overhead might be from\nrebuilding the table, so here's a test with the updated patch.\n\n$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2142.414323 (excluding connections establishing)\ntps = 2139.666899 (excluding connections establishing)\ntps = 2138.744789 (excluding connections establishing)\ntps = 2138.300299 (excluding connections establishing)\ntps = 2137.346278 (excluding connections establishing)\n\nJust a 0.34% drop. Pretty hard to pick that out of the noise.\n\nTesting the original case that shows the speedup:\n\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values\nwith (modulus 8192, remainder ' || (x)::text || ');' from\ngenerate_series(0,8191) x;\n\\gexec\n\nselect.sql:\n\\set p 1\nselect * from ht where a = :p\n\nMaster:\n\n$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 10172.035036 (excluding connections establishing)\ntps = 10192.780529 (excluding connections establishing)\ntps = 10331.306003 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 15080.765549 (excluding connections establishing)\ntps = 14994.404069 (excluding connections establishing)\ntps = 14982.923480 (excluding connections establishing)\n\nThat seems fine, 46% faster.\n\nv6 is attached.\n\nI plan to push this in a few days unless someone objects.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 21 Jul 2019 21:37:28 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> I went back to the drawing board on this and I've added some code that counts\r\n> the number of times we've seen the table to be oversized and just shrinks\r\n> the table back down on the 1000th time. 6.93% / 1000 is not all that much.\r\n\r\nI'm afraid this kind of hidden behavior would appear mysterious to users. They may wonder \"Why is the same query fast at first in the session (5 or 6 times of execution), then gets slower for a while, and gets faster again? Is there something to tune? Am I missing something wrong with my app (e.g. how to use prepared statements)?\" So I prefer v5.\r\n\r\n\r\n> Of course, not all the extra overhead might be from rebuilding the table,\r\n> so here's a test with the updated patch.\r\n\r\nWhere else does the extra overhead come from?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 22 Jul 2019 00:47:28 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n> > I went back to the drawing board on this and I've added some code that counts\n> > the number of times we've seen the table to be oversized and just shrinks\n> > the table back down on the 1000th time. 6.93% / 1000 is not all that much.\n>\n> I'm afraid this kind of hidden behavior would appear mysterious to users. They may wonder \"Why is the same query fast at first in the session (5 or 6 times of execution), then gets slower for a while, and gets faster again? Is there something to tune? Am I missing something wrong with my app (e.g. how to use prepared statements)?\" So I prefer v5.\n\nI personally don't think that's true. The only way you'll notice the\nLockReleaseAll() overhead is to execute very fast queries with a\nbloated lock table. It's pretty hard to notice that a single 0.1ms\nquery is slow. You'll need to execute thousands of them before you'll\nbe able to measure it, and once you've done that, the lock shrink code\nwill have run and the query will be performing optimally again.\n\nI voice my concerns with v5 and I wasn't really willing to push it\nwith a known performance regression of 7% in a fairly valid case. v6\ndoes not suffer from that.\n\n> > Of course, not all the extra overhead might be from rebuilding the table,\n> > so here's a test with the updated patch.\n>\n> Where else does the extra overhead come from?\n\nhash_get_num_entries(LockMethodLocalHash) == 0 &&\n+ hash_get_max_bucket(LockMethodLocalHash) >\n+ LOCKMETHODLOCALHASH_SHRINK_THRESHOLD)\n\nthat's executed every time, not every 1000 times.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:21:04 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> I personally don't think that's true. The only way you'll notice the\r\n> LockReleaseAll() overhead is to execute very fast queries with a\r\n> bloated lock table. It's pretty hard to notice that a single 0.1ms\r\n> query is slow. You'll need to execute thousands of them before you'll\r\n> be able to measure it, and once you've done that, the lock shrink code\r\n> will have run and the query will be performing optimally again.\r\n\r\nMaybe so. Will the difference be noticeable between plan_cache_mode=auto (default) and plan_cache_mode=custom?\r\n\r\n\r\n> I voice my concerns with v5 and I wasn't really willing to push it\r\n> with a known performance regression of 7% in a fairly valid case. v6\r\n> does not suffer from that.\r\n\r\nYou're right. We may have to consider the unpredictability to users by this hidden behavior as a compromise for higher throughput.\r\n\r\n\r\n> > Where else does the extra overhead come from?\r\n> \r\n> hash_get_num_entries(LockMethodLocalHash) == 0 &&\r\n> + hash_get_max_bucket(LockMethodLocalHash) >\r\n> + LOCKMETHODLOCALHASH_SHRINK_THRESHOLD)\r\n> \r\n> that's executed every time, not every 1000 times.\r\n\r\nI see. Thanks.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 22 Jul 2019 02:21:16 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 14:21, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n> > I personally don't think that's true. The only way you'll notice the\n> > LockReleaseAll() overhead is to execute very fast queries with a\n> > bloated lock table. It's pretty hard to notice that a single 0.1ms\n> > query is slow. You'll need to execute thousands of them before you'll\n> > be able to measure it, and once you've done that, the lock shrink code\n> > will have run and the query will be performing optimally again.\n>\n> Maybe so. Will the difference be noticeable between plan_cache_mode=auto (default) and plan_cache_mode=custom?\n\nFor the use case we've been measuring with partitioned tables and the\ngeneric plan generation causing a sudden spike in the number of\nobtained locks, then having plan_cache_mode = force_custom_plan will\ncause the lock table not to become bloated. I'm not sure there's\nanything interesting to measure there. The only additional code that\ngets executed is the hash_get_num_entries() and possibly\nhash_get_max_bucket. Maybe it's worth swapping the order of those\ncalls since most of the time the entry will be 0 and the max bucket\ncount threshold won't be exceeded.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 14:29:33 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> For the use case we've been measuring with partitioned tables and the\r\n> generic plan generation causing a sudden spike in the number of\r\n> obtained locks, then having plan_cache_mode = force_custom_plan will\r\n> cause the lock table not to become bloated. I'm not sure there's\r\n> anything interesting to measure there.\r\n\r\nI meant the difference between the following two cases, where the query only touches one partition (e.g. SELECT ... WHERE pkey = value):\r\n\r\n* plan_cache_mode=force_custom_plan: LocalLockHash won't bloat. The query execution time is steady.\r\n\r\n* plan_cache_mode=auto: LocalLockHash bloats on the sixth execution due to the creation of the generic plan. The generic plan is not adopted because its cost is high. Later executions of the query will suffer from the bloat until the 1006th execution when LocalLockHash is shrunk.\r\n\r\n\r\nDepending on the number of transactions and what each transaction does, I thought the difference will be noticeable or not.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Mon, 22 Jul 2019 04:34:59 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 16:36, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n> > For the use case we've been measuring with partitioned tables and the\n> > generic plan generation causing a sudden spike in the number of\n> > obtained locks, then having plan_cache_mode = force_custom_plan will\n> > cause the lock table not to become bloated. I'm not sure there's\n> > anything interesting to measure there.\n>\n> I meant the difference between the following two cases, where the query only touches one partition (e.g. SELECT ... WHERE pkey = value):\n>\n> * plan_cache_mode=force_custom_plan: LocalLockHash won't bloat. The query execution time is steady.\n>\n> * plan_cache_mode=auto: LocalLockHash bloats on the sixth execution due to the creation of the generic plan. The generic plan is not adopted because its cost is high. Later executions of the query will suffer from the bloat until the 1006th execution when LocalLockHash is shrunk.\n\nI measured this again in\nhttps://www.postgresql.org/message-id/CAKJS1f_ycJ-6QTKC70pZRYdwsAwUo+t0_CV0eXC=J-b5A2MegQ@mail.gmail.com\nwhere I posted the v6 patch. It's the final results in the email. I\ndidn't measure plan_cache_mode = force_custom_plan. There'd be no lock\ntable bloat in that case and the additional overhead would just be\nfrom the hash_get_num_entries() && hash_get_max_bucket() calls, which\nthe first results show next to no overhead for.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 16:50:38 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\n> > I went back to the drawing board on this and I've added some code that counts\n> > the number of times we've seen the table to be oversized and just shrinks\n> > the table back down on the 1000th time. 6.93% / 1000 is not all that much.\n>\n> I'm afraid this kind of hidden behavior would appear mysterious to users. They may wonder \"Why is the same query fast at first in the session (5 or 6 times of execution), then gets slower for a while, and gets faster again? Is there something to tune? Am I missing something wrong with my app (e.g. how to use prepared statements)?\" So I prefer v5.\n\nAnother counter-argument to this is that there's already an\nunexplainable slowdown after you run a query which obtains a large\nnumber of locks in a session or use prepared statements and a\npartitioned table with the default plan_cache_mode setting. Are we not\njust righting a wrong here? Albeit, possibly 1000 queries later.\n\nI am, of course, open to other ideas which solve the problem that v5\nhas, but failing that, I don't see v6 as all that bad. At least all\nthe logic is contained in one function. I know Tom wanted to steer\nclear of heuristics to reinitialise the table, but most of the stuff\nthat was in the patch back when he complained was trying to track the\naverage number of locks over the previous N transactions, and his\nconcerns were voiced before I showed the 7% performance regression\nwith unconditionally rebuilding the table.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 23:14:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-21 21:37:28 +1200, David Rowley wrote:\n> select.sql:\n> \\set p 1\n> select * from ht where a = :p\n> \n> Master:\n> \n> $ pgbench -n -f select.sql -T 60 -M prepared postgres\n> tps = 10172.035036 (excluding connections establishing)\n> tps = 10192.780529 (excluding connections establishing)\n> tps = 10331.306003 (excluding connections establishing)\n> \n> Patched:\n> \n> $ pgbench -n -f select.sql -T 60 -M prepared postgres\n> tps = 15080.765549 (excluding connections establishing)\n> tps = 14994.404069 (excluding connections establishing)\n> tps = 14982.923480 (excluding connections establishing)\n> \n> That seems fine, 46% faster.\n> \n> v6 is attached.\n> \n> I plan to push this in a few days unless someone objects.\n\nIt does seem far less objectionable than the other case. I hate to\nthrow in one more wrench into a topic finally making progress, but: Have\neither of you considered just replacing the dynahash table with a\nsimplehash style one? Given the obvious speed sensitivity, and the fact\nthat for it (in contrast to the shared lock table) no partitioning is\nneeded, that seems like a good thing to try. It seems quite possible\nthat both the iteration and plain manipulations are going to be faster,\ndue to far less indirections - e.g. the iteration through the array will\njust be an array walk with a known stride, far easier for the CPU to\nprefetch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jul 2019 09:07:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> Another counter-argument to this is that there's already an\r\n> unexplainable slowdown after you run a query which obtains a large\r\n> number of locks in a session or use prepared statements and a\r\n> partitioned table with the default plan_cache_mode setting. Are we not\r\n> just righting a wrong here? Albeit, possibly 1000 queries later.\r\n> \r\n> I am, of course, open to other ideas which solve the problem that v5\r\n> has, but failing that, I don't see v6 as all that bad. At least all\r\n> the logic is contained in one function. I know Tom wanted to steer\r\n> clear of heuristics to reinitialise the table, but most of the stuff\r\n> that was in the patch back when he complained was trying to track the\r\n> average number of locks over the previous N transactions, and his\r\n> concerns were voiced before I showed the 7% performance regression\r\n> with unconditionally rebuilding the table.\r\n\r\nI think I understood what you mean. Sorry, I don't have a better idea. This unexplanability is probably what we should accept to avoid the shocking 7% slowdown.\r\n\r\nOTOH, how about my original patch that is based on the local lock list? I expect that it won't that significant slowdown in the same test case. If it's not satisfactory, then v6 is the best to commit.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Tue, 23 Jul 2019 03:46:59 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, 23 Jul 2019 at 15:47, Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n> OTOH, how about my original patch that is based on the local lock list? I expect that it won't that significant slowdown in the same test case. If it's not satisfactory, then v6 is the best to commit.\n\nI think we need to move beyond the versions that have been reviewed\nand rejected. I don't think the reasons for their rejection will have\nchanged.\n\nI've attached v7, which really is v6 with some comments adjusted and\nthe order of the hash_get_num_entries and hash_get_max_bucket function\ncalls swapped. I think hash_get_num_entries() will return 0 most of\nthe time where we're calling it, so it makes sense to put the\ncondition that's less likely to be true first in the if condition.\n\nI'm pretty happy with v7 now. If anyone has any objections to it,\nplease speak up very soon. Otherwise, I plan to push it about this\ntime tomorrow.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 23 Jul 2019 18:11:51 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I've attached v7, which really is v6 with some comments adjusted and\n> the order of the hash_get_num_entries and hash_get_max_bucket function\n> calls swapped. I think hash_get_num_entries() will return 0 most of\n> the time where we're calling it, so it makes sense to put the\n> condition that's less likely to be true first in the if condition.\n\n> I'm pretty happy with v7 now. If anyone has any objections to it,\n> please speak up very soon. Otherwise, I plan to push it about this\n> time tomorrow.\n\nI dunno, this seems close to useless in this form. As it stands,\nonce hash_get_max_bucket has exceeded the threshold, you will\narbitrarily reset the table 1000 transactions later (since the\nmax bucket is certainly not gonna decrease otherwise). So that's\ngot two big problems:\n\n1. In the assumed-common case where most of the transactions take\nfew locks, you wasted cycles for 999 transactions.\n\n2. You'll reset the table independently of subsequent history,\neven if the session's usage is pretty much always over the\nthreshold. Admittedly, if you do this only once per 1K\ntransactions, it's probably not a horrible overhead --- but you\ncan't improve point 1 without making it a bigger overhead.\n\nI did complain about the cost of tracking the stats proposed by\nsome earlier patches, but I don't think the answer to that is to\nnot track any stats at all. The proposed hash_get_max_bucket()\nfunction is quite cheap, so I think it wouldn't be out of line to\ntrack the average value of that at transaction end over the\nsession's lifespan, and reset if the current value is more than\nsome-multiple of the average.\n\nThe tricky part here is that if some xact kicks that up to\nsay 64 entries, and we don't choose to reset, then the reading\nfor subsequent transactions will be 64 even if they use very\nfew locks. So we probably need to not use a plain average,\nbut account for that effect somehow. 
Maybe we could look at\nhow quickly the number goes up after we reset?\n\n[ thinks for awhile... ] As a straw-man proposal, I suggest\nthe following (completely untested) idea:\n\n* Make the table size threshold variable, not constant.\n\n* If, at end-of-transaction when the table is empty,\nthe table bucket count exceeds the threshold, reset\nimmediately; but if it's been less than K transactions\nsince the last reset, increase the threshold (by say 10%).\n\nI think K can be a constant; somewhere between 10 and 100 would\nprobably work. At process start, we should act as though the last\nreset were more than K transactions ago (i.e., don't increase the\nthreshold at the first reset).\n\nThe main advantage this has over v7 is that we don't have the\n1000-transaction delay before reset, which ISTM is giving up\nmuch of the benefit of the whole idea. Also, if the session\nis consistently using lots of locks, we'll adapt to that after\nawhile and not do useless table resets.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 12:13:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Wed, 24 Jul 2019 at 04:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I'm pretty happy with v7 now. If anyone has any objections to it,\n> > please speak up very soon. Otherwise, I plan to push it about this\n> > time tomorrow.\n>\n> I dunno, this seems close to useless in this form. As it stands,\n> once hash_get_max_bucket has exceeded the threshold, you will\n> arbitrarily reset the table 1000 transactions later (since the\n> max bucket is certainly not gonna decrease otherwise). So that's\n> got two big problems:\n>\n> 1. In the assumed-common case where most of the transactions take\n> few locks, you wasted cycles for 999 transactions.\n>\n> 2. You'll reset the table independently of subsequent history,\n> even if the session's usage is pretty much always over the\n> threshold. Admittedly, if you do this only once per 1K\n> transactions, it's probably not a horrible overhead --- but you\n> can't improve point 1 without making it a bigger overhead.\n\nThis is true, but I think you might be overestimating just how much\neffort is wasted with #1. We're only seeing this overhead in small\nvery fast to execute xacts. In the test case in [1], I was getting\nabout 10k TPS unpatched and about 15k patched. This means that, on\naverage, 1 unpatched xact takes 100 microseconds and 1 average patched\nxact takes 66 microseconds, so the additional time spent doing the\nhash_seq_search() must be about 34 microseconds. So we'll waste a\ntotal of 34 *milliseconds* if we wait for 1000 xacts before we reset\nthe table. With 10k TPS we're going to react to the change in 0.1\nseconds.\n\nI think you'd struggle to measure that 1 xact is taking 34\nmicroseconds longer without running a few thousand queries. 
My view is\nthat nobody is ever going to notice that it takes 1k xacts to shrink\nthe table, and I've already shown that the overhead of the shrink\nevery 1k xacts is tiny. I mentioned 0.34% in [1] using v6. This is\nlikely a bit smaller in v7 due to swapping the order of the if\ncondition to put the less likely case first. Since the overhead of\nrebuilding the table was 7% when done every xact, then it stands to\nreason that this has become 0.007% doing it every 1k xacts and that the\nadditional overhead to make up that 0.34% is from testing if the reset\ncondition has been met (or noise). That's not something we can remove\ncompletely with any solution that resets the hash table.\n\n> I did complain about the cost of tracking the stats proposed by\n> some earlier patches, but I don't think the answer to that is to\n> not track any stats at all. The proposed hash_get_max_bucket()\n> function is quite cheap, so I think it wouldn't be out of line to\n> track the average value of that at transaction end over the\n> session's lifespan, and reset if the current value is more than\n> some-multiple of the average.\n>\n> The tricky part here is that if some xact kicks that up to\n> say 64 entries, and we don't choose to reset, then the reading\n> for subsequent transactions will be 64 even if they use very\n> few locks. So we probably need to not use a plain average,\n> but account for that effect somehow. Maybe we could look at\n> how quickly the number goes up after we reset?\n>\n> [ thinks for awhile... ] As a straw-man proposal, I suggest\n> the following (completely untested) idea:\n>\n> * Make the table size threshold variable, not constant.\n>\n> * If, at end-of-transaction when the table is empty,\n> the table bucket count exceeds the threshold, reset\n> immediately; but if it's been less than K transactions\n> since the last reset, increase the threshold (by say 10%).\n>\n> I think K can be a constant; somewhere between 10 and 100 would\n> probably work. 
At process start, we should act as though the last\n> reset were more than K transactions ago (i.e., don't increase the\n> threshold at the first reset).\n\nI think the problem with this idea is that there is no way once the\nthreshold has been enlarged to recover from that to work better\nworkloads that require very few locks again. If we end up with some\nlarge value for the variable threshold, there's no way to decrease\nthat again. All this method stops is the needless hash table resets\nif the typical case requires many locks. The only way to know if we\ncan reduce the threshold again is to count the locks released during\nLockReleaseAll(). Some version of the patch did that, and you\nobjected.\n\n> The main advantage this has over v7 is that we don't have the\n> 1000-transaction delay before reset, which ISTM is giving up\n> much of the benefit of the whole idea. Also, if the session\n> is consistently using lots of locks, we'll adapt to that after\n> awhile and not do useless table resets.\n\nTrue, but you neglected to mention the looming and critical drawback,\nwhich pretty much makes that idea useless. All we'd need to do is give\nthis a workload that throws that variable threshold up high so that it\ncan't recover. It would be pretty simple then to show that\nLockReleaseAll() is still slow with workloads that just require a\nsmall number of locks... permanently with no means to recover.\n\nTo be able to reduce the threshold down again we'd need to make a\nhash_get_num_entries(LockMethodLocalHash) call before performing the\nguts of LockReleaseAll(). We could then weight that onto some running\naverage counter with a weight of, say... 10, so we react to changes\nfairly quickly, but not instantly. We could then have some sort of\nlogic like \"rebuild the hash table if running average 4 times less\nthan max_bucket\"\n\nI've attached a spreadsheet of that idea and the algorithm we could\nuse to track the running average. 
Initially, I've mocked it up a\nseries of transactions that use 1000 locks, then at row 123 dropped\nthat to 10 locks. If we assume the max_bucket is 1000, then it takes\nuntil row 136 for the running average to drop below the max_bucket\ncount, i.e 13 xacts. There we'd reset there at the init size of 16. If\nthe average went up again, then we'd automatically expand the table as\nwe do now. To make this work we'd need an additional call to\nhash_get_num_entries(), before we release the locks, so there is more\noverhead.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_ycJ-6QTKC70pZRYdwsAwUo+t0_CV0eXC=J-b5A2MegQ@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 24 Jul 2019 15:05:37 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 24 Jul 2019 at 15:05, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> To be able to reduce the threshold down again we'd need to make a\n> hash_get_num_entries(LockMethodLocalHash) call before performing the\n> guts of LockReleaseAll(). We could then weight that onto some running\n> average counter with a weight of, say... 10, so we react to changes\n> fairly quickly, but not instantly. We could then have some sort of\n> logic like \"rebuild the hash table if running average 4 times less\n> than max_bucket\"\n>\n> I've attached a spreadsheet of that idea and the algorithm we could\n> use to track the running average. Initially, I've mocked it up a\n> series of transactions that use 1000 locks, then at row 123 dropped\n> that to 10 locks. If we assume the max_bucket is 1000, then it takes\n> until row 136 for the running average to drop below the max_bucket\n> count, i.e 13 xacts. There we'd reset there at the init size of 16. If\n> the average went up again, then we'd automatically expand the table as\n> we do now. To make this work we'd need an additional call to\n> hash_get_num_entries(), before we release the locks, so there is more\n> overhead.\n\nHere's a patch with this implemented. I've left a NOTICE in there to\nmake it easier for people to follow along at home and see when the\nlock table is reset.\n\nThere will be a bit of additional overhead to the reset detection\nlogic over the v7 patch. Namely: additional hash_get_num_entries()\ncall before releasing the locks, and more complex and floating-point\nmaths instead of more simple integer maths in v7.\n\nHere's a demo with the debug NOTICE in there to show us what's going on.\n\n-- setup\ncreate table a (a int) partition by range (a);\nselect 'create table a'||x||' partition of a for values from('||x||')\nto ('||x+1||');' from generate_Series(1,1000) x;\n\\gexec\n\n$ psql postgres\nNOTICE: max_bucket = 15, threshold = 64.000000, running_avg_locks\n0.100000 Reset? 
No\npsql (13devel)\n# \\o /dev/null\n# select * from a where a < 100;\nNOTICE: max_bucket = 101, threshold = 64.000000, running_avg_locks\n10.090000 Reset? Yes\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 76.324005, running_avg_locks\n19.081001 Reset? Yes\n# select * from a where a < 100;\n\nA couple of needless resets there... Maybe we can get rid of those by\nsetting the initial running average up to something higher than 0.0.\n\nNOTICE: max_bucket = 99, threshold = 108.691605, running_avg_locks\n27.172901 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 137.822449, running_avg_locks\n34.455612 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 164.040207, running_avg_locks\n41.010052 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 187.636185, running_avg_locks\n46.909046 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 208.872559, running_avg_locks\n52.218140 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 227.985306, running_avg_locks\n56.996326 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 245.186768, running_avg_locks\n61.296692 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 260.668091, running_avg_locks\n65.167023 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 274.601288, running_avg_locks\n68.650322 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 287.141174, running_avg_locks\n71.785294 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 298.427063, running_avg_locks\n74.606766 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 308.584351, running_avg_locks\n77.146088 Reset? 
No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 317.725922, running_avg_locks\n79.431480 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 325.953339, running_avg_locks\n81.488335 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 333.358002, running_avg_locks\n83.339500 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 340.022217, running_avg_locks\n85.005554 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 346.019989, running_avg_locks\n86.504997 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 351.417999, running_avg_locks\n87.854500 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 356.276184, running_avg_locks\n89.069046 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 360.648560, running_avg_locks\n90.162140 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 364.583710, running_avg_locks\n91.145927 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 368.125336, running_avg_locks\n92.031334 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 371.312805, running_avg_locks\n92.828201 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 374.181519, running_avg_locks\n93.545380 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 376.763367, running_avg_locks\n94.190842 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 379.087036, running_avg_locks\n94.771759 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 381.178345, running_avg_locks\n95.294586 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 383.060516, running_avg_locks\n95.765129 Reset? 
No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 384.754456, running_avg_locks\n96.188614 Reset? No\n# select * from a where a < 100;\nNOTICE: max_bucket = 99, threshold = 386.279022, running_avg_locks\n96.569756 Reset? No\n\n-- Here I switch to only selecting from 9 partitions instead of 99.\n\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 351.651123, running_avg_locks\n87.912781 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 320.486023, running_avg_locks\n80.121506 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 292.437408, running_avg_locks\n73.109352 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 267.193665, running_avg_locks\n66.798416 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 244.474304, running_avg_locks\n61.118576 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 224.026871, running_avg_locks\n56.006718 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 205.624176, running_avg_locks\n51.406044 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 189.061752, running_avg_locks\n47.265438 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 174.155579, running_avg_locks\n43.538895 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 160.740021, running_avg_locks\n40.185005 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 148.666016, running_avg_locks\n37.166504 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 137.799408, running_avg_locks\n34.449852 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 128.019470, running_avg_locks\n32.004868 Reset? 
No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 119.217522, running_avg_locks\n29.804380 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 111.295769, running_avg_locks\n27.823942 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 104.166191, running_avg_locks\n26.041548 Reset? No\n# select * from a where a < 10;\nNOTICE: max_bucket = 99, threshold = 97.749573, running_avg_locks\n24.437393 Reset? Yes\n\nIt took 17 xacts to react to the change and reset the lock table.\n\n# select * from a where a < 10;\nNOTICE: max_bucket = 15, threshold = 91.974617, running_avg_locks\n22.993654 Reset? No\n\nnotice max_bucket is back at 15 again.\n\nAny thoughts on this?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 24 Jul 2019 16:16:47 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 24 Jul 2019 at 16:16, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>\n> On Wed, 24 Jul 2019 at 15:05, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> > To be able to reduce the threshold down again we'd need to make a\n> > hash_get_num_entries(LockMethodLocalHash) call before performing the\n> > guts of LockReleaseAll(). We could then weight that onto some running\n> > average counter with a weight of, say... 10, so we react to changes\n> > fairly quickly, but not instantly. We could then have some sort of\n> > logic like \"rebuild the hash table if running average 4 times less\n> > than max_bucket\"\n> >\n> > I've attached a spreadsheet of that idea and the algorithm we could\n> > use to track the running average. Initially, I've mocked it up a\n> > series of transactions that use 1000 locks, then at row 123 dropped\n> > that to 10 locks. If we assume the max_bucket is 1000, then it takes\n> > until row 136 for the running average to drop below the max_bucket\n> > count, i.e 13 xacts. There we'd reset there at the init size of 16. If\n> > the average went up again, then we'd automatically expand the table as\n> > we do now. To make this work we'd need an additional call to\n> > hash_get_num_entries(), before we release the locks, so there is more\n> > overhead.\n>\n> Here's a patch with this implemented. I've left a NOTICE in there to\n> make it easier for people to follow along at home and see when the\n> lock table is reset.\n\nHere's a more polished version with the debug code removed, complete\nwith benchmarks.\n\n-- Test 1. Select 1 record from a 140 partitioned table. 
Tests\ncreating a large number of locks with a fast query.\n\ncreate table hp (a int, b int) partition by hash(a);\nselect 'create table hp'||x||' partition of hp for values with\n(modulus 140, remainder ' || x || ');' from generate_series(0,139)x;\ncreate index on hp (b);\ninsert into hp select x,x from generate_series(1, 140000) x;\nanalyze hp;\n\nselect3.sql: select * from hp where b = 1\n\n-\nMaster:\n\n$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2098.628975 (excluding connections establishing)\ntps = 2101.797672 (excluding connections establishing)\ntps = 2085.317292 (excluding connections establishing)\ntps = 2094.931999 (excluding connections establishing)\ntps = 2092.328908 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2101.691459 (excluding connections establishing)\ntps = 2104.533249 (excluding connections establishing)\ntps = 2106.499123 (excluding connections establishing)\ntps = 2104.033459 (excluding connections establishing)\ntps = 2105.463629 (excluding connections establishing)\n\n(I'm surprised there is not more overhead in the additional tracking\nadded to calculate the running average)\n\n-- Test 2. Tests a prepared query which will perform a generic plan on\nthe 6th execution then fallback on a custom plan due to it pruning all\nbut one partition. 
Master suffers from the lock table becoming\nbloated after locking all partitions when planning the generic plan.\n\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values\nwith (modulus 8192, remainder ' || (x)::text || ');' from\ngenerate_series(0,8191) x;\n\\gexec\n\nselect.sql:\n\\set p 1\nselect * from ht where a = :p\n\nMaster:\n\n$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 10207.780843 (excluding connections establishing)\ntps = 10205.772688 (excluding connections establishing)\ntps = 10214.896449 (excluding connections establishing)\ntps = 10157.572153 (excluding connections establishing)\ntps = 10147.764477 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 14677.636570 (excluding connections establishing)\ntps = 14661.437186 (excluding connections establishing)\ntps = 14647.202877 (excluding connections establishing)\ntps = 14784.165753 (excluding connections establishing)\ntps = 14850.355344 (excluding connections establishing)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 24 Jul 2019 19:14:35 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Here's a more polished version with the debug code removed, complete\n> with benchmarks.\n\nA few gripes:\n\nYou're measuring the number of locks held at completion of the\ntransaction, which fails to account for locks transiently taken and\nreleased, so that the actual peak usage will be more. I think we mostly\nonly do that for system catalog accesses, so *maybe* the number of extra\nlocks involved isn't very large, but that's a very shaky assumption.\n\nI don't especially like the fact that this seems to have a hard-wired\n(and undocumented) assumption that buckets == entries, ie that the\nfillfactor of the table is set at 1.0. lock.c has no business knowing\nthat. Perhaps instead of returning the raw bucket count, you could have\ndynahash.c return bucket count times fillfactor, so that the number is in\nthe same units as the entry count.\n\nThis:\n running_avg_locks -= running_avg_locks / 10.0;\n running_avg_locks += numLocksHeld / 10.0;\nseems like a weird way to do the calculation. Personally I'd write\n running_avg_locks += (numLocksHeld - running_avg_locks) / 10.0;\nwhich is more the way I'm used to seeing exponential moving averages\ncomputed --- at least, it seems clearer to me why this will converge\ntowards the average value of numLocksHeld over time. It also makes\nit clear that it wouldn't be sane to use two different divisors,\nwhereas your formulation makes it look like maybe they could be\nset independently.\n\nYour compiler might not complain that LockReleaseAll's numLocksHeld\nis potentially uninitialized, but other people's compilers will.\n\n\nOn the whole, I don't especially like this approach, because of the\nconfusion between peak lock count and end-of-xact lock count. That\nseems way too likely to cause problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 13:49:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 5:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > Here's a more polished version with the debug code removed, complete\n> > with benchmarks.\n>\n> A few gripes:\n>\n> [gripes]\n\nBased on the above, I've set this to \"Waiting on Author\", in the next CF.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:15:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, 25 Jul 2019 at 05:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> On the whole, I don't especially like this approach, because of the\n> confusion between peak lock count and end-of-xact lock count. That\n> seems way too likely to cause problems.\n\nThanks for having a look at this. I've not addressed the points\nyou've mentioned due to what you mention above. The only way I can\nthink of so far to resolve that would be to add something to track\npeak lock usage. The best I can think of to do that, short of adding\nsomething to dynahash.c is to check how many locks are held each time\nwe obtain a lock, then if that count is higher than the previous time\nwe checked, then update the maximum locks held, (probably a global\nvariable). That seems pretty horrible to me and adds overhead each\ntime we obtain a lock, which is a pretty performance-critical path.\n\nI've not tested what Andres mentioned about simplehash instead of\ndynahash yet. I did a quick scan of simplehash and it looked like\nSH_START_ITERATE would suffer the same problems as dynahash's\nhash_seq_search(), albeit, perhaps with some more efficient memory\nlookups. i.e it still has to skip over empty buckets, which might be\ncostly in a bloated table.\n\nFor now, I'm out of ideas. If anyone else feels like suggesting\nsomething of picking this up, feel free.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 19:25:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 07:25:10PM +1200, David Rowley wrote:\n>On Thu, 25 Jul 2019 at 05:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On the whole, I don't especially like this approach, because of the\n>> confusion between peak lock count and end-of-xact lock count. That\n>> seems way too likely to cause problems.\n>\n>Thanks for having a look at this. I've not addressed the points\n>you've mentioned due to what you mention above. The only way I can\n>think of so far to resolve that would be to add something to track\n>peak lock usage. The best I can think of to do that, short of adding\n>something to dynahash.c is to check how many locks are held each time\n>we obtain a lock, then if that count is higher than the previous time\n>we checked, then update the maximum locks held, (probably a global\n>variable). That seems pretty horrible to me and adds overhead each\n>time we obtain a lock, which is a pretty performance-critical path.\n>\n\nWould it really be a measurable overhead? I mean, we only really need\none int counter, and you don't need to do the check on every lock\nacquisition - you just need to recheck on the first lock release. But\nmaybe I'm underestimating how expensive it is ...\n\nTalking about dynahash - doesn't it already track this information?\nMaybe not directly but surely it has to track the number of entries in\nthe hash table, in order to compute fill factor. Can't we piggy-back on\nthat and track the highest fill-factor for a particular period of time?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 16 Aug 2019 00:30:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-Aug-14, David Rowley wrote:\n\n> For now, I'm out of ideas. If anyone else feels like suggesting\n> something of picking this up, feel free.\n\nHmm ... is this patch rejected, or is somebody still trying to get it to\ncommittable state? David, you're listed as committer.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 14:14:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]\r\n> Hmm ... is this patch rejected, or is somebody still trying to get it to\r\n> committable state? David, you're listed as committer.\r\n\r\nI don't think it's rejected. It would be a pity (mottainai) to refuse this, because it provides significant speedup despite its simple modification.\r\n\r\nAgain, I think the v2 patch is OK. Tom's comment was as follows:\r\n\r\n\r\n[Tom's comment against v2]\r\n----------------------------------------\r\nFWIW, I tried this patch against current HEAD (959d00e9d).\r\nUsing the test case described by Amit at\r\n<be25cadf-982e-3f01-88b4-443a6667e16a(at)lab(dot)ntt(dot)co(dot)jp>\r\nI do measure an undeniable speedup, close to 35%.\r\n\r\nHowever ... I submit that that's a mighty extreme test case.\r\n(I had to increase max_locks_per_transaction to get it to run\r\nat all.) We should not be using that sort of edge case to drive\r\nperformance optimization choices.\r\n\r\nIf I reduce the number of partitions in Amit's example from 8192\r\nto something more real-world, like 128, I do still measure a\r\nperformance gain, but it's ~ 1.5% which is below what I'd consider\r\na reproducible win. I'm accustomed to seeing changes up to 2%\r\nin narrow benchmarks like this one, even when \"nothing changes\"\r\nexcept unrelated code.\r\n\r\nTrying a standard pgbench test case (pgbench -M prepared -S with\r\none client and an -s 10 database), it seems that the patch is about\r\n0.5% slower than HEAD. Again, that's below the noise threshold,\r\nbut it's not promising for the net effects of this patch on workloads\r\nthat aren't specifically about large and prunable partition sets.\r\n\r\nI'm also fairly concerned about the effects of the patch on\r\nsizeof(LOCALLOCK) --- on a 64-bit machine it goes from 72 to 88\r\nbytes, a 22% increase. 
That's a lot if you're considering cases\r\nwith many locks.\r\n\r\nOn the whole I don't think there's an adequate case for committing\r\nthis patch.\r\n\r\nI'd also point out that this is hardly the only place where we've\r\nseen hash_seq_search on nearly-empty hash tables become a bottleneck.\r\nSo I'm not thrilled about attacking that with one-table-at-time patches.\r\nI'd rather see us do something to let hash_seq_search win across\r\nthe board.\r\n----------------------------------------\r\n\r\n\r\n* Extreme test case: \r\nNot extreme. Two of our customers, who are considering to use PostgreSQL, are using thousands of partitions now. We hit this issue -- a point query gets nearly 20% slower after automatically creating a generic plan. That's the reason for this proposal.\r\n\r\n* 0.5% slowdown with pgbench:\r\nI think it's below the noise, as Tom said.\r\n\r\n* sizeof(LOCALLOCK):\r\nAs Andres replied to Tom in the immediately following mail, LOCALLOCK was bigger in PG 11.\r\n\r\n* Use case is narrow:\r\nNo. The bloated LockMethodLocalHash affects the performance of the items below as well as transaction commit/abort:\r\n - AtPrepare_Locks() and PostPrepare_Locks(): the hash table is scanned twice in PREPARE!\r\n - LockReleaseSession: advisory lock\r\n - LockReleaseCurrentOwner: ??\r\n - LockReassignCurrentOwner: ??\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Tue, 3 Sep 2019 05:12:53 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 2019-Sep-03, Tsunakawa, Takayuki wrote:\n\n> From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]\n> > Hmm ... is this patch rejected, or is somebody still trying to get it to\n> > committable state? David, you're listed as committer.\n> \n> I don't think it's rejected. It would be a pity (mottainai) to refuse\n> this, because it provides significant speedup despite its simple\n> modification.\n\nI don't necessarily disagree with your argumentation, but Travis is\ncomplaining thusly:\n\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Wall -Werror -I../../../../src/include -I/usr/include/x86_64-linux-gnu -D_GNU_SOURCE -c -o lock.o lock.c\n1840lock.c:486:1: error: conflicting types for ‘TryShrinkLocalLockHash’\n1841 TryShrinkLocalLockHash(long numLocksHeld)\n1842 ^\n1843lock.c:351:20: note: previous declaration of ‘TryShrinkLocalLockHash’ was here\n1844 static inline void TryShrinkLocalLockHash(void);\n1845 ^\n1846<builtin>: recipe for target 'lock.o' failed\n\nPlease fix.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 18:08:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]\r\n> On 2019-Sep-03, Tsunakawa, Takayuki wrote:\r\n> > I don't think it's rejected. It would be a pity (mottainai) to refuse\r\n> > this, because it provides significant speedup despite its simple\r\n> > modification.\r\n> \r\n> I don't necessarily disagree with your argumentation, but Travis is\r\n> complaining thusly:\r\n\r\nI tried to revise David's latest patch (v8) and address Tom's comments in his last mail. But I'm a bit at a loss.\r\n\r\nFirst, to accurately count the maximum number of acquired locks in a transaction, we need to track the maximum entries in the hash table, and make it available via a new function like hash_get_max_entries(). However, to cover the shared partitioned hash table (that is not necessary for LockMethodLocalHash), we must add a spin lock in hashhdr and lock/unlock it when entering and removing entries in the hash table. It spoils the effort to decrease contention by hashhdr->freelists[].mutex. Do we want to track the maximum number of acquired locks in the global variable in lock.c, not in the hash table?\r\n\r\nSecond, I couldn't understand the comment about the fill factor well. I can understand that it's not correct to compare the number of hash buckets and the number of locks. But what can we do?\r\n\r\n\r\nI'm sorry to repeat what I mentioned in my previous mail, but my v2 patch's approach is based on the database textbook and seems intuitive. So I attached the rebased version.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa",
"msg_date": "Thu, 26 Sep 2019 07:11:53 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 07:11:53AM +0000, Tsunakawa, Takayuki wrote:\n> I'm sorry to repeat what I mentioned in my previous mail, but my v2\n> patch's approach is based on the database textbook and seems\n> intuitive. So I attached the rebased version. \n\nIf you wish to do so, that's fine by me but I have not dived into the\ndetails of the thread much. Please not anyway that the patch does not\napply anymore and that it needs a rebase. So for now I have moved the\npatch to next CF, waiting on author. \n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:59:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi, the patch was in WoA since December, waiting for a rebase. I've\nmarked it as returned with feedback. Feel free to re-submit an updated\nversion into the next CF.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 12:41:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 14 Aug 2019 at 19:25, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> For now, I'm out of ideas. If anyone else feels like suggesting\n> something of picking this up, feel free.\n\nThis is a pretty old thread, so we might need a recap:\n\n# Recap\n\nBasically LockReleaseAll() becomes slow after a large number of locks\nhave all been held at once by a backend. The problem is that the\nLockMethodLocalHash dynahash table must grow to store all the locks\nand when later transactions only take a few locks, LockReleaseAll() is\nslow due to hash_seq_search() having to skip the sparsely filled\nbuckets in the bloated hash table.\n\nThe following things were tried on this thread. Each one failed:\n\n1) Use a dlist in LOCALLOCK to track the next and prev LOCALLOCK.\nSimply loop over the dlist in LockReleaseAll().\n2) Try dropping and recreating the dynahash table if it becomes\nbloated using some heuristics to decide what \"bloated\" means and if\nrecreating is worthwhile.\n\n#1 failed due to concerns with increasing the size of LOCALLOCK to\nstore the dlist pointers. Performance regressions were seen too.\nPossibly due to size increase or additional overhead from pushing onto\nthe dlist.\n#2 failed because it was difficult to agree on what the heuristics\nwould be and we had no efficient way to determine the maximum number\nof locks that a given transaction held at any one time. We only know\nhow many were still held at LockReleaseAll().\n\nThere were also some suggestions to fix dynahash's hash_seq_search()\nslowness, and also a suggestion to try using simplehash.h instead of\ndynahash.c. Unfortunately simplehash.h would suffer the same issues as\nit too would have to skip over empty buckets in a sparsely populated\nhash table.\n\nI'd like to revive this effort as I have a new idea on how to solve the problem.\n\n# Background\n\nOver in [1] I'm trying to improve the performance of smgropen() during\nrecovery. 
The slowness there comes from the dynahash table lookups to\nfind the correct SMgrRelation. Over there I proposed to use simplehash\ninstead of dynahash because it's quite a good bit faster and far\nlessens the hash lookup overhead during recovery. One problem on that\nthread is that relcache keeps a pointer into the SMgrRelation\n(RelationData.rd_smgr) and because simplehash moves things around\nduring inserts and deletes, then we can't have anything point to\nsimplehash entries, they're unstable. I fixed that over on the other\nthread by having the simplehash entry point to a palloced\nSMgrRelationData... My problem is, I don't really like that idea as it\nmeans we need to palloc() pfree() lots of little chunks of memory.\n\nTo fix the above, I really think we need a version of simplehash that\nhas stable pointers. Providing that implementation is faster than\ndynahash, then it will help solve the smgropen() slowness during\nrecovery.\n\n# A new hashtable implementation\n\nI ended up thinking of this thread again because the implementation of\nthe stable pointer hash that I ended up writing for [1] happens to be\nlightning fast for hash table sequential scans, even if the table has\nbecome bloated. The reason the seq scans are so fast is that the\nimplementation loops over the data arrays, which are tightly packed\nand store the actual data rather than pointers to the data. The code\ndoes not need to loop over the bucket array for this at all, so how\nlarge that has become is irrelevant to hash table seq scan\nperformance.\n\nThe patch stores elements in \"segments\" which is set to some power of\n2 value. When we run out of space to store new items in a segment, we\njust allocate another segment. When we remove items from the table,\nnew items reuse the first unused item in the first segment with free\nspace. This helps to keep the used elements tightly packed. A\nsegment keeps a bitmap of used items so that means scanning all used\nitems is very fast. 
If you flip the bits in the used_item bitmap,\nthen you get a free list for that segment, so it's also very fast to\nfind a free element when inserting a new item into the table.\n\nI've called the hash table implementation \"generichash\". It uses the\nsame preprocessor tricks as simplehash.h does and borrows the same\nlinear probing code that's used in simplehash. The bucket array just\nstores the hash value and a uint32 index into the segment item that\nstores the data. Since segments store a power of 2 items, we can\neasily address both the segment number and the item within the segment\nfrom the single uint32 index value. The 'index' field just stores a\nspecial value when the bucket is empty. No need to add another field\nfor that. This means the bucket type is just 8 bytes wide.\n\nOne thing I will mention about the new hash table implementation is\nthat GH_ITEMS_PER_SEGMENT is, by default, set to 256. This means\nthat's the minimum size for the table. I could drop this down to 64,\nbut that's still quite a bit more than the default size of the\ndynahash table of 16. I think 16 is a bit on the low side and that it\nmight be worth making this 64 anyway. I'd just need to lower\nGH_ITEMS_PER_SEGMENT down. The current code does not allow it to go\nlower as I've done nothing to allow partial bitmap words, they're\n64 bits on a 64-bit machine.\n\nI've not done too much benchmarking between it and simplehash.h, but I\nthink in some cases it will be faster. Since the bucket type is just 8\nbytes, moving stuff around during insert/deletes will be cheaper than\nwith simplehash. 
Lookups are likely to be a bit slower due to having\nto lookup the item within the segment, which is a few pointer\ndereferences away.\n\nA quick test shows an improvement when compared to dynahash.\n\n# select count(pg_try_advisory_lock(99999,99999)) from\ngenerate_series(1,1000000);\n\nMaster:\nTime: 190.257 ms\nTime: 189.440 ms\nTime: 188.768 ms\n\nPatched:\nTime: 182.779 ms\nTime: 182.267 ms\nTime: 186.902 ms\n\nThis is just hitting the local lock table. The advisory lock key is\nthe same each time, so it remains a lock check. Also, it's a pretty\nsmall gain, but I'm mostly trying to show that the new hash table is\nnot any slower than dynahash for probing for inserts.\n\nThe real wins come from what we're trying to solve in this thread --\nthe performance of LockReleaseAll().\n\nBenchmarking results measuring the TPS of a simple select from an\nempty table after another transaction has bloated the locallock table.\n\nMaster:\n127544 tps\n113971 tps\n123255 tps\n121615 tps\n\nPatched:\n170803 tps\n167305 tps\n165002 tps\n171976 tps\n\nAbout 38% faster.\n\nThe benchmark I used was:\n\nt1.sql:\n\\set p 1\nselect a from t1 where a = :p\n\nhp.sql:\nselect count(*) from hp\n\n\"hp\" is a hash partitioned table with 10k partitions.\n\npgbench -j 20 -c 20 -T 60 -M prepared -n -f hp.sql@1 -f t1.sql@100000 postgres\n\nI'm using the query to the hp table to bloat the locallock table. It's\nonly executed every 1 in 100,000 queries. The tps numbers above are\nthe ones to run t1.sql\n\nI've not quite looked into why yet, but the hp.sql improved\nperformance by 58%. It went from an average of 1.061377 in master to\nan average of 1.683616 in the patched version. I can't quite see where\nthis gain is coming from. It's pretty hard to get good stable\nperformance results out of this AMD machine, so it might be related to\nthat. That's why I ran 20 threads. It seems slightly better. 
The\nmachine seems to have trouble waking up properly for a single thread.\n\nIt would be good if someone could repeat the tests to see if the gains\nappear on other hardware.\n\nAlso, it would be good to hear what people think about solving the\nproblem this way.\n\nPatch attached.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvpkWOGLh_bYg7jproXN8B2g2T9dWDcqsmKsXG5+WwZaqw@mail.gmail.com",
"msg_date": "Mon, 21 Jun 2021 01:56:13 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Sun, Jun 20, 2021 at 6:56 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 14 Aug 2019 at 19:25, David Rowley <david.rowley@2ndquadrant.com>\n> wrote:\n> > For now, I'm out of ideas. If anyone else feels like suggesting\n> > something of picking this up, feel free.\n>\n> This is a pretty old thread, so we might need a recap:\n>\n> # Recap\n>\n> Basically LockReleaseAll() becomes slow after a large number of locks\n> have all been held at once by a backend. The problem is that the\n> LockMethodLocalHash dynahash table must grow to store all the locks\n> and when later transactions only take a few locks, LockReleaseAll() is\n> slow due to hash_seq_search() having to skip the sparsely filled\n> buckets in the bloated hash table.\n>\n> The following things were tried on this thread. Each one failed:\n>\n> 1) Use a dlist in LOCALLOCK to track the next and prev LOCALLOCK.\n> Simply loop over the dlist in LockReleaseAll().\n> 2) Try dropping and recreating the dynahash table if it becomes\n> bloated using some heuristics to decide what \"bloated\" means and if\n> recreating is worthwhile.\n>\n> #1 failed due to concerns with increasing the size of LOCALLOCK to\n> store the dlist pointers. Performance regressions were seen too.\n> Possibly due to size increase or additional overhead from pushing onto\n> the dlist.\n> #2 failed because it was difficult to agree on what the heuristics\n> would be and we had no efficient way to determine the maximum number\n> of locks that a given transaction held at any one time. We only know\n> how many were still held at LockReleaseAll().\n>\n> There were also some suggestions to fix dynahash's hash_seq_search()\n> slowness, and also a suggestion to try using simplehash.h instead of\n> dynahash.c. 
Unfortunately simplehash.h would suffer the same issues as\n> it too would have to skip over empty buckets in a sparsely populated\n> hash table.\n>\n> I'd like to revive this effort as I have a new idea on how to solve the\n> problem.\n>\n> # Background\n>\n> Over in [1] I'm trying to improve the performance of smgropen() during\n> recovery. The slowness there comes from the dynahash table lookups to\n> find the correct SMgrRelation. Over there I proposed to use simplehash\n> instead of dynahash because it's quite a good bit faster and far\n> lessens the hash lookup overhead during recovery. One problem on that\n> thread is that relcache keeps a pointer into the SMgrRelation\n> (RelationData.rd_smgr) and because simplehash moves things around\n> during inserts and deletes, then we can't have anything point to\n> simplehash entries, they're unstable. I fixed that over on the other\n> thread by having the simplehash entry point to a palloced\n> SMgrRelationData... My problem is, I don't really like that idea as it\n> means we need to palloc() pfree() lots of little chunks of memory.\n>\n> To fix the above, I really think we need a version of simplehash that\n> has stable pointers. Providing that implementation is faster than\n> dynahash, then it will help solve the smgropen() slowness during\n> recovery.\n>\n> # A new hashtable implementation\n>\n> I ended up thinking of this thread again because the implementation of\n> the stable pointer hash that I ended up writing for [1] happens to be\n> lightning fast for hash table sequential scans, even if the table has\n> become bloated. The reason the seq scans are so fast is that the\n> implementation loops over the data arrays, which are tightly packed\n> and store the actual data rather than pointers to the data. 
The code\n> does not need to loop over the bucket array for this at all, so how\n> large that has become is irrelevant to hash table seq scan\n> performance.\n>\n> The patch stores elements in \"segments\" which is set to some power of\n> 2 value. When we run out of space to store new items in a segment, we\n> just allocate another segment. When we remove items from the table,\n> new items reuse the first unused item in the first segment with free\n> space. This helps to keep the used elements tightly packed. A\n> segment keeps a bitmap of used items so that means scanning all used\n> items is very fast. If you flip the bits in the used_item bitmap,\n> then you get a free list for that segment, so it's also very fast to\n> find a free element when inserting a new item into the table.\n>\n> I've called the hash table implementation \"generichash\". It uses the\n> same preprocessor tricks as simplehash.h does and borrows the same\n> linear probing code that's used in simplehash. The bucket array just\n> stores the hash value and a uint32 index into the segment item that\n> stores the data. Since segments store a power of 2 items, we can\n> easily address both the segment number and the item within the segment\n> from the single uint32 index value. The 'index' field just stores a\n> special value when the bucket is empty. No need to add another field\n> for that. This means the bucket type is just 8 bytes wide.\n>\n> One thing I will mention about the new hash table implementation is\n> that GH_ITEMS_PER_SEGMENT is, by default, set to 256. This means\n> that's the minimum size for the table. I could drop this downto 64,\n> but that's still quite a bit more than the default size of the\n> dynahash table of 16. I think 16 is a bit on the low side and that it\n> might be worth making this 64 anyway. I'd just need to lower\n> GH_ITEMS_PER_SEGMENT down. 
The current code does not allow it to go\n> lower as I've done nothing to allow partial bitmap words, they're\n> 64-bits on a 64-bit machine.\n>\n> I've not done too much benchmarking between it and simplehash.h, but I\n> think in some cases it will be faster. Since the bucket type is just 8\n> bytes, moving stuff around during insert/deletes will be cheaper than\n> with simplehash. Lookups are likely to be a bit slower due to having\n> to lookup the item within the segment, which is a few pointer\n> dereferences away.\n>\n> A quick test shows an improvement when compared to dynahash.\n>\n> # select count(pg_try_advisory_lock(99999,99999)) from\n> generate_series(1,1000000);\n>\n> Master:\n> Time: 190.257 ms\n> Time: 189.440 ms\n> Time: 188.768 ms\n>\n> Patched:\n> Time: 182.779 ms\n> Time: 182.267 ms\n> Time: 186.902 ms\n>\n> This is just hitting the local lock table. The advisory lock key is\n> the same each time, so it remains a lock check. Also, it's a pretty\n> small gain, but I'm mostly trying to show that the new hash table is\n> not any slower than dynahash for probing for inserts.\n>\n> The real wins come from what we're trying to solve in this thread --\n> the performance of LockReleaseAll().\n>\n> Benchmarking results measuring the TPS of a simple select from an\n> empty table after another transaction has bloated the locallock table.\n>\n> Master:\n> 127544 tps\n> 113971 tps\n> 123255 tps\n> 121615 tps\n>\n> Patched:\n> 170803 tps\n> 167305 tps\n> 165002 tps\n> 171976 tps\n>\n> About 38% faster.\n>\n> The benchmark I used was:\n>\n> t1.sql:\n> \\set p 1\n> select a from t1 where a = :p\n>\n> hp.sql:\n> select count(*) from hp\n>\n> \"hp\" is a hash partitioned table with 10k partitions.\n>\n> pgbench -j 20 -c 20 -T 60 -M prepared -n -f hp.sql@1 -f t1.sql@100000\n> postgres\n>\n> I'm using the query to the hp table to bloat the locallock table. It's\n> only executed every 1 in 100,000 queries. 
The tps numbers above are\n> the ones to run t1.sql\n>\n> I've not quite looked into why yet, but the hp.sql improved\n> performance by 58%. It went from an average of 1.061377 in master to\n> an average of 1.683616 in the patched version. I can't quite see where\n> this gain is coming from. It's pretty hard to get good stable\n> performance results out of this AMD machine, so it might be related to\n> that. That's why I ran 20 threads. It seems slightly better. The\n> machine seems to have trouble waking up properly for a single thread.\n>\n> It would be good if someone could repeat the tests to see if the gains\n> appear on other hardware.\n>\n> Also, it would be good to hear what people think about solving the\n> problem this way.\n>\n> Patch attached.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAApHDvpkWOGLh_bYg7jproXN8B2g2T9dWDcqsmKsXG5+WwZaqw@mail.gmail.com\n\n\nHi,\n\n+ * GH_ELEMENT_TYPE defines the data type that the hashtable stores. Each\n+ * instance of GH_ELEMENT_TYPE which is stored in the hash table is done\nso\n+ * inside a GH_SEGMENT.\n\nI think the second sentence can be written as (since done means stored, it\nis redundant):\n\nEach instance of GH_ELEMENT_TYPE is stored in the hash table inside a\nGH_SEGMENT.\n\n+ * Macros for translating a bucket's index into the segment and another to\n+ * determine the item number within the segment.\n+ */\n+#define GH_INDEX_SEGMENT(i) (i) / GH_ITEMS_PER_SEGMENT\n\ninto the segment -> into the segment number (in the code I see segidx but I\nwonder if segment index may cause slight confusion).\n\n+ GH_BITMAP_WORD used_items[GH_BITMAP_WORDS]; /* A 1-bit for each used\nitem\n+ * in the items array */\n\n'A 1-bit' -> One bit (A and 1 mean the same)\n\n+ uint32 first_free_segment;\n\nSince the segment may not be totally free, maybe name the field\nfirst_segment_with_free_slot\n\n+ * This is similar to GH_NEXT_ONEBIT but flips the bits before operating on\n+ * each GH_BITMAP_WORD.\n\nIt seems 
the only difference from GH_NEXT_ONEBIT is in this line:\n\n+ GH_BITMAP_WORD word = ~(words[wordnum] & mask); /* flip bits */\n\nIf a 4th parameter is added to signify whether the flipping should be done,\nthese two functions can be unified.\n\n+ * next insert will store in this segment. If it's already pointing to\nan\n+ * earlier segment, then leave it be.\n\nThe last sentence is confusing: first_free_segment cannot point to some\nsegment and earlier segment at the same time.\nMaybe drop the last sentence.\n\nCheers",
"msg_date": "Sun, 20 Jun 2021 10:06:47 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Mon, 21 Jun 2021 at 05:02, Zhihong Yu <zyu@yugabyte.com> wrote:\n> + * GH_ELEMENT_TYPE defines the data type that the hashtable stores. Each\n> + * instance of GH_ELEMENT_TYPE which is stored in the hash table is done so\n> + * inside a GH_SEGMENT.\n>\n> I think the second sentence can be written as (since done means stored, it is redundant):\n\nI've rewords this entire paragraph slightly.\n\n> Each instance of GH_ELEMENT_TYPE is stored in the hash table inside a GH_SEGMENT.\n>\n> + * Macros for translating a bucket's index into the segment and another to\n> + * determine the item number within the segment.\n> + */\n> +#define GH_INDEX_SEGMENT(i) (i) / GH_ITEMS_PER_SEGMENT\n>\n> into the segment -> into the segment number (in the code I see segidx but I wonder if segment index may cause slight confusion).\n\nI've adjusted this comment\n\n> + GH_BITMAP_WORD used_items[GH_BITMAP_WORDS]; /* A 1-bit for each used item\n> + * in the items array */\n>\n> 'A 1-bit' -> One bit (A and 1 mean the same)\n\nI think you might have misread this. We're storing a 1-bit for each\nused item rather than a 0-bit. If I remove the 'A' then it's not\nclear what the meaning of each bit's value is.\n\n> + uint32 first_free_segment;\n>\n> Since the segment may not be totally free, maybe name the field first_segment_with_free_slot\n\nI don't really like that. It feels too long to me.\n\n> + * This is similar to GH_NEXT_ONEBIT but flips the bits before operating on\n> + * each GH_BITMAP_WORD.\n>\n> It seems the only difference from GH_NEXT_ONEBIT is in this line:\n>\n> + GH_BITMAP_WORD word = ~(words[wordnum] & mask); /* flip bits */\n>\n> If a 4th parameter is added to signify whether the flipping should be done, these two functions can be unified.\n\nI don't want to do that. I'd rather have them separate to ensure the\ncompiler does not create any additional needless branching. 
Those\nfunctions are pretty hot.\n\n> + * next insert will store in this segment. If it's already pointing to an\n> + * earlier segment, then leave it be.\n>\n> The last sentence is confusing: first_free_segment cannot point to some segment and earlier segment at the same time.\n> Maybe drop the last sentence.\n\nI've adjusted this comment to become:\n\n* Check if we need to lower the next_free_segment. We want new inserts\n* to fill the lower segments first, so only change the first_free_segment\n* if the removed entry was stored in an earlier segment.\n\nThanks for having a look at this.\n\nI'll attach an updated patch soon.\n\nDavid\n\n\n",
"msg_date": "Mon, 12 Jul 2021 18:55:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 21 Jun 2021 at 01:56, David Rowley <dgrowleyml@gmail.com> wrote:\n> # A new hashtable implementation\n\n> Also, it would be good to hear what people think about solving the\n> problem this way.\n\nBecause over on [1] I'm also trying to improve the performance of\nsmgropen(), I posted the patch for the new hash table over there too.\nBetween that thread and discussions with Thomas and Andres off-list, I\nget the idea that pretty much nobody likes the idea of having another\nhash table implementation. Thomas wants to solve it another way and\nAndres has concerns that working with bitmaps is too slow. Andres\nsuggested freelists would be faster, but I'm not really a fan of the\nidea as, unless I have a freelist per array segment then I won't be\nable to quickly identify the earliest segment slot to re-fill unused\nslots with. That would mean memory would get more fragmented over time\ninstead of the fragmentation being slowly fixed as new items are added\nafter deletes. So I've not really tried implementing that to see how\nit performs.\n\nBoth Andres and Thomas expressed a dislike to the name \"generichash\" too.\n\nAnyway, since I did make a few small changes to the hash table\nimplementation before doing all that off-list talking, I thought I\nshould at least post where I got to here so that anything else that\ncomes up can get compared to where I got to, instead of where I was\nwith this.\n\nI did end up renaming the hash table to \"densehash\" rather than\ngenerichash. Naming is hard, but I went with dense as memory density\nwas on my mind when I wrote it. Having a compact 8-byte bucket width\nand packing the data into arrays in a dense way. The word dense came\nup a few times, so went with that.\n\nI also adjusted the hash seq scan code so that it performs better when\nfaced a non-sparsely populated table. 
Previously my benchmark for\nthat case didn't do well [2].\n\nI've attached the benchmark results from running the benchmark that's\nincluded in hashbench.tar.bz2. I ran this 10 times using the included\ntest.sh with ./test.sh 10. I included the results I got on my AMD\nmachine in the attached bz2 file in results.csv.\n\nYou can see from the attached dense_vs_generic_vs_simple.png that\ndense hash is quite comparable to simplehash for inserts/deletes and\nlookups. It's not quite as fast as simplehash at iterations when the\ntable is not bloated, but blows simplehash out the water when the\nhashtables have become bloated due to having once contained a large\nnumber of records but no longer do.\n\nAnyway, unless there is some interest in me taking this idea further\nthen, due to the general feedback received on [1], I'm not planning on\npushing this any further. I'll leave the commitfest entry as is for\nnow to give others a chance to see this.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvowgRaQupC%3DL37iZPUzx1z7-N8deD7TxQSm8LR%2Bf4L3-A%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvpuzJTQNKQ_bnAccvi-68xuh%2Bv87B4P6ycU-UiN0dqyTg%40mail.gmail.com",
"msg_date": "Mon, 12 Jul 2021 19:23:57 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Mon, 12 Jul 2021 at 19:23, David Rowley <dgrowleyml@gmail.com> wrote:\n> I also adjusted the hash seq scan code so that it performs better when\n> faced a non-sparsely populated table. Previously my benchmark for\n> that case didn't do well [2].\n\nI was running some select only pgbench tests today on an AMD 3990x\nmachine with a large number of processes.\n\nI saw that LockReleaseAll was coming up on the profile a bit at:\n\nMaster: 0.77% postgres [.] LockReleaseAll\n\nI wondered if this patch would help, so I tried it and got:\n\ndense hash lockrelease all: 0.67% postgres [.] LockReleaseAll\n\nIt's a very small increase which translated to about a 0.62% gain in\ntps. It made me think it might be worth doing something about this\nLockReleaseAll can show up when releasing small numbers of locks.\n\npgbench -T 240 -P 10 -c 132 -j 132 -S -M prepared --random-seed=12345 postgres\n\nUnits = tps\n\nSec master dense hash LockReleaseAll\n10 3758201.2 3713521.5 98.81%\n20 3810125.5 3844142.9 100.89%\n30 3806505.1 3848458 101.10%\n40 3816094.8 3855706.6 101.04%\n50 3820317.2 3851717.7 100.82%\n60 3827809 3851499.4 100.62%\n70 3828757.9 3849312 100.54%\n80 3824492.1 3852378.8 100.73%\n90 3816502.1 3854793.8 101.00%\n100 3819124.1 3860418.6 101.08%\n110 3816154.3 3845327.7 100.76%\n120 3817070.5 3845842.5 100.75%\n130 3815424.7 3847626 100.84%\n140 3823631.1 3846760.6 100.60%\n150 3820963.8 3840196.6 100.50%\n160 3827737 3841149.3 100.35%\n170 3827779.2 3840130.9 100.32%\n180 3829352 3842814.5 100.35%\n190 3825518.3 3841991 100.43%\n200 3823477.2 3839390.7 100.42%\n210 3809304.3 3836433.5 100.71%\n220 3814328.5 3842073.7 100.73%\n230 3811399.3 3843780.7 100.85%\navg 3816959.53 3840672.478 100.62%\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Jul 2021 17:04:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 05:04:19PM +1200, David Rowley wrote:\n> On Mon, 12 Jul 2021 at 19:23, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I also adjusted the hash seq scan code so that it performs better when\n> > faced a non-sparsely populated table. Previously my benchmark for\n> > that case didn't do well [2].\n> \n> I was running some select only pgbench tests today on an AMD 3990x\n> machine with a large number of processes.\n> \n> I saw that LockReleaseAll was coming up on the profile a bit at:\n\nThis last update was two months ago, and the patch has not moved\nsince:\nhttps://commitfest.postgresql.org/34/3220/\n\nDo you have plans to work more on that or perhaps the CF entry should\nbe withdrawn or RwF'd?\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 16:03:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "I've made some remarks in a related thread:\nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1FB976EF@G01JPEXMBYT05\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 06 Oct 2021 16:20:52 +0000",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, Oct 01, 2021 at 04:03:09PM +0900, Michael Paquier wrote:\n> This last update was two months ago, and the patch has not moved\n> since:\n> https://commitfest.postgresql.org/34/3220/\n> \n> Do you have plans to work more on that or perhaps the CF entry should\n> be withdrawn or RwF'd?\n\nTwo months later, this has been switched to RwF.\n--\nMichael",
"msg_date": "Fri, 3 Dec 2021 16:36:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, 3 Dec 2021 at 20:36, Michael Paquier <michael@paquier.xyz> wrote:\n> Two months later, this has been switched to RwF.\n\nI was discussing this patch with Andres. He's not very keen on my\ndensehash hash table idea and suggested that instead of relying on\ntrying to make the hash table iteration faster, why don't we just\nditch the lossiness of the resource owner code which only records the\nfirst 16 locks, and just make it have a linked list of all locks.\n\nThis is a little more along the lines of the original patch, however,\nit does not increase the size of the LOCALLOCK struct.\n\nI've attached a patch which does this. This was mostly written\n(quickly) by Andres, I just did a cleanup, fixed up a few mistakes and\nfixed a few bugs.\n\nI ran the same performance tests on this patch as I did back in [1]:\n\n-- Test 1. Select 1 record from a 140 partitioned table. Tests\ncreating a large number of locks with a fast query.\n\ncreate table hp (a int, b int) partition by hash(a);\nselect 'create table hp'||x||' partition of hp for values with\n(modulus 140, remainder ' || x || ');' from generate_series(0,139)x;\ncreate index on hp (b);\ninsert into hp select x,x from generate_series(1, 140000) x;\nanalyze hp;\n\nselect3.sql: select * from hp where b = 1\n\nselect3.sql master\n\ndrowley@amd3990x:~$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2099.708748 (without initial connection time)\ntps = 2100.398516 (without initial connection time)\ntps = 2094.882341 (without initial connection time)\ntps = 2113.218820 (without initial connection time)\ntps = 2104.717597 (without initial connection time)\n\nselect3.sql patched\n\ndrowley@amd3990x:~$ pgbench -n -f select3.sql -T 60 -M prepared postgres\ntps = 2010.070738 (without initial connection time)\ntps = 1994.963606 (without initial connection time)\ntps = 1994.668849 (without initial connection time)\ntps = 1995.948168 (without initial connection time)\ntps = 1985.650945 (without 
initial connection time)\n\nYou can see that there's a performance regression here. I've not yet\nstudied why this appears.\n\n-- Test 2. Tests a prepared query which will perform a generic plan on\nthe 6th execution then fallback on a custom plan due to it pruning all\nbut one partition. Master suffers from the lock table becoming\nbloated after locking all partitions when planning the generic plan.\n\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values\nwith (modulus 8192, remainder ' || (x)::text || ');' from\ngenerate_series(0,8191) x;\n\\gexec\n\nselect.sql:\n\\set p 1\nselect * from ht where a = :p\n\nselect.sql master\ndrowley@amd3990x:~$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 18014.460090 (without initial connection time)\ntps = 17973.358889 (without initial connection time)\ntps = 17847.480647 (without initial connection time)\ntps = 18038.332507 (without initial connection time)\ntps = 17776.143206 (without initial connection time)\n\nselect.sql patched\ndrowley@amd3990x:~$ pgbench -n -f select.sql -T 60 -M prepared postgres\ntps = 32393.457106 (without initial connection time)\ntps = 32277.204349 (without initial connection time)\ntps = 32160.719830 (without initial connection time)\ntps = 32530.038130 (without initial connection time)\ntps = 32299.019657 (without initial connection time)\n\nYou can see that there are some quite good performance gains with this test.\n\nI'm going to add this to the January commitfest.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8Lt0kS4bb5EH%3DhV%2BksqBZNnmVa8jujoYBYu5PVhWbZZg%40mail.gmail.com",
"msg_date": "Sat, 1 Jan 2022 14:44:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, Dec 31, 2021 at 5:45 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 3 Dec 2021 at 20:36, Michael Paquier <michael@paquier.xyz> wrote:\n> > Two months later, this has been switched to RwF.\n>\n> I was discussing this patch with Andres. He's not very keen on my\n> densehash hash table idea and suggested that instead of relying on\n> trying to make the hash table iteration faster, why don't we just\n> ditch the lossiness of the resource owner code which only records the\n> first 16 locks, and just make it have a linked list of all locks.\n>\n> This is a little more along the lines of the original patch, however,\n> it does not increase the size of the LOCALLOCK struct.\n>\n> I've attached a patch which does this. This was mostly written\n> (quickly) by Andres, I just did a cleanup, fixed up a few mistakes and\n> fixed a few bugs.\n>\n> I ran the same performance tests on this patch as I did back in [1]:\n>\n> -- Test 1. Select 1 record from a 140 partitioned table. 
Tests\n> creating a large number of locks with a fast query.\n>\n> create table hp (a int, b int) partition by hash(a);\n> select 'create table hp'||x||' partition of hp for values with\n> (modulus 140, remainder ' || x || ');' from generate_series(0,139)x;\n> create index on hp (b);\n> insert into hp select x,x from generate_series(1, 140000) x;\n> analyze hp;\n>\n> select3.sql: select * from hp where b = 1\n>\n> select3.sql master\n>\n> drowley@amd3990x:~$ pgbench -n -f select3.sql -T 60 -M prepared postgres\n> tps = 2099.708748 (without initial connection time)\n> tps = 2100.398516 (without initial connection time)\n> tps = 2094.882341 (without initial connection time)\n> tps = 2113.218820 (without initial connection time)\n> tps = 2104.717597 (without initial connection time)\n>\n> select3.sql patched\n>\n> drowley@amd3990x:~$ pgbench -n -f select3.sql -T 60 -M prepared postgres\n> tps = 2010.070738 (without initial connection time)\n> tps = 1994.963606 (without initial connection time)\n> tps = 1994.668849 (without initial connection time)\n> tps = 1995.948168 (without initial connection time)\n> tps = 1985.650945 (without initial connection time)\n>\n> You can see that there's a performance regression here. I've not yet\n> studied why this appears.\n>\n> -- Test 2. Tests a prepared query which will perform a generic plan on\n> the 6th execution then fallback on a custom plan due to it pruning all\n> but one partition. 
Master suffers from the lock table becoming\n> bloated after locking all partitions when planning the generic plan.\n>\n> create table ht (a int primary key, b int, c int) partition by hash (a);\n> select 'create table ht' || x::text || ' partition of ht for values\n> with (modulus 8192, remainder ' || (x)::text || ');' from\n> generate_series(0,8191) x;\n> \\gexec\n>\n> select.sql:\n> \\set p 1\n> select * from ht where a = :p\n>\n> select.sql master\n> drowley@amd3990x:~$ pgbench -n -f select.sql -T 60 -M prepared postgres\n> tps = 18014.460090 (without initial connection time)\n> tps = 17973.358889 (without initial connection time)\n> tps = 17847.480647 (without initial connection time)\n> tps = 18038.332507 (without initial connection time)\n> tps = 17776.143206 (without initial connection time)\n>\n> select.sql patched\n> drowley@amd3990x:~$ pgbench -n -f select.sql -T 60 -M prepared postgres\n> tps = 32393.457106 (without initial connection time)\n> tps = 32277.204349 (without initial connection time)\n> tps = 32160.719830 (without initial connection time)\n> tps = 32530.038130 (without initial connection time)\n> tps = 32299.019657 (without initial connection time)\n>\n> You can see that there are some quite good performance gains with this\n> test.\n>\n> I'm going to add this to the January commitfest.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAKJS1f8Lt0kS4bb5EH%3DhV%2BksqBZNnmVa8jujoYBYu5PVhWbZZg%40mail.gmail.com\n\nHi,\n\n+ locallock->nLocks -= locallockowner->nLocks;\n+ Assert(locallock->nLocks >= 0);\n\nI think the assertion is not needed since the above code is in if block :\n\n+ if (locallockowner->nLocks < locallock->nLocks)\n\nthe condition, locallock->nLocks >= 0, would always hold after the\nsubtraction.\n\nCheers",
"msg_date": "Fri, 31 Dec 2021 18:43:07 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Sat, 1 Jan 2022 at 15:40, Zhihong Yu <zyu@yugabyte.com> wrote:\n> + locallock->nLocks -= locallockowner->nLocks;\n> + Assert(locallock->nLocks >= 0);\n>\n> I think the assertion is not needed since the above code is in if block :\n>\n> + if (locallockowner->nLocks < locallock->nLocks)\n>\n> the condition, locallock->nLocks >= 0, would always hold after the subtraction.\n\nThat makes sense. I've removed the Assert in the attached patch.\nThanks for looking at the patch.\n\nI've also spent a bit more time on the patch. There were quite a few\noutdated comments remaining. Also, the PROCLOCK releaseMask field\nappears to no longer be needed.\n\nI also did a round of benchmarking on the attached patch using a very\nrecent master. I've attached .sql files and the script I used to\nbenchmark.\n\nWith 1024 partitions, lock1.sql shows about a 4.75% performance\nincrease. This would become larger with more partitions and less with\nfewer partitions.\nWith the patch, lock2.sql shows about a 10% performance increase over master.\nlock3.sql does not seem to have changed much and lock4.sql shows a\nsmall regression with the patch of about 2.5%.\n\nI'm not quite sure how worried we should be about lock4.sql slowing\ndown slightly. 2.5% is fairly small given how hard I'm exercising the\nlocking code in that test. There's also nothing much to say that the\nslowdown is not just due to code alignment changes.\n\nI also understand that Amit L is working on another patch that will\nimprove the situation for lock1.sql by not taking the locks on\nrelations that will be run-time pruned at executor startup. I think\nit's still worth solving this regardless of Amit's patch as with\ncurrent master we still have a situation where short fast queries\nwhich access a small number of tables can become slower once the\nbackend has obtained a large number of locks concurrently and bloated\nthe locallocktable.\n\nAs for the patch, 
I feel it's a pretty invasive change to how we\nrelease locks and the resowner code. I'd be quite happy for some\nreview of it.\n\nHere are the full results as output by the attached script.\n\ndrowley@amd3990x:~$ echo master\nmaster\ndrowley@amd3990x:~$ ./lockbench.sh\nlock1.sql\ntps = 38078.011433 (without initial connection time)\ntps = 38070.016792 (without initial connection time)\ntps = 39223.118769 (without initial connection time)\ntps = 37510.105287 (without initial connection time)\ntps = 38164.394128 (without initial connection time)\nlock2.sql\ntps = 247.963797 (without initial connection time)\ntps = 247.374174 (without initial connection time)\ntps = 248.412744 (without initial connection time)\ntps = 248.192629 (without initial connection time)\ntps = 248.503728 (without initial connection time)\nlock3.sql\ntps = 1162.937692 (without initial connection time)\ntps = 1160.968689 (without initial connection time)\ntps = 1166.908643 (without initial connection time)\ntps = 1160.288547 (without initial connection time)\ntps = 1160.336572 (without initial connection time)\nlock4.sql\ntps = 282.173560 (without initial connection time)\ntps = 284.470330 (without initial connection time)\ntps = 286.089644 (without initial connection time)\ntps = 285.548487 (without initial connection time)\ntps = 284.313505 (without initial connection time)\n\n\ndrowley@amd3990x:~$ echo Patched\nPatched\ndrowley@amd3990x:~$ ./lockbench.sh\nlock1.sql\ntps = 40338.975219 (without initial connection time)\ntps = 39803.433365 (without initial connection time)\ntps = 39504.824194 (without initial connection time)\ntps = 39843.422438 (without initial connection time)\ntps = 40624.483013 (without initial connection time)\nlock2.sql\ntps = 274.413309 (without initial connection time)\ntps = 271.978813 (without initial connection time)\ntps = 275.795091 (without initial connection time)\ntps = 273.628649 (without initial connection time)\ntps = 273.049977 (without initial connection 
time)\nlock3.sql\ntps = 1168.557054 (without initial connection time)\ntps = 1168.139469 (without initial connection time)\ntps = 1166.366440 (without initial connection time)\ntps = 1165.464214 (without initial connection time)\ntps = 1167.250809 (without initial connection time)\nlock4.sql\ntps = 274.842298 (without initial connection time)\ntps = 277.911394 (without initial connection time)\ntps = 278.702620 (without initial connection time)\ntps = 275.715606 (without initial connection time)\ntps = 278.816060 (without initial connection time)\n\nDavid",
"msg_date": "Tue, 15 Mar 2022 15:47:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Good day, David.\n\nI'm looking at the patch and there are a few things I don't understand.\n\n`GrantLockLocal` allocates a `LOCALLOCKOWNER` and links it into\n`locallock->locallockowners`. It links it regardless of whether `owner` is\nNULL. But then `RemoveLocalLock` does `Assert(locallockowner->owner != NULL);`.\nWhy shouldn't that fail?\n\n`GrantLockLocal` allocates the `LOCALLOCKOWNER` in `TopMemoryContext`,\nbut there is a single `pfree(locallockowner)` in `LockReassignOwner`.\nIt looks like there should be more `pfree` calls. Shouldn't there be?\n\n`GrantLockLocal` does `dlist_push_tail`, but wouldn't it be better to\ndo `dlist_push_head`? Resource owners usually form a stack, so when an\nowner searches for itself it is usually the last one added to the list.\n`dlist_foreach` would then find it sooner if it were added at the head.\n\nregards\n\n---------\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n",
"msg_date": "Tue, 05 Apr 2022 18:40:52 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 6 Apr 2022 at 03:40, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> I'm looking on patch and don't get some moments.\n>\n> `GrantLockLocal` allocates `LOCALLOCKOWNER` and links it into\n> `locallock->locallockowners`. It links it regardless `owner` could be\n> NULL. But then `RemoveLocalLock` does `Assert(locallockowner->owner != NULL);`.\n> Why it should not fail?\n>\n> `GrantLockLocal` allocates `LOCALLOCKOWNER` in `TopMemoryContext`.\n> But there is single `pfree(locallockowner)` in `LockReassignOwner`.\n> Looks like there should be more `pfree`. Shouldn't they?\n>\n> `GrantLockLocal` does `dlist_push_tail`, but isn't it better to\n> do `dlist_push_head`? Resource owners usually form stack, so usually\n> when owner searches for itself it is last added to list.\n> Then `dlist_foreach` will find it sooner if it were added to the head.\n\nThanks for having a look at this. It's a bit unrealistic for me to\nget a look at addressing these for v15. I've pushed this one out to\nthe next CF.\n\nDavid\n\n\n",
"msg_date": "Thu, 7 Apr 2022 16:05:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3501/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 12:04:11 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 07:04, Jacob Champion <jchampion@timescale.com> wrote:\n> This entry has been waiting on author input for a while (our current\n> threshold is roughly two weeks), so I've marked it Returned with\n> Feedback.\n\nThanks for taking care of this. You dealt with this correctly based on\nthe fact that I'd failed to rebase before or during the entire July\nCF.\n\nI'm still interested in having the LockReleaseAll slowness fixed, so\nhere's a rebased patch.\n\nDavid",
"msg_date": "Wed, 3 Aug 2022 15:33:57 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi David,\n\nThis is a review of the speed-up-releasing-of-locks patch.\n\nContents & Purpose:\nA subject is missing from the patch. It would have been easier to understand the purpose had it been included.\nIncluded in the patch is a change to the README, but no new tests are included.\n\nInitial Run:\nThe patch applies cleanly to HEAD. The regression tests all pass\nsuccessfully against the new patch.\n\nNitpicking & conclusion:\nI don't see any performance improvement in my tests. Lots of comments\nwere removed which were not fully replaced. The change of log level for ReleaseLockIfHeld: failed\nfrom WARNING to PANIC is a mystery.\nThe change in the README doesn't look right:\n`Any subsequent lockers are share lockers wait \nwaiting for the VXID to terminate via some other method) is for deadlock`. This sentence could be rewritten.\nAlso, more comments could be added to explain the new methods added.\n\nThanks,\nAnkit",
"msg_date": "Thu, 03 Nov 2022 15:42:05 +0000",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Thank you for looking at the patch.\n\nOn Fri, 4 Nov 2022 at 04:43, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> I don't see any performance improvement in tests.\n\nAre you able to share what your test was?\n\nIn order to see a performance improvement you're likely going to have\nto obtain a large number of locks in the session so that the local\nlock table becomes bloated, then continue to run some fast query and\nobserve that LockReleaseAll has become slower as a result of the hash\ntable becoming bloated. Try running pgbench with -M prepared, using a\nSELECT on a hash partitioned table with a good number of partitions to\nlook up a single row. The reason this becomes slow is that the\nplanner will try a generic plan on the 6th execution which will lock\nevery partition and bloat the local lock table. From then on it will\nuse a custom plan which only locks a single leaf partition.\n\nI just tried the following:\n\n$ pgbench -i --partition-method=hash --partitions=1000 postgres\n\nMaster:\n$ pgbench -T 60 -S -M prepared postgres | grep tps\ntps = 21286.172326 (without initial connection time)\n\nPatched:\n$ pgbench -T 60 -S -M prepared postgres | grep tps\ntps = 23034.063261 (without initial connection time)\n\nIf I try again with 10,000 partitions, I get:\n\nMaster:\n$ pgbench -T 60 -S -M prepared postgres | grep tps\ntps = 13044.290903 (without initial connection time)\n\nPatched:\n$ pgbench -T 60 -S -M prepared postgres | grep tps\ntps = 22683.545686 (without initial connection time)\n\nDavid\n",
"msg_date": "Wed, 16 Nov 2022 10:03:50 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 09:04, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 3 Aug 2022 at 07:04, Jacob Champion <jchampion@timescale.com> wrote:\n> > This entry has been waiting on author input for a while (our current\n> > threshold is roughly two weeks), so I've marked it Returned with\n> > Feedback.\n>\n> Thanks for taking care of this. You dealt with this correctly based on\n> the fact that I'd failed to rebase before or during the entire July\n> CF.\n>\n> I'm still interested in having the LockReleaseAll slowness fixed, so\n> here's a rebased patch.\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n[15:40:00.287] [1239/1809] Linking target src/backend/postgres\n[15:40:00.287] FAILED: src/backend/postgres\n[15:40:00.287] cc @src/backend/postgres.rsp\n[15:40:00.287] /usr/bin/ld:\nsrc/backend/postgres_lib.a.p/replication_logical_launcher.c.o: in\nfunction `logicalrep_worker_onexit':\n[15:40:00.287] /tmp/cirrus-ci-build/build/../src/backend/replication/logical/launcher.c:773:\nundefined reference to `LockReleaseAll'\n[15:40:00.287] collect2: error: ld returned 1 exit status\n\n[1] - https://cirrus-ci.com/task/4562493863886848\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:55:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, 20 Jan 2023 at 00:26, vignesh C <vignesh21@gmail.com> wrote:\n> CFBot shows some compilation errors as in [1], please post an updated\n> version for the same:\n\nI've attached a rebased patch.\n\nWhile reading over this again, I wondered if instead of allocating the\nmemory for the LOCALLOCKOWNER in TopMemoryContext, maybe we should\ncreate a Slab context as a child of TopMemoryContext and perform the\nallocations there. I feel like slab might be a better option here as\nit'll use slightly less memory due to it not rounding up allocations\nto the next power of 2. sizeof(LOCALLOCKOWNER) == 56, so it's not a\ngreat deal of memory, but more than nothing. The primary reason that I\nthink this might be a good idea is mostly around better handling of\nchunk on block fragmentation in slab.c than aset.c. If we have\ntransactions which create a large number of locks then we may end up\ngrowing the TopMemoryContext and never releasing the AllocBlocks and\njust having a high number of 64-byte chunks left on the freelist\nthat'll maybe never be used again. I'm thinking slab.c might handle\nthat better as it'll only keep around 10 completely empty SlabBlocks\nbefore it'll start free'ing them. The slab allocator is quite a bit\nfaster now as a result of d21ded75f.\n\nI would like to get this LockReleaseAll problem finally fixed in PG16,\nbut I'd feel much better about this patch if it had some review from\nsomeone who has more in-depth knowledge of the locking code.\n\nI've also gone and adjusted all the places that upgraded the\nelog(WARNING)s of local table corruption to PANIC and put them back to\nuse WARNING again. While I think it might be a good idea to do that,\nit seems to be adding a bit more resistance to this patch which I\ndon't think it really needs. Maybe we can consider that in a separate\neffort.\n\nDavid",
"msg_date": "Tue, 24 Jan 2023 16:57:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi David,\n\nOn Tue, Jan 24, 2023 at 12:58 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 20 Jan 2023 at 00:26, vignesh C <vignesh21@gmail.com> wrote:\n> > CFBot shows some compilation errors as in [1], please post an updated\n> > version for the same:\n>\n> I've attached a rebased patch.\n\nThanks for the new patch.\n\nMaybe you're planning to do it once this patch is past the PoC phase\n(isn't it?), but it would be helpful to have commentary on all the new\ndlist fields.\n\nEspecially, I think the following warrants a bit more explanation than the others:\n\n- /* We can remember up to MAX_RESOWNER_LOCKS references to local locks. */\n- int nlocks; /* number of owned locks */\n- LOCALLOCK *locks[MAX_RESOWNER_LOCKS]; /* list of owned locks */\n+ dlist_head locks; /* dlist of owned locks */\n\nThis seems to be replacing what is a cache with an upper limit on the\nnumber of cached locks with something that has no limit on how many\nper-owner locks are remembered. I wonder whether we'd be doing\nadditional work in some cases with the new no-limit implementation\nthat wasn't being done before (where the owner's locks array is\noverflowed) or maybe not much because the new implementation of\nResourceOwner{Remember|Forget}Lock() is simple push/delete of a dlist\nnode from the owner's dlist?\n\nThe following comment is now obsolete:\n\n/*\n * LockReassignCurrentOwner\n * Reassign all locks belonging to CurrentResourceOwner to belong\n * to its parent resource owner.\n *\n * If the caller knows what those locks are, it can pass them as an array.\n * That speeds up the call significantly, when a lot of locks are held\n * (e.g pg_dump with a large schema). Otherwise, pass NULL for locallocks,\n * and we'll traverse through our hash table to find them.\n */\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n",
"msg_date": "Tue, 31 Jan 2023 23:07:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 16:57:37 +1300, David Rowley wrote:\n> I've attached a rebased patch.\n\nLooks like there's some issue causing tests to fail probabilistically:\n\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3501\n\nSeveral failures are when testing a 32bit build.\n\n\n\n> While reading over this again, I wondered if instead of allocating the\n> memory for the LOCALLOCKOWNER in TopMemoryContext, maybe we should\n> create a Slab context as a child of TopMemoryContext and perform the\n> allocations there.\n\nYes, that does make sense.\n\n\n> I would like to get this LockReleaseAll problem finally fixed in PG16,\n> but I'd feel much better about this patch if it had some review from\n> someone who has more in-depth knowledge of the locking code.\n\nI feel my review wouldn't be independent, but I'll give it a shot if nobody\nelse does.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:05:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Wed, 1 Feb 2023 at 03:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> Maybe you're planning to do it once this patch is past the PoC phase\n> (isn't it?), but it would be helpful to have commentary on all the new\n> dlist fields.\n\nI've added comments on the new fields. Maybe we can say the patch is \"wip\".\n\n> This seems to be replacing what is a cache with an upper limit on the\n> number of cached locks with something that has no limit on how many\n> per-owner locks are remembered. I wonder whether we'd be doing\n> additional work in some cases with the new no-limit implementation\n> that wasn't being done before (where the owner's locks array is\n> overflowed) or maybe not much because the new implementation of\n> ResourceOwner{Remember|Forget}Lock() is simple push/delete of a dlist\n> node from the owner's dlist?\n\nIt's a good question. The problem is I don't really have a good test\nto find out; we'd need to benchmark taking fewer than\n16 locks. 
On trying that, I find that there's just too much\nvariability in the performance between runs to determine if there's\nany slowdown.\n\n$ cat 10_locks.sql\nselect count(pg_advisory_lock(x)) from generate_series(1,10) x;\n\n$ pgbench -f 10_locks.sql@1000 -M prepared -T 10 -n postgres | grep -E \"(tps)\"\ntps = 47809.306088 (without initial connection time)\ntps = 66859.789072 (without initial connection time)\ntps = 37885.924616 (without initial connection time)\n\nOn trying with more locks, I see there are good wins from the patched version.\n\n$ cat 100_locks.sql\nselect count(pg_advisory_lock(x)) from generate_series(1,100) x;\n\n$ cat 1k_locks.sql\nselect count(pg_advisory_lock(x)) from generate_series(1,1000) x;\n\n$ cat 10k_locks.sql\nselect count(pg_advisory_lock(x)) from generate_series(1,10000) x;\n\nTest 1: Take 100 locks but periodically take 10k locks to bloat the\nlocal lock table.\n\nmaster:\n$ pgbench -f 100_locks.sql@1000 -f 10k_locks.sql@1 -M prepared -T 10\n-n postgres | grep -E \"(tps|script)\"\ntransaction type: multiple scripts\ntps = 2726.197037 (without initial connection time)\nSQL script 1: 100_locks.sql\n - 27219 transactions (99.9% of total, tps = 2722.496227)\nSQL script 2: 10k_locks.sql\n - 37 transactions (0.1% of total, tps = 3.700810)\n\npatched:\n$ pgbench -f 100_locks.sql@1000 -f 10k_locks.sql@1 -M prepared -T 10\n-n postgres | grep -E \"(tps|script)\"\ntransaction type: multiple scripts\ntps = 34047.297822 (without initial connection time)\nSQL script 1: 100_locks.sql\n - 340039 transactions (99.9% of total, tps = 34012.688879)\nSQL script 2: 10k_locks.sql\n - 346 transactions (0.1% of total, tps = 34.608943)\n\npatched without slab context:\n$ pgbench -f 100_locks.sql@1000 -f 10k_locks.sql@1 -M prepared -T 10\n-n postgres | grep -E \"(tps|script)\"\ntransaction type: multiple scripts\ntps = 34851.770846 (without initial connection time)\nSQL script 1: 100_locks.sql\n - 348097 transactions (99.9% of total, tps = 34818.662324)\nSQL 
script 2: 10k_locks.sql\n - 331 transactions (0.1% of total, tps = 33.108522)\n\nTest 2: Always take just 100 locks and don't bloat the local lock table.\n\nmaster:\n$ pgbench -f 100_locks.sql@1000 -M prepared -T 10 -n postgres | grep\n-E \"(tps|script)\"\ntps = 32682.491548 (without initial connection time)\n\npatched:\n$ pgbench -f 100_locks.sql@1000 -M prepared -T 10 -n postgres | grep\n-E \"(tps|script)\"\ntps = 35637.241815 (without initial connection time)\n\npatched without slab context:\n$ pgbench -f 100_locks.sql@1000 -M prepared -T 10 -n postgres | grep\n-E \"(tps|script)\"\ntps = 36192.185181 (without initial connection time)\n\nThe attached 0003 patch is an experiment to see if using a slab memory\ncontext has any advantages for storing the LOCALLOCKOWNER structs.\nThere seems to be a small performance hit from doing this.\n\n> The following comment is now obsolete:\n>\n> /*\n> * LockReassignCurrentOwner\n> * Reassign all locks belonging to CurrentResourceOwner to belong\n> * to its parent resource owner.\n> *\n> * If the caller knows what those locks are, it can pass them as an array.\n> * That speeds up the call significantly, when a lot of locks are held\n> * (e.g pg_dump with a large schema). Otherwise, pass NULL for locallocks,\n> * and we'll traverse through our hash table to find them.\n> */\n\nI've removed the obsolete part.\n\nI've attached another set of patches. I do need to spend longer\nlooking at this. I'm mainly attaching these as CI seems to be\nhighlighting a problem that I'm unable to recreate locally and I\nwanted to see if the attached fixes it.\n\nDavid",
"msg_date": "Fri, 10 Feb 2023 15:51:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 10/02/2023 04:51, David Rowley wrote:\n> I've attached another set of patches. I do need to spend longer\n> looking at this. I'm mainly attaching these as CI seems to be\n> highlighting a problem that I'm unable to recreate locally and I\n> wanted to see if the attached fixes it.\n\nI like this patch's approach.\n\n> index 296dc82d2ee..edb8b6026e5 100644\n> --- a/src/backend/commands/discard.c\n> +++ b/src/backend/commands/discard.c\n> @@ -71,7 +71,7 @@ DiscardAll(bool isTopLevel)\n> Async_UnlistenAll();\n> - LockReleaseAll(USER_LOCKMETHOD, true);\n> + LockReleaseSession(USER_LOCKMETHOD);\n> ResetPlanCache();\n\nThis assumes that there are no transaction-level advisory locks. I think \nthat's OK. It took me a while to convince myself of that, though. I \nthink we need a high level comment somewhere that explains what \nassumptions we make on which locks can be held in session mode and which \nin transaction mode.\n\n> @@ -3224,14 +3206,6 @@ PostPrepare_Locks(TransactionId xid)\n> Assert(lock->nGranted <= lock->nRequested);\n> Assert((proclock->holdMask & ~lock->grantMask) == 0);\n> \n> - /* Ignore it if nothing to release (must be a session lock) */\n> - if (proclock->releaseMask == 0)\n> - continue;\n> -\n> - /* Else we should be releasing all locks */\n> - if (proclock->releaseMask != proclock->holdMask)\n> - elog(PANIC, \"we seem to have dropped a bit somewhere\");\n> -\n> /*\n> * We cannot simply modify proclock->tag.myProc to reassign\n> * ownership of the lock, because that's part of the hash key and\n\nThis looks wrong. If you prepare a transaction that is holding any \nsession locks, we will now transfer them to the prepared transaction. \nAnd its locallock entry will be out of sync. To fix, I think we could \nkeep around the hash table that CheckForSessionAndXactLocks() builds, \nand use that here.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 12:44:34 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "Thank you for having a look at this. Apologies for not getting back to\nyou sooner.\n\nOn Wed, 5 Jul 2023 at 21:44, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 10/02/2023 04:51, David Rowley wrote:\n> > I've attached another set of patches. I do need to spend longer\n> > looking at this. I'm mainly attaching these as CI seems to be\n> > highlighting a problem that I'm unable to recreate locally and I\n> > wanted to see if the attached fixes it.\n>\n> I like this patch's approach.\n>\n> > index 296dc82d2ee..edb8b6026e5 100644\n> > --- a/src/backend/commands/discard.c\n> > +++ b/src/backend/commands/discard.c\n> > @@ -71,7 +71,7 @@ DiscardAll(bool isTopLevel)\n> > Async_UnlistenAll();\n> > - LockReleaseAll(USER_LOCKMETHOD, true);\n> > + LockReleaseSession(USER_LOCKMETHOD);\n> > ResetPlanCache();\n>\n> This assumes that there are no transaction-level advisory locks. I think\n> that's OK. It took me a while to convince myself of that, though. I\n> think we need a high level comment somewhere that explains what\n> assumptions we make on which locks can be held in session mode and which\n> in transaction mode.\n\nIsn't it ok because DISCARD ALL cannot run inside a transaction block,\nso there should be no locks taken apart from possibly session-level\nlocks?\n\nI've added a call to LockAssertNoneHeld(false) in there.\n\n> > @@ -3224,14 +3206,6 @@ PostPrepare_Locks(TransactionId xid)\n> > Assert(lock->nGranted <= lock->nRequested);\n> > Assert((proclock->holdMask & ~lock->grantMask) == 0);\n> >\n> > - /* Ignore it if nothing to release (must be a session lock) */\n> > - if (proclock->releaseMask == 0)\n> > - continue;\n> > -\n> > - /* Else we should be releasing all locks */\n> > - if (proclock->releaseMask != proclock->holdMask)\n> > - elog(PANIC, \"we seem to have dropped a bit somewhere\");\n> > -\n> > /*\n> > * We cannot simply modify proclock->tag.myProc to reassign\n> > * ownership of the lock, because that's part of the hash key and\n>\n> 
This looks wrong. If you prepare a transaction that is holding any\n> session locks, we will now transfer them to the prepared transaction.\n> And its locallock entry will be out of sync. To fix, I think we could\n> keep around the hash table that CheckForSessionAndXactLocks() builds,\n> and use that here.\n\nGood catch. I've modified the patch to keep the hash table built in\nCheckForSessionAndXactLocks around for longer so that we can check for\nsession locks.\n\nI've attached an updated patch mainly to get CI checking this. I\nsuspect something is wrong as subscription/015_stream is timing out.\nI've not gotten to the bottom of that yet.\n\nDavid",
"msg_date": "Tue, 12 Sep 2023 00:00:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 11/09/2023 15:00, David Rowley wrote:\n> On Wed, 5 Jul 2023 at 21:44, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>>> index 296dc82d2ee..edb8b6026e5 100644\n>>> --- a/src/backend/commands/discard.c\n>>> +++ b/src/backend/commands/discard.c\n>>> @@ -71,7 +71,7 @@ DiscardAll(bool isTopLevel)\n>>> Async_UnlistenAll();\n>>> - LockReleaseAll(USER_LOCKMETHOD, true);\n>>> + LockReleaseSession(USER_LOCKMETHOD);\n>>> ResetPlanCache();\n>>\n>> This assumes that there are no transaction-level advisory locks. I think\n>> that's OK. It took me a while to convince myself of that, though. I\n>> think we need a high level comment somewhere that explains what\n>> assumptions we make on which locks can be held in session mode and which\n>> in transaction mode.\n> \n> Isn't it ok because DISCARD ALL cannot run inside a transaction block,\n> so there should be no locks taken apart from possibly session-level\n> locks?\n\nHmm, sounds valid. I think I convinced myself that it's OK through some \nother reasoning, but I don't remember it now.\n\n> I've added a call to LockAssertNoneHeld(false) in there.\n\nI don't see it in the patch?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Fri, 15 Sep 2023 13:37:11 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I've added a call to LockAssertNoneHeld(false) in there.\n>\n> I don't see it in the patch?\n\nhmm. I must've git format-patch before committing that part.\n\nI'll try that again... see attached.\n\nDavid",
"msg_date": "Mon, 18 Sep 2023 16:08:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On 18/09/2023 07:08, David Rowley wrote:\n> On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> I've added a call to LockAssertNoneHeld(false) in there.\n>>\n>> I don't see it in the patch?\n> \n> hmm. I must've git format-patch before committing that part.\n> \n> I'll try that again... see attached.\n\nThis needed a rebase after my ResourceOwner refactoring. Attached.\n\nA few quick comments:\n\n- It would be nice to add a test for the issue that you fixed in patch \nv7, i.e. if you prepare a transaction while holding session-level locks.\n\n- GrantLockLocal() now calls MemoryContextAlloc(), which can fail if you \nare out of memory. Is that handled gracefully or is the lock leaked?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 9 Nov 2023 18:18:21 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 21:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 18/09/2023 07:08, David Rowley wrote:\n> > On Fri, 15 Sept 2023 at 22:37, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>> I've added a call to LockAssertNoneHeld(false) in there.\n> >>\n> >> I don't see it in the patch?\n> >\n> > hmm. I must've git format-patch before committing that part.\n> >\n> > I'll try that again... see attached.\n>\n> This needed a rebase after my ResourceOwner refactoring. Attached.\n>\n> A few quick comments:\n>\n> - It would be nice to add a test for the issue that you fixed in patch\n> v7, i.e. if you prepare a transaction while holding session-level locks.\n>\n> - GrantLockLocal() now calls MemoryContextAlloc(), which can fail if you\n> are out of memory. Is that handled gracefully or is the lock leaked?\n\nCFBot shows one of the test has aborted at [1] with:\n[20:54:28.535] Core was generated by `postgres: subscriber: logical\nreplication apply worker for subscription 16397 '.\n[20:54:28.535] Program terminated with signal SIGABRT, Aborted.\n[20:54:28.535] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n[20:54:28.535] Download failed: Invalid argument. 
Continuing without\nsource file ./signal/../sysdeps/unix/sysv/linux/raise.c.\n[20:54:28.627]\n[20:54:28.627] Thread 1 (Thread 0x7f0ea02d1a40 (LWP 50984)):\n[20:54:28.627] #0 __GI_raise (sig=sig@entry=6) at\n../sysdeps/unix/sysv/linux/raise.c:50\n...\n...\n[20:54:28.627] #2 0x00005618e989d62f in ExceptionalCondition\n(conditionName=conditionName@entry=0x5618e9b40f70\n\"dlist_is_empty(&(MyProc->myProcLocks[i]))\",\nfileName=fileName@entry=0x5618e9b40ec0\n\"../src/backend/storage/lmgr/proc.c\", lineNumber=lineNumber@entry=856)\nat ../src/backend/utils/error/assert.c:66\n[20:54:28.627] No locals.\n[20:54:28.627] #3 0x00005618e95e6847 in ProcKill (code=<optimized\nout>, arg=<optimized out>) at ../src/backend/storage/lmgr/proc.c:856\n[20:54:28.627] i = <optimized out>\n[20:54:28.627] proc = <optimized out>\n[20:54:28.627] procgloballist = <optimized out>\n[20:54:28.627] __func__ = \"ProcKill\"\n[20:54:28.627] #4 0x00005618e959ebcc in shmem_exit\n(code=code@entry=1) at ../src/backend/storage/ipc/ipc.c:276\n[20:54:28.627] __func__ = \"shmem_exit\"\n[20:54:28.627] #5 0x00005618e959ecd0 in proc_exit_prepare\n(code=code@entry=1) at ../src/backend/storage/ipc/ipc.c:198\n[20:54:28.627] __func__ = \"proc_exit_prepare\"\n[20:54:28.627] #6 0x00005618e959ee8e in proc_exit (code=code@entry=1)\nat ../src/backend/storage/ipc/ipc.c:111\n[20:54:28.627] __func__ = \"proc_exit\"\n[20:54:28.627] #7 0x00005618e94aa54d in BackgroundWorkerMain () at\n../src/backend/postmaster/bgworker.c:805\n[20:54:28.627] local_sigjmp_buf = {{__jmpbuf =\n{94665009627112, -3865857745677845768, 0, 0, 140732736634980, 1,\n3865354362587970296, 7379258256398875384}, __mask_was_saved = 1,\n__saved_mask = {__val = {18446744066192964099, 94665025527920,\n94665025527920, 94665025527920, 0, 94665025528120, 8192, 1,\n94664997686410, 94665009627040, 94664997622076, 94665025527920, 1, 0,\n0, 140732736634980}}}}\n[20:54:28.627] worker = 0x5618eb37c570\n[20:54:28.627] entrypt = <optimized out>\n[20:54:28.627] __func__ = 
\"BackgroundWorkerMain\"\n[20:54:28.627] #8 0x00005618e94b495c in do_start_bgworker\n(rw=rw@entry=0x5618eb3b73c8) at\n../src/backend/postmaster/postmaster.c:5697\n[20:54:28.627] worker_pid = <optimized out>\n[20:54:28.627] __func__ = \"do_start_bgworker\"\n[20:54:28.627] #9 0x00005618e94b4c32 in maybe_start_bgworkers () at\n../src/backend/postmaster/postmaster.c:5921\n[20:54:28.627] rw = 0x5618eb3b73c8\n[20:54:28.627] num_launched = 0\n[20:54:28.627] now = 0\n[20:54:28.627] iter = {cur = 0x5618eb3b79a8, next =\n0x5618eb382a20, prev = 0x5618ea44a980 <BackgroundWorkerList>}\n[20:54:28.627] #10 0x00005618e94b574a in process_pm_pmsignal () at\n../src/backend/postmaster/postmaster.c:5073\n[20:54:28.627] __func__ = \"process_pm_pmsignal\"\n[20:54:28.627] #11 0x00005618e94b5f4a in ServerLoop () at\n../src/backend/postmaster/postmaster.c:1760\n\n[1] - https://cirrus-ci.com/task/5118173163290624?logs=cores#L51\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 9 Jan 2024 11:54:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up transaction completion faster after many relations are\n accessed in a transaction"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nDuring the testing allow group access permissions on the standby database\ndirectory,\none of my colleague found the issue, that pg_basebackup doesn't verify\nwhether the existing data directory has the required permissions or not?\nThis issue is not related allow group access permissions. This problem was\npresent for a long time, (I think from the time the pg_basebackup was\nintroduced).\n\nAttached patch fixes the problem similar like initdb by changing the\npermissions of the data\ndirectory to the required permissions.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 12 Feb 2019 18:03:37 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 06:03:37PM +1100, Haribabu Kommi wrote:\n> During the testing allow group access permissions on the standby database\n> directory, one of my colleague found the issue, that pg_basebackup\n> doesn't verify whether the existing data directory has the required\n> permissions or not? This issue is not related allow group access\n> permissions. This problem was present for a long time, (I think from\n> the time the pg_basebackup was introduced).\n\nIn which case this would cause the postmaster to fail to start.\n\n> Attached patch fixes the problem similar like initdb by changing the\n> permissions of the data\n> directory to the required permissions.\n\nIt looks right to me and takes care of the case where group access is\nallowed. Still, we have not seen any complains about the current\nbehavior either and pg_basebackup is around for some time already, so\nI would tend to not back-patch that. Any thoughts from others?\n--\nMichael",
"msg_date": "Wed, 13 Feb 2019 10:42:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 12:42 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Tue, Feb 12, 2019 at 06:03:37PM +1100, Haribabu Kommi wrote:\n> > During the testing allow group access permissions on the standby database\n> > directory, one of my colleague found the issue, that pg_basebackup\n> > doesn't verify whether the existing data directory has the required\n> > permissions or not? This issue is not related allow group access\n> > permissions. This problem was present for a long time, (I think from\n> > the time the pg_basebackup was introduced).\n>\n> In which case this would cause the postmaster to fail to start.\n>\n\nYes, the postmaster fails to start, but I feel if pg_basebackup takes care\nof correcting the file permissions automatically like initdb, that will be\ngood.\n\n\n> > Attached patch fixes the problem similar like initdb by changing the\n> > permissions of the data\n> > directory to the required permissions.\n>\n> It looks right to me and takes care of the case where group access is\n> allowed. Still, we have not seen any complains about the current\n> behavior either and pg_basebackup is around for some time already, so\n> I would tend to not back-patch that. Any thoughts from others?\n>\n\nThis should back-patch till 11 where the group access is introduced.\nBecause of lack of complaints, I agree with you that there is no need of\nfurther back-patch.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Wed, Feb 13, 2019 at 12:42 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Feb 12, 2019 at 06:03:37PM +1100, Haribabu Kommi wrote:\n> During the testing allow group access permissions on the standby database\n> directory, one of my colleague found the issue, that pg_basebackup\n> doesn't verify whether the existing data directory has the required\n> permissions or not? This issue is not related allow group access\n> permissions. 
This problem was present for a long time, (I think from\n> the time the pg_basebackup was introduced).\n\nIn which case this would cause the postmaster to fail to start.Yes, the postmaster fails to start, but I feel if pg_basebackup takes careof correcting the file permissions automatically like initdb, that will be good. \n> Attached patch fixes the problem similar like initdb by changing the\n> permissions of the data\n> directory to the required permissions.\n\nIt looks right to me and takes care of the case where group access is\nallowed. Still, we have not seen any complains about the current\nbehavior either and pg_basebackup is around for some time already, so\nI would tend to not back-patch that. Any thoughts from others?This should back-patch till 11 where the group access is introduced.Because of lack of complaints, I agree with you that there is no need offurther back-patch. Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Wed, 13 Feb 2019 18:42:36 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 06:42:36PM +1100, Haribabu Kommi wrote:\n> This should back-patch till 11 where the group access is introduced.\n> Because of lack of complaints, I agree with you that there is no need of\n> further back-patch.\n\nI am confused by the link with group access. The patch you are\nsending is compatible down to v11, but we could also do it further\ndown by just using chmod with S_IRWXU on the target folder if it\nexists and is empty.\n--\nMichael",
"msg_date": "Thu, 14 Feb 2019 13:04:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 3:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Feb 13, 2019 at 06:42:36PM +1100, Haribabu Kommi wrote:\n> > This should back-patch till 11 where the group access is introduced.\n> > Because of lack of complaints, I agree with you that there is no need of\n> > further back-patch.\n>\n> I am confused by the link with group access.\n\n\nApologies to confuse you by linking it with group access. This patch doesn't\nhave an interaction with group access. From v11 onwards, PostgreSQL server\naccepts two set of permissions for the data directory because of group\naccess.\n\nwe have an application that is used to create the data directory with\nowner access (0700), but with initdb group permissions option, it\nautomatically\nconverts to (0750) by the initdb. But pg_basebackup doesn't change it when\nit tries to do a backup from a group access server.\n\n\n> The patch you are\n> sending is compatible down to v11, but we could also do it further\n> down by just using chmod with S_IRWXU on the target folder if it\n> exists and is empty.\n>\n\nYes, I agree with you that by changing chmod as you said fixes it in the\nback-branches.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Thu, Feb 14, 2019 at 3:04 PM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Feb 13, 2019 at 06:42:36PM +1100, Haribabu Kommi wrote:\n> This should back-patch till 11 where the group access is introduced.\n> Because of lack of complaints, I agree with you that there is no need of\n> further back-patch.\n\nI am confused by the link with group access.Apologies to confuse you by linking it with group access. This patch doesn'thave an interaction with group access. From v11 onwards, PostgreSQL serveraccepts two set of permissions for the data directory because of group access. 
we have an application that is used to create the data directory withowner access (0700), but with initdb group permissions option, it automaticallyconverts to (0750) by the initdb. But pg_basebackup doesn't change it whenit tries to do a backup from a group access server. The patch you are\nsending is compatible down to v11, but we could also do it further\ndown by just using chmod with S_IRWXU on the target folder if it\nexists and is empty.Yes, I agree with you that by changing chmod as you said fixes it in theback-branches. Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Thu, 14 Feb 2019 18:34:07 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 06:34:07PM +1100, Haribabu Kommi wrote:\n> we have an application that is used to create the data directory with\n\nWell, initdb would do that happily, so there is no actual any need to\ndo that to begin with. Anyway..\n\n> owner access (0700), but with initdb group permissions option, it\n> automatically\n> converts to (0750) by the initdb. But pg_basebackup doesn't change it when\n> it tries to do a backup from a group access server.\n\nSo that's basically the opposite of the case I was thinking about,\nwhere you create a path for a base backup with permissions strictly\nhigher than 700, say 755, and the base backup path does not have\nenough restrictions. And in your case the permissions are too\nrestrictive because of the application creating the folder itself but\nthey should be relaxed if group access is enabled. Actually, that's\nsomething that we may want to do consistently across all branches. If\nan application calls pg_basebackup after creating a path, they most\nlikely change the permissions anyway to allow the postmaster to\nstart.\n--\nMichael",
"msg_date": "Thu, 14 Feb 2019 17:10:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 9:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Feb 14, 2019 at 06:34:07PM +1100, Haribabu Kommi wrote:\n> > we have an application that is used to create the data directory with\n>\n> Well, initdb would do that happily, so there is no actual any need to\n> do that to begin with. Anyway..\n>\n> > owner access (0700), but with initdb group permissions option, it\n> > automatically\n> > converts to (0750) by the initdb. But pg_basebackup doesn't change it\n> when\n> > it tries to do a backup from a group access server.\n>\n> So that's basically the opposite of the case I was thinking about,\n> where you create a path for a base backup with permissions strictly\n> higher than 700, say 755, and the base backup path does not have\n> enough restrictions. And in your case the permissions are too\n> restrictive because of the application creating the folder itself but\n> they should be relaxed if group access is enabled. Actually, that's\n> something that we may want to do consistently across all branches. If\n> an application calls pg_basebackup after creating a path, they most\n> likely change the permissions anyway to allow the postmaster to\n> start.\n>\n\nI think it could be argued that neither initdb *or* pg_basebackup should\nchange the permissions on an existing directory, because the admin may have\ndone that intentionally. But when they do create the directory, they should\nfollow the same patterns.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Feb 14, 2019 at 9:10 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Feb 14, 2019 at 06:34:07PM +1100, Haribabu Kommi wrote:\n> we have an application that is used to create the data directory with\n\nWell, initdb would do that happily, so there is no actual any need to\ndo that to begin with. 
Anyway..\n\n> owner access (0700), but with initdb group permissions option, it\n> automatically\n> converts to (0750) by the initdb. But pg_basebackup doesn't change it when\n> it tries to do a backup from a group access server.\n\nSo that's basically the opposite of the case I was thinking about,\nwhere you create a path for a base backup with permissions strictly\nhigher than 700, say 755, and the base backup path does not have\nenough restrictions. And in your case the permissions are too\nrestrictive because of the application creating the folder itself but\nthey should be relaxed if group access is enabled. Actually, that's\nsomething that we may want to do consistently across all branches. If\nan application calls pg_basebackup after creating a path, they most\nlikely change the permissions anyway to allow the postmaster to\nstart.I think it could be argued that neither initdb *or* pg_basebackup should change the permissions on an existing directory, because the admin may have done that intentionally. But when they do create the directory, they should follow the same patterns. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 14 Feb 2019 10:57:40 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Thu, Feb 14, 2019 at 9:10 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Thu, Feb 14, 2019 at 06:34:07PM +1100, Haribabu Kommi wrote:\n>> > we have an application that is used to create the data directory with\n>>\n>> Well, initdb would do that happily, so there is no actual any need to\n>> do that to begin with. Anyway..\n>>\n>> > owner access (0700), but with initdb group permissions option, it\n>> > automatically\n>> > converts to (0750) by the initdb. But pg_basebackup doesn't change it\n>> when\n>> > it tries to do a backup from a group access server.\n>>\n>> So that's basically the opposite of the case I was thinking about,\n>> where you create a path for a base backup with permissions strictly\n>> higher than 700, say 755, and the base backup path does not have\n>> enough restrictions. And in your case the permissions are too\n>> restrictive because of the application creating the folder itself but\n>> they should be relaxed if group access is enabled. Actually, that's\n>> something that we may want to do consistently across all branches. If\n>> an application calls pg_basebackup after creating a path, they most\n>> likely change the permissions anyway to allow the postmaster to\n>> start.\n>>\n>\n> I think it could be argued that neither initdb *or* pg_basebackup should\n> change the permissions on an existing directory, because the admin may have\n> done that intentionally. But when they do create the directory, they should\n> follow the same patterns.\n>\n\nHmm, even if the administrator set some specific permissions to the data\ndirectory,\nPostgreSQL server doesn't allow server to start if the permissions are not\n(0700)\nfor versions less than 11 and (0700 or 0750) for version 11 or later.\n\nTo let the user to use the PostgreSQL server, user must change the\npermissions\nof the data directory. 
So, I don't see a problem in changing the\npermissions by these\ntools.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 14 Feb 2019 23:21:19 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 11:21:19PM +1100, Haribabu Kommi wrote:\n> On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> I think it could be argued that neither initdb *or* pg_basebackup should\n>> change the permissions on an existing directory, because the admin may have\n>> done that intentionally. But when they do create the directory, they should\n>> follow the same patterns.\n> \n> Hmm, even if the administrator set some specific permissions to the data\n> directory, PostgreSQL server doesn't allow server to start if the\n> permissions are not (0700) for versions less than 11 and (0700 or\n> 0750) for version 11 or later.\n\nYes, particularly with pg_basebackup -R this adds an extra step in the\nuser flow.\n\n> To let the user to use the PostgreSQL server, user must change the\n> permissions of the data directory. So, I don't see a problem in\n> changing the permissions by these tools.\n\nI certainly agree with the point of Magnus that both tools should\nbehave consistently, and I cannot actually imagine why it would be\nuseful for an admin to keep a more permissive data folder while all\nthe contents already have umasks set at the same level as the primary\n(or what initdb has been told to use), but perhaps I lack imagination.\nIf we doubt about potential user impact, the usual, best, answer is to\nlet back-branches behave the way they do now, and only do something on\nHEAD.\n--\nMichael",
"msg_date": "Fri, 15 Feb 2019 08:15:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "At Fri, 15 Feb 2019 08:15:24 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190214231524.GC2240@paquier.xyz>\n> On Thu, Feb 14, 2019 at 11:21:19PM +1100, Haribabu Kommi wrote:\n> > On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >> I think it could be argued that neither initdb *or* pg_basebackup should\n> >> change the permissions on an existing directory, because the admin may have\n> >> done that intentionally. But when they do create the directory, they should\n> >> follow the same patterns.\n> > \n> > Hmm, even if the administrator set some specific permissions to the data\n> > directory, PostgreSQL server doesn't allow server to start if the\n> > permissions are not (0700) for versions less than 11 and (0700 or\n> > 0750) for version 11 or later.\n> \n> Yes, particularly with pg_basebackup -R this adds an extra step in the\n> user flow.\n\nI disagree that pg_basebackup rejects directories other than\nspecific permissions, since it is just a binary backup tool,\nwhich is not exclusive to making replication-standby. It ought to\nbe runnable and actually runnable by any OS users even by root,\nfor who postgres rejects to start. As mentioned upthread, it is\nsafe-side failure that server rejects to run on it.\n\n> > To let the user to use the PostgreSQL server, user must change the\n> > permissions of the data directory. 
So, I don't see a problem in\n> > changing the permissions by these tools.\n> \n> I certainly agree with the point of Magnus that both tools should\n> behave consistently, and I cannot actually imagine why it would be\n> useful for an admin to keep a more permissive data folder while all\n> the contents already have umasks set at the same level as the primary\n> (or what initdb has been told to use), but perhaps I lack imagination.\n> If we doubt about potential user impact, the usual, best, answer is to\n> let back-branches behave the way they do now, and only do something on\n> HEAD.\n\ninitdb is to create a directory on which server works and rather\nrejects existing directory, so I think the \"inconsistency\" seems\nfine.\n\nI can live with some new options, say --create-New-directory or\n--check-directory-Permission.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Feb 2019 09:24:15 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 09:24:15AM +0900, Kyotaro HORIGUCHI wrote:\n> I disagree that pg_basebackup rejects directories other than\n> specific permissions, since it is just a binary backup tool,\n> which is not exclusive to making replication-standby. It ought to\n> be runnable and actually runnable by any OS users even by root,\n> for who postgres rejects to start. As mentioned upthread, it is\n> safe-side failure that server rejects to run on it.\n\nPerhaps I do not fully understand your argument here. We do not\ndiscuss about making pg_basebackup fail in any way, just of having it\nadjust the umask of the target path so as users can simplify startups\nusing the generated base backup.\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 13:05:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 10:15 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Feb 14, 2019 at 11:21:19PM +1100, Haribabu Kommi wrote:\n> > On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> I think it could be argued that neither initdb *or* pg_basebackup should\n> >> change the permissions on an existing directory, because the admin may\n> have\n> >> done that intentionally. But when they do create the directory, they\n> should\n> >> follow the same patterns.\n> >\n> > Hmm, even if the administrator set some specific permissions to the data\n> > directory, PostgreSQL server doesn't allow server to start if the\n> > permissions are not (0700) for versions less than 11 and (0700 or\n> > 0750) for version 11 or later.\n>\n> Yes, particularly with pg_basebackup -R this adds an extra step in the\n> user flow.\n>\n> > To let the user to use the PostgreSQL server, user must change the\n> > permissions of the data directory. So, I don't see a problem in\n> > changing the permissions by these tools.\n>\n> I certainly agree with the point of Magnus that both tools should\n> behave consistently, and I cannot actually imagine why it would be\n> useful for an admin to keep a more permissive data folder while all\n> the contents already have umasks set at the same level as the primary\n> (or what initdb has been told to use), but perhaps I lack imagination.\n> If we doubt about potential user impact, the usual, best, answer is to\n> let back-branches behave the way they do now, and only do something on\n> HEAD.\n>\n\nI also agree that both inidb and pg_basebackup should behave same.\nOur main concern is that standby data directory that doesn't follow\nthe primary data directory permissions can lead failures when the standby\ngets promoted.\n\nLack of complaints from the users, how about making this change in the HEAD?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 20 Feb 2019 15:16:48 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 03:16:48PM +1100, Haribabu Kommi wrote:\n> Lack of complaints from the users, how about making this change in the HEAD?\n\nFine by me. I would tend to patch pg_basebackup so as the target\nfolder gets the correct umask instead of nuking the chmod call in\ninitdb so I think that we are on the same page. Let's see what the\nothers think.\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 13:48:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 5:17 AM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> On Fri, Feb 15, 2019 at 10:15 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Thu, Feb 14, 2019 at 11:21:19PM +1100, Haribabu Kommi wrote:\n>> > On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net>\n>> wrote:\n>> >> I think it could be argued that neither initdb *or* pg_basebackup\n>> should\n>> >> change the permissions on an existing directory, because the admin may\n>> have\n>> >> done that intentionally. But when they do create the directory, they\n>> should\n>> >> follow the same patterns.\n>> >\n>> > Hmm, even if the administrator set some specific permissions to the data\n>> > directory, PostgreSQL server doesn't allow server to start if the\n>> > permissions are not (0700) for versions less than 11 and (0700 or\n>> > 0750) for version 11 or later.\n>>\n>> Yes, particularly with pg_basebackup -R this adds an extra step in the\n>> user flow.\n>>\n>\nPerhaps we should make the enforcement of permissions conditional on -R?\nOTOH that's documented as \"write recovery.conf\", but we could change that\nto be \"prepare for replication\" or something?\n\n\n> To let the user to use the PostgreSQL server, user must change the\n>> > permissions of the data directory. 
So, I don't see a problem in\n>> > changing the permissions by these tools.\n>>\n>> I certainly agree with the point of Magnus that both tools should\n>> behave consistently, and I cannot actually imagine why it would be\n>> useful for an admin to keep a more permissive data folder while all\n>> the contents already have umasks set at the same level as the primary\n>> (or what initdb has been told to use), but perhaps I lack imagination.\n>> If we doubt about potential user impact, the usual, best, answer is to\n>> let back-branches behave the way they do now, and only do something on\n>> HEAD.\n>>\n>\n> I also agree that both inidb and pg_basebackup should behave same.\n> Our main concern is that standby data directory that doesn't follow\n> the primary data directory permissions can lead failures when the standby\n> gets promoted.\n>\n\nI don't think that follows at all. There are many scenarios where you'd\nwant the standby to have different permissions than the primary. And I'm\nnot sure it's our business to enforce that. A much much more common mistake\npeople make is run pg_basebackup as the wrong user, thereby getting the\nwrong owner of all files. But that doesn't mean we should enforce the\nowner/group of the files.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 20 Feb 2019 09:40:18 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 7:40 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n> On Wed, Feb 20, 2019 at 5:17 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n>\n>>\n>> On Fri, Feb 15, 2019 at 10:15 AM Michael Paquier <michael@paquier.xyz>\n>> wrote:\n>>\n>>> On Thu, Feb 14, 2019 at 11:21:19PM +1100, Haribabu Kommi wrote:\n>>> > On Thu, Feb 14, 2019 at 8:57 PM Magnus Hagander <magnus@hagander.net>\n>>> wrote:\n>>> >> I think it could be argued that neither initdb *or* pg_basebackup\n>>> should\n>>> >> change the permissions on an existing directory, because the admin\n>>> may have\n>>> >> done that intentionally. But when they do create the directory, they\n>>> should\n>>> >> follow the same patterns.\n>>> >\n>>> > Hmm, even if the administrator set some specific permissions to the\n>>> data\n>>> > directory, PostgreSQL server doesn't allow server to start if the\n>>> > permissions are not (0700) for versions less than 11 and (0700 or\n>>> > 0750) for version 11 or later.\n>>>\n>>> Yes, particularly with pg_basebackup -R this adds an extra step in the\n>>> user flow.\n>>>\n>>\n> Perhaps we should make the enforcement of permissions conditional on -R?\n> OTOH that's documented as \"write recovery.conf\", but we could change that\n> to be \"prepare for replication\" or something?\n>\n\nYes, the enforcement of permissions can be done only when -R option is\nprovided.\nThe documentation is changed in v12 already as \"write configuration for\nreplication\".\n\n\n>\n> > To let the user to use the PostgreSQL server, user must change the\n>>> > permissions of the data directory. 
So, I don't see a problem in\n>>> > changing the permissions by these tools.\n>>>\n>>> I certainly agree with the point of Magnus that both tools should\n>>> behave consistently, and I cannot actually imagine why it would be\n>>> useful for an admin to keep a more permissive data folder while all\n>>> the contents already have umasks set at the same level as the primary\n>>> (or what initdb has been told to use), but perhaps I lack imagination.\n>>> If we doubt about potential user impact, the usual, best, answer is to\n>>> let back-branches behave the way they do now, and only do something on\n>>> HEAD.\n>>>\n>>\n>> I also agree that both inidb and pg_basebackup should behave same.\n>> Our main concern is that standby data directory that doesn't follow\n>> the primary data directory permissions can lead failures when the standby\n>> gets promoted.\n>>\n>\n> I don't think that follows at all. There are many scenarios where you'd\n> want the standby to have different permissions than the primary.\n>\n\nI really having a hard time to understand that how the different\npermissions are possible?\nI think of that the standby is having more restrict permissions. May be the\nstandby is not a\nhot standby?\n\nCan you please provide some more details?\n\nTill v11, PostgreSQL allows the data directory permissions to be 0700 of\nthe directory, otherwise\nserver start fails, even if the pg_basebackup is successful. In my testing\nI came to know that data\ndirectory permissions less than 0700 e.g- 0300 also the server is started.\nI feel the check of validating\ndata directory is checking whether are there any group permissions or not?\nBut it didn't whether the\ncurrent owner have all the permissions are not? Is this the scenario are\nyou expecting?\n\n\n\n> And I'm not sure it's our business to enforce that. A much much more\n> common mistake people make is run pg_basebackup as the wrong user, thereby\n> getting the wrong owner of all files. 
But that doesn't mean we should\n> enforce the owner/group of the files.\n>\n\nI didn't understand this point also clearly. The system user who executes\nthe pg_basebackup command,\nall the database files are associated with that user. If that corresponding\nuser don't have permissions to\naccess that existing data folder, the backup fails.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 22 Feb 2019 19:10:49 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "pg_basebackup copies the data directory permission mode from the\nupstream server. But it doesn't copy the ownership. So if say the\nupstream server allows group access and things are owned by\npostgres:postgres, and I want to make a copy for local development and\nmake a backup into a directory owned by peter:staff without group\naccess, then it would be inappropriate for pg_basebackup to change the\npermissions on that directory.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 13:59:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 8, 2019 at 11:59 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> pg_basebackup copies the data directory permission mode from the\n> upstream server. But it doesn't copy the ownership. So if say the\n> upstream server allows group access and things are owned by\n> postgres:postgres, and I want to make a copy for local development and\n> make a backup into a directory owned by peter:staff without group\n> access, then it would be inappropriate for pg_basebackup to change the\n> permissions on that directory.\n>\n\nYes, I agree that it may be a problem if the existing data directory\npermissions\nare 0700 to changing it to 0750. But it may not be a problem for the\nscenarios,\nwhere the existing data permissions >=0750, to the upstream permissions.\nBecause user must need to change anyway to start the server, otherwise\nserver\nstart fails, and also the files inside the data folder follows the\npermissions of the\nupstream data directory.\n\nusually production systems follows same permissions are of upstream, I don't\nsee a problem in following the same for development environment also?\n\ncomments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Sat, 9 Mar 2019 12:19:43 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-09 02:19, Haribabu Kommi wrote:\n> Yes, I agree that it may be a problem if the existing data directory\n> permissions\n> are 0700 to changing it to 0750. But it may not be a problem for the\n> scenarios,\n> where the existing data permissions >=0750, to the upstream permissions.\n> Because user must need to change anyway to start the server, otherwise\n> server\n> start fails, and also the files inside the data folder follows the\n> permissions of the\n> upstream data directory.\n> \n> usually production systems follows same permissions are of upstream, I don't\n> see a problem in following the same for development environment also?\n\nI think the potential problems of getting this wrong are bigger than the\nissue we are trying to fix.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Mar 2019 00:33:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 10:33 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-09 02:19, Haribabu Kommi wrote:\n> > Yes, I agree that it may be a problem if the existing data directory\n> > permissions\n> > are 0700 to changing it to 0750. But it may not be a problem for the\n> > scenarios,\n> > where the existing data permissions >=0750, to the upstream permissions.\n> > Because user must need to change anyway to start the server, otherwise\n> > server\n> > start fails, and also the files inside the data folder follows the\n> > permissions of the\n> > upstream data directory.\n> >\n> > usually production systems follows same permissions are of upstream, I don't\n> > see a problem in following the same for development environment also?\n>\n> I think the potential problems of getting this wrong are bigger than the\n> issue we are trying to fix.\n>\n\nThanks for your opinion. I am not sure exactly what are the problems.\nBut anyway I can go with your suggestion.\n\nHow about changing the data directory permissions for the -R scenario?\nif executing pg_basebackup on to an existing data directory is a common\nscenario? or leave it?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 15 Mar 2019 18:23:48 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Mar 14, 2019 at 7:34 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I think the potential problems of getting this wrong are bigger than the\n> issue we are trying to fix.\n\nI think the question is: how do we know what the user intended? If\nthe user wants the directory to be accessible only to the owner, then\nwe ought to set the permissions on the directory itself and of\neverything inside it to 0700 (or 0600). If they want group access, we\nshould set everything to 0750 (or 0644). But how do we know what the\nuser wants?\n\nRight now, we take the position that the user wants the individual\nfiles to have the same mode that they do on the master, but the\ndirectory should retain its existing permissions. That appears to be\npretty silly, because that might end up creating a bunch of files\ninside the directory that are marked as group-readable while the\ndirectory itself isn't; surely nobody wants that. Adopting this patch\nwould fix that inconsistency.\n\nHowever, it might be better to go the other way. Maybe pg_basebackup\nshould decide whether group permission is appropriate for the\ncontained files and directories not by looking at the master, but by\nlooking at the directory into which it's writing. The basic objection\nto this patch seems to be that we should not assume that the user got\nthe permissions on the existing directory wrong, and I think that\nobjection is fair, but if we accept it, then we should ask why we're\nsetting the permission of everything under that directory according to\nsome other methodology.\n\nAnother option would be to provide a pg_basebackup option to allow the\nuser to specify what they intended i.e. --[no-]group-read. (Tying it\nto -R doesn't sound like a good decision to me.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Sat, 16 Mar 2019 10:29:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Mar 14, 2019 at 7:34 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > I think the potential problems of getting this wrong are bigger than the\n> > issue we are trying to fix.\n> \n> I think the question is: how do we know what the user intended? If\n> the user wants the directory to be accessible only to the owner, then\n> we ought to set the permissions on the directory itself and of\n> everything inside it to 0700 (or 0600). If they want group access, we\n> should set everything to 0750 (or 0644). But how do we know what the\n> user wants?\n> \n> Right now, we take the position that the user wants the individual\n> files to have the same mode that they do on the master, but the\n> directory should retain its existing permissions. That appears to be\n> pretty silly, because that might end up creating a bunch of files\n> inside the directory that are marked as group-readable while the\n> directory itself isn't; surely nobody wants that. Adopting this patch\n> would fix that inconsistency.\n> \n> However, it might be better to go the other way. Maybe pg_basebackup\n> should decide whether group permission is appropriate for the\n> contained files and directories not by looking at the master, but by\n> looking at the directory into which it's writing. 
The basic objection\n> to this patch seems to be that we should not assume that the user got\n> the permissions on the existing directory wrong, and I think that\n> objection is fair, but if we accept it, then we should ask why we're\n> setting the permission of everything under that directory according to\n> some other methodology.\n\nGoing based on the current setting of the directory seems defensible to\nme, with the argument of \"we trust you created the directory the way you\nwant the rest of the system to be\".\n\n> Another option would be to provide a pg_basebackup option to allow the\n> user to specify what they intended i.e. --[no-]group-read. (Tying it\n> to -R doesn't sound like a good decision to me.)\n\nI definitely think that we should add an option to allow the user to\ntell us explicitly what they want here, even if we also go based on what\nthe created directory has (and in that case, we should make everything,\nincluding the base directory, follow what the user asked for).\n\nThanks!\n\nStephen",
"msg_date": "Mon, 18 Mar 2019 02:08:00 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 7:08 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> > On Thu, Mar 14, 2019 at 7:34 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> > > I think the potential problems of getting this wrong are bigger than\n> the\n> > > issue we are trying to fix.\n> >\n> > I think the question is: how do we know what the user intended? If\n> > the user wants the directory to be accessible only to the owner, then\n> > we ought to set the permissions on the directory itself and of\n> > everything inside it to 0700 (or 0600). If they want group access, we\n> > should set everything to 0750 (or 0644). But how do we know what the\n> > user wants?\n> >\n> > Right now, we take the position that the user wants the individual\n> > files to have the same mode that they do on the master, but the\n> > directory should retain its existing permissions. That appears to be\n> > pretty silly, because that might end up creating a bunch of files\n> > inside the directory that are marked as group-readable while the\n> > directory itself isn't; surely nobody wants that. Adopting this patch\n> > would fix that inconsistency.\n> >\n> > However, it might be better to go the other way. Maybe pg_basebackup\n> > should decide whether group permission is appropriate for the\n> > contained files and directories not by looking at the master, but by\n> > looking at the directory into which it's writing. 
The basic objection\n> > to this patch seems to be that we should not assume that the user got\n> > the permissions on the existing directory wrong, and I think that\n> > objection is fair, but if we accept it, then we should ask why we're\n> > setting the permission of everything under that directory according to\n> > some other methodology.\n>\n> Going based on the current setting of the directory seems defensible to\n> me, with the argument of \"we trust you created the directory the way you\n> want the rest of the system to be\".\n>\n\nWhich I believe is also how a plain unix cp (or tar or whatever) would\nwork, isn't it? I think that alone is a pretty strong reason to work the\nsame as those -- they're not entirely unsimilar.\n\n\n> Another option would be to provide a pg_basebackup option to allow the\n> user to specify what they intended i.e. --[no-]group-read. (Tying it\n> to -R doesn't sound like a good decision to me.)\n>\n> I definitely think that we should add an option to allow the user to\n> tell us explicitly what they want here, even if we also go based on what\n> the created directory has (and in that case, we should make everything,\n> including the base directory, follow what the user asked for).\n>\n\n+1 for having an option to override whatever the default is.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 18 Mar 2019 08:32:44 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 08:32:44AM +0100, Magnus Hagander wrote:\n> On Mon, Mar 18, 2019 at 7:08 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> I definitely think that we should add an option to allow the user to\n>> tell us explicitly what they want here, even if we also go based on what\n>> the created directory has (and in that case, we should make everything,\n>> including the base directory, follow what the user asked for).\n> \n> +1 for having an option to override whatever the default is.\n\nBased on the feedback gathered, having a separate option to enforce\nthe default and not touching the behavior implemented until now,\nsounds fine to me.\n--\nMichael",
"msg_date": "Mon, 18 Mar 2019 17:16:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "At Mon, 18 Mar 2019 17:16:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190318081601.GI1885@paquier.xyz>\n> On Mon, Mar 18, 2019 at 08:32:44AM +0100, Magnus Hagander wrote:\n> > On Mon, Mar 18, 2019 at 7:08 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >> I definitely think that we should add an option to allow the user to\n> >> tell us explicitly what they want here, even if we also go based on what\n> >> the created directory has (and in that case, we should make everything,\n> >> including the base directory, follow what the user asked for).\n> > \n> > +1 for having an option to override whatever the default is.\n> \n> Based on the feedback gathered, having a separate option to enforce\n> the default and not touching the behavior implemented until now,\n> sounds fine to me.\n\nFWIW +1 from me. That would be like cp -p.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 18 Mar 2019 19:15:12 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-16 15:29, Robert Haas wrote:\n> Another option would be to provide a pg_basebackup option to allow the\n> user to specify what they intended i.e. --[no-]group-read. (Tying it\n> to -R doesn't sound like a good decision to me.)\n\nI was actually surprised to learn how it works right now. I would have\npreferred that pg_basebackup require an explicit option to turn on group\naccess, similar to initdb.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 13:36:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-18 08:32, Magnus Hagander wrote:\n> Going based on the current setting of the directory seems defensible to\n> me, with the argument of \"we trust you created the directory the way you\n> want the rest of the system to be\".\n> \n> \n> Which I believe is also how a plain unix cp (or tar or whatever) would\n> work, isn't it? I think that alone is a pretty strong reason to work the\n> same as those -- they're not entirely unsimilar.\n\nThose don't copy over the network. In the case of pg_basebackup, there\nis nothing that ensures that the remote system has the same users or\ngroups or permission setup.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 13:38:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 4:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Mar 18, 2019 at 08:32:44AM +0100, Magnus Hagander wrote:\n> > On Mon, Mar 18, 2019 at 7:08 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >> I definitely think that we should add an option to allow the user to\n> >> tell us explicitly what they want here, even if we also go based on what\n> >> the created directory has (and in that case, we should make everything,\n> >> including the base directory, follow what the user asked for).\n> >\n> > +1 for having an option to override whatever the default is.\n>\n> Based on the feedback gathered, having a separate option to enforce\n> the default and not touching the behavior implemented until now,\n> sounds fine to me.\n\nThat's not what I'm proposing. I think the behavior implemented until\nnow is not best, because the files within the directory should inherit\nthe directory's permissions, not the remote side's permissions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 09:47:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-18 14:47, Robert Haas wrote:\n>> Based on the feedback gathered, having a separate option to enforce\n>> the default and not touching the behavior implemented until now,\n>> sounds fine to me.\n> That's not what I'm proposing. I think the behavior implemented until\n> now is not best, because the files within the directory should inherit\n> the directory's permissions, not the remote side's permissions.\n\nI'm strongly in favor of keeping initdb and pg_basebackup options\nsimilar and consistent. They are both ways to initialize data directories.\n\nYou'll note that initdb does not behave the way you describe. It's not\nunreasonable behavior, but it's not the way it currently works.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 16:35:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 11:36 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-03-18 14:47, Robert Haas wrote:\n> >> Based on the feedback gathered, having a separate option to enforce\n> >> the default and not touching the behavior implemented until now,\n> >> sounds fine to me.\n> > That's not what I'm proposing. I think the behavior implemented until\n> > now is not best, because the files within the directory should inherit\n> > the directory's permissions, not the remote side's permissions.\n>\n> I'm strongly in favor of keeping initdb and pg_basebackup options\n> similar and consistent. They are both ways to initialize data directories.\n>\n> You'll note that initdb does not behave the way you describe. It's not\n> unreasonable behavior, but it's not the way it currently works.\n\nSo you want to default to no group access regardless of the directory\npermissions, with an option to enable group access that must be\nexplicitly specified? That seems like a reasonable option to me; note\nthat initdb does seem to chdir() an existing directory.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 11:45:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 11:45:05AM -0400, Robert Haas wrote:\n> So you want to default to no group access regardless of the directory\n> permissions, with an option to enable group access that must be\n> explicitly specified? That seems like a reasonable option to me; note\n> that initdb does seem to chdir() an existing directory.\n\nHm. We have been assuming that the contents of a base backup inherit\nthe permission of the source when using pg_basebackup because this\nallows users to keep a nodes in a consistent state without deciding\nwhich option to use. Do you mean that you would like to enforce the\npermissions of only the root directory if it exists? Or the root\ndirectory with all its contents? The former may be fine. The latter\nis definitely not.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2019 15:29:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 5:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Mar 18, 2019 at 11:45:05AM -0400, Robert Haas wrote:\n> > So you want to default to no group access regardless of the directory\n> > permissions, with an option to enable group access that must be\n> > explicitly specified? That seems like a reasonable option to me; note\n> > that initdb does seem to chdir() an existing directory.\n>\n> Hm. We have been assuming that the contents of a base backup inherit\n> the permission of the source when using pg_basebackup because this\n> allows users to keep a nodes in a consistent state without deciding\n> which option to use. Do you mean that you would like to enforce the\n> permissions of only the root directory if it exists? Or the root\n> directory with all its contents? The former may be fine. The latter\n> is definitely not.\n>\n\nAs per my understanding going through the discussion, the option is for\nroot directory with all its contents also.\n\nHow about the following change?\n\npg_basebackup --> copies the contents of the src directory (with group\naccess)\nand even the root directory permissions.\n\npg_basebackup --no-group-access --> copies the contents of the src\ndirectory\n(with no group access) even for the root directory.\n\nSo the default behavior works for many people, others that needs restrict\nbehavior\ncan use the new option.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Tue, Mar 19, 2019 at 5:29 PM Michael Paquier <michael@paquier.xyz> wrote:On Mon, Mar 18, 2019 at 11:45:05AM -0400, Robert Haas wrote:\n> So you want to default to no group access regardless of the directory\n> permissions, with an option to enable group access that must be\n> explicitly specified? That seems like a reasonable option to me; note\n> that initdb does seem to chdir() an existing directory.\n\nHm. 
We have been assuming that the contents of a base backup inherit\nthe permission of the source when using pg_basebackup because this\nallows users to keep a nodes in a consistent state without deciding\nwhich option to use. Do you mean that you would like to enforce the\npermissions of only the root directory if it exists? Or the root\ndirectory with all its contents? The former may be fine. The latter\nis definitely not.As per my understanding going through the discussion, the option is forroot directory with all its contents also.How about the following change?pg_basebackup --> copies the contents of the src directory (with group access) and even the root directory permissions.pg_basebackup --no-group-access --> copies the contents of the src directory (with no group access) even for the root directory.So the default behavior works for many people, others that needs restrict behaviorcan use the new option.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Tue, 19 Mar 2019 18:34:12 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-18 16:45, Robert Haas wrote:\n>> I'm strongly in favor of keeping initdb and pg_basebackup options\n>> similar and consistent. They are both ways to initialize data directories.\n>>\n>> You'll note that initdb does not behave the way you describe. It's not\n>> unreasonable behavior, but it's not the way it currently works.\n> So you want to default to no group access regardless of the directory\n> permissions, with an option to enable group access that must be\n> explicitly specified? That seems like a reasonable option to me; note\n> that initdb does seem to chdir() an existing directory.\n\nI think that would have been my preference, but PG11 is already shipping\nwith the current behavior, so I'm not sure whether that should be changed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 16:54:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-19 08:34, Haribabu Kommi wrote:\n> How about the following change?\n> \n> pg_basebackup --> copies the contents of the src directory (with group\n> access) \n> and even the root directory permissions.\n> \n> pg_basebackup --no-group-access --> copies the contents of the src\n> directory \n> (with no group access) even for the root directory.\n\nI'm OK with that. Perhaps a positive option --allow-group-access would\nalso be useful.\n\nLet's make sure the behavior of these options is aligned with initdb.\nAnd write tests for each variant.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 17:02:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 2:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Hm. We have been assuming that the contents of a base backup inherit\n> the permission of the source when using pg_basebackup because this\n> allows users to keep a nodes in a consistent state without deciding\n> which option to use. Do you mean that you would like to enforce the\n> permissions of only the root directory if it exists? Or the root\n> directory with all its contents? The former may be fine. The latter\n> is definitely not.\n\nWhy not?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Mar 2019 14:56:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 3:02 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-19 08:34, Haribabu Kommi wrote:\n> > How about the following change?\n> >\n> > pg_basebackup --> copies the contents of the src directory (with group\n> > access)\n> > and even the root directory permissions.\n> >\n> > pg_basebackup --no-group-access --> copies the contents of the src\n> > directory\n> > (with no group access) even for the root directory.\n>\n> I'm OK with that. Perhaps a positive option --allow-group-access would\n> also be useful.\n>\n\nDo you want both --allow-group-access and --no-group-access options to\nbe added to pg_basebackup?\n\nWithout --allow-group-access is automatically --no-group-access?\n\nOr you want pg_basebackup independently decide the group access irrespective\nof the source directory?\n\nEven if the source directory is \"not group access\", pg_basebackup\n--allow-group-access\nmake it standby as \"group access\".\n\nsource directory is \"group access\", pg_basebackup --no-group-access make it\n\"no group access\" standby directory.\n\nDefault behavior of pg_basebackup is just to copy same as source directory?\n\n\n> Let's make sure the behavior of these options is aligned with initdb.\n> And write tests for each variant.\n>\n\nOK.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Thu, Mar 21, 2019 at 3:02 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-03-19 08:34, Haribabu Kommi wrote:\n> How about the following change?\n> \n> pg_basebackup --> copies the contents of the src directory (with group\n> access) \n> and even the root directory permissions.\n> \n> pg_basebackup --no-group-access --> copies the contents of the src\n> directory \n> (with no group access) even for the root directory.\n\nI'm OK with that. 
Perhaps a positive option --allow-group-access would\nalso be useful.Do you want both --allow-group-access and --no-group-access options tobe added to pg_basebackup?Without --allow-group-access is automatically --no-group-access?Or you want pg_basebackup independently decide the group access irrespectiveof the source directory?Even if the source directory is \"not group access\", pg_basebackup --allow-group-accessmake it standby as \"group access\".source directory is \"group access\", pg_basebackup --no-group-access make it\"no group access\" standby directory.Default behavior of pg_basebackup is just to copy same as source directory? \nLet's make sure the behavior of these options is aligned with initdb.\nAnd write tests for each variant.OK.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Fri, 22 Mar 2019 07:26:08 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 02:56:24PM -0400, Robert Haas wrote:\n> On Tue, Mar 19, 2019 at 2:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Hm. We have been assuming that the contents of a base backup inherit\n>> the permission of the source when using pg_basebackup because this\n>> allows users to keep a nodes in a consistent state without deciding\n>> which option to use. Do you mean that you would like to enforce the\n>> permissions of only the root directory if it exists? Or the root\n>> directory with all its contents? The former may be fine. The latter\n>> is definitely not.\n> \n> Why not?\n\nBecause we have released v11 so as we respect the permissions set on\nthe source instead from which the backup is taken for all the folder's\ncontent. If we begin to enforce it we may break some cases. If a new\noption is introduced, it seems to me that the default should remain\nwhat has been released with v11, but that it is additionally possible\nto enforce group permissions or non-group permissions at will on the\nbackup taken for all the contents in the data folder, including the\nroot folder, created manually or not before running the pg_basebackup\ncommand.\n--\nMichael",
"msg_date": "Fri, 22 Mar 2019 09:42:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 11:42 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Mar 21, 2019 at 02:56:24PM -0400, Robert Haas wrote:\n> > On Tue, Mar 19, 2019 at 2:29 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> Hm. We have been assuming that the contents of a base backup inherit\n> >> the permission of the source when using pg_basebackup because this\n> >> allows users to keep a nodes in a consistent state without deciding\n> >> which option to use. Do you mean that you would like to enforce the\n> >> permissions of only the root directory if it exists? Or the root\n> >> directory with all its contents? The former may be fine. The latter\n> >> is definitely not.\n> >\n> > Why not?\n>\n> Because we have released v11 so as we respect the permissions set on\n> the source instead from which the backup is taken for all the folder's\n> content. If we begin to enforce it we may break some cases. If a new\n> option is introduced, it seems to me that the default should remain\n> what has been released with v11, but that it is additionally possible\n> to enforce group permissions or non-group permissions at will on the\n> backup taken for all the contents in the data folder, including the\n> root folder, created manually or not before running the pg_basebackup\n> command.\n>\n\nHow about letting the pg_basebackup to decide group permissions of the\nstandby directory irrespective of the primary directory permissions.\n\nDefault - permissions are same as primary\n--allow-group-access - standby directory have group access permissions\n--no-group--access - standby directory doesn't have group permissions\n\nThe last two options behave irrespective of the primary directory\npermissions.\n\nopinions?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Fri, Mar 22, 2019 at 11:42 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Mar 21, 2019 at 02:56:24PM -0400, Robert Haas wrote:\n> On Tue, Mar 19, 2019 at 2:29 AM Michael Paquier 
<michael@paquier.xyz> wrote:\n>> Hm. We have been assuming that the contents of a base backup inherit\n>> the permission of the source when using pg_basebackup because this\n>> allows users to keep a nodes in a consistent state without deciding\n>> which option to use. Do you mean that you would like to enforce the\n>> permissions of only the root directory if it exists? Or the root\n>> directory with all its contents? The former may be fine. The latter\n>> is definitely not.\n> \n> Why not?\n\nBecause we have released v11 so as we respect the permissions set on\nthe source instead from which the backup is taken for all the folder's\ncontent. If we begin to enforce it we may break some cases. If a new\noption is introduced, it seems to me that the default should remain\nwhat has been released with v11, but that it is additionally possible\nto enforce group permissions or non-group permissions at will on the\nbackup taken for all the contents in the data folder, including the\nroot folder, created manually or not before running the pg_basebackup\ncommand.\nHow about letting the pg_basebackup to decide group permissions of thestandby directory irrespective of the primary directory permissions.Default - permissions are same as primary--allow-group-access - standby directory have group access permissions--no-group--access - standby directory doesn't have group permissionsThe last two options behave irrespective of the primary directory permissions.opinions?Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Fri, 22 Mar 2019 14:45:24 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 02:45:24PM +1100, Haribabu Kommi wrote:\n> How about letting the pg_basebackup to decide group permissions of the\n> standby directory irrespective of the primary directory permissions.\n> \n> Default - permissions are same as primary\n> --allow-group-access - standby directory have group access permissions\n> --no-group--access - standby directory doesn't have group permissions\n> \n> The last two options behave irrespective of the primary directory\n> permissions.\n\nYes, I'd imagine that we would want to be able to define three\ndifferent behaviors, by either having a set of options, or a sinple\noption with a switch, say --group-access:\n- \"inherit\" causes the permissions to be inherited from the source\nnode, and that's the default.\n- \"none\" enforces the default 0700/0600.\n- \"group\" enforces group read access.\n--\nMichael",
"msg_date": "Fri, 22 Mar 2019 13:00:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
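For concreteness, the three behaviors proposed above can be modeled as a mapping from the switch value to the (directory, file) creation modes of the backup tree: owner-only 0700/0600 by default, 0750/0640 when group read access is granted. This is an illustrative sketch only; `backup_modes` and its defaults are hypothetical, not pg_basebackup's actual interface.

```python
# Hypothetical mapping from the proposed --group-access values to the
# (dir_mode, file_mode) pair used when creating the backup directory tree.
def backup_modes(group_access, source_modes=(0o700, 0o600)):
    """Return (dir_mode, file_mode) for the backup directory tree."""
    if group_access == "inherit":
        # keep whatever the source cluster uses (the v11 default behavior)
        return source_modes
    if group_access == "none":
        # enforce the traditional owner-only permissions
        return 0o700, 0o600
    if group_access == "group":
        # enforce group read; directories also need group execute
        # so they remain traversable
        return 0o750, 0o640
    raise ValueError("unknown group access mode: %s" % group_access)
```

The "inherit" branch is what makes this a three-way choice rather than a boolean: the other two modes deliberately ignore the source instance's permissions.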
{
"msg_contents": "On 2019-03-22 05:00, Michael Paquier wrote:\n> On Fri, Mar 22, 2019 at 02:45:24PM +1100, Haribabu Kommi wrote:\n>> How about letting the pg_basebackup to decide group permissions of the\n>> standby directory irrespective of the primary directory permissions.\n>>\n>> Default - permissions are same as primary\n>> --allow-group-access - standby directory have group access permissions\n>> --no-group--access - standby directory doesn't have group permissions\n>>\n>> The last two options behave irrespective of the primary directory\n>> permissions.\n> \n> Yes, I'd imagine that we would want to be able to define three\n> different behaviors, by either having a set of options, or a simple\n> option with a switch, say --group-access:\n> - \"inherit\" causes the permissions to be inherited from the source\n> node, and that's the default.\n> - \"none\" enforces the default 0700/0600.\n> - \"group\" enforces group read access.\n\nYes, we could use those three behaviors.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 16:23:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 2:23 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-22 05:00, Michael Paquier wrote:\n> > On Fri, Mar 22, 2019 at 02:45:24PM +1100, Haribabu Kommi wrote:\n> >> How about letting the pg_basebackup to decide group permissions of the\n> >> standby directory irrespective of the primary directory permissions.\n> >>\n> >> Default - permissions are same as primary\n> >> --allow-group-access - standby directory have group access permissions\n> >> --no-group--access - standby directory doesn't have group permissions\n> >>\n> >> The last two options behave irrespective of the primary directory\n> >> permissions.\n> >\n> > Yes, I'd imagine that we would want to be able to define three\n> > different behaviors, by either having a set of options, or a sinple\n> > option with a switch, say --group-access:\n> > - \"inherit\" causes the permissions to be inherited from the source\n> > node, and that's the default.\n> > - \"none\" enforces the default 0700/0600.\n> > - \"group\" enforces group read access.\n>\n> Yes, we could use those three behaviors.\n>\n\nThanks for all your opinions, here I attached an updated patch as discussed.\n\nNew option -g --group-mode is added to pg_basebackup to specify the\ngroup access permissions.\n\ninherit - same permissions as source instance (default)\nnone - No group permissions irrespective of source instance\ngroup - group permissions irrespective of source instance\n\nWith the above additional options, the pg_basebackup is able to control\nthe access permissions of the backup files, but when it comes to tar mode\nall the files are sent from the server and stored as it is in backup, to\nsupport\ntar mode group access mode control, the BASE BACKUP protocol is\nenhanced with new option GROUP_MODE 'none' or GROUP_MODE 'group'\nto control the file permissions before they are sent to backup. 
Sending\nGROUP_MODE to the server depends on the -g option received to the\npg_basebackup utility.\n\ncomments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Sun, 24 Mar 2019 22:30:47 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
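The tar-mode tradeoff described above is easy to see in miniature: without a server-side GROUP_MODE option, the client would have to make a second pass over the received tar stream and rewrite every member's mode. A sketch of that extra pass using Python's tarfile module, purely to illustrate the work the proposed protocol option avoids; this is not pg_basebackup code.

```python
import io
import tarfile

def strip_group_mode(tar_bytes):
    """Return a copy of a tar stream with group/other permission bits
    cleared on every member -- the client-side equivalent of
    GROUP_MODE 'none' applied after the fact."""
    out = io.BytesIO()
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as src, \
         tarfile.open(fileobj=out, mode="w") as dst:
        for member in src.getmembers():
            member.mode &= 0o700  # keep owner bits only
            # regular files carry data; directories/links do not
            fileobj = src.extractfile(member) if member.isreg() else None
            dst.addfile(member, fileobj)
    return out.getvalue()
```

Rewriting headers means regenerating the whole archive, which is why adjusting the modes on the server before the tar is built is the cheaper place to do it.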
{
"msg_contents": "On Thu, Mar 21, 2019 at 8:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Why not?\n>\n> Because we have released v11 so as we respect the permissions set on\n> the source instead from which the backup is taken for all the folder's\n> content. If we begin to enforce it we may break some cases. If a new\n> option is introduced, it seems to me that the default should remain\n> what has been released with v11, but that it is additionally possible\n> to enforce group permissions or non-group permissions at will on the\n> backup taken for all the contents in the data folder, including the\n> root folder, created manually or not before running the pg_basebackup\n> command.\n\nI don't agree with that logic, because setting the permissions of the\ncontent one way and the directory another cannot really be what anyone\nwants.\n\nIf we're going to go with -g {inherit|none|group} then -g inherit\nought to do what was originally proposed on this thread -- i.e. set\nthe directory permissions to match the contents. I don't think that's\na change that can or should be back-patched, but we should make it\nconsistent as part of this cleanup.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 09:08:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 09:08:23AM -0400, Robert Haas wrote:\n> If we're going to go with -g {inherit|none|group} then -g inherit\n> ought to do what was originally proposed on this thread -- i.e. set\n> the directory permissions to match the contents. I don't think that's\n> a change that can or should be back-patched, but we should make it\n> consistent as part of this cleanup.\n\nNo objections from me to do that actually. That's a small\ncompatibility break, but with the new options it is possible to live\nwith. Perhaps we should add into initdb the same switch? Except that\n\"inherit\" makes no sense there.\n--\nMichael",
"msg_date": "Tue, 26 Mar 2019 07:44:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 10:30:47PM +1100, Haribabu Kommi wrote:\n> With the above additional options, the pg_basebackup is able to control\n> the access permissions of the backup files, but when it comes to tar mode\n> all the files are sent from the server and stored as it is in backup, to\n> support\n> tar mode group access mode control, the BASE BACKUP protocol is\n> enhanced with new option GROUP_MODE 'none' or GROUP_MODE 'group'\n> to control the file permissions before they are sent to backup. Sending\n> GROUP_MODE to the server depends on the -g option received to the\n> pg_basebackup utility.\n\nDo we really want to extend the replication protocol to control that?\nI am really questioning if we should keep this stuff isolated within\npg_basebackup or not. At the same time, it may be confusing to have\nBASE_BACKUP only use the permissions inherited from the data\ndirectory, so some input from folks maintaining an external backup\ntool is welcome.\n--\nMichael",
"msg_date": "Tue, 26 Mar 2019 11:26:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 1:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Mar 24, 2019 at 10:30:47PM +1100, Haribabu Kommi wrote:\n> > With the above additional options, the pg_basebackup is able to control\n> > the access permissions of the backup files, but when it comes to tar mode\n> > all the files are sent from the server and stored as it is in backup, to\n> > support\n> > tar mode group access mode control, the BASE BACKUP protocol is\n> > enhanced with new option GROUP_MODE 'none' or GROUP_MODE 'group'\n> > to control the file permissions before they are sent to backup. Sending\n> > GROUP_MODE to the server depends on the -g option received to the\n> > pg_basebackup utility.\n>\n>\nThanks for the review.\n\n\n> Do we really want to extend the replication protocol to control that?\n>\n\nAs the backup data is passed in tar format and if the pg_basebackup\nis also storing it in tar format, i feel changing the permissions on tar\ncreation is easier than regenerating the received tar with different\npermissions at pg_basebackup side.\n\nOther than tar format, changing only in pg_basebackup can support\nindependent group access permissions of the standby directory.\n\nI am really questioning if we should keep this stuff isolated within\n> pg_basebackup or not. 
At the same time, it may be confusing to have\n> BASE_BACKUP only use the permissions inherited from the data\n> directory, so some input from folks maintaining an external backup\n> tool is welcome.\n>\n\nThat would be good to hear what other external backup tool authors\nthink of this change.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 26 Mar 2019 14:59:01 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-03-26 03:26, Michael Paquier wrote:\n> Do we really want to extend the replication protocol to control that?\n\nPerhaps we are losing sight of the original problem, which is that if\nyou create the target directory with the wrong permissions then ... it\nhas the wrong permissions. And you are free to change the permissions\nat any time. Many of the proposed solutions sound excessively\ncomplicated relative to that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:04:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
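Peter's point above, made concrete: whatever mode the target directory ends up with, fixing it afterwards is a single chmod. A small illustrative helper (the function name and modes are just for the example, not part of any PostgreSQL tool):

```python
import os
import stat

def set_group_access(path, enabled):
    """Grant or revoke group read+traverse on a backup directory and
    return the resulting permission bits."""
    os.chmod(path, 0o750 if enabled else 0o700)
    return stat.S_IMODE(os.stat(path).st_mode)
```

Against that one-liner, extending the replication protocol does look heavyweight; the counterargument elsewhere in the thread is about defaults and tar contents, not about whether chmod works.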
{
"msg_contents": "On 3/26/19 3:59 AM, Haribabu Kommi wrote:\n> \n> I am really questioning if we should keep this stuff isolated within\n> pg_basebackup or not. At the same time, it may be confusing to have\n> BASE_BACKUP only use the permissions inherited from the data\n> directory, so some input from folks maintaining an external backup\n> tool is welcome.\n> \n> \n> That would be good to hear what other external backup tool authors\n> think of this change.\n\nI'm OK with the -g (inherit|none|group) option as implemented. I prefer \nthe default as it is (inherit), which makes sense since I wrote it that way.\n\nHaving BASE_BACKUP set the permissions inside the tar file seems OK as \nwell. I'm not aware of any external solutions that are using the \nreplication protocol directly - I believe they all use pg_basebackup, so \nI don't think they would need to change anything.\n\nHaving said that, I think setting permissions is a pretty trivial thing \nto do and there are plenty of possible scenarios that are still not \ncovered here. I have no objections to the patch but it seems like overkill.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:35:49 +0000",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 9:05 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-26 03:26, Michael Paquier wrote:\n> > Do we really want to extend the replication protocol to control that?\n>\n> Perhaps we are losing sight of the original problem, which is that if\n> you create the target directory with the wrong permissions then ... it\n> has the wrong permissions. And you are free to change the permissions\n> at any time. Many of the proposed solutions sound excessively\n> complicated relative to that.\n>\n\nYes, I agree that the proposed solution for fixing the original problem of\nexisting data directory permissions with new group options to pg_basebackup\nis a big change.\n\nWhy can't we just fix the permissions of the directory by default as per the\nsource instance? I feel this change may be more useful to many people than\nit is a problem.\n\nFrom my understanding of the thread discussion,\n\n+1 by:\n\nMichael Paquier\nRobert Haas\nHaribabu Kommi\n\n-1 by:\n\nMagnus Hagander\nPeter Eisentraut\n\nDoes anyone want to weigh in with their opinion on this patch, to decide the\nbest option for controlling the existing standby data directory permissions?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 3 Apr 2019 16:44:37 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 6:05 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2019-03-26 03:26, Michael Paquier wrote:\n> > Do we really want to extend the replication protocol to control that?\n>\n> Perhaps we are losing sight of the original problem, which is that if\n> you create the target directory with the wrong permissions then ... it\n> has the wrong permissions. And you are free to change the permissions\n> at any time. Many of the proposed solutions sound excessively\n> complicated relative to that.\n\nI don't think I agree with that characterization of the problem. I\nmean, what do you mean by \"wrong\"? Perhaps you created the directory\nwith the \"right\" permissions, i.e. those you actually wanted, and then\npg_basebackup rather rudely insisted on ignoring them when it decided\nhow to set the permissions for the files inside that directory. On the\nother hand, perhaps you wished to abdicate responsibility for security\ndecisions to whatever rule pg_basebackup uses, and it rather rudely\ndidn't bother to enforce that decision on the top level directory,\nforcing you to think about a question you had decided to ignore.\n\nI am not sure what solution is best here, but it is hard to imagine\nthat the status quo is the right thing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Apr 2019 12:01:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On 2019-Apr-03, Robert Haas wrote:\n\n> I am not sure what solution is best here, but it is hard to imagine\n> that the status quo is the right thing.\n\nThis patch has been dormant for months. There's been at lot of\ndiscussion but it doesn't seem conclusive; it doesn't look like we know\nwhat we actually want to do. Can I try to restart the discussion and\nsee if we can get to an agreement, so that somebody can implement it?\nFailing that, it seems this patch would be Returned with Little Useful Feedback.\n\nThere seem to be multiple fine points here:\n\n1. We want to have initdb and pg_basebackup behave consistently.\n\n Maybe if we don't like that changing pg_basebackup would make it\n behave differently to initdb, then we ought to change both tools'\n default behavior, and give equivalent new options to both to select\n the other(s?) behavior(s?). So I talk about \"the tool\" referring to\n both initdb and pg_basebackup in the following.\n\n2. Should the case of creating a new dir behave differently from using\n an existing directory?\n\n Probably for simplicity we want both cases to behave the same.\n I mean that if an existing dir has group privs and we choose that the\n default behavior is without group privs, then those would get removed\n unless a cmd line arg is given. Contrariwise if we choose that group\n perms are to be preserved if they exist, then we should create a new\n dir with group privs unless an option is given.\n\n3. 
Sometimes we want to have the tool keep the permissions of an\n existing directory, but for pg_basebackup the user might sometimes\n want to preserve the permissions of upstream instead.\n\n It seems to me that we could choose the default to be the most secure\n behavior (which AFAICT is not to have any group perms), and if the\n user wants to preserve group perms in an existing dir (or give group\n perms to a directory created by the tool) they can pass a bespoke\n command line argument.\n\n I think ultimately this means that upstream privs would go ignored by\n pg_basebackup. Maybe we can add another cmdline option to enable\n preserving such.\n\nI hope I didn't completely misunderstand the thread -- always a\npossibility.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 16:11:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 04:11:17PM -0400, Alvaro Herrera wrote:\n> This patch has been dormant for months. There's been at lot of\n> discussion but it doesn't seem conclusive; it doesn't look like we know\n> what we actually want to do. Can I try to restart the discussion and\n> see if we can get to an agreement, so that somebody can implement it?\n> Failing that, it seems this patch would be Returned with Little Useful\n> Feedback.\n\nThis thread has been idle for a couple of months, so I have marked the\nCF entry as returned with little useful feedback.\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 18:14:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup ignores the existing data directory permissions"
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> I'm concerned with how this would affect the future maintenance of this\n> code. You are introducing a whole separate code path for PMDK beside\n> the normal file path (and it doesn't seem very well separated either).\n> Now everyone who wants to do some surgery in the WAL code needs to take\n> that into account. And everyone who wants to do performance work in the\n> WAL code needs to check that the PMDK path doesn't regress. AFAICT,\n> this hardware isn't very popular at the moment, so it would be very hard\n> to peer review any work in this area.\n\nThank you for your comment. It is reasonable that you are concerned with\nmaintainability. Our patchset still lacks of it. I will consider about\nthat when I submit a next update. (It may take a long time, so please be\npatient...)\n\n\nRegards,\nTakashi\n\n-- \nTakashi Menjo - NTT Software Innovation Center\n<menjo.takashi@lab.ntt.co.jp>\n\n\n\n",
"msg_date": "Tue, 12 Feb 2019 16:06:53 +0900",
"msg_from": "\"Takashi Menjo\" <menjo.takashi@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS][PATCH] Applying PMDK to WAL operations for persistent\n memory"
},
{
"msg_contents": "Dear hackers,\n\nI rebased my old patchset. It would be good to compare this v4 patchset to\nnon-volatile WAL buffer's one [1].\n\n[1]\nhttps://www.postgresql.org/message-id/002101d649fb$1f5966e0$5e0c34a0$@hco.ntt.co.jp_1\n\nRegards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Tue, 4 Aug 2020 15:11:09 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS][PATCH] Applying PMDK to WAL operations for persistent\n memory"
}
] |
[
{
"msg_contents": "I stumbled upon this question:\n\n https://dba.stackexchange.com/questions/229413\n\nin a nutshell: the bloom index is not used with the example from the manual. \n\nThe bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my Windows laptop). \n\nIf parallel execution is disabled, then the bloom index is only used if the random_page_cost is lower than 4. \n\nThis does not use the index:\n\n set random_page_cost = 4; \n set max_parallel_workers_per_gather=0;\n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nThis uses the bloom index:\n\n set random_page_cost = 3.5; \n set max_parallel_workers_per_gather=0;\n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nAnd this uses the index also: \n\n set random_page_cost = 1; \n explain (analyze, buffers) \n select * \n from tbloom \n where i2 = 898732 \n and i5 = 123451;\n\nThis is the plan with when the index is used (either through \"enable_seqscan = off\" or \"random_page_cost = 1\")\n\nBitmap Heap Scan on tbloom (cost=138436.69..138440.70 rows=1 width=24) (actual time=42.444..42.444 rows=0 loops=1) \n Recheck Cond: ((i2 = 898732) AND (i5 = 123451)) \n Rows Removed by Index Recheck: 2400 \n Heap Blocks: exact=2365 \n Buffers: shared hit=21973 \n -> Bitmap Index Scan on bloomidx (cost=0.00..138436.69 rows=1 width=0) (actual time=40.756..40.756 rows=2400 loops=1)\n Index Cond: ((i2 = 898732) AND (i5 = 123451)) \n Buffers: shared hit=19608 \nPlanning Time: 0.075 ms \nExecution Time: 42.531 ms \n\nAnd this is the plan when everything left at default settings:\n\nSeq Scan on tbloom (cost=0.00..133695.80 rows=1 width=24) (actual time=1220.116..1220.116 rows=0 loops=1)\n Filter: ((i2 = 898732) AND (i5 = 123451)) \n Rows Removed by Filter: 10000000 \n Buffers: shared hit=4697 read=58998 \n I/O Timings: read=354.670 \nPlanning Time: 
0.075 ms \nExecution Time: 1220.144 ms \n\nCan this be considered a bug in the cost model of the bloom index implementation? \nOr is it expected that this is only used if random access is really cheap? \n\nThomas\n\n\n\n",
"msg_date": "Tue, 12 Feb 2019 16:08:25 +0100",
"msg_from": "Thomas Kellerer <spam_eater@gmx.net>",
"msg_from_op": true,
"msg_subject": "Bloom index cost model seems to be wrong"
},
{
"msg_contents": "Thomas Kellerer <spam_eater@gmx.net> writes:\n> The bloom index is only used if either Seq Scan is disabled or if the random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my Windows laptop). \n\nHm. blcostestimate is using the default cost calculation, except for\n\n\t/* We have to visit all index tuples anyway */\n\tcosts.numIndexTuples = index->tuples;\n\nwhich essentially tells genericcostestimate to assume that every index\ntuple will be visited. This obviously is going to increase the cost\nestimate; maybe there's something wrong with that?\n\nI notice that the bloom regression test script is only testing queries\nwhere it forces the choice of plan type, so it really doesn't prove\nanything about whether the cost estimates are sane.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2019 10:41:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
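Rough arithmetic shows why forcing numIndexTuples to the whole index balloons the estimate. The model below is a simplification of genericcostestimate's per-tuple CPU charge (stock GUC defaults, row and page counts taken from the EXPLAIN output earlier in the thread), not the real planner code.

```python
# Simplified model of genericcostestimate's CPU charge for an index scan,
# using the default cost GUC values; illustrative numbers only.
CPU_INDEX_TUPLE_COST = 0.005    # default cpu_index_tuple_cost
CPU_OPERATOR_COST = 0.0025      # default cpu_operator_cost

def generic_index_cpu_cost(num_index_tuples, num_quals):
    # every visited index tuple is charged for the visit itself plus
    # one operator evaluation per indexqual
    per_tuple = CPU_INDEX_TUPLE_COST + num_quals * CPU_OPERATOR_COST
    return num_index_tuples * per_tuple

# blcostestimate sets numIndexTuples to the whole index, so for the
# 10M-row example with two quals the CPU charge alone is about 100000 --
# already within reach of the seq scan's total estimate of ~133696
# in the plan shown upthread.
cpu_charge = generic_index_cpu_cost(10_000_000, 2)
```

Add the random-page IO charge for the ~19608 index pages on top of that, and the bloom path can only win when random_page_cost is dialed down, which matches the behavior reported at the start of the thread.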
{
"msg_contents": "On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Kellerer <spam_eater@gmx.net> writes:\n> > The bloom index is only used if either Seq Scan is disabled or if the\n> random_page_cost is set to 1 (anything about 1 triggers a Seq Scan on my\n> Windows laptop).\n>\n> Hm. blcostestimate is using the default cost calculation, except for\n>\n> /* We have to visit all index tuples anyway */\n> costs.numIndexTuples = index->tuples;\n>\n> which essentially tells genericcostestimate to assume that every index\n> tuple will be visited. This obviously is going to increase the cost\n> estimate; maybe there's something wrong with that?\n>\n\nI assumed (without investigating yet) that genericcostestimate is applying\na cpu_operator_cost (or a few of them) on each index tuple, while the\npremise of a bloom index is that you do very fast bit-fiddling, not more\nexpensive SQL operators, for each tuple and then do the recheck only on what\nsurvives to the table tuple part.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 12 Feb 2019 11:58:08 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 11:58 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n>\n> On Tue, Feb 12, 2019 at 10:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>>\n>> Hm. blcostestimate is using the default cost calculation, except for\n>>\n>> /* We have to visit all index tuples anyway */\n>> costs.numIndexTuples = index->tuples;\n>>\n>> which essentially tells genericcostestimate to assume that every index\n>> tuple will be visited. This obviously is going to increase the cost\n>> estimate; maybe there's something wrong with that?\n>>\n>\n> I assumed (without investigating yet) that genericcostestimate is applying\n> a cpu_operator_cost (or a few of them) on each index tuple, while the\n> premise of a bloom index is that you do very fast bit-fiddling, not more\n> expensive SQL operators, for each tuple and then do the recheck only on what\n> survives to the table tuple part.\n>\n\nIn order for bloom (or any other users of CREATE ACCESS METHOD, if there\nare any) to have a fighting chance to do better, I think many of selfuncs.c\ncurrently private functions would have to be declared in some header file,\nperhaps utils/selfuncs.h. But that then requires a cascade of other\ninclusions. Perhaps that is why it was not done.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 12 Feb 2019 14:56:40 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> In order for bloom (or any other users of CREATE ACCESS METHOD, if there\n> are any) to have a fighting chance to do better, I think many of selfuncs.c\n> currently private functions would have to be declared in some header file,\n> perhaps utils/selfuncs.h. But that then requires a cascade of other\n> inclusions. Perhaps that is why it was not done.\n\nI'm just in the midst of refactoring that stuff, so if you have\nsuggestions, let's hear 'em.\n\nIt's possible that a good cost model for bloom is so far outside\ngenericcostestimate's ideas that trying to use it is not a good\nidea anyway.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2019 16:17:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Tue, Feb 12, 2019 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeff Janes <jeff.janes@gmail.com> writes:\n> > In order for bloom (or any other users of CREATE ACCESS METHOD, if there\n> > are any) to have a fighting chance to do better, I think many of\n> selfuncs.c\n> > currently private functions would have to be declared in some header\n> file,\n> > perhaps utils/selfuncs.h. But that then requires a cascade of other\n> > inclusions. Perhaps that is why it was not done.\n>\n> I'm just in the midst of refactoring that stuff, so if you have\n> suggestions, let's hear 'em.\n>\n\nThe goal would be that I can copy the entire definition of\ngenericcostestimate into blcost.c, change the function's name, and get it\nto compile. I don't know the correct way to accomplish that. Maybe\nutils/selfuncs.h can be expanded to work, or if there should be a new\nheader file like \"utils/index_am_cost.h\"\n\nWhat I've done for now is:\n\n#include \"../../src/backend/utils/adt/selfuncs.c\"\n\nwhich I assume is not acceptable as a real solution.\n\n> It's possible that a good cost model for bloom is so far outside\n> genericcostestimate's ideas that trying to use it is not a good\n> idea anyway.\n>\n\nI think that might be the case. I don't know what the right answer would\nlook like, but I think it will likely end up needing to access everything\nthat genericcostestimate currently needs to access. Or if bloom doesn't,\nsome other extension implementing an ACCESS METHOD will.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 12 Feb 2019 17:38:28 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "I've moved this to the hackers list, and added Teodor and Alexander of the\nbloom extension, as I would like to hear their opinions on the costing.\n\nOn Tue, Feb 12, 2019 at 4:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> It's possible that a good cost model for bloom is so far outside\n> genericcostestimate's ideas that trying to use it is not a good\n> idea anyway.\n>\n>\nI'm attaching a crude patch based over your refactored header files.\n\nI just copied genericcostestimate into bloom, and made a few changes.\n\nI think one change should be conceptually uncontroversial, which is to\nchange the IO cost from random_page_cost to seq_page_cost. Bloom indexes\nare always scanned in their entirety.\n\nThe other change is not to charge any cpu_operator_cost per tuple. Bloom\ndoesn't go through the ADT machinery, it just does very fast\nbit-twiddling. I could assign a fraction of a cpu_operator_cost,\nmultiplied by bloomLength rather than list_length(indexQuals), to this\nbit-twiddling. But I think that that fraction would need to be quite close\nto zero, so I just removed it.\n\nWhen everything is in memory, Bloom still gets way overcharged for CPU\nusage even without the cpu_operator_cost. This patch doesn't do anything\nabout that. I don't know if the default value of cpu_index_tuple_cost is\nway too high, or if Bloom just shouldn't be charging the full value of it\nin the first place given the way it accesses index tuples. For comparison,\nwhen using a Btree as an equality filter on a non-leading column, most of\nthe time goes to index_getattr. Should the time spent there be loaded on\ncpu_operator_cost or onto cpu_index_tuple_cost? It is not strictly spent\nin the operator, but fetching the parameter to be used in an operator is\nmore closely a per-operator problem than a per-tuple problem.\n\nMost of genericcostestimate still applies. For example, ScalarArrayOpExpr\nhandling, and Mackert-Lohman formula. 
It is a shame that all of that has\nto be copied.\n\nThere are some other parts of genericcostestimate that probably don't apply\n(OrderBy, for example) but I left them untouched for now to make it easier\nto reconcile changes to the real genericcostestimate with the copy.\n\nFor ScalarArrayOpExpr, it would be nice to scan the index once and add to\nthe bitmap all branches of the =ANY in one index scan, but I don't see the\nmachinery to do that. It would be a matter for another patch anyway, other\nthan the way it would change the cost estimate.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 24 Feb 2019 11:09:50 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Sun, Feb 24, 2019 at 11:09 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n> I've moved this to the hackers list, and added Teodor and Alexander of the\n> bloom extension, as I would like to hear their opinions on the costing.\n>\n\nMy previous patch had accidentally included a couple lines of a different\nthing I was working on (memory leak, now-committed), so this patch removes\nthat diff.\n\nI'm adding it to the commitfest targeting v13. I'm more interested in\nfeedback on the conceptual issues rather than stylistic ones, as I would\nprobably merge the two functions together before proposing something to\nactually be committed.\n\nShould we be trying to estimate the false positive rate and charging\ncpu_tuple_cost and cpu_operator_cost the IO costs for visiting the table to\nrecheck and reject those? I don't think other index types do that, and I'm\ninclined to think the burden should be on the user not to create silly\nindexes that produce an overwhelming number of false positives.\n\nCheers,\n\nJeff",
"msg_date": "Thu, 28 Feb 2019 13:11:16 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> Should we be trying to estimate the false positive rate and charging\n> cpu_tuple_cost and cpu_operator_cost the IO costs for visiting the table to\n> recheck and reject those? I don't think other index types do that, and I'm\n> inclined to think the burden should be on the user not to create silly\n> indexes that produce an overwhelming number of false positives.\n\nHeap-access costs are added on in costsize.c, not in the index\ncost estimator. I don't remember at the moment whether there's\nany explicit accounting for lossy indexes (i.e. false positives).\nUp to now, there haven't been cases where we could estimate the\nfalse-positive rate with any accuracy, so we may just be ignoring\nthe effect. But if we decide to account for it, I'd rather have\ncostsize.c continue to add on the actual cost, perhaps based on\na false-positive-rate fraction returned by the index estimator.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 13:30:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Fri, Mar 1, 2019 at 7:11 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n> I'm adding it to the commitfest targeting v13. I'm more interested in feedback on the conceptual issues rather than stylistic ones, as I would probably merge the two functions together before proposing something to actually be committed.\n\n From the trivialities department, this patch shows up as a CI failure\nwith -Werror, because there is no declaration for\ngenericcostestimate2(). I realise that's just a temporary name in a\nWIP patch anyway so this isn't useful feedback, but for the benefit of\nanyone going through CI failures in bulk looking for things to\ncomplain about: this isn't a real one.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:57:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "It's not clear to me what the next action should be on this patch. I\nthink Jeff got some feedback from Tom, but was that enough to expect a\nnew version to be posted? That was in February; should we now (in late\nSeptember) close this as Returned with Feedback?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:12:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 05:12:26PM -0300, Alvaro Herrera wrote:\n> It's not clear to me what the next action should be on this patch. I\n> think Jeff got some feedback from Tom, but was that enough to expect a\n> new version to be posted? That was in February; should we now (in late\n> September) close this as Returned with Feedback?\n\nThat sounds rather right to me.\n--\nMichael",
"msg_date": "Thu, 26 Sep 2019 16:00:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bloom index cost model seems to be wrong"
}
] |
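The costing discussion in the thread above turns on two quantities: a bloom index is always scanned in its entirety (hence Jeff's argument for charging I/O at seq_page_cost instead of random_page_cost), and the open question of whether to estimate the false-positive rate. A minimal Python sketch of both, using the textbook Bloom-filter formula rather than anything taken from the PostgreSQL source (function names here are illustrative, not real APIs):

```python
import math

def bloom_false_positive_rate(n_items: int, m_bits: int, k_hashes: int) -> float:
    """Textbook Bloom-filter false-positive probability:
    p = (1 - e^(-k*n/m))^k.  This is the standard formula, not
    PostgreSQL's costing code."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

def full_scan_io_cost(index_pages: int, seq_page_cost: float = 1.0) -> float:
    """Per the thread: a bloom index is read front to back every time,
    so its I/O cost is pages * seq_page_cost, not random_page_cost."""
    return index_pages * seq_page_cost
```

For instance, doubling the signature size from 8,000 to 16,000 bits for the same 1,000 items strictly lowers the false-positive rate, which is exactly the trade-off a bloom-aware cost model (or the costsize.c hook Tom suggests, taking a false-positive fraction from the estimator) would have to price.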
[
{
"msg_contents": "I was just perusing our PDF docs for the first time in ages and\nrealized that of the 3,400+ pages of docs there's about 1,000 pages of\nrelease notes in it.... That seems like a bit overkill.\n\nI love having the old release notes online but perhaps they can be\nsomewhere other than the main docs? We could limit the current docs to\nincluding the release notes for just the supported versions -- after\nall you can always get the old release notes in the old docs\nthemselves.\n\n-- \ngreg\n\n",
"msg_date": "Tue, 12 Feb 2019 11:00:51 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Should we still have old release notes in docs?"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-12 11:00:51 -0500, Greg Stark wrote:\n> I was just perusing our PDF docs for the first time in ages and\n> realized that of the 3,400+ pages of docs there's about 1,000 pages of\n> release notes in it.... That seems like a bit overkill.\n> \n> I love having the old release notes online but perhaps they can be\n> somewhere other than the main docs? We could limit the current docs to\n> including the release notes for just the supported versions -- after\n> all you can always get the old release notes in the old docs\n> themselves.\n\nYou're behind the times, that just happened (in a pretty uncoordinated\nmanner).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 12 Feb 2019 08:04:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-12 11:00:51 -0500, Greg Stark wrote:\n>> I love having the old release notes online but perhaps they can be\n>> somewhere other than the main docs? We could limit the current docs to\n>> including the release notes for just the supported versions -- after\n>> all you can always get the old release notes in the old docs\n>> themselves.\n\n> You're behind the times, that just happened (in a pretty uncoordinated\n> manner).\n\nYeah, see 527b5ed1a et al.\n\nThe part about having a unified release-note archive somewhere else is\nstill WIP. The ball is in the web team's court on that, I think.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 12 Feb 2019 11:12:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
},
{
"msg_contents": "\n\n> On 12 Feb 2019, at 16:00, Greg Stark <stark@mit.edu> wrote:\n> \n> I was just perusing our PDF docs for the first time in ages and\n> realized that of the 3,400+ pages of docs there's about 1,000 pages of\n> release notes in it.... That seems like a bit overkill.\n> \n> I love having the old release notes online but perhaps they can be\n> somewhere other than the main docs? We could limit the current docs to\n> including the release notes for just the supported versions -- after\n> all you can always get the old release notes in the old docs\n> themselves\n\n+1. It does seem excessive.\n\n",
"msg_date": "Tue, 12 Feb 2019 16:28:07 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
},
{
"msg_contents": "\n> On Feb 12, 2019, at 5:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> The part about having a unified release-note archive somewhere else is\n> still WIP. The ball is in the web team's court on that, I think.\n\nYes - we are going to try one thing with the existing way we load docs to try to avoid\nadditional structural changes. If that doesn’t work, we will reconvene\nand see what we need to do.\n\nThanks,\n\nJonathan\n\n\n",
"msg_date": "Tue, 12 Feb 2019 17:45:49 +0100",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
},
{
"msg_contents": "Tom Lane schrieb am 12.02.2019 um 17:12:\n> Yeah, see 527b5ed1a et al.\n> \n> The part about having a unified release-note archive somewhere else is\n> still WIP. The ball is in the web team's court on that, I think.\n\nThe Bucardo team has already done that:\n\nhttps://bucardo.org/postgres_all_versions.html\n\n\n\n",
"msg_date": "Tue, 12 Feb 2019 17:54:04 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
},
{
"msg_contents": "On 2/12/19 5:45 PM, Jonathan S. Katz wrote:\n> \n>> On Feb 12, 2019, at 5:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> The part about having a unified release-note archive somewhere else is\n>> still WIP. The ball is in the web team's court on that, I think.\n> \n> Yes - we are going to try one thing with the existing way we load docs to try to avoid\n> additional structural changes. If that doesn’t work, we will reconvene\n> and see what we need to do.\n\nI've proposed a patch for this here:\n\nhttps://www.postgresql.org/message-id/e0f09c9a-bd2b-862a-d379-601dfabc8969%40postgresql.org\n\nThanks,\n\nJonathan",
"msg_date": "Tue, 12 Feb 2019 22:10:32 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Should we still have old release notes in docs?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI hit an issue yesterday where I was quickly nearing XID wraparound on a\ndatabase due to a temp table being created in a session which was then left\nIDLE out of transaction for 6 days.\n\nI solved the issue by tracing the owner of the temp table back to a session\nand terminating it - in my case I was just lucky that there was one session\nfor that user.\n\nI'm looking for a way to identify the PID of the backend which owns a temp\ntable identified as being an issue so I can terminate it.\n\n From the temp table namespace I can get the backend ID using a regex - but\nI have no idea how I can map that to a PID - any thoughts?\n\nCheers,\n\nJames Sewell,\n\n\n\nSuite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009\n*P *(+61) 2 8099 9000 <(+61)%202%208099%209000> *W* www.jirotech.com *F *\n(+61) 2 8099 9099 <(+61)%202%208099%209000>\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. Please advise the sender of your \nincorrect receipt of this correspondence.",
"msg_date": "Wed, 13 Feb 2019 11:04:51 +1100",
"msg_from": "James Sewell <james.sewell@jirotech.com>",
"msg_from_op": true,
"msg_subject": "Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "From: James Sewell [mailto:james.sewell@jirotech.com]\r\n> From the temp table namespace I can get the backend ID using a regex - but\r\n> I have no idea how I can map that to a PID - any thoughts?\r\n> \r\n\r\nSELECT pg_stat_get_backend_pid(backendid);\r\n\r\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html\r\n\r\nThis mailing list is for PostgreSQL development. You can post questions as a user to pgsql-general@lists.postgresql.org.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 13 Feb 2019 00:32:18 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": ">>>>> \"Tsunakawa\" == Tsunakawa, Takayuki <tsunakawa.takay@jp.fujitsu.com> writes:\n\n >> From the temp table namespace I can get the backend ID using a regex\n >> - but I have no idea how I can map that to a PID - any thoughts?\n\n Tsunakawa> SELECT pg_stat_get_backend_pid(backendid);\n\nDoesn't work - that function's idea of \"backend id\" doesn't match the\nreal one, since it's looking at a local copy of the stats from which\nunused slots have been removed.\n\npostgres=# select pg_my_temp_schema()::regnamespace;\n pg_my_temp_schema \n-------------------\n pg_temp_5\n(1 row)\n\npostgres=# select pg_stat_get_backend_pid(5);\n pg_stat_get_backend_pid \n-------------------------\n 4730\n(1 row)\n\npostgres=# select pg_backend_pid();\n pg_backend_pid \n----------------\n 21086\n(1 row)\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 13 Feb 2019 00:38:51 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 12:38:51AM +0000, Andrew Gierth wrote:\n> Doesn't work - that function's idea of \"backend id\" doesn't match the\n> real one, since it's looking at a local copy of the stats from which\n> unused slots have been removed.\n\nThe temporary namespace OID is added to PGPROC since v11, so it could\nbe easy enough to add a system function which maps a temp schema to a\nPID. Now, it could actually make sense to add this information into\npg_stat_get_activity() and that would be cheaper.\n--\nMichael",
"msg_date": "Wed, 13 Feb 2019 10:25:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "From: Andrew Gierth [mailto:andrew@tao11.riddles.org.uk]\n> Tsunakawa> SELECT pg_stat_get_backend_pid(backendid);\n> \n> Doesn't work - that function's idea of \"backend id\" doesn't match the\n> real one, since it's looking at a local copy of the stats from which\n> unused slots have been removed.\n\nOuch, the argument of pg_stat_get_backend_pid() and the number in pg_temp_N are both backend IDs, but they are allocated from two different data structures. Confusing.\n\n\nFrom: Michael Paquier [mailto:michael@paquier.xyz]\n> The temporary namespace OID is added to PGPROC since v11, so it could be\n> easy enough to add a system function which maps a temp schema to a PID.\n> Now, it could actually make sense to add this information into\n> pg_stat_get_activity() and that would be cheaper.\n\nThat sounds good.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 13 Feb 2019 01:48:40 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 2:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Feb 13, 2019 at 12:38:51AM +0000, Andrew Gierth wrote:\n> > Doesn't work - that function's idea of \"backend id\" doesn't match the\n> > real one, since it's looking at a local copy of the stats from which\n> > unused slots have been removed.\n>\n> The temporary namespace OID is added to PGPROC since v11, so it could\n> be easy enough to add a system function which maps a temp schema to a\n> PID. Now, it could actually make sense to add this information into\n> pg_stat_get_activity() and that would be cheaper.\n>\n>\nI think that would be useful and make sense.\n\nAnd while at it, what would in this particular case have been even more\nuseful to the OP would be to actually identify that there is a temp table\n*and which xid it's blocking at*. For regular transactions we can look at\nbackend_xid, but IIRC that doesn't work for temp tables (unless they are\ninside a transaction). Maybe we can find a way to expose that type of\nrelevant information at a similar level while poking around that code?\n\n\n//Magnus",
"msg_date": "Wed, 13 Feb 2019 17:48:39 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> And while at it, what would in this particular case have been even more\n> useful to the OP would be to actually identify that there is a temp table\n> *and which xid it's blocking at*. For regular transactions we can look at\n> backend_xid, but IIRC that doesn't work for temp tables (unless they are\n> inside a transaction). Maybe we can find a way to expose that type of\n> relevant information at a similar level while poking around that code?\n\nMaybe I'm confused, but doesn't the table's pg_class row tell you what\nyou need to know? You can't look inside another session's temp table,\nbut you don't need to.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 12:05:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 6:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > And while at it, what would in this particular case have been even more\n> > useful to the OP would be to actually identify that there is a temp table\n> > *and which xid it's blocking at*. For regular transactions we can look at\n> > backend_xid, but IIRC that doesn't work for temp tables (unless they are\n> > inside a transaction). Maybe we can find a way to expose that type of\n> > relevant information at a similar level while poking around that code?\n>\n> Maybe I'm confused, but doesn't the table's pg_class row tell you what\n> you need to know? You can't look inside another session's temp table,\n> but you don't need to.\n>\n\nI believe it does, yes.\n\nBut that doesn't make for a way to conveniently go \"what is it that's\ncausing waparound problems\", since due to pg_class being per database, you\nhave to loop over all your databases to find that query. Having that\ninformation available in a way that's easy for monitoring to get at (much\nas the backend_xid field in pg_stat_activity can help you wrt general\nsnapshots) would be useful.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 13 Feb 2019 18:09:40 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "It's easy to identify the temp tables which are causing the problem, yes.\nThe issue here is just getting rid of them.\n\nIn an ideal world I wouldn't actually have to care about the session and I\ncould just drop the table (or vacuum the table?).\n\nDropping the session was just the best way I could find to currently solve\nthe problem.\n\nCheers,\n\nJames Sewell,\n\n\n\nSuite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009\n*P *(+61) 2 8099 9000 <(+61)%202%208099%209000> *W* www.jirotech.com *F *\n(+61) 2 8099 9099 <(+61)%202%208099%209000>\n\n\nOn Thu, 14 Feb 2019 at 04:09, Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Wed, Feb 13, 2019 at 6:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Magnus Hagander <magnus@hagander.net> writes:\n>> > And while at it, what would in this particular case have been even more\n>> > useful to the OP would be to actually identify that there is a temp\n>> table\n>> > *and which xid it's blocking at*. For regular transactions we can look\n>> at\n>> > backend_xid, but IIRC that doesn't work for temp tables (unless they are\n>> > inside a transaction). Maybe we can find a way to expose that type of\n>> > relevant information at a similar level while poking around that code?\n>>\n>> Maybe I'm confused, but doesn't the table's pg_class row tell you what\n>> you need to know? You can't look inside another session's temp table,\n>> but you don't need to.\n>>\n>\n> I believe it does, yes.\n>\n> But that doesn't make for a way to conveniently go \"what is it that's\n> causing waparound problems\", since due to pg_class being per database, you\n> have to loop over all your databases to find that query. 
Having that\n> information available in a way that's easy for monitoring to get at (much\n> as the backend_xid field in pg_stat_activity can help you wrt general\n> snapshots) would be useful.\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/ <http://www.hagander.net/>\n> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. Please advise the sender of your \nincorrect receipt of this correspondence.\n\nIt's easy to identify the temp tables which are causing the problem, yes. The issue here is just getting rid of them.In an ideal world I wouldn't actually have to care about the session and I could just drop the table (or vacuum the table?).Dropping the session was just the best way I could find to currently solve the problem.Cheers,James Sewell,Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009P (+61) 2 8099 9000 W www.jirotech.com F (+61) 2 8099 9099On Thu, 14 Feb 2019 at 04:09, Magnus Hagander <magnus@hagander.net> wrote:On Wed, Feb 13, 2019 at 6:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Magnus Hagander <magnus@hagander.net> writes:\n> And while at it, what would in this particular case have been even more\n> useful to the OP would be to actually identify that there is a temp table\n> *and which xid it's blocking at*. For regular transactions we can look at\n> backend_xid, but IIRC that doesn't work for temp tables (unless they are\n> inside a transaction). Maybe we can find a way to expose that type of\n> relevant information at a similar level while poking around that code?\n\nMaybe I'm confused, but doesn't the table's pg_class row tell you what\nyou need to know? 
You can't look inside another session's temp table,\nbut you don't need to.I believe it does, yes.But that doesn't make for a way to conveniently go \"what is it that's causing waparound problems\", since due to pg_class being per database, you have to loop over all your databases to find that query. Having that information available in a way that's easy for monitoring to get at (much as the backend_xid field in pg_stat_activity can help you wrt general snapshots) would be useful.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/\n\n\nThe contents of this email are confidential and may be subject to legal or professional privilege and copyright. No representation is made that this email is free of viruses or other defects. If you have received this communication in error, you may not copy or distribute any part of it or otherwise disclose its contents to anyone. Please advise the sender of your incorrect receipt of this correspondence.",
"msg_date": "Thu, 14 Feb 2019 10:00:01 +1100",
"msg_from": "James Sewell <james.sewell@jirotech.com>",
"msg_from_op": true,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 05:48:39PM +0100, Magnus Hagander wrote:\n> On Wed, Feb 13, 2019 at 2:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> The temporary namespace OID is added to PGPROC since v11, so it could\n>> be easy enough to add a system function which maps a temp schema to a\n>> PID. Now, it could actually make sense to add this information into\n>> pg_stat_get_activity() and that would be cheaper.\n>\n> I think that would be useful and make sense.\n\nOne thing to keep in mind here is that tempNamespaceId in PGPROC gets\nset before the transaction creating it has committed, hence it is\nnecessary to also check that the namespace actually exists from the\npoint of view of the session running pg_stat_get_activity() before\nshowing it, which can be done with a simple\nSearchSysCacheExists1(NAMESPACEOID) normally.\n\n> And while at it, what would in this particular case have been even more\n> useful to the OP would be to actually identify that there is a temp table\n> *and which xid it's blocking at*. For regular transactions we can look at\n> backend_xid, but IIRC that doesn't work for temp tables (unless they are\n> inside a transaction). Maybe we can find a way to expose that type of\n> relevant information at a similar level while poking around that code?\n\nYeah, possibly. I think that it could be tricky though to get that at\na global level in a cheap way. It makes also little sense to only\nshow the temp namespace OID if that information is not enough.\n--\nMichael",
"msg_date": "Thu, 14 Feb 2019 09:43:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 1:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Feb 13, 2019 at 05:48:39PM +0100, Magnus Hagander wrote:\n> > On Wed, Feb 13, 2019 at 2:26 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> The temporary namespace OID is added to PGPROC since v11, so it could\n> >> be easy enough to add a system function which maps a temp schema to a\n> >> PID. Now, it could actually make sense to add this information into\n> >> pg_stat_get_activity() and that would be cheaper.\n> >\n> > I think that would be useful and make sense.\n>\n> One thing to keep in mind here is that tempNamespaceId in PGPROC gets\n> set before the transaction creating it has committed, hence it is\n> necessary to also check that the namespace actually exists from the\n> point of view of the session running pg_stat_get_activity() before\n> showing it, which can be done with a simple\n> SearchSysCacheExists1(NAMESPACEOID) normally.\n>\n\nOh, that's a good point.\n\n\n> And while at it, what would in this particular case have been even more\n> > useful to the OP would be to actually identify that there is a temp table\n> > *and which xid it's blocking at*. For regular transactions we can look at\n> > backend_xid, but IIRC that doesn't work for temp tables (unless they are\n> > inside a transaction). Maybe we can find a way to expose that type of\n> > relevant information at a similar level while poking around that code?\n>\n> Yeah, possibly. I think that it could be tricky though to get that at\n> a global level in a cheap way. It makes also little sense to only\n> show the temp namespace OID if that information is not enough.\n>\n\nWe could I guess add a field specifically for temp_namespace_xid or such.\nThe question is if it's worth the overhead to do that.\n\nJust having the namespace oid is at least enough to know that there is\npotentially something to go look at it. 
But it doesn't make for automated\nmonitoring very well, at least not in systems that have a larger number of\ndatabases.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
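(For illustration - an untested sketch of the per-database check being discussed; it relies only on the standard pg_class/pg_namespace catalogs, and has to be repeated in every database of the cluster, which is exactly the monitoring pain point raised here.)

```sql
-- Untested sketch: list temp tables by transaction-ID age in the
-- current database; must be re-run in each database separately.
SELECT c.oid::regclass     AS temp_table,
       n.nspname           AS temp_schema,
       age(c.relfrozenxid) AS xid_age
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE n.nspname LIKE 'pg_temp%'
   AND c.relkind = 'r'
 ORDER BY age(c.relfrozenxid) DESC;
```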
",
"msg_date": "Sun, 17 Feb 2019 17:47:09 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "> Yeah, possibly. I think that it could be tricky though to get that at\n>> a global level in a cheap way. It makes also little sense to only\n>> show the temp namespace OID if that information is not enough.\n>>\n>\n> We could I guess add a field specifically for temp_namespace_xid or such.\n> The question is if it's worth the overhead to do that.\n>\n> Just having the namespace oid is at least enough to know that there is\n> potentially something to go look at it. But it doesn't make for automated\n> monitoring very well, at least not in systems that have a larger number of\n> databases.\n>\n\nYou can get the namespace oid today with a JOIN, the issue is that this\nisn't enough information to go and look at - at the end of the day it's\nuseless unless you can remove the temp table or terminate the session which\nowns it.\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. Please advise the sender of your \nincorrect receipt of this correspondence.
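(For illustration - an untested sketch of the JOIN mentioned here, extended to map each temp schema back to the PID of the backend that owns it; it assumes the usual pg_temp_<backendID> schema naming and the pg_stat_get_backend_pid() function.)

```sql
-- Untested sketch: resolve pg_temp_<backendID> schemas to backend PIDs
-- so the owning session can be identified (and, if need be, terminated).
SELECT n.nspname AS temp_schema,
       pg_stat_get_backend_pid(
           substring(n.nspname FROM 'pg_temp_([0-9]+)')::int) AS pid,
       max(age(c.relfrozenxid)) AS max_xid_age
  FROM pg_namespace n
  JOIN pg_class c ON c.relnamespace = n.oid AND c.relkind = 'r'
 WHERE n.nspname ~ '^pg_temp_[0-9]+$'
 GROUP BY n.nspname;
```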
",
"msg_date": "Mon, 18 Feb 2019 11:19:01 +1100",
"msg_from": "James Sewell <james.sewell@jirotech.com>",
"msg_from_op": true,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 05:47:09PM +0100, Magnus Hagander wrote:\n> We could I guess add a field specifically for temp_namespace_xid or such.\n> The question is if it's worth the overhead to do that.\n\nThat would mean an extra 4 bytes in PGPROC, which is something we\ncould live with, still the use-case looks rather narrow to me to\njustify that.\n\n> Just having the namespace oid is at least enough to know that there is\n> potentially something to go look at it. But it doesn't make for automated\n> monitoring very well, at least not in systems that have a larger number of\n> databases.\n\nYep. It would be good to make sure about the larger picture before\ndoing something.\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 10:31:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Mon, 18 Feb 2019 at 12:31, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Feb 17, 2019 at 05:47:09PM +0100, Magnus Hagander wrote:\n> > We could I guess add a field specifically for temp_namespace_xid or such.\n> > The question is if it's worth the overhead to do that.\n>\n> That would mean an extra 4 bytes in PGPROC, which is something we\n> could live with, still the use-case looks rather narrow to me to\n> justify that.\n>\n\nI agree the use case is narrow - but it's also pretty critical.\n\nThis is a very real way that transaction wraparound can be hit, with no\nautomated or manual way of solving it (apart from randomly terminating\nbackends (you have to search via user and hope there is only one, and that\nit matches the temp table owner) or restarting Postgres).\n\nI suppose an in-core way of disconnecting idle sessions after x time would\nwork too - but that seems like a sledgehammer approach.\n\n-- \nJames\n\n-- \nThe contents of this email are confidential and may be subject to legal or \nprofessional privilege and copyright. No representation is made that this \nemail is free of viruses or other defects. If you have received this \ncommunication in error, you may not copy or distribute any part of it or \notherwise disclose its contents to anyone. Please advise the sender of your \nincorrect receipt of this correspondence.
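(For illustration - an untested sketch of the 'sledgehammer' mentioned here, done from SQL rather than in core; it assumes superuser or pg_signal_backend rights, and the cutoff interval is arbitrary.)

```sql
-- Untested sketch: terminate sessions that have sat idle longer than
-- an arbitrary cutoff (1 hour here); needs superuser-level rights.
SELECT pid, pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE state = 'idle'
   AND state_change < now() - interval '1 hour';
```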
",
"msg_date": "Tue, 19 Feb 2019 10:52:54 +1100",
"msg_from": "James Sewell <james.sewell@jirotech.com>",
"msg_from_op": true,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 10:52:54AM +1100, James Sewell wrote:\n> I agree the use case is narrow - but it's also pretty critical.\n\nYeah..\n\n> I suppose an in-core way of disconnecting idle sessions after x time would\n> work too - but that seems like a sledgehammer approach.\n\nSuch solutions at SQL level need to connect to a specific database and\nI implemented one for fun, please see the call to\nBackgroundWorkerInitializeConnection() here:\nhttps://github.com/michaelpq/pg_plugins/tree/master/kill_idle\n\nSo that's not the end of it as long as we don't have a cross-database\nsolution. If we can get something in PGPROC then just connecting to\nshared memory would be enough.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 12:36:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 2:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Feb 17, 2019 at 05:47:09PM +0100, Magnus Hagander wrote:\n> > We could I guess add a field specifically for temp_namespace_xid or such.\n> > The question is if it's worth the overhead to do that.\n>\n> That would mean an extra 4 bytes in PGPROC, which is something we\n> could live with, still the use-case looks rather narrow to me to\n> justify that.\n>\n\nIt does, that's why I questioned if it's worth it. But, thinking some more\nabout it, some other options would be:\n\n1. This is only set once per backend in normal operations, right? (Unless I\ngo drop the schema manually, but that's not exactly normal). So maybe we\ncould invent a pg stat message and send the information through the\ncollector? Since it doesn't have to be frequently updated, like your\ntypical backend_xmin.\n\n2. Or probably even better, just put it in PgBackendStatus? Overhead here\nis a lot cheaper than PGPROC.\n\nISTM 2 is probably the most reasonable option here?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
",
"msg_date": "Tue, 19 Feb 2019 09:56:28 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 09:56:28AM +0100, Magnus Hagander wrote:\n> 2. Or probably even better, just put it in PgBackendStatus? Overhead here\n> is a lot cheaper than PGPROC.\n> \n> ISTM 2 is probably the most reasonable option here?\n\nYes, I forgot this one. That would be more consistent, even if the\ninformation can be out of date quickly we don't care here.\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 11:41:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 3:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Feb 19, 2019 at 09:56:28AM +0100, Magnus Hagander wrote:\n> > 2. Or probably even better, just put it in PgBackendStatus? Overhead here\n> > is a lot cheaper than PGPROC.\n> >\n> > ISTM 2 is probably the most reasonable option here?\n>\n> Yes, I forgot this one. That would be more consistent, even if the\n> information can be out of date quickly we don't care here.\n>\n\nI think it would be something like the attached. Thoughts?\n\nI did the \"insert column in the middle of pg_stat_get_activity\", I'm not\nsure that is right -- how do we treat that one? Do we just append at the\nend because people are expected to use the pg_stat_activity view? It's a\nnontrivial part of the patch.\n\nThat one aside, does the general way to track it appear reasonable? (docs\nexcluded until we have agreement on that)\n\nAnd should we also expose the oid in pg_stat_activity in this case, since\nwe have it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 16:01:02 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 04:01:02PM +0100, Magnus Hagander wrote:\n> I did the \"insert column in the middle of pg_stat_get_activity\", I'm not\n> sure that is right -- how do we treat that one? Do we just append at the\n> end because people are expected to use the pg_stat_activity view? It's a\n> nontrivial part of the patch.\n\nI think that it would be more confusing to add the new column at the\ntail, after all the SSL fields.\n\n> That one aside, does the general way to track it appear reasonable? (docs\n> excluded until we have agreement on that)\n\nIt does. A temp table is associated to a session so it's not like\nautovacuum can work on it. With this information it is at least\npossible to take actions. We could even get autovacuum to kill such\nsessions. /me hides\n\n> And should we also expose the oid in pg_stat_activity in this case, since\n> we have it?\n\nFor the case reported here, just knowing the XID where the temporary\nnamespace has been created is enough, as the goal is to kill the\nsession associated with it. Still, it seems to me that knowing the\ntemporary schema name used by a given session is useful, and that's\ncheap to get as the information is already there.\n\nOne problem that I can see with your patch is that you would set the\nXID once any temporary object is created, including when objects other\nthan tables are created in pg_temp, including functions, etc. And it\ndoes not really matter for wraparound monitoring. Still, the patch is\nsimple..\n--\nMichael",
"msg_date": "Tue, 26 Feb 2019 15:45:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 10:45 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Fri, Feb 22, 2019 at 04:01:02PM +0100, Magnus Hagander wrote:\n> > I did the \"insert column in the middle of pg_stat_get_activity\", I'm not\n> > sure that is right -- how do we treate that one? Do we just append at the\n> > end because people are expected to use the pg_stat_activity view? It's a\n> > nontrivial part of the patch.\n>\n> I think that it would be more confusing to add the new column at the\n> tail, after all the SSL fields.\n>\n> > That one aside, does the general way to track it appear reasonable? (docs\n> > excluded until we have agreement on that)\n>\n> It does. A temp table is associated to a session so it's not like\n> autovacuum can work on it. With this information it is at least\n> possible to take actions. We could even get autovacuum to kill such\n> sessions. /me hides\n>\n> > And should we also expose the oid in pg_stat_activity in this case, since\n> > we have it?\n>\n> For the case reported here, just knowing the XID where the temporary\n> namespace has been created is enough so as the goal is to kill the\n> session associated with it. Still, it seems to me that knowing the\n> temporary schema name used by a given session is useful, and that's\n> cheap to get as the information is already there.\n>\n\nit should be since it's in pgproc.\n\nOne problem that I can see with your patch is that you would set the\n> XID once any temporary object created, including when objects other\n> than tables are created in pg_temp, including functions, etc. And it\n> does not really matter for wraparound monitoring. Still, the patch is\n> simple..\n>\n\nI'm not entirely sure what you mean here. 
Sure, it will log it even when a\ntemp function is created, but the namespace is still created then, is it\nnot?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
",
"msg_date": "Fri, 8 Mar 2019 11:14:46 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
},
{
"msg_contents": "On Fri, Mar 08, 2019 at 11:14:46AM -0800, Magnus Hagander wrote:\n> On Mon, Feb 25, 2019 at 10:45 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>> One problem that I can see with your patch is that you would set the\n>> XID once any temporary object created, including when objects other\n>> than tables are created in pg_temp, including functions, etc. And it\n>> does not really matter for wraparound monitoring. Still, the patch is\n>> simple..\n> \n> I'm not entirely sure what you mean here. Sure, it will log it even when a\n> temp function is created, but the namespace is still created then is it\n> not?\n\nWhat I mean here is: imagine the case of a session which creates a\ntemporary function, creating as well the temporary schema, but creates\nno other temporary objects. In this case we don't really care about\nthe wraparound issue because, even if we have a temporary schema, we\ndo not have temporary relations. And this could confuse the user?\nPerhaps that's not worth bothering, still not all temporary objects\nare tables.\n--\nMichael",
"msg_date": "Sat, 9 Mar 2019 10:28:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reaping Temp tables to avoid XID wraparound"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a draft of the press release for the upcoming 2019-02-14\nrelease. Feedback & suggestions are welcome.\n\nThanks!\n\nJonathan",
"msg_date": "Tue, 12 Feb 2019 22:00:15 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2019-02-14 Press Release Draft"
},
{
"msg_contents": "On 02/12/19 22:00, Jonathan S. Katz wrote:\n> Attached is a draft of the press release for the upcoming 2019-02-14\n> release. Feedback & suggestions are welcome.\n\n----\nUsers on PostgreSQL 9.4 should plan to upgrade to a supported version of\nPostgreSQL as the community will stop releasing fixes for it on February 12,\n2019. Please see our [versioning\npolicy](https://www.postgresql.org/support/versioning/)\n----\n\nShould that be February 13, 2020? That's what the linked page says for 9.4.\n\nFebruary 12, 2019 would be (a) today, and (b) in the past for this press\nrelease.\n\nCheers,\n-Chap\n\n",
"msg_date": "Tue, 12 Feb 2019 22:13:00 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
},
{
"msg_contents": "On 2/13/19 4:13 AM, Chapman Flack wrote:\n> On 02/12/19 22:00, Jonathan S. Katz wrote:\n>> Attached is a draft of the press release for the upcoming 2019-02-14\n>> release. Feedback & suggestions are welcome.\n> \n> ----\n> Users on PostgreSQL 9.4 should plan to upgrade to a supported version of\n> PostgreSQL as the community will stop releasing fixes for it on February 12,\n> 2019. Please see our [versioning\n> policy](https://www.postgresql.org/support/versioning/)\n> ----\n> \n> Should that be February 13, 2020? That's what the linked page says for 9.4.\n> \n> February 12, 2019 would be (a) today, and (b) in the past for this press\n> release.\n\nYes, good catch. Fixed.\n\nThanks,\n\nJonathan",
"msg_date": "Tue, 12 Feb 2019 22:14:55 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
},
{
"msg_contents": "> On 13 Feb 2019, at 04:00, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\n> Attached is a draft of the press release for the upcoming 2019-02-14\n> release. Feedback & suggestions are welcome.\n\nI think it should be “versions” in the below sentence?\n\n \"all currently supported version of PostgreSQL will only contain\"\n\ncheers ./daniel\n",
"msg_date": "Wed, 13 Feb 2019 10:52:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
},
{
"msg_contents": "> 2019-02-14 Cumulative Update Release\n> ====================================\n> \n> The PostgreSQL Global Development Group has released an update to all supported versions of our database system, including 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21. This release changes the behavior in how PostgreSQL interfaces with `fsync()` and includes fixes for partitioning and over 70 other bugs that were reported over the past three months.\n> \n> Users should plan to apply this update at the next scheduled downtime.\n> \n> Highlight: Change in behavior with `fsync()`\n> ------------------------------------------\n> \n> When available in an operating system and enabled in the configuration file (which it is by default), PostgreSQL uses the kernel function `fsync()` to help ensure that data is written to a disk. In some operating systems that provide `fsync()`, when the kernel is unable to write out the data, it returns a failure and flushes the data that was supposed to be written from its data buffers.\n> \n> This flushing operation has an unfortunate side-effect for PostgreSQL: if PostgreSQL tries again to write the data to disk by again calling `fsync()`, `fsync()` will report back that it succeeded, but the data that PostgreSQL believed to be saved to the disk would not actually be written. This presents a possible data corruption scenario.\n> \n> This update modifies how PostgreSQL handles a `fsync()` failure: PostgreSQL will no longer retry calling `fsync()` but instead will panic. In this case, PostgreSQL can then replay the data from the write-ahead log (WAL) to help ensure the data is written. While this may appear to be a suboptimal solution, there are presently few alternatives and, based on reports, the problem case occurs extremely rarely.\n\nShouldn't we mention that previous behavior (retrying fsync) can be\nchosen by a new GUC parameter?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Wed, 13 Feb 2019 21:35:03 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: 2019-02-14 Press Release Draft,2019-02-14 Press Release Draft"
},
{
"msg_contents": "On 2/13/19 1:35 PM, Tatsuo Ishii wrote:\n>> 2019-02-14 Cumulative Update Release\n>> ====================================\n>>\n>> The PostgreSQL Global Development Group has released an update to all supported versions of our database system, including 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21. This release changes the behavior in how PostgreSQL interfaces with `fsync()` and includes fixes for partitioning and over 70 other bugs that were reported over the past three months.\n>>\n>> Users should plan to apply this update at the next scheduled downtime.\n>>\n>> Highlight: Change in behavior with `fsync()`\n>> ------------------------------------------\n>>\n>> When available in an operating system and enabled in the configuration file (which it is by default), PostgreSQL uses the kernel function `fsync()` to help ensure that data is written to a disk. In some operating systems that provide `fsync()`, when the kernel is unable to write out the data, it returns a failure and flushes the data that was supposed to be written from its data buffers.\n>>\n>> This flushing operation has an unfortunate side-effect for PostgreSQL: if PostgreSQL tries again to write the data to disk by again calling `fsync()`, `fsync()` will report back that it succeeded, but the data that PostgreSQL believed to be saved to the disk would not actually be written. This presents a possible data corruption scenario.\n>>\n>> This update modifies how PostgreSQL handles a `fsync()` failure: PostgreSQL will no longer retry calling `fsync()` but instead will panic. In this case, PostgreSQL can then replay the data from the write-ahead log (WAL) to help ensure the data is written. 
While this may appear to be a suboptimal solution, there are presently few alternatives and, based on reports, the problem case occurs extremely rarely.\n> \n> Shouldn't we mention that previous behavior (retrying fsync) can be\n> chosen by a new GUC parameter?\n\nAh, I had that in my original copy and accidentally took it out. I have\nadded it back in, basically taking the exact language from the release\nnotes.\n\nPer that and other feedback, I have attached v2.\n\nThanks so much for your quick responses.\n\nJonathan",
"msg_date": "Wed, 13 Feb 2019 08:05:32 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
},
{
"msg_contents": "Hi,\n\nOn Tue, Feb 12, 2019 at 10:14:55PM -0500, Jonathan S. Katz wrote:\n> On 2/13/19 4:13 AM, Chapman Flack wrote:\n> > On 02/12/19 22:00, Jonathan S. Katz wrote:\n> >> Attached is a draft of the press release for the upcoming 2019-02-14\n> >> release. Feedback & suggestions are welcome.\n> > \n> > ----\n> > Users on PostgreSQL 9.4 should plan to upgrade to a supported version of\n> > PostgreSQL as the community will stop releasing fixes for it on February 12,\n> > 2019. Please see our [versioning\n> > policy](https://www.postgresql.org/support/versioning/)\n> > ----\n> > \n> > Should that be February 13, 2020? That's what the linked page says for 9.4.\n> > \n> > February 12, 2019 would be (a) today, and (b) in the past for this press\n> > release.\n> \n> Yes, good catch. Fixed.\n\nDoes it make sense to ring the alarm already, one year in advance? I\nhaven't checked what we have been doing in the past, but now that we\nestablished the 9.4 EOL is well off, it might make those people wary\nwho just managed to get rid of all their 9.3 instances last month...\n\n\nMichael\n\n",
"msg_date": "Wed, 13 Feb 2019 14:15:06 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
},
{
"msg_contents": "On 2/13/19 8:15 AM, Michael Banck wrote:\n> Hi,\n> \n> On Tue, Feb 12, 2019 at 10:14:55PM -0500, Jonathan S. Katz wrote:\n>> On 2/13/19 4:13 AM, Chapman Flack wrote:\n>>> On 02/12/19 22:00, Jonathan S. Katz wrote:\n>>>> Attached is a draft of the press release for the upcoming 2019-02-14\n>>>> release. Feedback & suggestions are welcome.\n>>>\n>>> ----\n>>> Users on PostgreSQL 9.4 should plan to upgrade to a supported version of\n>>> PostgreSQL as the community will stop releasing fixes for it on February 12,\n>>> 2019. Please see our [versioning\n>>> policy](https://www.postgresql.org/support/versioning/)\n>>> ----\n>>>\n>>> Should that be February 13, 2020? That's what the linked page says for 9.4.\n>>>\n>>> February 12, 2019 would be (a) today, and (b) in the past for this press\n>>> release.\n>>\n>> Yes, good catch. Fixed.\n> \n> Does it make sense to ring the alarm already, one year in advance? I\n> haven't checked what we have been doing in the past, but now that we\n> established the 9.4 EOL is well off, it might make those people weary\n\nWith the fixed end dates in place[1], the change in major version\nnumbers, and the fact that a few announcements leading up to the EOL were\nmissed last year (IIRC we gave one warning), we wanted to provide more\nadvance warning about the next EOL of a version.\n\nThat said, reading it without tired eyes, the language could come across\nas being alarmist, which is not the case. Maybe something like:\n\n==snip==\nPostgreSQL 9.4 will stop receiving fixes on February 12, 2020. Please\nsee our [versioning\npolicy](https://www.postgresql.org/support/versioning/) for more\ninformation.\n==snip==\n\nand subsequent releases can gradually increase the language.\n\nThanks,\n\nJonathan\n\n[1] https://www.postgresql.org/support/versioning/",
"msg_date": "Wed, 13 Feb 2019 08:22:27 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2019-02-14 Press Release Draft"
}
] |
[
{
"msg_contents": "Hi,\n\nI think the following commit:\n\ncommit c6e4133fae1fde93769197379ffcc2b379845113\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Nov 7 12:12:56 2018 -0500\n\n Postpone calculating total_table_pages until after pruning/exclusion.\n...\n\n\nobsoleted a sentence in the comment above index_pages_fetched(), which says:\n\n * \"index_pages\" is the amount to add to the total table space, which was\n * computed for us by query_planner.\n\ntotal_table_pages is computed by make_one_rel as of the aforementioned\ncommit. Attached fixes this.\n\nThanks,\nAmit",
"msg_date": "Wed, 13 Feb 2019 13:57:09 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "obsolete comment above index_pages_fetched"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 01:57:09PM +0900, Amit Langote wrote:\n> total_table_pages is computed by make_one_rel as of the aforementioned\n> commit. Attached fixes this.\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Wed, 13 Feb 2019 16:33:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: obsolete comment above index_pages_fetched"
},
{
"msg_contents": "On 2019/02/13 16:33, Michael Paquier wrote:\n> On Wed, Feb 13, 2019 at 01:57:09PM +0900, Amit Langote wrote:\n>> total_table_pages is computed by make_one_rel as of the aforementioned\n>> commit. Attached fixes this.\n> \n> Thanks, fixed.\n\nThank you.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 13 Feb 2019 16:41:01 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: obsolete comment above index_pages_fetched"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhen lacking connection slots, the backend returns a simple \"sorry,\ntoo many clients\", which is something that has been tweaked by recent\ncommit ea92368 for WAL senders. However, the same message would show\nup for autovacuum workers and bgworkers. Based on the way autovacuum\nworkers are spawned by the launcher, and the way bgworkers are added,\nit seems that this cannot actually happen. Still, in my opinion,\nproviding more context can be helpful for people trying to work on\nfeatures related to such code so that they can get more information than\nwhat would normally be reported for regular backends.\n\nI am wondering as well whether this could help with issues like this\none, which popped up today and makes me wonder if we have\nrace conditions with the way dynamic bgworkers are spawned:\nhttps://www.postgresql.org/message-id/CAONUJSM5X259vAnnwSpqu=VnRECfGSJ-CgRHyS4P5YyRVwkXsQ@mail.gmail.com\n\nThere are four code paths which report the original \"sorry, too many\nclients\" message, and the attached patch can easily track the\ntype of connection slot used, which helps provide better context messages\nwhen a process is initialized.\n\nWould that be helpful for at least debugging purposes?\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 13 Feb 2019 14:13:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Better error messages when lacking connection slots for autovacuum\n workers and bgworkers"
},
{
"msg_contents": "Hello.\n\nAt Wed, 13 Feb 2019 14:13:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190213051309.GF5746@paquier.xyz>\n> Hi all,\n> \n> When lacking connection slots, the backend returns a simple \"sorry,\n> too many clients\", which is something that has been tweaked by recent\n> commit ea92368 for WAL senders. However, the same message would show\n> up for autovacuum workers and bgworkers. Based on the way autovacuum\n> workers are spawned by the launcher, and the way bgworkers are added,\n> it seems that this cannot actually happen. Still, in my opinion,\n> providing more context can be helpful for people trying to work on\n> features related to such code so as they can get more information than\n> what would normally happen for normal backends.\n\nI agree with the distinctive messages, but the autovacuum and\nbgworker cases are a kind of internal error, and they are not\n\"connection\"s. I feel that elog is more suitable for them.\n\n> I am wondering as well if this could not help for issues like this\n> one, which has popped out today and makes me wonder if we don't have\n> race conditions with the way dynamic bgworkers are spawned:\n> https://www.postgresql.org/message-id/CAONUJSM5X259vAnnwSpqu=VnRECfGSJ-CgRHyS4P5YyRVwkXsQ@mail.gmail.com\n> \n> There are four code paths which report the original \"sorry, too many\n> clients\", and the one of the patch attached can easily track the\n> type of connection slot used which helps for better context messages\n> when a process is initialized.\n> Would that be helpful for at least debugging purposes?\n\nI agree that it is helpful. Errors with different causes ought to\nshow distinctive error messages.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Feb 2019 21:04:37 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [Suspect SPAM] Better error messages when lacking connection\n slots for autovacuum workers and bgworkers"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 09:04:37PM +0900, Kyotaro HORIGUCHI wrote:\n> I agree to the distinctive messages, but the autovaccum and\n> bgworker cases are in a kind of internal error, and they are not\n> \"connection\"s. I feel that elog is more suitable for them.\n\nI used ereport() for consistency with the existing code, still you are\nright that we ought to use elog() as this is the case of a problem\nwhich should not happen.\n--\nMichael",
"msg_date": "Fri, 15 Feb 2019 08:08:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: [Suspect SPAM] Better error messages when lacking connection\n slots for autovacuum workers and bgworkers"
}
] |
[
{
"msg_contents": "Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\na lot of WAL. A lot of WAL at once can cause delays in replication.\nFor synchronous replication, this can make seemingly unrelated sessions\nhang. But also for asynchronous replication, it will increase latency.\n\nOne idea to address this is to slow down WAL-generating maintenance\noperations. This is similar to the vacuum delay. Where the vacuum\ndelay counts notional I/O cost before sleeping, here we would count how\nmuch WAL has been generated and sleep after some amount.\n\nI attach an example patch for this functionality. It introduces three\nsettings:\n\nwal_insert_delay_enabled\nwal_insert_delay\nwal_insert_delay_size\n\nWhen you turn on wal_insert_delay_enabled, then it will sleep for\nwal_insert_delay after the session has produced wal_insert_delay_size of\nWAL data.\n\nThe idea is that you would tune wal_insert_delay and\nwal_insert_delay_size to your required performance characteristics and\nthen turn on wal_insert_delay_enabled individually in maintenance jobs\nor similar.\n\nTo test, for example, set up pgbench with synchronous replication and\nrun an unrelated large index build in a separate session. With the\nsettings, you can make it as fast or as slow as you want.\n\nTuning these settings, however, is quite mysterious I fear. You have to\nplay around a lot to get settings that achieve the right balance.\n\nSo, some questions:\n\nIs this useful?\n\nAny other thoughts on how to configure this or do this?\n\nShould we aim for a more general delay system, possibly including vacuum\ndelay and perhaps something else?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 13 Feb 2019 13:16:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn February 13, 2019 1:16:07 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can\n>create\n>a lot of WAL. A lot of WAL at once can cause delays in replication.\n>For synchronous replication, this can make seemingly unrelated sessions\n>hang. But also for asynchronous replication, it will increase latency.\n>\n>One idea to address this is to slow down WAL-generating maintenance\n>operations. This is similar to the vacuum delay. Where the vacuum\n>delay counts notional I/O cost before sleeping, here we would count how\n>much WAL has been generated and sleep after some amount.\n>\n>I attach an example patch for this functionality. It introduces three\n>settings:\n>\n>wal_insert_delay_enabled\n>wal_insert_delay\n>wal_insert_delay_size\n>\n>When you turn on wal_insert_delay_enabled, then it will sleep for\n>wal_insert_delay after the session has produced wal_insert_delay_size\n>of\n>WAL data.\n>\n>The idea is that you would tune wal_insert_delay and\n>wal_insert_delay_size to your required performance characteristics and\n>then turn on wal_insert_delay_enabled individually in maintenance jobs\n>or similar.\n>\n>To test, for example, set up pgbench with synchronous replication and\n>run an unrelated large index build in a separate session. With the\n>settings, you can make it as fast or as slow as you want.\n>\n>Tuning these settings, however, is quite mysterious I fear. You have\n>to\n>play around a lot to get settings that achieve the right balance.\n>\n>So, some questions:\n>\n>Is this useful?\n>\n>Any other thoughts on how to configure this or do this?\n>\n>Should we aim for a more general delay system, possibly including\n>vacuum\n>delay and perhaps something else?\n\nInteresting idea, not yet quite sure what to think. 
But I don't think the way you did it is acceptable - we can't just delay while holding buffer locks, in critical sections, while not interruptible.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Wed, 13 Feb 2019 13:18:59 +0100",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi\n\n13.02.2019 17:16, Peter Eisentraut пишет:\n> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n> a lot of WAL. A lot of WAL at once can cause delays in replication.\n> For synchronous replication, this can make seemingly unrelated sessions\n> hang. But also for asynchronous replication, it will increase latency.\n>\n> One idea to address this is to slow down WAL-generating maintenance\n> operations. This is similar to the vacuum delay. Where the vacuum\n> delay counts notional I/O cost before sleeping, here we would count how\n> much WAL has been generated and sleep after some amount.\n>\n> I attach an example patch for this functionality. It introduces three\n> settings:\n>\n> wal_insert_delay_enabled\n> wal_insert_delay\n> wal_insert_delay_size\n>\n> When you turn on wal_insert_delay_enabled, then it will sleep for\n> wal_insert_delay after the session has produced wal_insert_delay_size of\n> WAL data.\n>\n> The idea is that you would tune wal_insert_delay and\n> wal_insert_delay_size to your required performance characteristics and\n> then turn on wal_insert_delay_enabled individually in maintenance jobs\n> or similar.\n>\n> To test, for example, set up pgbench with synchronous replication and\n> run an unrelated large index build in a separate session. With the\n> settings, you can make it as fast or as slow as you want.\n>\n> Tuning these settings, however, is quite mysterious I fear. You have to\n> play around a lot to get settings that achieve the right balance.\n>\n> So, some questions:\n>\n> Is this useful?\n>\n> Any other thoughts on how to configure this or do this?\n>\n> Should we aim for a more general delay system, possibly including vacuum\n> delay and perhaps something else?\n>\nI think it's better to have more general cost-based settings which allow \nto control performance. 
Something like what has already been done for \nautovacuum.\n\nFor example, introduce a vacuum-like mechanism with the following \ncontrollables:\nmaintenance_cost_page_hit\nmaintenance_cost_page_miss\nmaintenance_cost_page_dirty\nmaintenance_cost_delay\nmaintenance_cost_limit\n\nmaintenance_cost_delay=0 (default) means the feature is disabled, but if a \nuser wants to limit performance he can define such parameters in a \nper-session or per-user manner. Especially it can be useful for \nlimiting already-running sessions, such as mass deletion, or pg_dump.\n\nOf course, it's just an idea, because I can't imagine how many things \nwould have to be touched in order to implement this.\n\nRegards, Alexey Lesovsky\n\n\n",
"msg_date": "Wed, 13 Feb 2019 18:29:59 +0500",
"msg_from": "dataegret <alexey.lesovsky@dataegret.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On February 13, 2019 1:16:07 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> One idea to address this is to slow down WAL-generating maintenance\n>> operations. This is similar to the vacuum delay. Where the vacuum\n>> delay counts notional I/O cost before sleeping, here we would count how\n>> much WAL has been generated and sleep after some amount.\n\n> Interesting idea, not yet quite sure what to think. But I don't think the way you did it is acceptable - we can't just delay while holding buffer locks, in critical sections, while not interruptible.\n\nYeah. Maybe it could be done in a less invasive way by just having the\nWAL code keep a running sum of how much WAL this process has created,\nand then letting the existing vacuum-delay infrastructure use that as\none of its how-much-IO-have-I-done inputs.\n\nNot sure if that makes the tuning problem easier or harder, but\nit seems reasonable on its face to count WAL emission as I/O.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 09:57:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n> a lot of WAL. A lot of WAL at once can cause delays in replication.\n\nAgreed, though I think VACUUM should certainly be included in this.\n\nI'm all for the idea though it seems like a different approach is needed\nbased on the down-thread discussion. Ultimately, having a way to have\nthese activities happen without causing issues for replicas is a great\nidea and would definitely address a practical issue that a lot of people\nrun into.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 13 Feb 2019 10:31:29 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 13/02/2019 13:18, Andres Freund wrote:\n> But I don't think the way you did it is acceptable - we can't just delay while holding buffer locks, in critical sections, while not interruptible.\n\nThe code I added to XLogInsertRecord() is not inside the critical section.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 13 Feb 2019 16:39:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn February 13, 2019 4:39:21 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>On 13/02/2019 13:18, Andres Freund wrote:\n>> But I don't think the way you did it is acceptable - we can't just\n>delay while holding buffer locks, in critical sections, while not\n>interruptible.\n>\n>The code I added to XLogInsertRecord() is not inside the critical\n>section.\n\nMost callers do xlog insertions inside crit sections though.\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Wed, 13 Feb 2019 16:40:39 +0100",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 13.02.2019 19:57, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On February 13, 2019 1:16:07 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>> One idea to address this is to slow down WAL-generating maintenance\n>>> operations. This is similar to the vacuum delay. Where the vacuum\n>>> delay counts notional I/O cost before sleeping, here we would count how\n>>> much WAL has been generated and sleep after some amount.\n> \n>> Interesting idea, not yet quite sure what to think. But I don't think the way you did it is acceptable - we can't just delay while holding buffer locks, in critical sections, while not interruptible.\n> \n> Yeah. Maybe it could be done in a less invasive way by just having the\n> WAL code keep a running sum of how much WAL this process has created,\n> and then letting the existing vacuum-delay infrastructure use that as\n> one of its how-much-IO-have-I-done inputs.\n> \n> Not sure if that makes the tuning problem easier or harder, but\n> it seems reasonable on its face to count WAL emission as I/O.\n> \n> \t\t\tregards, tom lane\n\nWe could also add a 'soft' clause to DML queries. It would provide some abstraction \nfor background query execution. It could contain the WAL write velocity \nlimit parameter (as Tom proposed) and maybe some other parameters.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Thu, 14 Feb 2019 11:10:20 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Wed, Feb 13, 2019 at 4:16 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n> a lot of WAL. A lot of WAL at once can cause delays in replication.\n> For synchronous replication, this can make seemingly unrelated sessions\n> hang. But also for asynchronous replication, it will increase latency.\n\nI think that I suggested a feature like this early during my time at\nHeroku, about 5 years ago. There would occasionally be cases where ops\nwould find it useful to throttle WAL writing using their own terrible\nkludge (it involved sending SIGSTOP to the WAL writer).\n\nI recall that this idea was not well received at the time. I still\nthink it's a good idea, though. Provided there is a safe way to get it\nto work.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Wed, 13 Feb 2019 23:21:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-13 23:21:39 -0800, Peter Geoghegan wrote:\n> There would occasionally be cases where ops\n> would find it useful to throttle WAL writing using their own terrible\n> kludge (it involved sending SIGSTOP to the WAL writer).\n\nThat can't have been the workaround - either you'd interrupt it while\nholding critical locks (in which case nobody could write WAL anymore),\nor you'd just move all the writing to backends, no?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 14 Feb 2019 00:42:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 12:42 AM Andres Freund <andres@anarazel.de> wrote:\n> That can't have been the workaround - either you'd interrupt it while\n> holding critical locks (in which case nobody could write WAL anymore),\n> or you'd just move all the writing to backends, no?\n\nI imagine that it held the critical locks briefly. I'm not endorsing\nthat approach, obviously, but apparently it more or less worked. It\nwas something that was used in rare cases, only when there was no\napplication-specific way to throttle writes, and only when the server\nwas in effect destabilized by writing out WAL too quickly.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 14 Feb 2019 00:52:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 12:52 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I imagine that it held the critical locks briefly.\n\nI didn't mention that the utility they used would send SIGSTOP and\nSIGCONT in close succession. (Yeah, I know.)\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 14 Feb 2019 00:53:54 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 12:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I didn't mention that the utility they used would send SIGSTOP and\n> SIGCONT in close succession. (Yeah, I know.)\n\nActually, it SIGSTOP'd backends, not the WAL writer or background writer.\n\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Thu, 14 Feb 2019 01:00:13 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/13/19 4:31 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n>> a lot of WAL. A lot of WAL at once can cause delays in replication.\n> \n> Agreed, though I think VACUUM should certainly be included in this.\n> \n\nWon't these two throttling criteria interact in undesirable and/or\nunpredictable way? With the regular vacuum throttling (based on\nhit/miss/dirty) it's possible to compute rough read/write I/O limits.\nBut with the additional sleeps based on amount-of-WAL, we may sleep for\none of two reasons, so we may not reach either limit. No?\n\n> I'm all for the idea though it seems like a different approach is needed\n> based on the down-thread discussion. Ultimately, having a way to have\n> these activities happen without causing issues for replicas is a great\n> idea and would definitely address a practical issue that a lot of people\n> run into.\n> \n\n+1\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 10:00:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 10:00:38 +0100, Tomas Vondra wrote:\n> On 2/13/19 4:31 PM, Stephen Frost wrote:\n> > Greetings,\n> > \n> > * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n> >> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n> >> a lot of WAL. A lot of WAL at once can cause delays in replication.\n> > \n> > Agreed, though I think VACUUM should certainly be included in this.\n> > \n> \n> Won't these two throttling criteria interact in undesirable and/or\n> unpredictable way? With the regular vacuum throttling (based on\n> hit/miss/dirty) it's possible to compute rough read/write I/O limits.\n> But with the additional sleeps based on amount-of-WAL, we may sleep for\n> one of two reasons, so we may not reach either limit. No?\n\nWell, it'd be max rates for either, if done right. I think we only\nshould start adding delays for WAL logging if we're exceeding the WAL\nwrite rate. That's obviously more complicated than the stuff we do for\nthe current VACUUM throttling, but I can't see two such systems\ninteracting well. Also, the current logic just doesn't work well when\nyou consider IO actually taking time, and/or process scheduling effects\non busy systems.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 14 Feb 2019 01:06:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/14/19 10:06 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-14 10:00:38 +0100, Tomas Vondra wrote:\n>> On 2/13/19 4:31 PM, Stephen Frost wrote:\n>>> Greetings,\n>>>\n>>> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>>>> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create\n>>>> a lot of WAL. A lot of WAL at once can cause delays in replication.\n>>>\n>>> Agreed, though I think VACUUM should certainly be included in this.\n>>>\n>>\n>> Won't these two throttling criteria interact in undesirable and/or\n>> unpredictable way? With the regular vacuum throttling (based on\n>> hit/miss/dirty) it's possible to compute rough read/write I/O limits.\n>> But with the additional sleeps based on amount-of-WAL, we may sleep for\n>> one of two reasons, so we may not reach either limit. No?\n> \n> Well, it'd be max rates for either, if done right. I think we only\n> should start adding delays for WAL logging if we're exceeding the WAL\n> write rate.\n\nNot really, I think. If you add additional sleep() calls somewhere, that\nmay affect the limits in vacuum, making it throttle before reaching the\nderived throughput limits.\n\n> That's obviously more complicated than the stuff we do for\n> the current VACUUM throttling, but I can't see two such systems\n> interacting well. Also, the current logic just doesn't work well when\n> you consider IO actually taking time, and/or process scheduling effects\n> on busy systems.\n> \n\nTrue, but making it even less predictable is hardly an improvement.\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 10:31:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn February 14, 2019 10:31:57 AM GMT+01:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n>\n>On 2/14/19 10:06 AM, Andres Freund wrote:\n>> Hi,\n>> \n>> On 2019-02-14 10:00:38 +0100, Tomas Vondra wrote:\n>>> On 2/13/19 4:31 PM, Stephen Frost wrote:\n>>>> Greetings,\n>>>>\n>>>> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>>>>> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can\n>create\n>>>>> a lot of WAL. A lot of WAL at once can cause delays in\n>replication.\n>>>>\n>>>> Agreed, though I think VACUUM should certainly be included in this.\n>>>>\n>>>\n>>> Won't these two throttling criteria interact in undesirable and/or\n>>> unpredictable way? With the regular vacuum throttling (based on\n>>> hit/miss/dirty) it's possible to compute rough read/write I/O\n>limits.\n>>> But with the additional sleeps based on amount-of-WAL, we may sleep\n>for\n>>> one of two reasons, so we may not reach either limit. No?\n>> \n>> Well, it'd be max rates for either, if done right. I think we only\n>> should start adding delays for WAL logging if we're exceeding the WAL\n>> write rate.\n>\n>Not really, I think. If you add additional sleep() calls somewhere,\n>that\n>may affect the limits in vacuum, making it throttle before reaching the\n>derived throughput limits.\n\nI don't understand. Obviously, if you have two limits, the scarcer resource can limit full use of the other resource. That seems OK? The thing I think we need to be careful about is not to limit in a way (e.g. by adding sleeps even when below the limit) where a WAL limit causes throttling of normal IO before the WAL limit is reached.\n\n\n>> That's obviously more complicated than the stuff we do for\n>> the current VACUUM throttling, but I can't see two such systems\n>> interacting well. 
Also, the current logic just doesn't work well when\n>> you consider IO actually taking time, and/or process scheduling\n>effects\n>> on busy systems.\n>> \n>\n>True, but making it even less predictable is hardly an improvement.\n\nI don't quite see the problem here. Could you expand?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Thu, 14 Feb 2019 10:36:16 +0100",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/14/19 10:36 AM, Andres Freund wrote:\n> \n> \n> On February 14, 2019 10:31:57 AM GMT+01:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>>\n>> On 2/14/19 10:06 AM, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2019-02-14 10:00:38 +0100, Tomas Vondra wrote:\n>>>> On 2/13/19 4:31 PM, Stephen Frost wrote:\n>>>>> Greetings,\n>>>>>\n>>>>> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:\n>>>>>> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can\n>> create\n>>>>>> a lot of WAL. A lot of WAL at once can cause delays in\n>> replication.\n>>>>>\n>>>>> Agreed, though I think VACUUM should certainly be included in this.\n>>>>>\n>>>>\n>>>> Won't these two throttling criteria interact in undesirable and/or\n>>>> unpredictable way? With the regular vacuum throttling (based on\n>>>> hit/miss/dirty) it's possible to compute rough read/write I/O\n>> limits.\n>>>> But with the additional sleeps based on amount-of-WAL, we may sleep\n>> for\n>>>> one of two reasons, so we may not reach either limit. No?\n>>>\n>>> Well, it'd be max rates for either, if done right. I think we only\n>>> should start adding delays for WAL logging if we're exceeding the WAL\n>>> write rate.\n>>\n>> Not really, I think. If you add additional sleep() calls somewhere,\n>> that\n>> may affect the limits in vacuum, making it throttle before reaching the\n>> derived throughput limits.\n> \n> I don't understand. Obviously, if you have two limits, the scarcer\n> resource can limit full use of the other resource. That seems OK? The\n> thing u think we need to be careful about is not to limit in a way,\n> e.g. by adding sleeps even when below the limit, that a WAL limit\n> causes throttling of normal IO before the WAL limit is reached.\n> \n\nWith the vacuum throttling, rough I/O throughput maximums can be\ncomputed by counting the number of pages you can read/write between\nsleeps. 
For example with the defaults (200 credits, 20ms sleeps, miss\ncost 10 credits) this means 20 writes/round, with 50 rounds/second, so\n8MB/s. But this is based on assumption that the work between sleeps\ntakes almost no time - that's not perfect, but generally works.\n\nBut if you add extra sleep() calls somewhere (say because there's also\nlimit on WAL throughput), it will affect how fast VACUUM works in\ngeneral. Yet it'll continue with the cost-based throttling, but it will\nnever reach the limits. Say you do another 20ms sleep somewhere.\nSuddenly it means it only does 25 rounds/second, and the actual write\nlimit drops to 4 MB/s.\n\n> \n>>> That's obviously more complicated than the stuff we do for\n>>> the current VACUUM throttling, but I can't see two such systems\n>>> interacting well. Also, the current logic just doesn't work well when\n>>> you consider IO actually taking time, and/or process scheduling\n>> effects\n>>> on busy systems.\n>>>\n>>\n>> True, but making it even less predictable is hardly an improvement.\n> \n> I don't quite see the problem here. Could you expand?\n> \n\nAll I'm saying that you can now estimate how much reads/writes vacuum\ndoes. With the extra sleeps (due to additional throttling mechanism) it\nwill be harder.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 11:03:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 14/02/2019 11:03, Tomas Vondra wrote:\n> But if you add extra sleep() calls somewhere (say because there's also\n> limit on WAL throughput), it will affect how fast VACUUM works in\n> general. Yet it'll continue with the cost-based throttling, but it will\n> never reach the limits. Say you do another 20ms sleep somewhere.\n> Suddenly it means it only does 25 rounds/second, and the actual write\n> limit drops to 4 MB/s.\n\nI think at a first approximation, you probably don't want to add WAL\ndelays to vacuum jobs, since they are already slowed down, so the rate\nof WAL they produce might not be your first problem. The problem is\nmore things like CREATE INDEX CONCURRENTLY that run at full speed.\n\nThat leads to an alternative idea of expanding the existing cost-based\nvacuum delay system to other commands.\n\nWe could even enhance the cost system by taking WAL into account as an\nadditional factor.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 16:14:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 13/02/2019 16:40, Andres Freund wrote:\n> On February 13, 2019 4:39:21 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 13/02/2019 13:18, Andres Freund wrote:\n>>> But I don't think the way you did it is acceptable - we can't just\n>> delay while holding buffer locks, in critical sections, while not\n>> interruptible.\n>>\n>> The code I added to XLogInsertRecord() is not inside the critical\n>> section.\n> \n> Most callers do xlog insertions inside crit sections though.\n\nIs it a problem that pg_usleep(CommitDelay) is inside a critical section?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 14 Feb 2019 16:16:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\nOn Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 14/02/2019 11:03, Tomas Vondra wrote:\n> > But if you add extra sleep() calls somewhere (say because there's also\n> > limit on WAL throughput), it will affect how fast VACUUM works in\n> > general. Yet it'll continue with the cost-based throttling, but it will\n> > never reach the limits. Say you do another 20ms sleep somewhere.\n> > Suddenly it means it only does 25 rounds/second, and the actual write\n> > limit drops to 4 MB/s.\n>\n> I think at a first approximation, you probably don't want to add WAL\n> delays to vacuum jobs, since they are already slowed down, so the rate\n> of WAL they produce might not be your first problem. The problem is\n> more things like CREATE INDEX CONCURRENTLY that run at full speed.\n>\n> That leads to an alternative idea of expanding the existing cost-based\n> vacuum delay system to other commands.\n>\n> We could even enhance the cost system by taking WAL into account as an\n> additional factor.\n\n\nThis is really what I was thinking- let’s not have multiple independent\nways of slowing down maintenance and similar jobs to reduce their impact on\nI/o to the heap and to WAL.\n\nThanks!\n\nStephen\n\nGreetings,On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 14/02/2019 11:03, Tomas Vondra wrote:\n> But if you add extra sleep() calls somewhere (say because there's also\n> limit on WAL throughput), it will affect how fast VACUUM works in\n> general. Yet it'll continue with the cost-based throttling, but it will\n> never reach the limits. 
Say you do another 20ms sleep somewhere.\n> Suddenly it means it only does 25 rounds/second, and the actual write\n> limit drops to 4 MB/s.\n\nI think at a first approximation, you probably don't want to add WAL\ndelays to vacuum jobs, since they are already slowed down, so the rate\nof WAL they produce might not be your first problem. The problem is\nmore things like CREATE INDEX CONCURRENTLY that run at full speed.\n\nThat leads to an alternative idea of expanding the existing cost-based\nvacuum delay system to other commands.\n\nWe could even enhance the cost system by taking WAL into account as an\nadditional factor.This is really what I was thinking- let’s not have multiple independent ways of slowing down maintenance and similar jobs to reduce their impact on I/o to the heap and to WAL. Thanks!Stephen",
"msg_date": "Thu, 14 Feb 2019 11:02:24 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 16:16:05 +0100, Peter Eisentraut wrote:\n> On 13/02/2019 16:40, Andres Freund wrote:\n> > On February 13, 2019 4:39:21 PM GMT+01:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> >> On 13/02/2019 13:18, Andres Freund wrote:\n> >>> But I don't think the way you did it is acceptable - we can't just\n> >> delay while holding buffer locks, in critical sections, while not\n> >> interruptible.\n> >>\n> >> The code I added to XLogInsertRecord() is not inside the critical\n> >> section.\n> > \n> > Most callers do xlog insertions inside crit sections though.\n> \n> Is it a problem that pg_usleep(CommitDelay) is inside a critical section?\n\nWell: We can't make things sleep for considerable time while holding\ncrucial locks and not make such sleeps interruptible. And holding\nlwlocks will make it noninterruptible (but still it could throw an\nerror), but with crit sections, we can't even error out if we somehow\ngot that error.\n\nConsider throttled code writing to a popular index or bree page - they'd\nsuddenly be stalled and everyone else would also queue up in an\nuninterruptible manner via lwlocks. You'd throttle the whole system.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 14 Feb 2019 08:06:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 2019-02-14 11:02:24 -0500, Stephen Frost wrote:\n> Greetings,\n> \n> On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> \n> > On 14/02/2019 11:03, Tomas Vondra wrote:\n> > > But if you add extra sleep() calls somewhere (say because there's also\n> > > limit on WAL throughput), it will affect how fast VACUUM works in\n> > > general. Yet it'll continue with the cost-based throttling, but it will\n> > > never reach the limits. Say you do another 20ms sleep somewhere.\n> > > Suddenly it means it only does 25 rounds/second, and the actual write\n> > > limit drops to 4 MB/s.\n> >\n> > I think at a first approximation, you probably don't want to add WAL\n> > delays to vacuum jobs, since they are already slowed down, so the rate\n> > of WAL they produce might not be your first problem. The problem is\n> > more things like CREATE INDEX CONCURRENTLY that run at full speed.\n> >\n> > That leads to an alternative idea of expanding the existing cost-based\n> > vacuum delay system to other commands.\n> >\n> > We could even enhance the cost system by taking WAL into account as an\n> > additional factor.\n> \n> \n> This is really what I was thinking- let’s not have multiple independent\n> ways of slowing down maintenance and similar jobs to reduce their impact on\n> I/o to the heap and to WAL.\n\nI think that's a bad idea. Both because the current vacuum code is\n*terrible* if you desire higher rates because both CPU and IO time\naren't taken into account. And it's extremely hard to control. And it\nseems entirely valuable to be able to limit the amount of WAL generated\nfor replication, but still try go get the rest of the work done as\nquickly as reasonably possible wrt local IO.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 14 Feb 2019 08:14:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-02-14 11:02:24 -0500, Stephen Frost wrote:\n> > On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\n> > peter.eisentraut@2ndquadrant.com> wrote:\n> > > On 14/02/2019 11:03, Tomas Vondra wrote:\n> > > > But if you add extra sleep() calls somewhere (say because there's also\n> > > > limit on WAL throughput), it will affect how fast VACUUM works in\n> > > > general. Yet it'll continue with the cost-based throttling, but it will\n> > > > never reach the limits. Say you do another 20ms sleep somewhere.\n> > > > Suddenly it means it only does 25 rounds/second, and the actual write\n> > > > limit drops to 4 MB/s.\n> > >\n> > > I think at a first approximation, you probably don't want to add WAL\n> > > delays to vacuum jobs, since they are already slowed down, so the rate\n> > > of WAL they produce might not be your first problem. The problem is\n> > > more things like CREATE INDEX CONCURRENTLY that run at full speed.\n> > >\n> > > That leads to an alternative idea of expanding the existing cost-based\n> > > vacuum delay system to other commands.\n> > >\n> > > We could even enhance the cost system by taking WAL into account as an\n> > > additional factor.\n> > \n> > This is really what I was thinking- let’s not have multiple independent\n> > ways of slowing down maintenance and similar jobs to reduce their impact on\n> > I/o to the heap and to WAL.\n> \n> I think that's a bad idea. Both because the current vacuum code is\n> *terrible* if you desire higher rates because both CPU and IO time\n> aren't taken into account. And it's extremely hard to control. 
And it\n> seems entirely valuable to be able to limit the amount of WAL generated\n> for replication, but still try go get the rest of the work done as\n> quickly as reasonably possible wrt local IO.\n\nI'm all for making improvements to the vacuum code and making it easier\nto control.\n\nI don't buy off on the argument that there is some way to segregate the\nlocal I/O question from the WAL when we're talking about these kinds of\noperations (VACUUM, CREATE INDEX, CLUSTER, etc) on logged relations, nor\ndo I think we do our users a service by giving them independent knobs\nfor both that will undoubtably end up making it more difficult to\nunderstand and control what's going on overall.\n\nEven here, it seems, you're arguing that the existing approach for\nVACUUM is hard to control; wouldn't adding another set of knobs for\ncontrolling the amount of WAL generated by VACUUM make that worse? I\nhave a hard time seeing how it wouldn't.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 15 Feb 2019 08:50:03 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-15 08:50:03 -0500, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-02-14 11:02:24 -0500, Stephen Frost wrote:\n> > > On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\n> > > peter.eisentraut@2ndquadrant.com> wrote:\n> > > > On 14/02/2019 11:03, Tomas Vondra wrote:\n> > > > > But if you add extra sleep() calls somewhere (say because there's also\n> > > > > limit on WAL throughput), it will affect how fast VACUUM works in\n> > > > > general. Yet it'll continue with the cost-based throttling, but it will\n> > > > > never reach the limits. Say you do another 20ms sleep somewhere.\n> > > > > Suddenly it means it only does 25 rounds/second, and the actual write\n> > > > > limit drops to 4 MB/s.\n> > > >\n> > > > I think at a first approximation, you probably don't want to add WAL\n> > > > delays to vacuum jobs, since they are already slowed down, so the rate\n> > > > of WAL they produce might not be your first problem. The problem is\n> > > > more things like CREATE INDEX CONCURRENTLY that run at full speed.\n> > > >\n> > > > That leads to an alternative idea of expanding the existing cost-based\n> > > > vacuum delay system to other commands.\n> > > >\n> > > > We could even enhance the cost system by taking WAL into account as an\n> > > > additional factor.\n> > > \n> > > This is really what I was thinking- let’s not have multiple independent\n> > > ways of slowing down maintenance and similar jobs to reduce their impact on\n> > > I/o to the heap and to WAL.\n> > \n> > I think that's a bad idea. Both because the current vacuum code is\n> > *terrible* if you desire higher rates because both CPU and IO time\n> > aren't taken into account. And it's extremely hard to control. 
And it\n> > seems entirely valuable to be able to limit the amount of WAL generated\n> > for replication, but still try go get the rest of the work done as\n> > quickly as reasonably possible wrt local IO.\n> \n> I'm all for making improvements to the vacuum code and making it easier\n> to control.\n> \n> I don't buy off on the argument that there is some way to segregate the\n> local I/O question from the WAL when we're talking about these kinds of\n> operations (VACUUM, CREATE INDEX, CLUSTER, etc) on logged relations, nor\n> do I think we do our users a service by giving them independent knobs\n> for both that will undoubtably end up making it more difficult to\n> understand and control what's going on overall.\n> \n> Even here, it seems, you're arguing that the existing approach for\n> VACUUM is hard to control; wouldn't adding another set of knobs for\n> controlling the amount of WAL generated by VACUUM make that worse? I\n> have a hard time seeing how it wouldn't.\n\nI think it's because I see them as, often, having two largely\nindependent use cases. If your goal is to avoid swamping replication\nwith WAL, you don't necessarily care about also throttling VACUUM (or\nREINDEX, or CLUSTER, or ...)'s local IO. By forcing to combine the two\nyou just make the whole feature less usable.\n\nI think it'd not be insane to add two things:\n- WAL write rate limiting, independent of the vacuum stuff. It'd also be\n used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n rewrites, ...)\n- Account for WAL writes in the current vacuum costing logic, by\n accounting for it using a new cost parameter\n\nThen VACUUM would be throttled by the *minimum* of the two, which seems\nto make plenty sense to me, given the usecases.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 15 Feb 2019 10:41:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\nOn 2/15/19 7:41 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-15 08:50:03 -0500, Stephen Frost wrote:\n>> * Andres Freund (andres@anarazel.de) wrote:\n>>> On 2019-02-14 11:02:24 -0500, Stephen Frost wrote:\n>>>> On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\n>>>> peter.eisentraut@2ndquadrant.com> wrote:\n>>>>> On 14/02/2019 11:03, Tomas Vondra wrote:\n>>>>>> But if you add extra sleep() calls somewhere (say because there's also\n>>>>>> limit on WAL throughput), it will affect how fast VACUUM works in\n>>>>>> general. Yet it'll continue with the cost-based throttling, but it will\n>>>>>> never reach the limits. Say you do another 20ms sleep somewhere.\n>>>>>> Suddenly it means it only does 25 rounds/second, and the actual write\n>>>>>> limit drops to 4 MB/s.\n>>>>>\n>>>>> I think at a first approximation, you probably don't want to add WAL\n>>>>> delays to vacuum jobs, since they are already slowed down, so the rate\n>>>>> of WAL they produce might not be your first problem. The problem is\n>>>>> more things like CREATE INDEX CONCURRENTLY that run at full speed.\n>>>>>\n>>>>> That leads to an alternative idea of expanding the existing cost-based\n>>>>> vacuum delay system to other commands.\n>>>>>\n>>>>> We could even enhance the cost system by taking WAL into account as an\n>>>>> additional factor.\n>>>>\n>>>> This is really what I was thinking- let’s not have multiple independent\n>>>> ways of slowing down maintenance and similar jobs to reduce their impact on\n>>>> I/o to the heap and to WAL.\n>>>\n>>> I think that's a bad idea. Both because the current vacuum code is\n>>> *terrible* if you desire higher rates because both CPU and IO time\n>>> aren't taken into account. And it's extremely hard to control. 
And it\n>>> seems entirely valuable to be able to limit the amount of WAL generated\n>>> for replication, but still try go get the rest of the work done as\n>>> quickly as reasonably possible wrt local IO.\n>>\n>> I'm all for making improvements to the vacuum code and making it easier\n>> to control.\n>>\n>> I don't buy off on the argument that there is some way to segregate the\n>> local I/O question from the WAL when we're talking about these kinds of\n>> operations (VACUUM, CREATE INDEX, CLUSTER, etc) on logged relations, nor\n>> do I think we do our users a service by giving them independent knobs\n>> for both that will undoubtably end up making it more difficult to\n>> understand and control what's going on overall.\n>>\n>> Even here, it seems, you're arguing that the existing approach for\n>> VACUUM is hard to control; wouldn't adding another set of knobs for\n>> controlling the amount of WAL generated by VACUUM make that worse? I\n>> have a hard time seeing how it wouldn't.\n> \n> I think it's because I see them as, often, having two largely \n> independent use cases. If your goal is to avoid swamping replication \n> with WAL, you don't necessarily care about also throttling VACUUM\n> (or REINDEX, or CLUSTER, or ...)'s local IO. By forcing to combine\n> the two you just make the whole feature less usable.\n> \n\nI agree with that.\n\n> I think it'd not be insane to add two things:\n> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n> rewrites, ...)\n> - Account for WAL writes in the current vacuum costing logic, by\n> accounting for it using a new cost parameter\n> \n> Then VACUUM would be throttled by the *minimum* of the two, which seems\n> to make plenty sense to me, given the usecases.\n> \n\nIs it really minimum? 
If you add another cost parameter to the vacuum\nmodel, then there's almost no chance of actually reaching the limit\nbecause the budget (cost_limit) is shared with other stuff (local I/O).\n\nFWIW I do think the ability to throttle WAL is a useful feature, I just\ndon't want to shoot myself in the foot by making other things worse.\n\nAs you note, the existing VACUUM throttling is already hard to control,\nthis seems to make it even harder.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 15 Feb 2019 21:02:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On 2/15/19 7:41 PM, Andres Freund wrote:\n> > On 2019-02-15 08:50:03 -0500, Stephen Frost wrote:\n> >> * Andres Freund (andres@anarazel.de) wrote:\n> >>> On 2019-02-14 11:02:24 -0500, Stephen Frost wrote:\n> >>>> On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <\n> >>>> peter.eisentraut@2ndquadrant.com> wrote:\n> >>>>> On 14/02/2019 11:03, Tomas Vondra wrote:\n> >>>>>> But if you add extra sleep() calls somewhere (say because there's also\n> >>>>>> limit on WAL throughput), it will affect how fast VACUUM works in\n> >>>>>> general. Yet it'll continue with the cost-based throttling, but it will\n> >>>>>> never reach the limits. Say you do another 20ms sleep somewhere.\n> >>>>>> Suddenly it means it only does 25 rounds/second, and the actual write\n> >>>>>> limit drops to 4 MB/s.\n> >>>>>\n> >>>>> I think at a first approximation, you probably don't want to add WAL\n> >>>>> delays to vacuum jobs, since they are already slowed down, so the rate\n> >>>>> of WAL they produce might not be your first problem. The problem is\n> >>>>> more things like CREATE INDEX CONCURRENTLY that run at full speed.\n> >>>>>\n> >>>>> That leads to an alternative idea of expanding the existing cost-based\n> >>>>> vacuum delay system to other commands.\n> >>>>>\n> >>>>> We could even enhance the cost system by taking WAL into account as an\n> >>>>> additional factor.\n> >>>>\n> >>>> This is really what I was thinking- let’s not have multiple independent\n> >>>> ways of slowing down maintenance and similar jobs to reduce their impact on\n> >>>> I/o to the heap and to WAL.\n> >>>\n> >>> I think that's a bad idea. Both because the current vacuum code is\n> >>> *terrible* if you desire higher rates because both CPU and IO time\n> >>> aren't taken into account. And it's extremely hard to control. 
And it\n> >>> seems entirely valuable to be able to limit the amount of WAL generated\n> >>> for replication, but still try go get the rest of the work done as\n> >>> quickly as reasonably possible wrt local IO.\n> >>\n> >> I'm all for making improvements to the vacuum code and making it easier\n> >> to control.\n> >>\n> >> I don't buy off on the argument that there is some way to segregate the\n> >> local I/O question from the WAL when we're talking about these kinds of\n> >> operations (VACUUM, CREATE INDEX, CLUSTER, etc) on logged relations, nor\n> >> do I think we do our users a service by giving them independent knobs\n> >> for both that will undoubtably end up making it more difficult to\n> >> understand and control what's going on overall.\n> >>\n> >> Even here, it seems, you're arguing that the existing approach for\n> >> VACUUM is hard to control; wouldn't adding another set of knobs for\n> >> controlling the amount of WAL generated by VACUUM make that worse? I\n> >> have a hard time seeing how it wouldn't.\n> > \n> > I think it's because I see them as, often, having two largely \n> > independent use cases. If your goal is to avoid swamping replication \n> > with WAL, you don't necessarily care about also throttling VACUUM\n> > (or REINDEX, or CLUSTER, or ...)'s local IO. By forcing to combine\n> > the two you just make the whole feature less usable.\n> \n> I agree with that.\n\nI can agree that they're different use-cases but one does end up\nimpacting the other and that's what I had been thinking about from the\nperspective of \"if we could proivde just one knob for this.\"\n\nVACUUM is a pretty good example- if we're dirtying a page with VACUUM\nthen we're also writing that page into the WAL (at least, if the\nrelation isn't unlogged). 
Now, VACUUM does do other things (such as\nread pages), as does REINDEX or CLUSTER, so maybe there's a way to think\nabout this feature in those terms- cost for doing local read i/o, vs.\ncost for doing write i/o (to both heap and WAL) vs. cost for doing\n\"local\" write i/o (just to heap, ie: unlogged tables).\n\nWhat I was trying to say I didn't like previously was the idea of having\na \"local i/o write\" cost for VACUUM, to a logged table, *and* a \"WAL\nwrite\" cost for VACUUM, since those are very tightly correlated.\n\nThe current costing mechanism in VACUUM only provides the single hammer\nof \"if we hit the limit, go to sleep for a while\" which seems a bit\nunfortunate- if we haven't hit the \"read i/o\" limit, it'd be nice if we\ncould keep going and then come back to writing out pages when enough\ntime has passed that we're below our \"write i/o\" limit. That would end\nup requiring quite a bit of change to how we do things though, I expect,\nso probably not something to tie into this particular feature but I\nwanted to express the thought in case others found it interesting.\n\n> > I think it'd not be insane to add two things:\n> > - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n> > used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n> > rewrites, ...)\n> > - Account for WAL writes in the current vacuum costing logic, by\n> > accounting for it using a new cost parameter\n> > \n> > Then VACUUM would be throttled by the *minimum* of the two, which seems\n> > to make plenty sense to me, given the usecases.\n> \n> Is it really minimum? 
If you add another cost parameter to the vacuum\n> model, then there's almost no chance of actually reaching the limit\n> because the budget (cost_limit) is shared with other stuff (local I/O).\n\nYeah, that does seem like it'd be an issue.\n\n> FWIW I do think the ability to throttle WAL is a useful feature, I just\n> don't want to shoot myself in the foot by making other things worse.\n> \n> As you note, the existing VACUUM throttling is already hard to control,\n> this seems to make it even harder.\n\nAgreed.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 18 Feb 2019 12:12:36 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd not be insane to add two things:\n> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n> rewrites, ...)\n> - Account for WAL writes in the current vacuum costing logic, by\n> accounting for it using a new cost parameter\n>\n> Then VACUUM would be throttled by the *minimum* of the two, which seems\n> to make plenty sense to me, given the usecases.\n\nOr maybe we should just blow up the current vacuum cost delay stuff\nand replace it with something that is easier to tune. For example, we\ncould just have one parameter that sets the maximum read rate in kB/s\nand another that sets the maximum dirty-page rate in kB/s. Whichever\nlimit is tighter binds. If we also have the thing that is the topic\nof this thread, that's a third possible upper limit.\n\nI really don't see much point in doubling down on the current vacuum\ncost delay logic. The overall idea is good, but the specific way that\nyou have to set the parameters is pretty inscrutable, and I think we\nshould just fix it so that it can be, uh, scruted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 13:28:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 13:28:00 -0500, Robert Haas wrote:\n> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd not be insane to add two things:\n> > - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n> > used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n> > rewrites, ...)\n> > - Account for WAL writes in the current vacuum costing logic, by\n> > accounting for it using a new cost parameter\n> >\n> > Then VACUUM would be throttled by the *minimum* of the two, which seems\n> > to make plenty sense to me, given the usecases.\n> \n> Or maybe we should just blow up the current vacuum cost delay stuff\n> and replace it with something that is easier to tune. For example, we\n> could just have one parameter that sets the maximum read rate in kB/s\n> and another that sets the maximum dirty-page rate in kB/s. Whichever\n> limit is tighter binds. If we also have the thing that is the topic\n> of this thread, that's a third possible upper limit.\n\n> I really don't see much point in doubling down on the current vacuum\n> cost delay logic. The overall idea is good, but the specific way that\n> you have to set the parameters is pretty inscrutable, and I think we\n> should just fix it so that it can be, uh, scruted.\n\nI agree that that's something worthwhile to do, but given that the\nproposal in this thread is *NOT* just about VACUUM, I don't see why it'd\nbe useful to tie a general WAL rate limiting to rewriting cost limiting\nof vacuum. It seems better to write the WAL rate limiting logic with an\neye towards structuring it in a way that'd potentially allow reusing\nsome of the code for a better VACUUM cost limiting.\n\nI still don't *AT ALL* buy Stephen and Tomas' argument that it'd be\nconfusing that when both VACUUM and WAL cost limiting are active, the\nlower limit takes effect.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 10:35:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/19/19 7:35 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-19 13:28:00 -0500, Robert Haas wrote:\n>> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n>>> I think it'd not be insane to add two things:\n>>> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n>>> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n>>> rewrites, ...)\n>>> - Account for WAL writes in the current vacuum costing logic, by\n>>> accounting for it using a new cost parameter\n>>>\n>>> Then VACUUM would be throttled by the *minimum* of the two, which seems\n>>> to make plenty sense to me, given the usecases.\n>>\n>> Or maybe we should just blow up the current vacuum cost delay stuff\n>> and replace it with something that is easier to tune. For example, we\n>> could just have one parameter that sets the maximum read rate in kB/s\n>> and another that sets the maximum dirty-page rate in kB/s. Whichever\n>> limit is tighter binds. If we also have the thing that is the topic\n>> of this thread, that's a third possible upper limit.\n> \n>> I really don't see much point in doubling down on the current vacuum\n>> cost delay logic. The overall idea is good, but the specific way that\n>> you have to set the parameters is pretty inscrutable, and I think we\n>> should just fix it so that it can be, uh, scruted.\n> \n> I agree that that's something worthwhile to do, but given that the\n> proposal in this thread is *NOT* just about VACUUM, I don't see why it'd\n> be useful to tie a general WAL rate limiting to rewriting cost limiting\n> of vacuum. 
It seems better to write the WAL rate limiting logic with an\n> eye towards structuring it in a way that'd potentially allow reusing\n> some of the code for a better VACUUM cost limiting.\n> \n> I still don't *AT ALL* buy Stephen and Tomas' argument that it'd be\n> confusing that when both VACUUM and WAL cost limiting are active, the\n> lower limit takes effect.\n> \n\nExcept that's not my argument. I'm not arguing against throttling once\nwe hit the minimum of limits.\n\nThe problem I have with implementing a separate throttling logic is that\nit also changes the other limits (which are already kinda fuzzy). If you\nadd sleeps somewhere, those will affects the throttling built into\nautovacuum (lowering them in some unknown way).\n\n From this POV it would be better to include this into the vacuum cost\nlimit, because then it's at least subject to the same budget.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 19:43:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 1:35 PM Andres Freund <andres@anarazel.de> wrote:\n> I still don't *AT ALL* buy Stephen and Tomas' argument that it'd be\n> confusing that when both VACUUM and WAL cost limiting are active, the\n> lower limit takes effect.\n\nI think you guys may all be in vigorous -- not to say mortal -- agreement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 13:46:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 2019-02-19 19:43:14 +0100, Tomas Vondra wrote:\n> \n> \n> On 2/19/19 7:35 PM, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2019-02-19 13:28:00 -0500, Robert Haas wrote:\n> >> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n> >>> I think it'd not be insane to add two things:\n> >>> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n> >>> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n> >>> rewrites, ...)\n> >>> - Account for WAL writes in the current vacuum costing logic, by\n> >>> accounting for it using a new cost parameter\n> >>>\n> >>> Then VACUUM would be throttled by the *minimum* of the two, which seems\n> >>> to make plenty sense to me, given the usecases.\n> >>\n> >> Or maybe we should just blow up the current vacuum cost delay stuff\n> >> and replace it with something that is easier to tune. For example, we\n> >> could just have one parameter that sets the maximum read rate in kB/s\n> >> and another that sets the maximum dirty-page rate in kB/s. Whichever\n> >> limit is tighter binds. If we also have the thing that is the topic\n> >> of this thread, that's a third possible upper limit.\n> > \n> >> I really don't see much point in doubling down on the current vacuum\n> >> cost delay logic. The overall idea is good, but the specific way that\n> >> you have to set the parameters is pretty inscrutable, and I think we\n> >> should just fix it so that it can be, uh, scruted.\n> > \n> > I agree that that's something worthwhile to do, but given that the\n> > proposal in this thread is *NOT* just about VACUUM, I don't see why it'd\n> > be useful to tie a general WAL rate limiting to rewriting cost limiting\n> > of vacuum. 
It seems better to write the WAL rate limiting logic with an\n> > eye towards structuring it in a way that'd potentially allow reusing\n> > some of the code for a better VACUUM cost limiting.\n> > \n> > I still don't *AT ALL* buy Stephen and Tomas' argument that it'd be\n> > confusing that when both VACUUM and WAL cost limiting are active, the\n> > lower limit takes effect.\n> > \n> \n> Except that's not my argument. I'm not arguing against throttling once\n> we hit the minimum of limits.\n> \n> The problem I have with implementing a separate throttling logic is that\n> it also changes the other limits (which are already kinda fuzzy). If you\n> add sleeps somewhere, those will affects the throttling built into\n> autovacuum (lowering them in some unknown way).\n\nThose two paragraphs, to me, flat out contradict each other. If you\nthrottle according to the lower of two limits, *of course* the\nthrottling for the higher limit is affected in the sense that the limit\nwon't be reached. So yea, if you have a WAL rate limit, and you reach\nit, you won't be able to predict the IO rate for vacuum. Duh? I must be\nmissing something here.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 10:50:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 2/19/19 7:50 PM, Andres Freund wrote:\n> On 2019-02-19 19:43:14 +0100, Tomas Vondra wrote:\n>>\n>>\n>> On 2/19/19 7:35 PM, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2019-02-19 13:28:00 -0500, Robert Haas wrote:\n>>>> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n>>>>> I think it'd not be insane to add two things:\n>>>>> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n>>>>> used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n>>>>> rewrites, ...)\n>>>>> - Account for WAL writes in the current vacuum costing logic, by\n>>>>> accounting for it using a new cost parameter\n>>>>>\n>>>>> Then VACUUM would be throttled by the *minimum* of the two, which seems\n>>>>> to make plenty sense to me, given the usecases.\n>>>>\n>>>> Or maybe we should just blow up the current vacuum cost delay stuff\n>>>> and replace it with something that is easier to tune. For example, we\n>>>> could just have one parameter that sets the maximum read rate in kB/s\n>>>> and another that sets the maximum dirty-page rate in kB/s. Whichever\n>>>> limit is tighter binds. If we also have the thing that is the topic\n>>>> of this thread, that's a third possible upper limit.\n>>>\n>>>> I really don't see much point in doubling down on the current vacuum\n>>>> cost delay logic. The overall idea is good, but the specific way that\n>>>> you have to set the parameters is pretty inscrutable, and I think we\n>>>> should just fix it so that it can be, uh, scruted.\n>>>\n>>> I agree that that's something worthwhile to do, but given that the\n>>> proposal in this thread is *NOT* just about VACUUM, I don't see why it'd\n>>> be useful to tie a general WAL rate limiting to rewriting cost limiting\n>>> of vacuum. 
It seems better to write the WAL rate limiting logic with an\n>>> eye towards structuring it in a way that'd potentially allow reusing\n>>> some of the code for a better VACUUM cost limiting.\n>>>\n>>> I still don't *AT ALL* buy Stephen and Tomas' argument that it'd be\n>>> confusing that when both VACUUM and WAL cost limiting are active, the\n>>> lower limit takes effect.\n>>>\n>>\n>> Except that's not my argument. I'm not arguing against throttling once\n>> we hit the minimum of limits.\n>>\n>> The problem I have with implementing a separate throttling logic is that\n>> it also changes the other limits (which are already kinda fuzzy). If you\n>> add sleeps somewhere, those will affects the throttling built into\n>> autovacuum (lowering them in some unknown way).\n> \n> Those two paragraphs, to me, flat out contradict each other. If you\n> throttle according to the lower of two limits, *of course* the\n> throttling for the higher limit is affected in the sense that the limit\n> won't be reached. So yea, if you have a WAL rate limit, and you reach\n> it, you won't be able to predict the IO rate for vacuum. Duh? I must be\n> missing something here.\n> \n\nLet's do a short example. Assume the default vacuum costing parameters\n\n vacuum_cost_limit = 200\n vacuum_cost_delay = 20ms\n cost_page_dirty = 20\n\nand for simplicity we only do writes. So vacuum can do ~8MB/s of writes.\n\nNow, let's also throttle based on WAL - once in a while, after producing\nsome amount of WAL we sleep for a while. Again, for simplicity let's\nassume the sleeps perfectly interleave and are also 20ms. So we have\nsomething like:\n\n sleep(20ms); -- vacuum\n sleep(20ms); -- WAL\n sleep(20ms); -- vacuum\n sleep(20ms); -- WAL\n sleep(20ms); -- vacuum\n sleep(20ms); -- WAL\n sleep(20ms); -- vacuum\n sleep(20ms); -- WAL\n\nSuddenly, we only reach 4MB/s of writes from vacuum. 
But we also reach\nonly 1/2 the WAL throughput, because it's affected exactly the same way\nby the sleeps from vacuum throttling.\n\nWe've not reached either of the limits. How exactly is this \"lower limit\ntakes effect\"?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 20:02:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/19/19 7:28 PM, Robert Haas wrote:\n> On Fri, Feb 15, 2019 at 1:42 PM Andres Freund <andres@anarazel.de> wrote:\n>> I think it'd not be insane to add two things:\n>> - WAL write rate limiting, independent of the vacuum stuff. It'd also be\n>>   used by lots of other bulk commands (CREATE INDEX, ALTER TABLE\n>>   rewrites, ...)\n>> - Account for WAL writes in the current vacuum costing logic, by\n>>   accounting for it using a new cost parameter\n>>\n>> Then VACUUM would be throttled by the *minimum* of the two, which seems\n>> to make plenty sense to me, given the usecases.\n> \n> Or maybe we should just blow up the current vacuum cost delay stuff\n> and replace it with something that is easier to tune.  For example, we\n> could just have one parameter that sets the maximum read rate in kB/s\n> and another that sets the maximum dirty-page rate in kB/s.  Whichever\n> limit is tighter binds.  If we also have the thing that is the topic\n> of this thread, that's a third possible upper limit.\n> \n> I really don't see much point in doubling down on the current vacuum\n> cost delay logic.  The overall idea is good, but the specific way that\n> you have to set the parameters is pretty inscrutable, and I think we\n> should just fix it so that it can be, uh, scruted.\n> \n\nI think changing the vacuum throttling so that it uses actual I/O\namounts (in kB/s) instead of the cost limit would be a step in the right\ndirection. It's clearer, and it also works with arbitrary page sizes.\n\nThen, instead of sleeping for a fixed amount of time after reaching the\ncost limit, we should track progress and compute the amount of time we\nactually need to sleep. AFAICS that's what spread checkpoints do.\n\nI'm sure it's bound to be trickier in practice, of course.\n\nFWIW I'm not entirely sure we want to fully separate the limits. I'd\nargue using the same vacuum budget for both reads and writes is actually\nthe right thing to do - the I/O likely hits the same device. 
So it makes\nsense to \"balance\" those two in some way. But that may or may not be the\ncase for WAL, which is often moved to a different device.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 20:20:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 2019-02-19 20:02:32 +0100, Tomas Vondra wrote:\n> Let's do a short example. Assume the default vacuum costing parameters\n>\n>     vacuum_cost_limit = 200\n>     vacuum_cost_delay = 20ms\n>     cost_page_dirty = 20\n>\n> and for simplicity we only do writes. So vacuum can do ~8MB/s of writes.\n>\n> Now, let's also throttle based on WAL - once in a while, after producing\n> some amount of WAL we sleep for a while. Again, for simplicity let's\n> assume the sleeps perfectly interleave and are also 20ms. So we have\n> something like:\n\n>     sleep(20ms); -- vacuum\n>     sleep(20ms); -- WAL\n>     sleep(20ms); -- vacuum\n>     sleep(20ms); -- WAL\n>     sleep(20ms); -- vacuum\n>     sleep(20ms); -- WAL\n>     sleep(20ms); -- vacuum\n>     sleep(20ms); -- WAL\n>\n> Suddenly, we only reach 4MB/s of writes from vacuum. But we also reach\n> only 1/2 the WAL throughput, because it's affected exactly the same way\n> by the sleeps from vacuum throttling.\n>\n> We've not reached either of the limits. How exactly is this \"lower limit\n> takes effect\"?\n\nBecause I said upthread that that's not how I think a sane\nimplementation of WAL throttling would work. I think the whole cost\nbudgeting approach is BAD, and it'd be a serious mistake to copy it for a\nWAL rate limit (it disregards the time taken to execute IO and CPU costs\netc, and in this case the cost of other bandwidth limitations). 
What\nI'm saying is that we ought to instead specify a WAL rate in bytes/sec\nand *only* sleep once we've exceeded it for a time period (with some\noptimizations, so we don't gettimeofday after every XLogInsert(), but\ninstead compute after how many bytes we need to re-determine the time to\nsee if we're still in the same 'granule').\n\nNow, a non-toy implementation would probably want to have a\nsliding window to avoid being overly bursty, and reduce the number of\ngettimeofday calls as mentioned above, but for explanation's sake basically\nimagine that the \"main loop\" of a bulk xlog-emitting command would\ninvoke a helper with a computation in pseudocode like:\n\n    current_time = gettimeofday();\n    if (same_second(current_time, last_time))\n    {\n        wal_written_in_second += new_wal_written;\n        if (wal_written_in_second >= wal_write_limit_per_second)\n        {\n            double too_much = (wal_written_in_second - wal_write_limit_per_second);\n            sleep_fractional_seconds(too_much / wal_written_in_second);\n\n            last_time = current_time;\n        }\n    }\n    else\n    {\n        /* new second: restart the WAL budget */\n        last_time = current_time;\n        wal_written_in_second = new_wal_written;\n    }\n\nwhich'd mean that in contrast to your example we'd not continually sleep\nfor WAL, we'd only do so if we actually exceeded (or are projected to\nexceed, in a smarter implementation) the specified WAL write rate. As\nthe 20ms sleeps from vacuum effectively reduce the WAL write rate, we'd\ncorrespondingly sleep less.\n\n\nAnd my main point is that even if you implement a proper bytes/sec limit\nONLY for WAL, the behaviour of VACUUM rate limiting doesn't get\nmeaningfully more confusing than right now.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 11:22:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/19/19 8:22 PM, Andres Freund wrote:\n> On 2019-02-19 20:02:32 +0100, Tomas Vondra wrote:\n>> Let's do a short example. Assume the default vacuum costing parameters\n>>\n>> vacuum_cost_limit = 200\n>> vacuum_cost_delay = 20ms\n>> cost_page_dirty = 20\n>>\n>> and for simplicity we only do writes. So vacuum can do ~8MB/s of writes.\n>>\n>> Now, let's also throttle based on WAL - once in a while, after producing\n>> some amount of WAL we sleep for a while. Again, for simplicity let's\n>> assume the sleeps perfectly interleave and are also 20ms. So we have\n>> something like:\n> \n>> sleep(20ms); -- vacuum\n>> sleep(20ms); -- WAL\n>> sleep(20ms); -- vacuum\n>> sleep(20ms); -- WAL\n>> sleep(20ms); -- vacuum\n>> sleep(20ms); -- WAL\n>> sleep(20ms); -- vacuum\n>> sleep(20ms); -- WAL\n>>\n>> Suddenly, we only reach 4MB/s of writes from vacuum. But we also reach\n>> only 1/2 the WAL throughput, because it's affected exactly the same way\n>> by the sleeps from vacuum throttling.\n>>\n>> We've not reached either of the limits. How exactly is this \"lower limit\n>> takes effect\"?\n> \n> Because I upthread said that that's not how I think a sane\n> implementation of WAL throttling would work. I think the whole cost\n> budgeting approach is BAD, and it'd be serious mistake to copy it for a\n> WAL rate limit (it disregards the time taken to execute IO and CPU costs\n> etc, and in this case the cost of other bandwidth limitations). What\n> I'm saying is that we ought to instead specify an WAL rate in bytes/sec\n> and *only* sleep once we've exceeded it for a time period (with some\n> optimizations, so we don't gettimeofday after every XLogInsert(), but\n> instead compute how many bytes later need to re-determine the time to\n> see if we're still in the same 'granule').\n> \n\nOK, I agree with that. That's mostly what I described in response to\nRobert a while ago, I think. 
(If you've described that earlier in the\nthread, I missed it.)\n\n> Now, a non-toy implementation would probably would want to have a\n> sliding window to avoid being overly bursty, and reduce the number of\n> gettimeofday as mentioned above, but for explanation's sake basically\n> imagine that at the \"main loop\" of an bulk xlog emitting command would\n> invoke a helper with a a computation in pseudocode like:\n> \n> current_time = gettimeofday();\n> if (same_second(current_time, last_time))\n> {\n> wal_written_in_second += new_wal_written;\n> if (wal_written_in_second >= wal_write_limit_per_second)\n> {\n> double too_much = (wal_written_in_second - wal_write_limit_per_second);\n> sleep_fractional_seconds(too_much / wal_written_in_second);\n> \n> last_time = current_time;\n> }\n> }\n> else\n> {\n> last_time = current_time;\n> }\n> \n> which'd mean that in contrast to your example we'd not continually sleep\n> for WAL, we'd only do so if we actually exceeded (or are projected to\n> exceed in a smarter implementation), the specified WAL write rate. As\n> the 20ms sleeps from vacuum effectively reduce the WAL write rate, we'd\n> correspondingly sleep less.\n> \n\nYes, that makes sense.\n\n> \n> And my main point is that even if you implement a proper bytes/sec limit\n> ONLY for WAL, the behaviour of VACUUM rate limiting doesn't get\n> meaningfully more confusing than right now.\n> \n\nSo, why not to modify autovacuum to also use this approach? I wonder if\nthe situation there is more complicated because of multiple workers\nsharing the same budget ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 20:34:25 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 20:34:25 +0100, Tomas Vondra wrote:\n> On 2/19/19 8:22 PM, Andres Freund wrote:\n> > And my main point is that even if you implement a proper bytes/sec limit\n> > ONLY for WAL, the behaviour of VACUUM rate limiting doesn't get\n> > meaningfully more confusing than right now.\n> > \n> \n> So, why not to modify autovacuum to also use this approach? I wonder if\n> the situation there is more complicated because of multiple workers\n> sharing the same budget ...\n\nI think the main reason is that implementing a scheme like this for WAL\nrate limiting isn't a small task, but it'd be aided by the fact that\nit'd probably not be on by default, and that there'd not be any regressions\nbecause the behaviour didn't exist before. In contrast, people are\nextremely sensitive to autovacuum behaviour changes, even if it's to\nimprove autovacuum. I think it makes more sense to build the logic in a\nlower-profile case first, and then migrate autovacuum over it. Even\nleaving the maturity issue aside, reducing the scope of the project into\nmore bite-sized chunks seems to increase the likelihood of getting\nanything substantial.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 11:40:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 07:28, Robert Haas <robertmhaas@gmail.com> wrote:\n> Or maybe we should just blow up the current vacuum cost delay stuff\n> and replace it with something that is easier to tune. For example, we\n> could just have one parameter that sets the maximum read rate in kB/s\n> and another that sets the maximum dirty-page rate in kB/s. Whichever\n> limit is tighter binds. If we also have the thing that is the topic\n> of this thread, that's a third possible upper limit.\n\nI had similar thoughts when I saw that Peter's proposal didn't seem\nall that compatible with how the vacuum cost delays work today. I\nagree the cost limit would have to turn into something time based\nrather than points based.\n\nTo me, it seems just too crude to have a per-backend limit. I think\nglobal \"soft\" limits would be better. Let's say, for example, the DBA\nwould like to CREATE INDEX CONCURRENTLY on a 6TB table. They think\nthis is going to take about 36 hours, so they start the operation at\nthe start of off-peak, which is expected to last 12 hours. This means\nthe create index is going to run for 2 off-peaks and 1 on-peak. Must\nthey really configure the create index to run at a speed that is\nsuitable for running at peak-load? That's pretty wasteful as surely\nit could run much more quickly during the off-peak.\n\nI know there's debate as to if this can rate limit WAL, but, if we can\nfind a way to do that, then it seems to me some settings like:\n\nmax_soft_global_wal_rate (MB/sec)\nmin_hard_local_wal_rate (MB/sec)\n\nThat way the rate limited process would slow down to\nmin_hard_local_wal_rate when the WAL rate of all processes is\nexceeding max_soft_global_wal_rate. The min_hard_local_wal_rate is\njust there to ensure the process never stops completely. 
It can simply\ntick along at that rate until the global WAL rate slows down again.\n\nIt's likely going to be easier to do something like that in WAL than\nwith buffer read/write/dirties since we already can easily see how\nmuch WAL has been written by looking at the current LSN.\n\n(Of course, these GUC names are not very good. I just picked them out\nthe air quickly to try and get their meaning across)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 09:56:18 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "\n\nOn 2/19/19 8:40 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-19 20:34:25 +0100, Tomas Vondra wrote:\n>> On 2/19/19 8:22 PM, Andres Freund wrote:\n>>> And my main point is that even if you implement a proper bytes/sec limit\n>>> ONLY for WAL, the behaviour of VACUUM rate limiting doesn't get\n>>> meaningfully more confusing than right now.\n>>>\n>>\n>> So, why not to modify autovacuum to also use this approach? I wonder if\n>> the situation there is more complicated because of multiple workers\n>> sharing the same budget ...\n> \n> I think the main reason is that implementing a scheme like this for WAL\n> rate limiting isn't a small task, but it'd be aided by the fact that\n> it'd probably not on by default, and that there'd not be any regressions\n> because the behaviour didn't exist before. I contrast, people are\n> extremely sensitive to autovacuum behaviour changes, even if it's to\n> improve autovacuum. I think it makes more sense to build the logic in a\n> lower profile case first, and then migrate autovacuum over it. Even\n> leaving the maturity issue aside, reducing the scope of the project into\n> more bite sized chunks seems to increase the likelihood of getting\n> anything substantially.\n> \n\nMaybe.\n\nI guess the main thing I'm advocating for here is to aim for a unified\nthrottling approach, not multiple disparate approaches interacting in\nways that are hard to understand/predict.\n\nThe time-based approach you described looks fine, and it's kinda what I\nwas imagining (and not unlike the checkpoint throttling). I don't think\nit'd be that hard to tweak autovacuum to use it too, but I admit I have\nnot thought about it particularly hard and there's stuff like per-table\nsettings which might make it more complex.\n\nSo maybe doing it for WAL first makes sense. 
But I still think we need\nto have at least a rough idea how it interacts with the existing\nthrottling and how it will work in the end.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 00:28:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On 2/19/19 8:40 PM, Andres Freund wrote:\n> > On 2019-02-19 20:34:25 +0100, Tomas Vondra wrote:\n> >> On 2/19/19 8:22 PM, Andres Freund wrote:\n> >>> And my main point is that even if you implement a proper bytes/sec limit\n> >>> ONLY for WAL, the behaviour of VACUUM rate limiting doesn't get\n> >>> meaningfully more confusing than right now.\n> >>\n> >> So, why not to modify autovacuum to also use this approach? I wonder if\n> >> the situation there is more complicated because of multiple workers\n> >> sharing the same budget ...\n> > \n> > I think the main reason is that implementing a scheme like this for WAL\n> > rate limiting isn't a small task, but it'd be aided by the fact that\n> > it'd probably not on by default, and that there'd not be any regressions\n> > because the behaviour didn't exist before. I contrast, people are\n> > extremely sensitive to autovacuum behaviour changes, even if it's to\n> > improve autovacuum. I think it makes more sense to build the logic in a\n> > lower profile case first, and then migrate autovacuum over it. Even\n> > leaving the maturity issue aside, reducing the scope of the project into\n> > more bite sized chunks seems to increase the likelihood of getting\n> > anything substantially.\n> \n> Maybe.\n\nI concur with that 'maybe'. :)\n\n> I guess the main thing I'm advocating for here is to aim for a unified\n> throttling approach, not multiple disparate approaches interacting in\n> ways that are hard to understand/predict.\n\nYes, agreed.\n\n> The time-based approach you described looks fine, an it's kinda what I\n> was imagining (and not unlike the checkpoint throttling). 
I don't think\n> it'd be that hard to tweak autovacuum to use it too, but I admit I have\n> not thought about it particularly hard and there's stuff like per-table\n> settings which might make it more complex.\n\nWhen reading Andres' proposal, I was heavily reminded of how checkpoint\nthrottling is handled and wondered if there might be some way to reuse\nor generalize that existing code/technique/etc and make it available to\nbe used for WAL, and more-or-less any/every other bulk operation (CREATE\nINDEX, REINDEX, CLUSTER, VACUUM...).\n\n> So maybe doing it for WAL first makes sense. But I still think we need\n> to have at least a rough idea how it interacts with the existing\n> throttling and how it will work in the end.\n\nWell, it seems like Andres explained how it'll work with the existing\nthrottling, no?  As for how all of this will work in the end, that's a\ngood question but also a rather difficult one to answer, I suspect.\n\nJust to share a few additional thoughts after pondering this for a\nwhile, but the comment Andres made up-thread really struck a chord- we\ndon't necessarily want to throttle anything, what we'd really rather do\nis *prioritize* things, whereby foreground work (regular queries and\nsuch) has a higher priority than background/bulk work (VACUUM, REINDEX,\netc) but otherwise we use the system to its full capacity.  We don't\nactually want to throttle a VACUUM run any more than a CREATE INDEX, we\njust don't want those to hurt the performance of regular queries that\nare happening.\n\nThe other thought I had was that doing things on a per-table basis, in\nparticular, isn't really addressing the resource question appropriately.\nWAL is relatively straight-forward and independent of a resource from\nthe IO for the heap/indexes, so getting an idea from the admin of how\nmuch capacity they have for WAL makes sense. 
When it comes to the\ncapacity for the heap/indexes, in terms of IO, that really goes to the\nunderlying storage system/channel, which would actually be a tablespace\nin properly set up environments (imv anyway).\n\nWrapping this up- it seems unlikely that we're going to get a\npriority-based system in place any time particularly soon but I do think\nit's worthy of some serious consideration and discussion about how we\nmight be able to get there. On the other hand, if we can provide a way\nfor the admin to say \"these are my IO channels (per-tablespace values,\nplus a value for WAL), here's what their capacity is, and here's how\nmuch buffer for foreground work I want to have (again, per IO channel),\nso, PG, please arrange to not use more than 'capacity-buffer' amount of\nresources for background/bulk tasks (per IO channel)\" then we can at\nleast help them address the issue that foreground tasks are being\nstalled or delayed due to background/bulk work. This approach means\nthat they won't be utilizing the system to its full capacity, but\nthey'll know that and they'll know that it's because, for them, it's\nmore important that they have that low latency for foreground tasks.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 20 Feb 2019 16:43:35 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On 2/20/19 10:43 PM, Stephen Frost wrote:\n> Greetings,\n> ...\n>>\n>> So maybe doing it for WAL first makes sense. But I still think we need\n>> to have at least a rough idea how it interacts with the existing\n>> throttling and how it will work in the end.\n> \n> Well, it seems like Andres explained how it'll work with the existing\n> throttling, no? As for how all of this will work in the end, that's a\n> good question but also a rather difficult one to answer, I suspect.\n> \n\nWell ... he explained how to do WAL throttling, and I agree what he\nproposed seems entirely sane to me.\n\nBut when it comes to interactions with current vacuum cost-based\nthrottling, he claims it does not get meaningfully more confusing due to\ninteractions with WAL throttling. I don't quite agree with that, but I'm\nnot going to beat this horse any longer ...\n\n> Just to share a few additional thoughts after pondering this for a\n> while, but the comment Andres made up-thread really struck a chord- we\n> don't necessairly want to throttle anything, what we'd really rather do\n> is *prioritize* things, whereby foreground work (regular queries and\n> such) have a higher priority than background/bulk work (VACUUM, REINDEX,\n> etc) but otherwise we use the system to its full capacity. We don't\n> actually want to throttle a VACUUM run any more than a CREATE INDEX, we\n> just don't want those to hurt the performance of regular queries that\n> are happening.\n> \n\nI think you're forgetting the motivation of this very patch was to\nprevent replication lag caused by a command generating large amounts of\nWAL (like CREATE INDEX / ALTER TABLE etc.). 
That has almost nothing to\ndo with prioritization or foreground/background split.\n\nI'm not arguing against the ability to prioritize stuff, but I disagree it\nsomehow replaces throttling.\n\n> The other thought I had was that doing things on a per-table basis, in\n> particular, isn't really addressing the resource question appropriately.\n> WAL is relatively straight-forward and independent of a resource from\n> the IO for the heap/indexes, so getting an idea from the admin of how\n> much capacity they have for WAL makes sense.  When it comes to the\n> capacity for the heap/indexes, in terms of IO, that really goes to the\n> underlying storage system/channel, which would actually be a tablespace\n> in properly set up environments (imv anyway).\n> \n> Wrapping this up- it seems unlikely that we're going to get a\n> priority-based system in place any time particularly soon but I do think\n> it's worthy of some serious consideration and discussion about how we\n> might be able to get there.  On the other hand, if we can provide a way\n> for the admin to say \"these are my IO channels (per-tablespace values,\n> plus a value for WAL), here's what their capacity is, and here's how\n> much buffer for foreground work I want to have (again, per IO channel),\n> so, PG, please arrange to not use more than 'capacity-buffer' amount of\n> resources for background/bulk tasks (per IO channel)\" then we can at\n> least help them address the issue that foreground tasks are being\n> stalled or delayed due to background/bulk work.  This approach means\n> that they won't be utilizing the system to its full capacity, but\n> they'll know that and they'll know that it's because, for them, it's\n> more important that they have that low latency for foreground tasks.\n> \n\nI think it's mostly an orthogonal feature to throttling.\n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 21 Feb 2019 00:35:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On 2/20/19 10:43 PM, Stephen Frost wrote:\n> > Just to share a few additional thoughts after pondering this for a\n> > while, but the comment Andres made up-thread really struck a chord- we\n> > don't necessairly want to throttle anything, what we'd really rather do\n> > is *prioritize* things, whereby foreground work (regular queries and\n> > such) have a higher priority than background/bulk work (VACUUM, REINDEX,\n> > etc) but otherwise we use the system to its full capacity. We don't\n> > actually want to throttle a VACUUM run any more than a CREATE INDEX, we\n> > just don't want those to hurt the performance of regular queries that\n> > are happening.\n> \n> I think you're forgetting the motivation of this very patch was to\n> prevent replication lag caused by a command generating large amounts of\n> WAL (like CREATE INDEX / ALTER TABLE etc.). That has almost nothing to\n> do with prioritization or foreground/background split.\n> \n> I'm not arguing against ability to prioritize stuff, but I disagree it\n> somehow replaces throttling.\n\nWhy is replication lag an issue though? I would contend it's an issue\nbecause with sync replication, it makes foreground processes wait, and\nwith async replication, it makes the actions of foreground processes\nshow up late on the replicas.\n\nIf the actual WAL records for the foreground processes got priority and\nwere pushed out earlier than the background ones, that would eliminate\nboth of those issues with replication lag. 
Perhaps there's other issues\nthat replication lag cause but which aren't solved by prioritizing the\nactual WAL records that you care about getting to the replicas faster,\nbut if so, I'd like to hear what those are.\n\n> > The other thought I had was that doing things on a per-table basis, in\n> > particular, isn't really addressing the resource question appropriately.\n> > WAL is relatively straight-forward and independent of a resource from\n> > the IO for the heap/indexes, so getting an idea from the admin of how\n> > much capacity they have for WAL makes sense. When it comes to the\n> > capacity for the heap/indexes, in terms of IO, that really goes to the\n> > underlying storage system/channel, which would actually be a tablespace\n> > in properly set up environments (imv anyway).\n> > \n> > Wrapping this up- it seems unlikely that we're going to get a\n> > priority-based system in place any time particularly soon but I do think\n> > it's worthy of some serious consideration and discussion about how we\n> > might be able to get there. On the other hand, if we can provide a way\n> > for the admin to say \"these are my IO channels (per-tablespace values,\n> > plus a value for WAL), here's what their capacity is, and here's how\n> > much buffer for foreground work I want to have (again, per IO channel),\n> > so, PG, please arrange to not use more than 'capacity-buffer' amount of\n> > resources for background/bulk tasks (per IO channel)\" then we can at\n> > least help them address the issue that foreground tasks are being\n> > stalled or delayed due to background/bulk work. This approach means\n> > that they won't be utilizing the system to its full capacity, but\n> > they'll know that and they'll know that it's because, for them, it's\n> > more important that they have that low latency for foreground tasks.\n> \n> I think it's mostly orthogonal feature to throttling.\n\nI'm... 
not sure that what I was getting at above really got across.\n\nWhat I was saying above, in a nutshell, is that if we're going to\nprovide throttling then we should give users a way to configure the\nthrottling on a per-IO-channel basis, which means at the tablespace\nlevel, plus an independent configuration option for WAL since we allow\nthat to be placed elsewhere too.\n\nIdeally, the configuration parameter would be in the same units as the\nactual resource is too- which would probably be IOPS+bandwidth, really.\nJust doing it in terms of bandwidth ends up being a bit of a mismatch\nas compared to reality, and would mean that users would have to tune it\ndown farther than they might otherwise and therefore give up that much\nmore in terms of system capability.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 20 Feb 2019 18:46:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-20 18:46:09 -0500, Stephen Frost wrote:\n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> > On 2/20/19 10:43 PM, Stephen Frost wrote:\n> > > Just to share a few additional thoughts after pondering this for a\n> > > while, but the comment Andres made up-thread really struck a chord- we\n> > > don't necessairly want to throttle anything, what we'd really rather do\n> > > is *prioritize* things, whereby foreground work (regular queries and\n> > > such) have a higher priority than background/bulk work (VACUUM, REINDEX,\n> > > etc) but otherwise we use the system to its full capacity. We don't\n> > > actually want to throttle a VACUUM run any more than a CREATE INDEX, we\n> > > just don't want those to hurt the performance of regular queries that\n> > > are happening.\n> > \n> > I think you're forgetting the motivation of this very patch was to\n> > prevent replication lag caused by a command generating large amounts of\n> > WAL (like CREATE INDEX / ALTER TABLE etc.). That has almost nothing to\n> > do with prioritization or foreground/background split.\n> > \n> > I'm not arguing against ability to prioritize stuff, but I disagree it\n> > somehow replaces throttling.\n> \n> Why is replication lag an issue though? I would contend it's an issue\n> because with sync replication, it makes foreground processes wait, and\n> with async replication, it makes the actions of foreground processes\n> show up late on the replicas.\n\nI think reaching the bandwidth limit of either the replication stream,\nor of the startup process is actually more common than these. And for\nthat prioritization doesn't help, unless it somehow reduces the total\namount of WAL.\n\n\n> If the actual WAL records for the foreground processes got priority and\n> were pushed out earlier than the background ones, that would eliminate\n> both of those issues with replication lag. 
Perhaps there's other issues\n> that replication lag cause but which aren't solved by prioritizing the\n> actual WAL records that you care about getting to the replicas faster,\n> but if so, I'd like to hear what those are.\n\nWait, what? Are you actually suggesting that different sources of WAL\nrecords should be streamed out in different order? You're blowing a\nsomewhat reasonably doable project up into \"let's rewrite a large chunk\nof all of the durability layer in postgres\".\n\n\nStephen, we gotta stop blowing up projects into something that can't\never realistically be finished.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 20 Feb 2019 15:54:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-02-20 18:46:09 -0500, Stephen Frost wrote:\n> > * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> > > On 2/20/19 10:43 PM, Stephen Frost wrote:\n> > > > Just to share a few additional thoughts after pondering this for a\n> > > > while, but the comment Andres made up-thread really struck a chord- we\n> > > > don't necessairly want to throttle anything, what we'd really rather do\n> > > > is *prioritize* things, whereby foreground work (regular queries and\n> > > > such) have a higher priority than background/bulk work (VACUUM, REINDEX,\n> > > > etc) but otherwise we use the system to its full capacity. We don't\n> > > > actually want to throttle a VACUUM run any more than a CREATE INDEX, we\n> > > > just don't want those to hurt the performance of regular queries that\n> > > > are happening.\n> > > \n> > > I think you're forgetting the motivation of this very patch was to\n> > > prevent replication lag caused by a command generating large amounts of\n> > > WAL (like CREATE INDEX / ALTER TABLE etc.). That has almost nothing to\n> > > do with prioritization or foreground/background split.\n> > > \n> > > I'm not arguing against ability to prioritize stuff, but I disagree it\n> > > somehow replaces throttling.\n> > \n> > Why is replication lag an issue though? I would contend it's an issue\n> > because with sync replication, it makes foreground processes wait, and\n> > with async replication, it makes the actions of foreground processes\n> > show up late on the replicas.\n> \n> I think reaching the bandwidth limit of either the replication stream,\n> or of the startup process is actually more common than these. And for\n> that prioritization doesn't help, unless it somehow reduces the total\n> amount of WAL.\n\nThe issue with hitting those bandwidth limits is that you end up with\nqueues outside of your control and therefore are unable to prioritize\nthe data going through them. 
I agree, that's an issue and it might be\nnecessary to ask the admin to provide what the bandwidth limit is, so\nthat we could then avoid running into issues with downstream queues that\nare outside of our control causing unexpected/unacceptable lag.\n\n> > If the actual WAL records for the foreground processes got priority and\n> > were pushed out earlier than the background ones, that would eliminate\n> > both of those issues with replication lag. Perhaps there's other issues\n> > that replication lag cause but which aren't solved by prioritizing the\n> > actual WAL records that you care about getting to the replicas faster,\n> > but if so, I'd like to hear what those are.\n> \n> Wait, what? Are you actually suggesting that different sources of WAL\n> records should be streamed out in different order? You're blowing a\n> somewhat reasonably doable project up into \"let's rewrite a large chunk\n> of all of the durability layer in postgres\".\n> \n> Stephen, we gotta stop blowing up projects into something that can't\n> ever realistically be finished.\n\nI started this sub-thread specifically the way I did because I was\ntrying to make it clear that these were just ideas for possible\ndiscussion- I'm *not* suggesting, nor saying, that we have to go\nimplement this right now instead of implementing the throttling that\nstarted this thread. 
I'm also, to be clear, not objecting to\nimplementing the throttling discussed (though, as mentioned but\nseemingly ignored, I'd see it maybe configurable in different ways than\noriginally suggested).\n\nIf there's a way I can get that across more clearly than saying \"Just to\nshare a few additional thoughts\", I'm happy to try and do so, but I\ndon't agree that I should be required to simply keep such thoughts to\nmyself; indeed, I'll admit that I don't know how large a project this\nwould actually be and while I figured it'd be *huge*, I wanted to share\nthe thought in case someone might see a way that we could implement it\nwith much less work and have a better solution as a result.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 20 Feb 2019 19:20:15 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 2:20 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-02-20 18:46:09 -0500, Stephen Frost wrote:\n> > > * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> > > > On 2/20/19 10:43 PM, Stephen Frost wrote:\n> > > > > Just to share a few additional thoughts after pondering this for a\n> > > > > while, but the comment Andres made up-thread really struck a\n> chord- we\n> > > > > don't necessairly want to throttle anything, what we'd really\n> rather do\n> > > > > is *prioritize* things, whereby foreground work (regular queries\n> and\n> > > > > such) have a higher priority than background/bulk work (VACUUM,\n> REINDEX,\n> > > > > etc) but otherwise we use the system to its full capacity. We\n> don't\n> > > > > actually want to throttle a VACUUM run any more than a CREATE\n> INDEX, we\n> > > > > just don't want those to hurt the performance of regular queries\n> that\n> > > > > are happening.\n> > > >\n> > > > I think you're forgetting the motivation of this very patch was to\n> > > > prevent replication lag caused by a command generating large amounts\n> of\n> > > > WAL (like CREATE INDEX / ALTER TABLE etc.). That has almost nothing\n> to\n> > > > do with prioritization or foreground/background split.\n> > > >\n> > > > I'm not arguing against ability to prioritize stuff, but I disagree\n> it\n> > > > somehow replaces throttling.\n> > >\n> > > Why is replication lag an issue though? I would contend it's an issue\n> > > because with sync replication, it makes foreground processes wait, and\n> > > with async replication, it makes the actions of foreground processes\n> > > show up late on the replicas.\n> >\n> > I think reaching the bandwidth limit of either the replication stream,\n> > or of the startup process is actually more common than these. 
And for\n> > that prioritization doesn't help, unless it somehow reduces the total\n> > amount of WAL.\n>\n> The issue with hitting those bandwidth limits is that you end up with\n> queues outside of your control and therefore are unable to prioritize\n> the data going through them. I agree, that's an issue and it might be\n> necessary to ask the admin to provide what the bandwidth limit is, so\n> that we could then avoid running into issues with downstream queues that\n> are outside of our control causing unexpected/unacceptable lag.\n>\n\nIf there is a global rate limit on WAL throughput it could be adjusted by a\ncontrol loop, measuring replication queue length and/or apply delay. I\ndon't see any sane way how one would tune a per command rate limit, or even\nworse, a cost-delay parameter. It would have the same problems as work_mem\nsettings.\n\nRate limit in front of WAL insertion would allow for allocating the\nthroughput between foreground and background tasks, and even allow for\npriority inheritance to alleviate priority inversion due to locks.\n\nThere is also an implicit assumption here that a maintenance command is a\nbackground task and a normal DML query is a foreground task. 
This is not\ntrue for all cases, users may want to throttle transactions doing lots of\nDML to keep synchronous commit latencies for smaller transactions within\nreasonable limits.\n\nAs a wild idea for how to handle the throttling, what if when all our wal\ninsertion credits are used up XLogInsert() sets InterruptPending and the\nactual sleep is done inside ProcessInterrupts()?\n\nRegards,\nAnts Aasma",
"msg_date": "Thu, 21 Feb 2019 11:06:44 +0200",
"msg_from": "Ants Aasma <ants.aasma@eesti.ee>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Ants Aasma (ants.aasma@eesti.ee) wrote:\n> On Thu, Feb 21, 2019 at 2:20 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > The issue with hitting those bandwidth limits is that you end up with\n> > queues outside of your control and therefore are unable to prioritize\n> > the data going through them. I agree, that's an issue and it might be\n> > necessary to ask the admin to provide what the bandwidth limit is, so\n> > that we could then avoid running into issues with downstream queues that\n> > are outside of our control causing unexpected/unacceptable lag.\n> \n> If there is a global rate limit on WAL throughput it could be adjusted by a\n> control loop, measuring replication queue length and/or apply delay. I\n> don't see any sane way how one would tune a per command rate limit, or even\n> worse, a cost-delay parameter. It would have the same problems as work_mem\n> settings.\n\nYeah, having some kind of feedback loop would be interesting. I agree\nthat a per-command rate limit would have similar problems to work_mem,\nand that's definitely one problem we have with the way VACUUM is tuned\ntoday but the ship has more-or-less sailed on that- I don't think we're\ngoing to be able to simply remove the VACUUM settings. 
Avoiding adding\nnew settings that are per-command would be good though, if we can sort\nout a way how.\n\n> Rate limit in front of WAL insertion would allow for allocating the\n> throughput between foreground and background tasks, and even allow for\n> priority inheritance to alleviate priority inversion due to locks.\n\nI'm not sure how much we have to worry about priority inversion here as\nyou need to have conflicts for that and if there's actually a conflict,\nthen it seems like we should just press on.\n\nThat is, a non-concurrent REINDEX is going to prevent an UPDATE from\nmodifying anything in the table, which if the UPDATE is a higher\npriority than the REINDEX would be priority inversion, but that doesn't\nmean we should slow down the REINDEX to allow the UPDATE to happen\nbecause the UPDATE simply can't happen until the REINDEX is complete.\nNow, we might slow down the REINDEX because there's UPDATEs against\n*other* tables that aren't conflicting and we want those UPDATEs to be\nprioritized over the REINDEX but then that isn't priority inversion.\n\nBasically, I'm not sure that there's anything we can do, or need to do,\ndifferently from what we do today when it comes to priority inversion\nrisk, at least as it relates to this discussion. There's an interesting\ndiscussion to be had about if we should delay the REINDEX taking the\nlock at all when there's an UPDATE pending, but you run the risk of\nstarving the REINDEX from ever getting the lock and being able to run in\nthat case. A better approach is what we're already working on- arrange\nfor the REINDEX to not require a conflicting lock, so that both can run\nconcurrently.\n\n> There is also an implicit assumption here that a maintenance command is a\n> background task and a normal DML query is a foreground task. 
This is not\n> true for all cases, users may want to throttle transactions doing lots of\n> DML to keep synchronous commit latencies for smaller transactions within\n> reasonable limits.\n\nAgreed, that was something that I was contemplating too- and one could\npossibly argue in the other direction as well (maybe that REINDEX is on\na small table but has a very high priority and we're willing to accept\nthat some regular DML is delayed a bit to allow that REINDEX to finish).\nStill, I would think we'd basically want to use the heuristic that DDL\nis bulk and DML is a higher priority for a baseline/default position,\nbut then provide users with a way to change the priority on a\nper-session level, presumably with a GUC or similar, if they have a case\nwhere that heuristic is wrong.\n\nAgain, just to be clear, this is all really 'food for thought' and\ninteresting discussion and shouldn't keep us from doing something simple\nnow, if we can, to help alleviate the immediate practical issue that\nbulk commands can cause serious WAL lag. I think it's good to have\nthese discussions since they may help us to craft the simple solution in\na way that could later be extended (or at least won't get in the way)\nfor these much larger changes, but even if that's not possible, we\nshould be open to accepting a simpler, short-term, improvement, as these\nlarger changes would very likely be multiple major releases away if\nthey're able to be done at all.\n\n> As a wild idea for how to handle the throttling, what if when all our wal\n> insertion credits are used up XLogInsert() sets InterruptPending and the\n> actual sleep is done inside ProcessInterrupts()?\n\nThis comment might be better if it was made higher up in the thread,\ncloser to where the discussion was happening about the issues with\ncritical sections and the current patch's approach for throttle-based\nrate limiting. 
I'm afraid that it might get lost in this sub-thread\nabout these much larger and loftier ideas around where we might want to\ngo in the future.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 21 Feb 2019 05:50:25 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 12:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> > Rate limit in front of WAL insertion would allow for allocating the\n> > throughput between foreground and background tasks, and even allow for\n> > priority inheritance to alleviate priority inversion due to locks.\n>\n> I'm not sure how much we have to worry about priority inversion here as\n> you need to have conflicts for that and if there's actually a conflict,\n> then it seems like we should just press on.\n>\n> That is, a non-concurrent REINDEX is going to prevent an UPDATE from\n> modifying anything in the table, which if the UPDATE is a higher\n> priority than the REINDEX would be priority inversion, but that doesn't\n> mean we should slow down the REINDEX to allow the UPDATE to happen\n> because the UPDATE simply can't happen until the REINDEX is complete.\n> Now, we might slow down the REINDEX because there's UPDATEs against\n> *other* tables that aren't conflicting and we want those UPDATEs to be\n> prioritized over the REINDEX but then that isn't priority inversion.\n>\n\nI was thinking along the lines that each backend gets a budget of WAL\ninsertion credits per time interval, and when the credits run out the\nprocess sleeps. 
With this type of scheme it would be reasonably\nstraightforward to let UPDATEs being blocked by REINDEX to transfer their\nWAL insertion budgets to the REINDEX, making it get a larger piece of the\ntotal throughput pie.\n\nRegards,\nAnts Aasma",
"msg_date": "Thu, 21 Feb 2019 15:06:58 +0200",
"msg_from": "Ants Aasma <ants.aasma@eesti.ee>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
},
{
"msg_contents": "Greetings,\n\n* Ants Aasma (ants.aasma@eesti.ee) wrote:\n> On Thu, Feb 21, 2019 at 12:50 PM Stephen Frost <sfrost@snowman.net> wrote:\n> \n> > > Rate limit in front of WAL insertion would allow for allocating the\n> > > throughput between foreground and background tasks, and even allow for\n> > > priority inheritance to alleviate priority inversion due to locks.\n> >\n> > I'm not sure how much we have to worry about priority inversion here as\n> > you need to have conflicts for that and if there's actually a conflict,\n> > then it seems like we should just press on.\n> >\n> > That is, a non-concurrent REINDEX is going to prevent an UPDATE from\n> > modifying anything in the table, which if the UPDATE is a higher\n> > priority than the REINDEX would be priority inversion, but that doesn't\n> > mean we should slow down the REINDEX to allow the UPDATE to happen\n> > because the UPDATE simply can't happen until the REINDEX is complete.\n> > Now, we might slow down the REINDEX because there's UPDATEs against\n> > *other* tables that aren't conflicting and we want those UPDATEs to be\n> > prioritized over the REINDEX but then that isn't priority inversion.\n> \n> I was thinking along the lines that each backend gets a budget of WAL\n> insertion credits per time interval, and when the credits run out the\n> process sleeps. 
With this type of scheme it would be reasonably\n> straightforward to let UPDATEs being blocked by REINDEX to transfer their\n> WAL insertion budgets to the REINDEX, making it get a larger piece of the\n> total throughput pie.\n\nSure, that could possibly be done if it's a per-backend throttle\nmechanism, but that's got more-or-less the same issue as a per-command\nmechanism and work_mem as discussed up-thread.\n\nAlso seems like if we've solved for a way to do this transferring and\ndelay and such that we could come up with a way to prioritize (or 'give\nmore credits') to foreground and less to background (there was another\npoint made elsewhere in the thread that background processes should\nstill be given *some* amount of credits, always, so that they don't end\nup starving completely, and I agree with that).\n\nThere's actually a lot of similarity or parallels between this and basic\nnetwork traffic shaping, it seems to me anyway, where you have a pipe of\na certain size and you want to prioritize some things (like interactive\nSSH) while de-prioritizing other things (bulk SCP) but also using the\npipe fully.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 21 Feb 2019 08:50:38 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WAL insert delay settings"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nWe have held education project at Sirius edu center (Sochi, Russia) with mentors from Yandex. The group of 5 students was working on improving the shared buffers eviction algorithm: Andrey Chausov, Yuriy Skakovskiy, Ivan Edigaryev, Arslan Gumerov, Daria Filipetskaya. I’ve been a mentor for the group. For two weeks we have been looking into known caching algorithms and tried to adapt some of them for PostgreSQL codebase.\nWhile a lot of algorithms appeared to be too complex to be hacked in 2 weeks, we decided to implement and test the working version of tweaked LRU eviction algorithm.\n\n===How it works===\nMost of the buffers are organized into the linked list. Firstly admitted pages jump into 5/8th of the queue. The last ⅛ of the queue is governed by clock sweep algorithm to improve concurrency.\n\n===So how we tested the patch===\nWe used sampling on 4 Yandex.Cloud compute instances with 16 vCPU cores, 8GB of RAM, 200GB database in 30-minute YCSB-like runs with Zipf distribution. We found that on read-only workload our algorithm is showing consistent improvement over the current master branch. On read-write workloads we haven’t found performance improvements yet, there was too much noise from checkpoints and bgwriter (more on it in TODO section).\nCharts are here: [0,1]\nWe used this config: [2]\n\n===TODO===\nWe have taken some ideas expressed by Ben Manes in the pgsql-hackers list. But we could not implement all of them during the time of the program. For example, we tried to make LRU bumps less write-contentious by storing them in a circular buffer. But this feature was not stable enough.\nThe patch in its current form also requires improvements. So, we shall reduce the number of locks at all (in this way we have tried bufferization, but not only it). 
“Clock sweep” algorithm at the last ⅛ part of the logical sequence should be improved too (see ClockSweepTickAtTheAnd() and places of its usage).\nUnfortunately, we didn’t have more time to bring CAR and W-TinyLFU to testing-ready state.\nWe have a working implementation of frequency sketch [3] to make a transition between the admission cycle and LRU more concise with TinyLFU filter. Most probably, work in this direction will be continued.\nAlso, the current patch does not change bgwriter behavior: with a piece of knowledge from LRU, we can predict that some pages will not be changed in the nearest future. This information should be used to schedule the background writes better.\nWe also think that with proper eviction algorithm shared buffers should be used instead of specialized buffer rings.\n\nWe will be happy to hear your feedback on our work! Thank you :)\n\n[0] LRU TPS https://yadi.sk/i/wNNmNw3id22nMQ\n[1] LRU hitrate https://yadi.sk/i/l1o710C3IX6BiA\n[2] Benchmark config https://yadi.sk/d/L-PIVq--ujw6NA\n[3] Frequency sketch code https://github.com/heyfaraday/postgres/commit/a684a139a72cd50dd0f9d031a8aa77f998607cf1\n\nWith best regards, almost serious cache group.",
"msg_date": "Wed, 13 Feb 2019 17:37:26 +0300",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "[Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "\nHi,\n\nOn 2/13/19 3:37 PM, Andrey Borodin wrote:\n> Hi, hackers!\n> \n> We have held education project at Sirius edu center (Sochi, Russia)\n> with mentors from Yandex. The group of 5 students was working on\n> improving the shared buffers eviction algorithm: Andrey Chausov, Yuriy\n> Skakovskiy, Ivan Edigaryev, Arslan Gumerov, Daria Filipetskaya. I’ve\n> been a mentor for the group. For two weeks we have been looking into\n> known caching algorithms and tried to adapt some of them for PostgreSQL\n> codebase.\n>\n> While a lot of algorithms appeared to be too complex to be hacked in \n> 2 weeks, we decided to implement and test the working version of\n> tweaked LRU eviction algorithm.\n> \n> ===How it works===\n> Most of the buffers are organized into the linked list. Firstly\n> admitted pages jump into 5/8th of the queue. The last ⅛ of the queue is\n> governed by clock sweep algorithm to improve concurrency.\n> \n\nInteresting. Where do these numbers (5/8 and 1/8) come from?\n\n> ===So how we tested the patch===\n> We used sampling on 4 Yandex.Cloud compute instances with 16 vCPU\n> cores, 8GB of RAM, 200GB database in 30-minute YCSB-like runs with Zipf\n> distribution. We found that on read-only workload our algorithm is\n> showing consistent improvement over the current master branch. On\n> read-write workloads we haven’t found performance improvements yet,\n> there was too much noise from checkpoints and bgwriter (more on it in\n> TODO section).\n> Charts are here: [0,1]\n\nThat TPS chart looks a bit ... wild. How come the master jumps so much\nup and down? That's a bit suspicious, IMHO.\n\nHow do I reproduce this benchmark? I'm aware of pg_ycsb, but maybe\nyou've used some other tools?\n\nAlso, have you tried some other benchmarks (like, regular TPC-B as\nimplemented by pgbench, or read-only pgbench)? 
We need such benchmarks\nwith a range of access patterns to check for regressions.\n\nBTW what do you mean by \"sampling\"?\n\n> We used this config: [2]\n> \n\nThat's only half the information - it doesn't say how many clients were\nrunning the benchmark etc.\n\n> ===TODO===\n> We have taken some ideas expressed by Ben Manes in the pgsql-hackers\n> list. But we could not implement all of them during the time of the\n> program. For example, we tried to make LRU bumps less write-contentious\n> by storing them in a circular buffer. But this feature was not stable\n> enough.\n\nCan you point us to the thread/email discussing those ideas? I have\ntried searching through archives, but I haven't found anything :-(\n\nThis message does not really explain the algorithms, and combined with\nthe absolute lack of comments in the linked commit, it's somewhat\ndifficult to form an opinion.\n\n> The patch in its current form also requires improvements. So, we \n> shall reduce the number of locks at all (in this way we have tried \n> bufferization, but not only it). “Clock sweep” algorithm at the last\n> ⅛ part of the logical sequence should be improved too (see \n> ClockSweepTickAtTheAnd() and places of its usage).\nOK\n\n> Unfortunately, we didn’t have more time to bring CAR and W-TinyLFU\n> to testing-ready state.\n\nWhat is CAR? Did you mean ARC, perhaps?\n\n> We have a working implementation of frequency sketch [3] to make a\n> transition between the admission cycle and LRU more concise with TinyLFU\n> filter. Most probably, work in this direction will be continued.\n\nOK\n\n> Also, the current patch does not change bgwriter behavior: with a\n> piece of knowledge from LRU, we can predict that some pages will not be\n> changed in the nearest future. 
This information should be used to\n> schedule the background writes better.\n\nSounds interesting.\n\n> We also think that with proper eviction algorithm shared buffers\n> should be used instead of specialized buffer rings.\n> \n\nAre you suggesting to get rid of the buffer rings we use for sequential\nscans, for example? IMHO that's going to be tricky, e.g. because of\nsynchronized sequential scans.\n\n> We will be happy to hear your feedback on our work! Thank you :)\n> \n> [0] LRU TPS https://yadi.sk/i/wNNmNw3id22nMQ\n> [1] LRU hitrate https://yadi.sk/i/l1o710C3IX6BiA\n> [2] Benchmark config https://yadi.sk/d/L-PIVq--ujw6NA\n> [3] Frequency sketch code https://github.com/heyfaraday/postgres/commit/a684a139a72cd50dd0f9d031a8aa77f998607cf1\n> \n> With best regards, almost serious cache group.\n> \n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 16 Feb 2019 00:30:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 3:30 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> That TPS chart looks a bit ... wild. How come the master jumps so much\n> up and down? That's a bit suspicious, IMHO.\n\nSomebody should write a patch to make buffer eviction completely\nrandom, without aiming to get it committed. That isn't as bad of a\nstrategy as it sounds, and it would help with assessing improvements\nin this area.\n\nWe know that the cache replacement algorithm behaves randomly when\nthere is extreme contention, while also slowing everything down due to\nmaintaining the clock. An unambiguously better caching algorithm would\nat a minimum be able to beat our \"cheap random replacement\" prototype\nas well as the existing clocksweep algorithm in most or all cases.\nThat seems like a reasonably good starting point, at least.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Fri, 15 Feb 2019 15:51:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "\nOn 2/16/19 12:51 AM, Peter Geoghegan wrote:\n> On Fri, Feb 15, 2019 at 3:30 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> That TPS chart looks a bit ... wild. How come the master jumps so much\n>> up and down? That's a bit suspicious, IMHO.\n> \n> Somebody should write a patch to make buffer eviction completely \n> random, without aiming to get it committed. That isn't as bad of a \n> strategy as it sounds, and it would help with assessing improvements \n> in this area.\n> \n> We know that the cache replacement algorithm behaves randomly when \n> there is extreme contention, while also slowing everything down due\n> to maintaining the clock.\n\nPossibly, although I still find it strange that the throughput first\ngrows, then at shared_buffers 1GB it drops, and then at 3GB it starts\ngrowing again. Considering this is on 200GB data set, I doubt the\npressure/contention is much different with 1GB and 3GB, but maybe it is.\n\n> A unambiguously better caching algorithm would at a minimum be able\n> to beat our \"cheap random replacement\" prototype as well as the\n> existing clocksweep algorithm in most or all cases. That seems like a\n> reasonably good starting point, at least.\n> \n\nYes, comparison to cheap random replacement would be an interesting\nexperiment.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 16 Feb 2019 01:22:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "Hi,\n\nI was not involved with Andrey and his team's work, which looks like a very\npromising first pass. I can try to clarify a few minor details.\n\nWhat is CAR? Did you mean ARC, perhaps?\n\n\nCAR is the Clock variant of ARC: CAR: Clock with Adaptive Replacement\n<https://www.usenix.org/legacy/publications/library/proceedings/fast04/tech/full_papers/bansal/bansal.pdf>\n\nI believe the main interest in ARC is its claim of adaptability with high\nhit rates. Unfortunately the reality is less impressive as it fails to\nhandle many frequency workloads, e.g. poor scan resistance, and the\nadaptivity is poor due to the accesses being quite noisy. For W-TinyLFU, we\nhave recent improvements which show near perfect adaptivity\n<https://user-images.githubusercontent.com/378614/52891655-2979e380-3141-11e9-91b3-00002a3cc80b.png>\nin\nour stress case that results in double the hit rate of ARC and is less than\n1% from optimal.\n\nCan you point us to the thread/email discussing those ideas? I have tried\n> searching through archives, but I haven't found anything :-(\n\n\nThis thread\n<https://www.postgresql.org/message-id/1526057854777-0.post%40n3.nabble.com>\ndoesn't explain Andrey's work, but includes my minor contribution. The\nlonger thread discusses the interest in CAR, et al.\n\nAre you suggesting to get rid of the buffer rings we use for sequential\n> scans, for example? IMHO that's going to be tricky, e.g. because of\n> synchronized sequential scans.\n\n\nIf you mean \"synchronized\" in terms of multi-threading and locks, then this\nis a solved problem\n<http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html>\nin terms of caching. My limited understanding is that the buffer rings are\nused to protect the cache from being polluted by scans which flush the\nLRU-like algorithms. This allows those policies to capture more frequent\nitems. 
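The frequency-based admission behind W-TinyLFU can be sketched roughly like this (a hypothetical minimal Python version; neither Caffeine's actual implementation, which uses 4-bit counters, nor any PostgreSQL code):

```python
import hashlib

class FrequencySketch:
    # Minimal count-min sketch with periodic aging, loosely modeled on
    # the TinyLFU frequency filter discussed above (hypothetical sketch).
    def __init__(self, width=1024, depth=4, sample=10_000):
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        self.ops, self.sample = 0, sample

    def _cells(self, key):
        h = hashlib.blake2b(str(key).encode()).digest()
        for i in range(self.depth):
            yield i, int.from_bytes(h[i * 4:(i + 1) * 4], "little") % self.width

    def record(self, key):
        for i, j in self._cells(key):
            self.tables[i][j] += 1
        self.ops += 1
        if self.ops >= self.sample:          # aging: halve every counter
            self.tables = [[c // 2 for c in row] for row in self.tables]
            self.ops //= 2

    def estimate(self, key):                 # approximate access frequency
        return min(self.tables[i][j] for i, j in self._cells(key))

    def admit(self, candidate, victim):
        # TinyLFU admission: on eviction, keep the historically hotter page
        return self.estimate(candidate) > self.estimate(victim)

sketch = FrequencySketch()
for _ in range(5):
    sketch.record("hot-page")
sketch.record("scan-page")
assert sketch.admit("hot-page", "scan-page")      # frequent page wins
assert not sketch.admit("scan-page", "hot-page")  # one-touch page loses
```

With such a filter in front of the eviction policy, a one-pass scan cannot displace frequently used pages, which is the scan resistance the buffer rings currently provide by other means.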
It also avoids lock contention on the cache due to writes caused by\nmisses, where Clock allows lock-free reads but uses a global lock on\nwrites. A smarter cache eviction policy and concurrency model can handle\nthis without needing buffer rings to compensate.\n\nSomebody should write a patch to make buffer eviction completely random,\n> without aiming to get it committed. That isn't as bad of a strategy as it\n> sounds, and it would help with assessing improvements in this area.\n>\n\nA related and helpful patch would be to capture the access log and provide\nanonymized traces. We have a simulator\n<https://github.com/ben-manes/caffeine/wiki/Simulator> with dozens of\npolicies to quickly provide a breakdown. That would let you know the hit\nrates before deciding on the policy to adopt.\n\nCheers.\n\nOn Fri, Feb 15, 2019 at 4:22 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> On 2/16/19 12:51 AM, Peter Geoghegan wrote:\n> > On Fri, Feb 15, 2019 at 3:30 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> >> That TPS chart looks a bit ... wild. How come the master jumps so much\n> >> up and down? That's a bit suspicious, IMHO.\n> >\n> > Somebody should write a patch to make buffer eviction completely\n> > random, without aiming to get it committed. That isn't as bad of a\n> > strategy as it sounds, and it would help with assessing improvements\n> > in this area.\n> >\n> > We know that the cache replacement algorithm behaves randomly when\n> > there is extreme contention, while also slowing everything down due\n> > to maintaining the clock.\n>\n> Possibly, although I still find it strange that the throughput first\n> grows, then at shared_buffers 1GB it drops, and then at 3GB it starts\n> growing again. 
Considering this is on 200GB data set, I doubt the\n> pressure/contention is much different with 1GB and 3GB, but maybe it is.\n>\n> > A unambiguously better caching algorithm would at a minimum be able\n> > to beat our \"cheap random replacement\" prototype as well as the\n> > existing clocksweep algorithm in most or all cases. That seems like a\n> > reasonably good starting point, at least.\n> >\n>\n> Yes, comparison to cheap random replacement would be an interesting\n> experiment.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Fri, 15 Feb 2019 16:48:50 -0800",
"msg_from": "Benjamin Manes <ben.manes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "On 2/16/19 1:48 AM, Benjamin Manes wrote:\n> Hi,\n> \n> I was not involved with Andrey and his team's work, which looks like a\n> very promising first pass. I can try to clarify a few minor details.\n> \n> What is CAR? Did you mean ARC, perhaps?\n> \n> \n> CAR is the Clock variants of ARC: CAR: Clock with Adaptive Replacement\n> <https://www.usenix.org/legacy/publications/library/proceedings/fast04/tech/full_papers/bansal/bansal.pdf>\n> \n\nThanks, will check.\n\n> I believe the main interest in ARC is its claim of adaptability with\n> high hit rates. Unfortunately the reality is less impressive as it fails\n> to handle many frequency workloads, e.g. poor scan resistance, and the\n> adaptivity is poor due to the accesses being quite noisy. For W-TinyLFU,\n> we have recent improvements which show near perfect adaptivity\n> <https://user-images.githubusercontent.com/378614/52891655-2979e380-3141-11e9-91b3-00002a3cc80b.png> in\n> our stress case that results in double the hit rate of ARC and is less\n> than 1% from optimal.\n> \n\nInteresting.\n\n> Can you point us to the thread/email discussing those ideas? I have\n> tried searching through archives, but I haven't found anything :-(\n> \n> \n> This thread\n> <https://www.postgresql.org/message-id/1526057854777-0.post%40n3.nabble.com>\n> doesn't explain Andrey's work, but includes my minor contribution. The\n> longer thread discusses the interest in CAR, et al.\n> \n\nThanks.\n\n> Are you suggesting to get rid of the buffer rings we use for\n> sequential scans, for example? IMHO that's going to be tricky, e.g.\n> because of synchronized sequential scans.\n> \n> \n> If you mean \"synchronized\" in terms of multi-threading and locks, then\n> this is a solved problem\n> <http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html> in\n> terms of caching.\n\nNo, \"synchronized scans\" are an optimization to reduce I/O when\nmultiple processes do sequential scan on the same table. 
Say one process\nis half-way through the table, when another process starts another scan.\nInstead of scanning it from block #0 we start at the position of the\nfirst process (at which point they \"synchronize\") and then wrap around\nto read the beginning.\n\nI was under the impression that this somehow depends on the small\ncircular buffers, but I've now checked the code and I see that's bogus.\n\n\n> My limited understanding is that the buffer rings are\n> used to protect the cache from being polluted by scans which flush the\n> LRU-like algorithms. This allows those policies to capture more frequent\n> items. It also avoids lock contention on the cache due to writes caused\n> by misses, where Clock allows lock-free reads but uses a global lock on\n> writes. A smarter cache eviction policy and concurrency model can handle\n> this without needing buffer rings to compensate.\n> \n\nRight - that's the purpose of circular buffers.\n\n> Somebody should write a patch to make buffer eviction completely\n> random, without aiming to get it committed. That isn't as bad of a\n> strategy as it sounds, and it would help with assessing improvements\n> in this area.\n> \n> \n> A related and helpful patch would be to capture the access log and\n> provide anonymized traces. We have a simulator\n> <https://github.com/ben-manes/caffeine/wiki/Simulator> with dozens of\n> policies to quickly provide a breakdown. That would let you know the hit\n> rates before deciding on the policy to adopt.\n> \n\nInteresting. I assume the trace is essentially a log of which blocks\nwere requested? Is there some trace format specification somewhere?\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 16 Feb 2019 02:51:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": ">\n> No, I \"synchronized scans\" are an optimization to reduce I/O when multiple\n> processes do sequential scan on the same table.\n\n\nOh, very neat. Thanks!\n\nInteresting. I assume the trace is essentially a log of which blocks were\n> requested? Is there some trace format specification somewhere?\n>\n\nYes, whatever constitutes a cache key (block address, item hash, etc). We\nwrite parsers for each trace so there isn't a format requirement. The\nparsers produce a 64-bit int stream of keys, which are broadcasted to each\npolicy via an actor framework. The trace files can be text or binary,\noptionally compressed (xz preferred). The ARC traces are block I/O and this\nis their format description\n<https://www.dropbox.com/sh/9ii9sc7spcgzrth/j1CJ72HiWa/Papers/ARCTraces/README.txt>,\nso perhaps something similar? That parser is only 5 lines of custom code\n<https://github.com/ben-manes/caffeine/blob/b752c586f7bf143f774a51a0a10593ae3b77802b/simulator/src/main/java/com/github/benmanes/caffeine/cache/simulator/parser/arc/ArcTraceReader.java#L36-L42>\n.\n\nOn Fri, Feb 15, 2019 at 5:51 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On 2/16/19 1:48 AM, Benjamin Manes wrote:\n> > Hi,\n> >\n> > I was not involved with Andrey and his team's work, which looks like a\n> > very promising first pass. I can try to clarify a few minor details.\n> >\n> > What is CAR? Did you mean ARC, perhaps?\n> >\n> >\n> > CAR is the Clock variants of ARC: CAR: Clock with Adaptive Replacement\n> > <\n> https://www.usenix.org/legacy/publications/library/proceedings/fast04/tech/full_papers/bansal/bansal.pdf\n> >\n> >\n>\n> Thanks, will check.\n>\n> > I believe the main interest in ARC is its claim of adaptability with\n> > high hit rates. Unfortunately the reality is less impressive as it fails\n> > to handle many frequency workloads, e.g. poor scan resistance, and the\n> > adaptivity is poor due to the accesses being quite noisy. 
For W-TinyLFU,\n> > we have recent improvements which show near perfect adaptivity\n> > <\n> https://user-images.githubusercontent.com/378614/52891655-2979e380-3141-11e9-91b3-00002a3cc80b.png\n> > in\n> > our stress case that results in double the hit rate of ARC and is less\n> > than 1% from optimal.\n> >\n>\n> Interesting.\n>\n> > Can you point us to the thread/email discussing those ideas? I have\n> > tried searching through archives, but I haven't found anything :-(\n> >\n> >\n> > This thread\n> > <\n> https://www.postgresql.org/message-id/1526057854777-0.post%40n3.nabble.com\n> >\n> > doesn't explain Andrey's work, but includes my minor contribution. The\n> > longer thread discusses the interest in CAR, et al.\n> >\n>\n> Thanks.\n>\n> > Are you suggesting to get rid of the buffer rings we use for\n> > sequential scans, for example? IMHO that's going to be tricky, e.g.\n> > because of synchronized sequential scans.\n> >\n> >\n> > If you mean \"synchronized\" in terms of multi-threading and locks, then\n> > this is a solved problem\n> > <http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html>\n> in\n> > terms of caching.\n>\n> No, I \"synchronized scans\" are an optimization to reduce I/O when\n> multiple processes do sequential scan on the same table. Say one process\n> is half-way through the table, when another process starts another scan.\n> Instead of scanning it from block #0 we start at the position of the\n> first process (at which point they \"synchronize\") and then wrap around\n> to read the beginning.\n>\n> I was under the impression that this somehow depends on the small\n> circular buffers, but I've now checked the code and I see that's bogus.\n>\n>\n> > My limited understanding is that the buffer rings are\n> > used to protect the cache from being polluted by scans which flush the\n> > LRU-like algorithms. This allows those policies to capture more frequent\n> > items. 
It also avoids lock contention on the cache due to writes caused\n> > by misses, where Clock allows lock-free reads but uses a global lock on\n> > writes. A smarter cache eviction policy and concurrency model can handle\n> > this without needing buffer rings to compensate.\n> >\n>\n> Right - that's the purpose of circular buffers.\n>\n> > Somebody should write a patch to make buffer eviction completely\n> > random, without aiming to get it committed. That isn't as bad of a\n> > strategy as it sounds, and it would help with assessing improvements\n> > in this area.\n> >\n> >\n> > A related and helpful patch would be to capture the access log and\n> > provide anonymized traces. We have a simulator\n> > <https://github.com/ben-manes/caffeine/wiki/Simulator> with dozens of\n> > policies to quickly provide a breakdown. That would let you know the hit\n> > rates before deciding on the policy to adopt.\n> >\n>\n> Interesting. I assume the trace is essentially a log of which blocks\n> were requested? Is there some trace format specification somewhere?\n>\n> cheers\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Fri, 15 Feb 2019 18:03:44 -0800",
"msg_from": "Benjamin Manes <ben.manes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "Benjamin> A related and helpful patch would be to capture the access log and\nBenjamin> provide anonymized traces.\n\nThe traces can be captured via DTrace scripts, so no patch is required here.\n\nFor instance:\nhttps://www.postgresql.org/message-id/CAB%3DJe-F_BhGfBu1sO1H7u_XMtvak%3DBQtuJFyv8cfjGBRp7Q_yA%40mail.gmail.com\nor\nhttps://www.postgresql.org/message-id/CAH2-WzmbUWKvCqjDycpCOSF%3D%3DPEswVf6WtVutgm9efohH0NfHA%40mail.gmail.com\n\nThe missing bit is a database with more-or-less relevant workload.\n\nVladimir\n",
"msg_date": "Sat, 16 Feb 2019 12:36:45 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "On 2/16/19 10:36 AM, Vladimir Sitnikov wrote:\n> Benjamin> A related and helpful patch would be to capture the access log and\n> Benjamin> provide anonymized traces.\n> \n> The traces can be captured via DTrace scripts, so no patch is required here.\n> \n\nRight, or BPF on reasonably new Linux kernels.\n\n> For instance:\n> https://www.postgresql.org/message-id/CAB%3DJe-F_BhGfBu1sO1H7u_XMtvak%3DBQtuJFyv8cfjGBRp7Q_yA%40mail.gmail.com\n> or\n> https://www.postgresql.org/message-id/CAH2-WzmbUWKvCqjDycpCOSF%3D%3DPEswVf6WtVutgm9efohH0NfHA%40mail.gmail.com\n> \n> The missing bit is a database with more-or-less relevant workload.\n> \n\nI think it'd be sufficient (or at least a reasonable first step) to get\ntraces from workloads regularly used for benchmarking (different flavors\nof pgbench workload, YCSB, TPC-H/TPC-DS and perhaps something else).\n\nA good algorithm has to perform well in those anyway, and applications\ngenerally can be modeled as a mix of those simple workloads.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sat, 16 Feb 2019 15:35:46 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "Hi there. I was responsible for the benchmarks, and I would be glad to\nmake that part clear for you.\n\nOn Sat, 16 Feb 2019 at 02:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> Interesting. Where do these numbers (5/8 and 1/8) come from?\n\nThe first number came from MySQL's realization of the LRU algorithm\n<https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html>\nand the second from simple tuning; we've tried to change 1/8 a little,\nbut it didn't change metrics significantly.\n\n> That TPS chart looks a bit ... wild. How come the master jumps so much\n> up and down? That's a bit suspicious, IMHO.\n\nYes, it is. It would be great if someone would try to reproduce those results.\n\n> How do I reproduce this benchmark? I'm aware of pg_ycsb, but maybe\n> you've used some other tools?\n\nYes, we used pg_ycsb, but nothing more than that and pgbench; it's\njust maybe too simple. I attach the example of the sh script that has\nbeen used to generate the database and measure each point on the chart.\n\nThe build was generated without any additional debug flags in the\nconfiguration. The database was made by initdb with --data-checksums\nenabled and generated by the initialization step in pgbench with\n--scale=11000.\n\n> Also, have you tried some other benchmarks (like, regular TPC-B as\n> implemented by pgbench, or read-only pgbench)? We need such benchmarks\n> with a range of access patterns to check for regressions.\n\nYes, we tried all builtin pgbench benchmarks and YCSB-A,B,C from\npg_ycsb with uniform and zipfian distribution. 
I also attach some\nother charts that we did; they are not as statistically significant as\nthey could be, because we used less time for them, but I hope they will help.\n\n> BTW what do you mean by \"sampling\"?\n\nI meant that we measure tps and hit rate on several virtual machines\nfor our build and the master build, in order to neglect the influence that comes\nfrom the differences between them.\n\n> > We used this config: [2]\n> >\n>\n> That's only half the information - it doesn't say how many clients were\n> running the benchmark etc.\n\nYes, sorry for missing that; we had virtual machines with the\nconfiguration mentioned in the initial letter, with 16 jobs and 16\nclients in the pgbench configuration.\n\n[0] Scripts https://yadi.sk/d/PHICP0N6YrN5Cw\n[1] Measurements for other workloads https://yadi.sk/d/6G0e09Drf0ygag\n\nI will be looking forward to any other questions you have about the\nmeasurements or code. Please let me know if you have them.\n\nBest regards.\n\n--\nIvan Edigaryev\n\n",
"msg_date": "Sun, 17 Feb 2019 16:14:30 +0300",
"msg_from": "\n =?UTF-8?B?0JXQtNC40LPQsNGA0YzQtdCyLCDQmNCy0LDQvSDQk9GA0LjQs9C+0YDRjNC10LLQuNGH?=\n <edigaryev.ig@phystech.edu>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "On 2/17/19 2:14 PM, Едигарьев, Иван Григорьевич wrote:\n> Hi there. I was responsible for the benchmarks, and I would be glad to\n> make clear that part for you.\n> \n> On Sat, 16 Feb 2019 at 02:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> Interesting. Where do these numbers (5/8 and 1/8) come from?\n> \n> The first number came from MySQL realization of LRU algorithm\n> <https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html>\n> and the second from simple tuning, we've tried to change 1/8 a little,\n> but it didn't change metrics significantly.\n> \n>> That TPS chart looks a bit ... wild. How come the master jumps so much\n>> up and down? That's a bit suspicious, IMHO.\n> \n> Yes, it is. It would be great if someone will try to reproduce those results.\n> \n\nI'll try.\n\n>> How do I reproduce this benchmark? I'm aware of pg_ycsb, but maybe\n>> you've used some other tools?\n> \n> Yes, we used pg_ycsb, but nothing more than that and pgbench, it's\n> just maybe too simple. I attach the example of the sh script that has\n> been used to generate database and measure each point on the chart.\n> \n\nOK. I see the measurement script is running plain select-only test (so\nwith uniform distribution). So how did you do the zipfian test?\n\nAlso, I see the test is resetting all stats regularly - that may not be\nquite a good thing, because it also resets stats used by autovacuum for\nexample. It's better to query the values before the run, and compute the\ndelta. OTOH it should affect all tests about the same, I guess.\n\n> Build was generated without any additional debug flags in\n> configuration. Database was mad by initdb with --data-checksums\n> enabled and generated by initialization step in pgbench with\n> --scale=11000.\n> \n\nSounds reasonable.\n\n>> Also, have you tried some other benchmarks (like, regular TPC-B as\n>> implemented by pgbench, or read-only pgbench)? 
We need such benchmarks\n>> with a range of access patterns to check for regressions.\n> \n> Yes, we tried all builtin pgbench benchmarks and YCSB-A,B,C from\n> pg_ycsb with unoform and zipfian distribution. I also attach some\n> other charts that we did, they are not as statistically significant as\n> could because we used less time in them but hope that it will help.\n> \n\nOK. Notice that the random_zipfian function is quite expensive in terms\nof CPU, and it may significantly skew the results particularly with\nlarge data sets / short runs. I've posted a message earlier, as I ran\ninto the issue while trying to reproduce the behavior.\n\n>> BTW what do you mean by \"sampling\"?\n> \n> I meant that we measure tps and hit rate on several virtual machines\n> for our and master build in order to neglect the influence that came\n> from the difference between them.\n> \n\nOK\n\n>>> We used this config: [2]\n>>>\n>>\n>> That's only half the information - it doesn't say how many clients were\n>> running the benchmark etc.\n> \n> Yes, sorry for that missing, we've had virtual machines with\n> configuration mentioned in the initial letter, with 16 jobs and 16\n> clients in pgbench configuration.\n> \n> [0] Scripts https://yadi.sk/d/PHICP0N6YrN5Cw\n> [1] Measurements for other workloads https://yadi.sk/d/6G0e09Drf0ygag\n> \n> I will be looking forward if you have any other questions about\n> measurement or code. Please note me if you have them.\n> \n\nThanks. I'll try running some tests on my machines, I'll see if I manage\nto reproduce the suspicious behavior.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 14:53:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "On 2/17/19 2:53 PM, Tomas Vondra wrote:\n> On 2/17/19 2:14 PM, Едигарьев, Иван Григорьевич wrote:\n>> Hi there. I was responsible for the benchmarks, and I would be glad to\n>> make clear that part for you.\n>>\n>> On Sat, 16 Feb 2019 at 02:30, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>> Interesting. Where do these numbers (5/8 and 1/8) come from?\n>>\n>> The first number came from MySQL realization of LRU algorithm\n>> <https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html>\n>> and the second from simple tuning, we've tried to change 1/8 a little,\n>> but it didn't change metrics significantly.\n>>\n>>> That TPS chart looks a bit ... wild. How come the master jumps so much\n>>> up and down? That's a bit suspicious, IMHO.\n>>\n>> Yes, it is. It would be great if someone will try to reproduce those results.\n>>\n> \n> I'll try.\n> \n\nI've tried to reproduce this behavior, and I've done a quite extensive\nset of tests on two different (quite different) machines, but so far I\nhave not observed anything like that. The results are attached, along\nwith the test scripts used.\n\nI wonder if this might be due to pg_ycsb using random_zipfian, which has\nsomewhat annoying behavior for some parameters (as I've mentioned in a\nseparate thread). But that should affect all the runs, not just some\nshared_buffers sizes.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 27 Feb 2019 00:03:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
},
{
"msg_contents": "Hi Tomas,\n\nIf you are on a benchmarking binge and feel like generating some trace\nfiles (as mentioned earlier), I'd be happy to help in regards to running\nthem through simulations to show how different policies behave. We can add\nmore types to match this patch / Postgres' GClock as desired, too.\n\nOn Tue, Feb 26, 2019 at 3:04 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On 2/17/19 2:53 PM, Tomas Vondra wrote:\n> > On 2/17/19 2:14 PM, Едигарьев, Иван Григорьевич wrote:\n> >> Hi there. I was responsible for the benchmarks, and I would be glad to\n> >> make clear that part for you.\n> >>\n> >> On Sat, 16 Feb 2019 at 02:30, Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote:\n> >>> Interesting. Where do these numbers (5/8 and 1/8) come from?\n> >>\n> >> The first number came from MySQL realization of LRU algorithm\n> >> <https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html>\n> >> and the second from simple tuning, we've tried to change 1/8 a little,\n> >> but it didn't change metrics significantly.\n> >>\n> >>> That TPS chart looks a bit ... wild. How come the master jumps so much\n> >>> up and down? That's a bit suspicious, IMHO.\n> >>\n> >> Yes, it is. It would be great if someone will try to reproduce those\n> results.\n> >>\n> >\n> > I'll try.\n> >\n>\n> I've tried to reproduce this behavior, and I've done a quite extensive\n> set of tests on two different (quite different) machines, but so far I\n> have not observed anything like that. The results are attached, along\n> with the test scripts used.\n>\n> I wonder if this might be due to pg_ycsb using random_zipfian, which has\n> somewhat annoying behavior for some parameters (as I've mentioned in a\n> separate thread). 
But that should affect all the runs, not just some\n> shared_buffers sizes.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Tue, 26 Feb 2019 15:23:57 -0800",
"msg_from": "Benjamin Manes <ben.manes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch][WiP] Tweaked LRU for shared buffers"
}
] |
[
{
"msg_contents": "Hi,\n\nreplace_text() in varlena.c builds the result in a StringInfo buffer,\nand finishes by copying it into a freshly allocated varlena structure\nwith cstring_to_text_with_len(), in the same memory context.\n\nIt looks like that copy step could be avoided by prepending the\nvarlena header to the StringInfo to begin with, and returning the buffer\nas a text*, as in the attached patch.\n\nOn large strings, the time saved can be significant. For instance\nI'm seeing a ~20% decrease in total execution time on a test with\nlengths in the 2-3 MB range, like this:\n\n select sum(length(\n replace(repeat('abcdefghijklmnopqrstuvwxyz', i*10), 'abc', 'ABC')\n ))\n from generate_series(10000,12000) as i;\n\nAlso, at a glance, there are a few other functions with similar\nStringInfo-to-varlena copies that seem avoidable:\nconcat_internal(), text_format(), replace_text_regexp().\n\nAre there reasons not to do this? Otherwise, should it be considered\nin a more principled way, such as adding to the StringInfo API\nfunctions like void InitStringInfoForVarlena() and\ntext *StringInfoAsVarlena()?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite",
"msg_date": "Wed, 13 Feb 2019 16:38:50 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "replace_text optimization (StringInfo to varlena)"
},
{
"msg_contents": "At Wed, 13 Feb 2019 16:38:50 +0100, \"Daniel Verite\" <daniel@manitou-mail.org> wrote in <bc319ec6-60d0-4878-a800-bcc12a190c02@manitou-mail.org>\n> Hi,\n> \n> replace_text() in varlena.c builds the result in a StringInfo buffer,\n> and finishes by copying it into a freshly allocated varlena structure\n> with cstring_to_text_with_len(), in the same memory context.\n> \n> It looks like that copy step could be avoided by preprending the\n> varlena header to the StringInfo to begin with, and return the buffer\n> as a text*, as in the attached patch.\n> \n> On large strings, the time saved can be significant. For instance\n> I'm seeing a ~20% decrease in total execution time on a test with\n> lengths in the 2-3 MB range, like this:\n> \n> select sum(length(\n> replace(repeat('abcdefghijklmnopqrstuvwxyz', i*10), 'abc', 'ABC')\n> ))\n> from generate_series(10000,12000) as i;\n> \n> Also, at a glance, there are a few other functions with similar\n> StringInfo-to-varlena copies that seem avoidable:\n> concat_internal(), text_format(), replace_text_regexp().\n> \n> Are there reasons not to do this? Otherwise, should it be considered\n> in in a more principled way, such as adding to the StringInfo API\n> functions like void InitStringInfoForVarlena() and\n> text *StringInfoAsVarlena()?\n\nFirst, I agree that the waste of cycles should be eliminated.\n\nGrepping with 'cstring_to_text_with_len\\\\(.*[\\\\.>]data,.*\\\\)' shows\nmany instances of this usage. Though StringInfo seems very\ngeneral-purpose, the number of instances would be a good reason to\nhave new API functions.\n\nThat is, I vote for providing a set of API functions for this use of\nStringInfo. But it seems difficult to name the latter\nfunction. The naming convention for such functions is basically\n<verb>StringInfo. getVarlenaStringInfo/getTextStringInfo\napparently fits the convention but seems to me a bit strange.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 14 Feb 2019 16:32:44 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: replace_text optimization (StringInfo to varlena)"
}
] |
[
{
"msg_contents": "Hello,\n\nMy name is Abhishek Agrawal and I am a CSE UG student of IIT Patna.\n\nI am interested in doing GSOC with PostgreSQL if you guys are applying for it this year. If anyone could direct me to proper links or some channel to prepare for the same then it will be really helpful. I have some questions like: What will be the projects this year, What are the requirement for the respective projects, who will be the mentors, etc.\n\nThis will be really helpful to me.\n\nSincerely,\n\nAbhishek Agrawal\nGithub Link: https://github.com/Abhishek-dev2\n",
"msg_date": "Wed, 13 Feb 2019 22:26:03 +0530",
"msg_from": "Abhishek Agrawal <aagrawal207@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regarding participating in GSOC 2019 with PostgreSQL"
},
{
"msg_contents": "Greetings,\n\n* Abhishek Agrawal (aagrawal207@gmail.com) wrote:\n> I am interested in doing GSOC with PostgreSQL if you guys are applying for it this year. If anyone could direct me to proper links or some channel to prepare for the same then it will be really helpful. I have some questions like: What will be the projects this year, What are the requirement for the respective projects, who will be the mentors, etc.\n\nWe have applied but there is no guarantee that we'll be accepted.\nGoogle has said they're announcing on Feb 26 the projects which have\nbeen accepted. Also, it's up to Google to decide how many slots we are\ngiven to have GSoC students, and we also won't know that until they tell\nus.\n\nI'd suggest you wait until the announcement is made by Google, at which\ntime they'll also tell us how many slots we have, and will provide\ndirection to prospective students regarding the next steps.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 15 Feb 2019 09:25:08 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Regarding participating in GSOC 2019 with PostgreSQL"
}
] |
[
{
"msg_contents": "Hi,\n\nTurns out in portions of the regression tests a good chunk of the\nruntime is inside AddNewAttributeTuples() and\nrecordMultipleDependencies()'s heap insertions. Looking at a few\nprofiles I had lying around I found that in some production cases\ntoo. ISTM we should use heap_multi_insert() for both, as the source\ntuples ought to be around reasonably comfortably.\n\nFor recordMultipleDependencies() it'd obviously be better if we collected\nall dependencies for new objects, rather than doing so separately. Right\nnow e.g. the code for a new table looks like:\n\n\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n\n\t\trecordDependencyOnOwner(RelationRelationId, relid, ownerid);\n\n\t\trecordDependencyOnNewAcl(RelationRelationId, relid, 0, ownerid, relacl);\n\n\t\trecordDependencyOnCurrentExtension(&myself, false);\n\n\t\tif (reloftypeid)\n\t\t{\n\t\t\treferenced.classId = TypeRelationId;\n\t\t\treferenced.objectId = reloftypeid;\n\t\t\treferenced.objectSubId = 0;\n\t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n\t\t}\n\nand it'd obviously be more efficient to do that once if we went to using\nheap_multi_insert() in the dependency code. But I suspect even if we\njust used an extended API in AddNewAttributeTuples() (for the type /\ncollation dependencies), it'd be a win.\n\nI'm not planning to work on this soon, but I thought it'd be worthwhile\nto put this out there (even if potentially just as a note to myself).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 13 Feb 2019 10:27:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Ought to use heap_multi_insert() for pg_attribute/depend insertions?"
},
{
"msg_contents": "> On 13 Feb 2019, at 19:27, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> Turns out in portions of the regression tests a good chunk of the\n> runtime is inside AddNewAttributeTuples() and\n> recordMultipleDependencies()'s heap insertions. Looking at a few\n> profiles I had lying around I found that in some production cases\n> too. ISTM we should use heap_multi_insert() for both, as the source\n> tuples ought to be around reasonably comfortably.\n> \n> For recordMultipleDependencies() it'd obviously better if we collected\n> all dependencies for new objects, rather than doing so separately. Right\n> now e.g. the code for a new table looks like:\n> \n> \t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n> \n> \t\trecordDependencyOnOwner(RelationRelationId, relid, ownerid);\n> \n> \t\trecordDependencyOnNewAcl(RelationRelationId, relid, 0, ownerid, relacl);\n> \n> \t\trecordDependencyOnCurrentExtension(&myself, false);\n> \n> \t\tif (reloftypeid)\n> \t\t{\n> \t\t\treferenced.classId = TypeRelationId;\n> \t\t\treferenced.objectId = reloftypeid;\n> \t\t\treferenced.objectSubId = 0;\n> \t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n> \t\t}\n> \n> and it'd obviously be more efficient to do that once if we went to using\n> heap_multi_insert() in the dependency code. But I suspect even if we\n> just used an extended API in AddNewAttributeTuples() (for the type /\n> collation dependencies), it'd be a win.\n\nWhen a colleague was looking at heap_multi_insert in the COPY codepath I\nremembered this and took a stab at a WIP patch inspired by this email, while\nnot following it to the letter. 
It’s not going the full route of collecting\nall the dependencies for creating a table, but adding ways to perform\nheap_multi_insert in the existing codepaths as it seemed like a good place to\nstart.\n\nIt introduces a new function CatalogMultiInsertWithInfo which takes a set of\nslots for use in heap_multi_insert, used from recordMultipleDependencies and\nInsertPgAttributeTuples (which replaces calling InsertPgAttributeTuple\nrepeatedly). The code is still a WIP with some kludges, following the show-\nearly philosophy.\n\nIt passes make check and some light profiling around regress suites indicates\nthat it does improve a bit by reducing the somewhat costly calls. Is this\nalong the lines of what you were thinking or way off?\n\ncheers ./daniel",
"msg_date": "Wed, 22 May 2019 10:25:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 10:25:14 +0200, Daniel Gustafsson wrote:\n> > On 13 Feb 2019, at 19:27, Andres Freund <andres@anarazel.de> wrote:\n> > \n> > Hi,\n> > \n> > Turns out in portions of the regression tests a good chunk of the\n> > runtime is inside AddNewAttributeTuples() and\n> > recordMultipleDependencies()'s heap insertions. Looking at a few\n> > profiles I had lying around I found that in some production cases\n> > too. ISTM we should use heap_multi_insert() for both, as the source\n> > tuples ought to be around reasonably comfortably.\n> > \n> > For recordMultipleDependencies() it'd obviously better if we collected\n> > all dependencies for new objects, rather than doing so separately. Right\n> > now e.g. the code for a new table looks like:\n> > \n> > \t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n> > \n> > \t\trecordDependencyOnOwner(RelationRelationId, relid, ownerid);\n> > \n> > \t\trecordDependencyOnNewAcl(RelationRelationId, relid, 0, ownerid, relacl);\n> > \n> > \t\trecordDependencyOnCurrentExtension(&myself, false);\n> > \n> > \t\tif (reloftypeid)\n> > \t\t{\n> > \t\t\treferenced.classId = TypeRelationId;\n> > \t\t\treferenced.objectId = reloftypeid;\n> > \t\t\treferenced.objectSubId = 0;\n> > \t\t\trecordDependencyOn(&myself, &referenced, DEPENDENCY_NORMAL);\n> > \t\t}\n> > \n> > and it'd obviously be more efficient to do that once if we went to using\n> > heap_multi_insert() in the dependency code. But I suspect even if we\n> > just used an extended API in AddNewAttributeTuples() (for the type /\n> > collation dependencies), it'd be a win.\n> \n> When a colleague was looking at heap_multi_insert in the COPY codepath I\n> remembered this and took a stab at a WIP patch inspired by this email, while\n> not following it to the letter. 
It’s not going the full route of collecting\n> all the dependencies for creating a table, but adding ways to perform\n> multi_heap_insert in the existing codepaths as it seemed like a good place to\n> start.\n\nCool. I don't quite have the energy to look at this right now, could you\ncreate a CF entry for this?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 18:46:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 23 May 2019, at 03:46, Andres Freund <andres@anarazel.de> wrote:\n> On 2019-05-22 10:25:14 +0200, Daniel Gustafsson wrote:\n\n>> When a colleague was looking at heap_multi_insert in the COPY codepath I\n>> remembered this and took a stab at a WIP patch inspired by this email, while\n>> not following it to the letter. It’s not going the full route of collecting\n>> all the dependencies for creating a table, but adding ways to perform\n>> multi_heap_insert in the existing codepaths as it seemed like a good place to\n>> start.\n> \n> Cool. I don't quite have the energy to look at this right now, could you\n> create a CF entry for this?\n\nOf course, done.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 23 May 2019 09:09:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 10:25:14 +0200, Daniel Gustafsson wrote:\n> It passes make check and some light profiling around regress suites indicates\n> that it does improve a bit by reducing the somewhat costly calls.\n\nJust for the record, here is the profile I did:\n\nperf record --call-graph lbr make -s check-world -Otarget -j16 -s\nperf report\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 May 2019 14:31:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 23 May 2019, at 03:46, Andres Freund <andres@anarazel.de> wrote:\n> On 2019-05-22 10:25:14 +0200, Daniel Gustafsson wrote:\n\n>> When a colleague was looking at heap_multi_insert in the COPY codepath I\n>> remembered this and took a stab at a WIP patch inspired by this email, while\n>> not following it to the letter. It’s not going the full route of collecting\n>> all the dependencies for creating a table, but adding ways to perform\n>> multi_heap_insert in the existing codepaths as it seemed like a good place to\n>> start.\n> \n> Cool. I don't quite have the energy to look at this right now, could you\n> create a CF entry for this?\n\nAttached is an updated version with some of the stuff we briefly discussed at\nPGCon. This version uses the ObjectAddresses API already established to collect\nthe dependencies, and performs a few more multi inserts. Profiling shows that\nwe are spending less time in catalog insertions, but whether it’s enough to\nwarrant the added complexity is up for debate.\n\nThe patch is still rough around the edges (TODO’s left to mark some areas), but\nI prefer to get some feedback early rather than working too far in potentially\nthe wrong direction, so parking this in the CF for now.\n\ncheers ./daniel",
"msg_date": "Tue, 11 Jun 2019 15:20:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 15:20:42 +0200, Daniel Gustafsson wrote:\n> Attached is an updated version with some of the stuff we briefly discussed at\n> PGCon. This version use the ObjectAddresses API already established to collect\n> the dependencies, and perform a few more multi inserts.\n\nCool.\n\n> Profiling shows that\n> we are spending less time in catalog insertions, but whether it’s enough to\n> warrant the added complexity is up for debate.\n\nProbably worth benchmarking e.g. temp table creation speed in\nisolation. People do complain about that occasionally.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:26:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 1:21 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> The patch is still rough around the edges (TODO’s left to mark some areas), but\n> I prefer to get some feedback early rather than working too far in potentially\n> the wrong direction, so parking this in the CF for now.\n\nHi Daniel,\n\nGiven the above disclaimers the following may be entirely expected,\nbut just in case you weren't aware:\nt/010_logical_decoding_timelines.pl fails with this patch applied.\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555205042\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 10:02:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 8 Jul 2019, at 00:02, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Wed, Jun 12, 2019 at 1:21 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> The patch is still rough around the edges (TODO’s left to mark some areas), but\n>> I prefer to get some feedback early rather than working too far in potentially\n>> the wrong direction, so parking this in the CF for now.\n> \n> Hi Daniel,\n> \n> Given the above disclaimers the following may be entirely expected,\n> but just in case you weren't aware:\n> t/010_logical_decoding_timelines.pl fails with this patch applied.\n> \n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/555205042\n\nI hadn’t seen it since I had fat-fingered and accidentally run the full tests in a\ntree without assertions. The culprit here seems to be an assertion in the logical\ndecoding code which doesn’t account for heap_multi_insert into catalog\nrelations (there are none now; this patch introduces them and thus trips\nthe assertion). As the issue is somewhat unrelated, I’ve opened a separate\nthread with a small patch:\n\n\thttps://postgr.es/m/CBFFD532-C033-49EB-9A5A-F67EAEE9EB0B@yesql.se\n\nThe attached v3 also has that fix in order to see if the cfbot is happier with\nthis.\n\ncheers ./daniel",
"msg_date": "Tue, 9 Jul 2019 13:07:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 11:07 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> The attached v3 also has that fix in order to see if the cfbot is happier with\n> this.\n\nNoticed while moving this to the next CF:\n\nheap.c: In function ‘StorePartitionKey’:\nheap.c:3582:3: error: ‘referenced’ undeclared (first use in this function)\n referenced.classId = RelationRelationId;\n ^\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:41:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 2 Aug 2019, at 00:41, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Tue, Jul 9, 2019 at 11:07 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> The attached v3 also has that fix in order to see if the cfbot is happier with\n>> this.\n> \n> Noticed while moving this to the next CF:\n> \n> heap.c: In function ‘StorePartitionKey’:\n> heap.c:3582:3: error: ‘referenced’ undeclared (first use in this function)\n> referenced.classId = RelationRelationId;\n> ^\n\nThanks, the attached v4 updates the patch to handle a0555ddab9b672a046 as well.\n\ncheers ./daniel",
"msg_date": "Mon, 5 Aug 2019 09:20:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 09:20:07AM +0200, Daniel Gustafsson wrote:\n> Thanks, the attached v4 updates to patch to handle a0555ddab9b672a046 as well.\n\n+ if (referenced->numrefs == 1)\n+ recordDependencyOn(depender, &referenced->refs[0], behavior);\n+ else\n+ recordMultipleDependencies(depender,\n+ referenced->refs, referenced->numrefs,\n+ behavior);\nThis makes me wonder if we should not just add a shortcut in\nrecordMultipleDependencies() to use recordDependencyOn if there is\nonly one reference in the set. That would save the effort of a multi\ninsert for all callers of recordMultipleDependencies() this way,\nincluding the future ones. And that could also be done independently\nof the addition of InsertPgAttributeTuples(), no?\n--\nMichael",
"msg_date": "Mon, 5 Aug 2019 16:45:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 04:45:59PM +0900, Michael Paquier wrote:\n> + if (referenced->numrefs == 1)\n> + recordDependencyOn(depender, &referenced->refs[0], behavior);\n> + else\n> + recordMultipleDependencies(depender,\n> + referenced->refs, referenced->numrefs,\n> + behavior);\n> This makes me wonder if we should not just add a shortcut in\n> recordMultipleDependencies() to use recordDependencyOn if there is\n> only one reference in the set. That would save the effort of a multi\n> insert for all callers of recordMultipleDependencies() this way,\n> including the future ones. And that could also be done independently\n> of the addition of InsertPgAttributeTuples(), no?\n\nA comment in heap_multi_insert() needs to be updated because it\nbecomes the case with your patch:\n /*\n * We don't use heap_multi_insert for catalog tuples yet, but\n * better be prepared...\n */\n\nThere is also this one in DecodeMultiInsert()\n * CONTAINS_NEW_TUPLE will always be set currently as multi_insert\n * isn't used for catalogs, but better be future proof.\n\n(I am going to comment on the assertion issue on the other thread, I\ngot some suggestions about it.)\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 11:24:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 11:24:46AM +0900, Michael Paquier wrote:\n> A comment in heap_multi_insert() needs to be updated because it\n> becomes the case with your patch:\n> /*\n> * We don't use heap_multi_insert for catalog tuples yet, but\n> * better be prepared...\n> */\n> \n> There is also this one in DecodeMultiInsert()\n> * CONTAINS_NEW_TUPLE will always be set currently as multi_insert\n> * isn't used for catalogs, but better be future proof.\n\nApplying the latest patch, this results in an assertion failure for\nthe tests of test_decoding.\n\n> (I am going to comment on the assertion issue on the other thread, I\n> got some suggestions about it.)\n\nThis part has resulted in 75c1921, and we could just change\nDecodeMultiInsert() so as if there is no tuple data then we'd just\nleave. However, I don't feel completely comfortable with that either\nas it would be nice to still check that for normal relations we\n*always* have a FPW available.\n\nDaniel, your thoughts? I am switching the patch as waiting on\nauthor.\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 17:32:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 11 Nov 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n\n> This part has resulted in 75c1921, and we could just change\n> DecodeMultiInsert() so as if there is no tuple data then we'd just\n> leave. However, I don't feel completely comfortable with that either\n> as it would be nice to still check that for normal relations we\n> *always* have a FPW available.\n\nXLH_INSERT_CONTAINS_NEW_TUPLE will only be set in case of catalog relations\nIIUC (that is, non logically decoded relations), so it seems to me that we can\nhave a fastpath out of DecodeMultiInsert() which inspects that flag without\nproblems. Is this proposal along the lines of what you were thinking?\n\nThe patch is now generating an error in the regression tests as well, on your\nrecently added CREATE INDEX test from 68ac9cf2499236996f3d4bf31f7f16d5bd3c77af.\nBy using the ObjectAddresses API the dependencies are deduped before being\nrecorded, removing the duplicate entry from that test output. AFAICT there is\nnothing benefiting us from having duplicate dependencies, so this seems an improvement\n(albeit tiny and not very important), or am I missing something? Is there a\nuse for duplicate dependencies?\n\nAttached version contains the above two fixes, as well as a multi_insert for\ndependencies in CREATE EXTENSION which I had forgotten to git add in previous\nversions.\n\ncheers ./daniel",
"msg_date": "Tue, 12 Nov 2019 16:25:06 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On 2019-11-12 16:25:06 +0100, Daniel Gustafsson wrote:\n> > On 11 Nov 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > This part has resulted in 75c1921, and we could just change\n> > DecodeMultiInsert() so as if there is no tuple data then we'd just\n> > leave. However, I don't feel completely comfortable with that either\n> > as it would be nice to still check that for normal relations we\n> > *always* have a FPW available.\n> \n> XLH_INSERT_CONTAINS_NEW_TUPLE will only be set in case of catalog relations\n> IIUC (that is, non logically decoded relations), so it seems to me that we can\n> have a fastpath out of DecodeMultiInsert() which inspects that flag without\n> problems. Is this proposal along the lines of what you were thinking?\n\nMaybe I'm missing something, but it's the opposite, no?\n\n\tbool\t\tneed_tuple_data = RelationIsLogicallyLogged(relation);\n\n...\n\t\t\tif (need_tuple_data)\n\t\t\t\txlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;\n\nor am I misunderstanding what you mean?\n\n\n> @@ -1600,10 +1598,16 @@ recordDependencyOnExpr(const ObjectAddress *depender,\n> \t/* Remove any duplicates */\n> \teliminate_duplicate_dependencies(context.addrs);\n> \n> -\t/* And record 'em */\n> -\trecordMultipleDependencies(depender,\n> -\t\t\t\t\t\t\t context.addrs->refs, context.addrs->numrefs,\n> -\t\t\t\t\t\t\t behavior);\n> +\t/*\n> +\t * And record 'em. If we know we only have a single dependency then\n> +\t * avoid the extra cost of setting up a multi insert.\n> +\t */\n> +\tif (context.addrs->numrefs == 1)\n> +\t\trecordDependencyOn(depender, &context.addrs->refs[0], behavior);\n> +\telse\n> +\t\trecordMultipleDependencies(depender,\n> +\t\t\t\t\t\t\t\t context.addrs->refs, context.addrs->numrefs,\n> +\t\t\t\t\t\t\t\t behavior);\n\nI'm not sure this is actually a worthwhile complexity to keep. 
Hard to\nbelieve that setting up a multi-insert is going to have a significant\nenough overhead to matter here?\n\nAnd if it does, is there a chance we can hide this repeated block\nsomewhere within recordMultipleDependencies() or such? I don't like the\nrepetitiveness here. Seems recordMultipleDependencies() could easily\noptimize the case of there being exactly one dependency to insert?\n\n\n> +/*\n> + * InsertPgAttributeTuples\n> + *\t\tConstruct and insert multiple tuples in pg_attribute.\n> + *\n> + * This is a variant of InsertPgAttributeTuple() which dynamically allocates\n> + * space for multiple tuples. Having two so similar functions is a kludge, but\n> + * for now it's a TODO to make it less terrible.\n> + */\n> +void\n> +InsertPgAttributeTuples(Relation pg_attribute_rel,\n> +\t\t\t\t\t\tFormData_pg_attribute *new_attributes,\n> +\t\t\t\t\t\tint natts,\n> +\t\t\t\t\t\tCatalogIndexState indstate)\n> +{\n> +\tDatum\t\tvalues[Natts_pg_attribute];\n> +\tbool\t\tnulls[Natts_pg_attribute];\n> +\tHeapTuple\ttup;\n> +\tint\t\t\ti;\n> +\tTupleTableSlot **slot;\n> +\n> +\t/*\n> +\t * The slots are dropped in CatalogMultiInsertWithInfo(). TODO: natts\n> +\t * number of slots is not a reasonable assumption as a wide relation\n> +\t * would be detrimental, figuring a good number is left as a TODO.\n> +\t */\n> +\tslot = palloc(sizeof(TupleTableSlot *) * natts);\n\nHm. Looking at\n\nSELECT avg(pg_column_size(pa)) FROM pg_attribute pa;\n\nyielding ~144 bytes, we can probably cap this at 128 or such, without\nlosing efficiency. Or just use\n#define MAX_BUFFERED_BYTES\t\t65535\nfrom copy.c or such (MAX_BUFFERED_BYTES / sizeof(FormData_pg_attribute)).\n\n\n> +\t/* This is a tad tedious, but way cleaner than what we used to do... 
*/\n\nI don't like comments that refer to \"what we used to\" because there's no\nway for anybody to make sense of that, because it's basically a dangling\nreference :)\n\n\n> +\tmemset(values, 0, sizeof(values));\n> +\tmemset(nulls, false, sizeof(nulls));\n\n> +\t/* start out with empty permissions and empty options */\n> +\tnulls[Anum_pg_attribute_attacl - 1] = true;\n> +\tnulls[Anum_pg_attribute_attoptions - 1] = true;\n> +\tnulls[Anum_pg_attribute_attfdwoptions - 1] = true;\n> +\tnulls[Anum_pg_attribute_attmissingval - 1] = true;\n> +\n> +\t/* attcacheoff is always -1 in storage */\n> +\tvalues[Anum_pg_attribute_attcacheoff - 1] = Int32GetDatum(-1);\n> +\n> +\tfor (i = 0; i < natts; i++)\n> +\t{\n> +\t\tvalues[Anum_pg_attribute_attrelid - 1] = ObjectIdGetDatum(new_attributes[i].attrelid);\n> +\t\tvalues[Anum_pg_attribute_attname - 1] = NameGetDatum(&new_attributes[i].attname);\n> +\t\tvalues[Anum_pg_attribute_atttypid - 1] = ObjectIdGetDatum(new_attributes[i].atttypid);\n> +\t\tvalues[Anum_pg_attribute_attstattarget - 1] = Int32GetDatum(new_attributes[i].attstattarget);\n> +\t\tvalues[Anum_pg_attribute_attlen - 1] = Int16GetDatum(new_attributes[i].attlen);\n> +\t\tvalues[Anum_pg_attribute_attnum - 1] = Int16GetDatum(new_attributes[i].attnum);\n> +\t\tvalues[Anum_pg_attribute_attndims - 1] = Int32GetDatum(new_attributes[i].attndims);\n> +\t\tvalues[Anum_pg_attribute_atttypmod - 1] = Int32GetDatum(new_attributes[i].atttypmod);\n> +\t\tvalues[Anum_pg_attribute_attbyval - 1] = BoolGetDatum(new_attributes[i].attbyval);\n> +\t\tvalues[Anum_pg_attribute_attstorage - 1] = CharGetDatum(new_attributes[i].attstorage);\n> +\t\tvalues[Anum_pg_attribute_attalign - 1] = CharGetDatum(new_attributes[i].attalign);\n> +\t\tvalues[Anum_pg_attribute_attnotnull - 1] = BoolGetDatum(new_attributes[i].attnotnull);\n> +\t\tvalues[Anum_pg_attribute_atthasdef - 1] = BoolGetDatum(new_attributes[i].atthasdef);\n> +\t\tvalues[Anum_pg_attribute_atthasmissing - 1] = 
BoolGetDatum(new_attributes[i].atthasmissing);\n> +\t\tvalues[Anum_pg_attribute_attidentity - 1] = CharGetDatum(new_attributes[i].attidentity);\n> +\t\tvalues[Anum_pg_attribute_attgenerated - 1] = CharGetDatum(new_attributes[i].attgenerated);\n> +\t\tvalues[Anum_pg_attribute_attisdropped - 1] = BoolGetDatum(new_attributes[i].attisdropped);\n> +\t\tvalues[Anum_pg_attribute_attislocal - 1] = BoolGetDatum(new_attributes[i].attislocal);\n> +\t\tvalues[Anum_pg_attribute_attinhcount - 1] = Int32GetDatum(new_attributes[i].attinhcount);\n> +\t\tvalues[Anum_pg_attribute_attcollation - 1] = ObjectIdGetDatum(new_attributes[i].attcollation);\n> +\n> +\t\tslot[i] = MakeSingleTupleTableSlot(RelationGetDescr(pg_attribute_rel),\n> +\t\t\t\t\t\t\t\t\t\t &TTSOpsHeapTuple);\n> +\t\ttup = heap_form_tuple(RelationGetDescr(pg_attribute_rel), values, nulls);\n> +\t\tExecStoreHeapTuple(heap_copytuple(tup), slot[i], false);\n\nThis seems likely to waste some effort - during insertion the slot will\nbe materialized, which'll copy the tuple. I think it'd be better to\nconstruct the tuple directly inside the slot's tts_values/isnull, and\nthen store it with ExecStoreVirtualTuple().\n\nAlso, why aren't you actually just specifying shouldFree = true? We'd\nwant the result of the heap_form_tuple() to be freed eagerly, no? Nor do\nI get why you're *also* heap_copytuple'ing the tuple, despite having\njust locally formed it, and not referencing the result of\nheap_form_tuple() further? Obviously that's all unimportant if you\nchange the code to use ExecStoreVirtualTuple()?\n\n\n> +\t}\n> +\n> +\t/* finally insert the new tuples, update the indexes, and clean up */\n> +\tCatalogMultiInsertWithInfo(pg_attribute_rel, slot, natts, indstate);\n\nPrevious comment:\n\nI think it might be worthwhile to clear and delete the slots\nafter inserting. That's potentially a good bit of memory, no?\n\nCurrent comment:\n\nI think I quite dislike the API of CatalogMultiInsertWithInfo freeing\nthe slots. It'd e.g. 
make it impossible to reuse them to insert more\ndata. It's also really hard to understand\n\n\n> +}\n> +\n> /*\n> * InsertPgAttributeTuple\n> *\t\tConstruct and insert a new tuple in pg_attribute.\n> @@ -754,7 +827,7 @@ AddNewAttributeTuples(Oid new_rel_oid,\n> \t\t\t\t\t TupleDesc tupdesc,\n> \t\t\t\t\t char relkind)\n> {\n> -\tForm_pg_attribute attr;\n> +\tForm_pg_attribute *attrs;\n> \tint\t\t\ti;\n> \tRelation\trel;\n> \tCatalogIndexState indstate;\n> @@ -769,35 +842,42 @@ AddNewAttributeTuples(Oid new_rel_oid,\n> \n> \tindstate = CatalogOpenIndexes(rel);\n> \n> +\tattrs = palloc(sizeof(Form_pg_attribute) * natts);\n\nHm. Why do we need this separate allocation? Isn't this exactly the same\nas what's in the tupledesc?\n\n\n> +/*\n> + * CatalogMultiInsertWithInfo\n\nHm. The current function is called CatalogTupleInsert(), wouldn't\nCatalogTuplesInsertWithInfo() or such be more accurate? Or\nCatalogTuplesMultiInsertWithInfo()?\n\n\n\n> @@ -81,7 +128,14 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> \n> \tmemset(nulls, false, sizeof(nulls));\n> \n> -\tfor (i = 0; i < nreferenced; i++, referenced++)\n> +\tvalues[Anum_pg_depend_classid - 1] = ObjectIdGetDatum(depender->classId);\n> +\tvalues[Anum_pg_depend_objid - 1] = ObjectIdGetDatum(depender->objectId);\n> +\tvalues[Anum_pg_depend_objsubid - 1] = Int32GetDatum(depender->objectSubId);\n> +\n> +\t/* TODO is nreferenced a reasonable allocation of slots? */\n> +\tslot = palloc(sizeof(TupleTableSlot *) * nreferenced);\n> +\n> +\tfor (i = 0, ntuples = 0; i < nreferenced; i++, referenced++)\n> \t{\n> \t\t/*\n> \t\t * If the referenced object is pinned by the system, there's no real\n> @@ -94,30 +148,34 @@ recordMultipleDependencies(const ObjectAddress *depender,\n> \t\t\t * Record the Dependency. 
Note we don't bother to check for\n> \t\t\t * duplicate dependencies; there's no harm in them.\n> \t\t\t */\n> -\t\t\tvalues[Anum_pg_depend_classid - 1] = ObjectIdGetDatum(depender->classId);\n> -\t\t\tvalues[Anum_pg_depend_objid - 1] = ObjectIdGetDatum(depender->objectId);\n> -\t\t\tvalues[Anum_pg_depend_objsubid - 1] = Int32GetDatum(depender->objectSubId);\n> -\n> \t\t\tvalues[Anum_pg_depend_refclassid - 1] = ObjectIdGetDatum(referenced->classId);\n> \t\t\tvalues[Anum_pg_depend_refobjid - 1] = ObjectIdGetDatum(referenced->objectId);\n> \t\t\tvalues[Anum_pg_depend_refobjsubid - 1] = Int32GetDatum(referenced->objectSubId);\n> \n> \t\t\tvalues[Anum_pg_depend_deptype - 1] = CharGetDatum((char) behavior);\n> \n> -\t\t\ttup = heap_form_tuple(dependDesc->rd_att, values, nulls);\n> +\t\t\tslot[ntuples] = MakeSingleTupleTableSlot(RelationGetDescr(dependDesc),\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t &TTSOpsHeapTuple);\n> +\n> +\t\t\ttuple = heap_form_tuple(dependDesc->rd_att, values, nulls);\n> +\t\t\tExecStoreHeapTuple(heap_copytuple(tuple), slot[ntuples], false);\n> +\t\t\tntuples++;\n\nSame concern as in the equivalent pg_attribute code.\n\n\n> +\tntuples = 0;\n> \twhile (HeapTupleIsValid(tup = systable_getnext(scan)))\n> \t{\n> -\t\tHeapTuple\tnewtup;\n> -\n> -\t\tnewtup = heap_modify_tuple(tup, sdepDesc, values, nulls, replace);\n> -\t\tCatalogTupleInsertWithInfo(sdepRel, newtup, indstate);\n> +\t\tslot[ntuples] = MakeSingleTupleTableSlot(RelationGetDescr(sdepRel),\n> +\t\t\t\t\t\t\t\t\t&TTSOpsHeapTuple);\n> +\t\tnewtuple = heap_modify_tuple(tup, sdepDesc, values, nulls, replace);\n> +\t\tExecStoreHeapTuple(heap_copytuple(newtuple), slot[ntuples], false);\n> +\t\tntuples++;\n\n\n> -\t\theap_freetuple(newtup);\n> +\t\tif (ntuples == DEPEND_TUPLE_BUF)\n> +\t\t{\n> +\t\t\tCatalogMultiInsertWithInfo(sdepRel, slot, ntuples, indstate);\n> +\t\t\tntuples = 0;\n> +\t\t}\n> \t}\n\nToo much copying again.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Nov 2019 10:33:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Tue, Nov 12, 2019 at 10:33:16AM -0800, Andres Freund wrote:\n> On 2019-11-12 16:25:06 +0100, Daniel Gustafsson wrote:\n>> On 11 Nov 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>>> This part has resulted in 75c1921, and we could just change\n>>> DecodeMultiInsert() so as if there is no tuple data then we'd just\n>>> leave. However, I don't feel completely comfortable with that either\n>>> as it would be nice to still check that for normal relations we\n>>> *always* have a FPW available.\n>> \n>> XLH_INSERT_CONTAINS_NEW_TUPLE will only be set in case of catalog relations\n>> IIUC (that is, non logically decoded relations), so it seems to me that we can\n>> have a fastpath out of DecodeMultiInsert() which inspects that flag without\n>> problems. Is this proposal along the lines of what you were thinking?\n> \n> Maybe I'm missing something, but it's the opposite, no?\n\n> \tbool\t\tneed_tuple_data = RelationIsLogicallyLogged(relation);\n\nYep. This checks after IsCatalogRelation().\n\n> ...\n> \t\t\tif (need_tuple_data)\n> \t\t\t\txlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;\n> \n> or am I misunderstanding what you mean?\n\nNot sure that I can think about a good way to properly track if the\nnew tuple data is associated to a catalog or not, aka if the data is\nexpected all the time or not to get a good sanity check when doing the\nmulti-insert decoding. We could extend xl_multi_insert_tuple with a\nflag to track that, but that seems like an overkill for a simple\nsanity check..\n--\nMichael",
"msg_date": "Wed, 13 Nov 2019 15:58:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-13 15:58:28 +0900, Michael Paquier wrote:\n> On Tue, Nov 12, 2019 at 10:33:16AM -0800, Andres Freund wrote:\n> > On 2019-11-12 16:25:06 +0100, Daniel Gustafsson wrote:\n> >> On 11 Nov 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n> >> \n> >>> This part has resulted in 75c1921, and we could just change\n> >>> DecodeMultiInsert() so as if there is no tuple data then we'd just\n> >>> leave. However, I don't feel completely comfortable with that either\n> >>> as it would be nice to still check that for normal relations we\n> >>> *always* have a FPW available.\n> >> \n> >> XLH_INSERT_CONTAINS_NEW_TUPLE will only be set in case of catalog relations\n> >> IIUC (that is, non logically decoded relations), so it seems to me that we can\n> >> have a fastpath out of DecodeMultiInsert() which inspects that flag without\n> >> problems. Is this proposal along the lines of what you were thinking?\n> > \n> > Maybe I'm missing something, but it's the opposite, no?\n> \n> > \tbool\t\tneed_tuple_data = RelationIsLogicallyLogged(relation);\n> \n> Yep. This checks after IsCatalogRelation().\n> \n> > ...\n> > \t\t\tif (need_tuple_data)\n> > \t\t\t\txlrec->flags |= XLH_INSERT_CONTAINS_NEW_TUPLE;\n> > \n> > or am I misunderstanding what you mean?\n> \n> Not sure that I can think about a good way to properly track if the\n> new tuple data is associated to a catalog or not, aka if the data is\n> expected all the time or not to get a good sanity check when doing the\n> multi-insert decoding. We could extend xl_multi_insert_tuple with a\n> flag to track that, but that seems like an overkill for a simple\n> sanity check..\n\nMy point was that I think there must be negation missing in Daniel's\nstatement - XLH_INSERT_CONTAINS_NEW_TUPLE will only be set if *not* a\ncatalog relation. 
But Daniel's statement says exactly the opposite, at\nleast by my read.\n\nI can't follow what you're trying to get at in this sub discussion - why\nwould we want to sanity check something about catalog tables here? Given\nthat XLH_INSERT_CONTAINS_NEW_TUPLE is set exactly when the table is not\na catalog table, that seems entirely superfluous?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Nov 2019 17:55:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 05:55:12PM -0800, Andres Freund wrote:\n> My point was that I think there must be negation missing in Daniel's\n> statement - XLH_INSERT_CONTAINS_NEW_TUPLE will only be set if *not* a\n> catalog relation. But Daniel's statement says exactly the opposite, at\n> least by my read.\n\nFWIW, I am reading the same, aka the sentence of Daniel is wrong. And\nthat what you say is right.\n\n> I can't follow what you're trying to get at in this sub discussion - why\n> would we want to sanity check something about catalog tables here? Given\n> that XLH_INSERT_CONTAINS_NEW_TUPLE is set exactly when the table is not\n> a catalog table, that seems entirely superfluous?\n\n[ ... Looking ... ]\nYou are right, we could just rely on cross-checking that when we have\nno data then XLH_INSERT_CONTAINS_NEW_TUPLE is not set, or something\nlike that:\n@@ -901,11 +901,17 @@ DecodeMultiInsert(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf)\n return;\n\n /*\n- * As multi_insert is not used for catalogs yet, the block should always\n- * have data even if a full-page write of it is taken.\n+ * multi_insert can be used by catalogs, hence it is possible that\n+ * the block does not have any data even if a full-page write of it\n+ * is taken.\n */\n tupledata = XLogRecGetBlockData(r, 0, &tuplelen);\n- Assert(tupledata != NULL);\n+ Assert(tupledata == NULL ||\n+ (xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE) != 0);\n+\n+ /* No data, then leave */\n+ if (tupledata == NULL)\n+ return;\n\nThe latest patch does not apply, so it needs a rebase.\n--\nMichael",
"msg_date": "Fri, 15 Nov 2019 11:08:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 12 Nov 2019, at 19:33, Andres Freund <andres@anarazel.de> wrote:\n\nThanks for reviewing!\n\n> On 2019-11-12 16:25:06 +0100, Daniel Gustafsson wrote:\n>>> On 11 Nov 2019, at 09:32, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>>> This part has resulted in 75c1921, and we could just change\n>>> DecodeMultiInsert() so as if there is no tuple data then we'd just\n>>> leave. However, I don't feel completely comfortable with that either\n>>> as it would be nice to still check that for normal relations we\n>>> *always* have a FPW available.\n>> \n>> XLH_INSERT_CONTAINS_NEW_TUPLE will only be set in case of catalog relations\n>> IIUC (that is, non logically decoded relations), so it seems to me that we can\n>> have a fastpath out of DecodeMultiInsert() which inspects that flag without\n>> problems. Is this proposal along the lines of what you were thinking?\n> \n> Maybe I'm missing something, but it's the opposite, no?\n> ...\n> or am I misunderstanding what you mean?\n\nCorrect, as has been discussed in this thread already, I managed to write it\nbackwards.\n\n>> @@ -1600,10 +1598,16 @@ recordDependencyOnExpr(const ObjectAddress *depender,\n>> \t/* Remove any duplicates */\n>> \teliminate_duplicate_dependencies(context.addrs);\n>> \n>> -\t/* And record 'em */\n>> -\trecordMultipleDependencies(depender,\n>> -\t\t\t\t\t\t\t context.addrs->refs, context.addrs->numrefs,\n>> -\t\t\t\t\t\t\t behavior);\n>> +\t/*\n>> +\t * And record 'em. If we know we only have a single dependency then\n>> +\t * avoid the extra cost of setting up a multi insert.\n>> +\t */\n>> +\tif (context.addrs->numrefs == 1)\n>> +\t\trecordDependencyOn(depender, &context.addrs->refs[0], behavior);\n>> +\telse\n>> +\t\trecordMultipleDependencies(depender,\n>> +\t\t\t\t\t\t\t\t context.addrs->refs, context.addrs->numrefs,\n>> +\t\t\t\t\t\t\t\t behavior);\n> \n> I'm not sure this is actually a worthwhile complexity to keep. 
Hard to\n> believe that setting up a multi-insert is going to have a significant\n> enough overhead to matter here?\n> \n> And if it does, is there a chance we can hide this repeated block\n> somewhere within recordMultipleDependencies() or such? I don't like the\n> repetitiveness here. Seems recordMultipleDependencies() could easily\n> optimize the case of there being exactly one dependency to insert?\n\nAgreed, I've simplified by just calling recordMultipleDependencies() until that's\nfound to be too expensive.\n\n>> +\tslot = palloc(sizeof(TupleTableSlot *) * natts);\n> \n> Hm. Looking at\n> \n> SELECT avg(pg_column_size(pa)) FROM pg_attribute pa;\n> \n> yielding ~144 bytes, we can probably cap this at 128 or such, without\n> losing efficiency. Or just use\n> #define MAX_BUFFERED_BYTES\t\t65535\n> from copy.c or such (MAX_BUFFERED_BYTES / sizeof(FormData_pg_attribute)).\n\nAdded a limit using MAX_BUFFERED_BYTES as above, or the number of tuples,\nwhichever is smallest.\n\n>> +\t/* This is a tad tedious, but way cleaner than what we used to do... */\n> \n> I don't like comments that refer to \"what we used to\" because there's no\n> way for anybody to make sense of that, because it's basically a dangling\n> reference :)\n\nThis is a copy/paste from InsertPgAttributeTuple added in 03e5248d0f0, which in\nturn quite likely was a copy/paste from InsertPgClassTuple added in b7b78d24f7f\nback in 2006. Looking at that commit, it's clear what the comment refers to\nbut it's quite useless in isolation. Since the current coding is in its teens by\nnow, perhaps we should just remove the two existing occurrences?\n\nI can submit a rewrite of the comments into something less gazing into the past\nif you feel like removing these.\n\n> This seems likely to waste some effort - during insertion the slot will\n> be materialized, which'll copy the tuple. 
I think it'd be better to\n> construct the tuple directly inside the slot's tts_values/isnull, and\n> then store it with ExecStoreVirtualTuple().\n\nI'm not sure why it looked that way, but it's clearly rubbish. Rewrote by\ntaking the TupleDesc as input (which addresses your other comment below too),\nand creating the tuples directly by using ExecStoreVirtualTuple.\n\n>> +\t}\n>> +\n>> +\t/* finally insert the new tuples, update the indexes, and clean up */\n>> +\tCatalogMultiInsertWithInfo(pg_attribute_rel, slot, natts, indstate);\n> \n> Previous comment:\n> \n> I think it might be worthwhile to clear and delete the slots\n> after inserting. That's potentially a good bit of memory, no?\n> \n> Current comment:\n> \n> I think I quite dislike the API of CatalogMultiInsertWithInfo freeing\n> the slots. It'd e.g. make it impossible to reuse them to insert more\n> data. It's also really hard to understand\n\nI don't have strong feelings, I see merit in both approaches but the reuse\naspect is clearly the winner. I've changed it such that the caller is\nresponsible for freeing.\n\n>> +\tattrs = palloc(sizeof(Form_pg_attribute) * natts);\n> \n> Hm. Why do we need this separate allocation? Isn't this exactly the same\n> as what's in the tupledesc?\n\nFixed.\n\n>> +/*\n>> + * CatalogMultiInsertWithInfo\n> \n> Hm. The current function is called CatalogTupleInsert(), wouldn't\n> CatalogTuplesInsertWithInfo() or such be more accurate? Or\n> CatalogTuplesMultiInsertWithInfo()?\n\nFixed by opting for the latter, mostly since it seems best to convey what the\nfunction does.\n\n> Same concern as in the equivalent pg_attribute code.\n\n> Too much copying again.\n\nBoth of them fixed.\n\nThe attached patch addresses all of the comments, thanks again for reviewing!\n\ncheers ./daniel",
"msg_date": "Sun, 17 Nov 2019 00:01:08 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Sun, Nov 17, 2019 at 12:01:08AM +0100, Daniel Gustafsson wrote:\n> Fixed by opting for the latter, mostly since it seems best to convey what the\n> function does.\n\n- recordMultipleDependencies(depender,\n- context.addrs->refs, context.addrs->numrefs,\n- behavior);\n+ recordMultipleDependencies(depender, context.addrs->refs,\n+ context.addrs->numrefs, behavior);\nSome noise diffs.\n\n--- a/src/test/regress/expected/create_index.out\n+++ b/src/test/regress/expected/create_index.out\n index concur_reindex_ind4 | column c1 of table\n- index concur_reindex_ind4 | column c1 of table\n index concur_reindex_ind4 | column c2 of table\nThis removes a duplicated dependency with indexes using the same\ncolumn multiple times. Guess that's good to get rid of, this was just\nunnecessary bloat in pg_depend.\n\nThe regression tests of contrib/test_decoding are still failing here:\n+ERROR: could not resolve cmin/cmax of catalog tuple\n\nGetting rid of the duplication between InsertPgAttributeTuples() and\nInsertPgAttributeTuple() would be nice. You would basically finish by\njust using a single slot when inserting one tuple..\n--\nMichael",
"msg_date": "Tue, 26 Nov 2019 14:44:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 26 Nov 2019, at 06:44, Michael Paquier <michael@paquier.xyz> wrote:\n\nRe this patch being in WoA state for some time [0]:\n\n> The regression tests of contrib/test_decoding are still failing here:\n> +ERROR: could not resolve cmin/cmax of catalog tuple\n\nThis is the main thing left with this patch, and I've been unable so far to\nfigure it out. I have an unscientific hunch that this patch is shaking out\nsomething (previously unexercised) in the logical decoding code supporting\nmulti-inserts in the catalog. If anyone has ideas or insights, I would love\nthe help.\n\nOnce my plate clears up a bit I will return to this one, but feel free to mark\nit rwf for this cf.\n\ncheers ./daniel\n\n[0] postgr.es/m/20200121144937.n24oacjkegu4pnpe@development\n\n",
"msg_date": "Wed, 22 Jan 2020 23:18:12 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 11:18:12PM +0100, Daniel Gustafsson wrote:\n> Once my plate clears up a bit I will return to this one, but feel free to mark\n> it rwf for this cf.\n\nThanks for the update. I have switched the patch status to returned\nwith feedback.\n--\nMichael",
"msg_date": "Thu, 23 Jan 2020 15:30:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-22 23:18:12 +0100, Daniel Gustafsson wrote:\n> > On 26 Nov 2019, at 06:44, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> Re this patch being in WoA state for some time [0]:\n> \n> > The regression tests of contrib/test_decoding are still failing here:\n> > +ERROR: could not resolve cmin/cmax of catalog tuple\n> \n> This is the main thing left with this patch, and I've been unable so far to\n> figure it out. I have an unscientific hunch that this patch is shaking out\n> something (previously unexercised) in the logical decoding code supporting\n> multi-inserts in the catalog. If anyone has ideas or insights, I would love\n> the help.\n\nPerhaps we can take a look at it together while at fosdem? Feel free to\nremind me...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Jan 2020 12:30:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 26 Jan 2020, at 21:30, Andres Freund <andres@anarazel.de> wrote:\n> On 2020-01-22 23:18:12 +0100, Daniel Gustafsson wrote:\n>>> On 26 Nov 2019, at 06:44, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> Re this patch being in WoA state for some time [0]:\n>> \n>>> The regression tests of contrib/test_decoding are still failing here:\n>>> +ERROR: could not resolve cmin/cmax of catalog tuple\n>> \n>> This is the main thing left with this patch, and I've been unable so far to\n>> figure it out. I have an unscientific hunch that this patch is shaking out\n>> something (previously unexercised) in the logical decoding code supporting\n>> multi-inserts in the catalog. If anyone has ideas or insights, I would love\n>> the help.\n> \n> Perhaps we can take a look at it together while at fosdem? Feel free to\n> remind me...\n\nWhich we did, and after that I realized I had been looking at it from the wrong\nangle. Returning to it after coming home from FOSDEM I believe I have found\nthe culprit.\n\nTurns out that we in heap_multi_insert missed to call log_heap_new_cid for the\nfirst tuple inserted, we only do it in the loop body for the subsequent ones.\nWith the attached patch, the v6 of this patch posted upthead pass the tests for\nme. I have a v7 brewing which I'll submit shortly, but since this fix\nunrelated to that patchseries other than as a pre-requisite I figured I'd post\nthat separately.\n\ncheers ./daniel",
"msg_date": "Sat, 22 Feb 2020 22:22:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
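The bug Daniel describes has a simple off-by-one shape: the first tuple of each page is placed before the inner loop, and only the inner loop logged a CID. A toy model of that structure (illustrative only; the multi_insert(), log_heap_new_cid() and page layout below are stand-ins, not the actual heapam.c code):

```c
#include <assert.h>

#define NTUPLES 5

static int logged[NTUPLES];     /* which tuples had their CID "logged" */

static void log_heap_new_cid(int tup) { logged[tup] = 1; }

/* Toy model of heap_multi_insert()'s page loop: tuples are packed onto
 * pages of per_page tuples; the first tuple of each page is placed
 * before the inner loop, and only the inner loop used to log CIDs.
 * Passing fixed=1 adds the missing call for the page's first tuple. */
static void
multi_insert(int ntuples, int per_page, int fixed)
{
    int done = 0;

    while (done < ntuples)
    {
        int left = ntuples - done;
        int nthispage = left < per_page ? left : per_page;

        if (fixed)
            log_heap_new_cid(done);         /* the call the fix adds */
        for (int i = 1; i < nthispage; i++) /* starts at 1: skips the first */
            log_heap_new_cid(done + i);

        done += nthispage;
    }
}
```

With fixed=0 the model silently drops the first tuple of every page, which in PostgreSQL would only surface when logical decoding needs the CIDs of catalog tuples, consistent with the bug being unreachable while heap_multi_insert() is not used for catalogs.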
{
"msg_contents": "On Sat, Feb 22, 2020 at 10:22:27PM +0100, Daniel Gustafsson wrote:\n> Turns out that we in heap_multi_insert missed to call log_heap_new_cid for the\n> first tuple inserted, we only do it in the loop body for the subsequent ones.\n> With the attached patch, the v6 of this patch posted upthead pass the tests for\n> me. I have a v7 brewing which I'll submit shortly, but since this fix\n> unrelated to that patchseries other than as a pre-requisite I figured I'd post\n> that separately.\n\nGood catch. I would not backpatch that as it is not a live bug\nbecause heap_multi_insert() is not used for catalogs yet. With your\npatch, that would be the case though..\n--\nMichael",
"msg_date": "Sun, 23 Feb 2020 16:27:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 23 Feb 2020, at 08:27, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Sat, Feb 22, 2020 at 10:22:27PM +0100, Daniel Gustafsson wrote:\n>> Turns out that we in heap_multi_insert missed to call log_heap_new_cid for the\n>> first tuple inserted, we only do it in the loop body for the subsequent ones.\n>> With the attached patch, the v6 of this patch posted upthead pass the tests for\n>> me. I have a v7 brewing which I'll submit shortly, but since this fix\n>> unrelated to that patchseries other than as a pre-requisite I figured I'd post\n>> that separately.\n> \n> Good catch. I would not backpatch that as it is not a live bug\n> because heap_multi_insert() is not used for catalogs yet. With your\n> patch, that would be the case though..\n\nI'll leave that call up to others, the bug is indeed unreachable with the\ncurrent coding.\n\nAttached is a v7 of the catalog multi_insert patch which removes some code\nduplication which was previously commented on. There are still a few rouch\nedges but this version passes tests when paired with the heap_multi_insert cid\npatch.\n\ncheers ./daniel",
"msg_date": "Tue, 25 Feb 2020 00:19:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2020-02-23 16:27:57 +0900, Michael Paquier wrote:\n> On Sat, Feb 22, 2020 at 10:22:27PM +0100, Daniel Gustafsson wrote:\n> > Turns out that we in heap_multi_insert missed to call log_heap_new_cid for the\n> > first tuple inserted, we only do it in the loop body for the subsequent ones.\n> > With the attached patch, the v6 of this patch posted upthead pass the tests for\n> > me. I have a v7 brewing which I'll submit shortly, but since this fix\n> > unrelated to that patchseries other than as a pre-requisite I figured I'd post\n> > that separately.\n\nThanks for finding!\n\n\n> Good catch. I would not backpatch that as it is not a live bug\n> because heap_multi_insert() is not used for catalogs yet. With your\n> patch, that would be the case though..\n\nThanks for pushing.\n\nI do find it a bit odd that you essentially duplicated the existing\ncomment in a different phrasing. What's the point?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Feb 2020 15:29:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Mon, Feb 24, 2020 at 03:29:03PM -0800, Andres Freund wrote:\n> I do find it a bit odd that you essentially duplicated the existing\n> comment in a different phrasing. What's the point?\n\nA direct reference to catalog tuples is added for each code path\ncalling log_heap_new_cid(), so that was more natural to me.\n--\nMichael",
"msg_date": "Tue, 25 Feb 2020 11:05:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 25 Feb 2020, at 00:29, Andres Freund <andres@anarazel.de> wrote:\n> On 2020-02-23 16:27:57 +0900, Michael Paquier wrote:\n\n>> Good catch. I would not backpatch that as it is not a live bug\n>> because heap_multi_insert() is not used for catalogs yet. With your\n>> patch, that would be the case though..\n> \n> Thanks for pushing.\n\n+1, thanks to the both of you for helping with the patch. Attached is a v8 of\nthe patchset to make the cfbot happier, as the previous no longer applies.\n\nIn doing that I realized that there is another hunk in this patch for fixing up\nlogical decoding multi-insert support, which is independent of the patch in\nquestion here so I split it off. It's another issue which cause no harm at all\ntoday, but fails as soon as someone tries to perform multi inserts into the\ncatalog.\n\nAFAICT, the current coding assumes that the multi-insert isn't from a catalog,\nand makes little attempts at surviving in case it is. The attached patch fixes\nthat by taking a fast-path exit in case it is a catalog multi-insert, as there\nis nothing to decode.\n\ncheers ./daniel",
"msg_date": "Tue, 25 Feb 2020 22:44:40 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Tue, Feb 25, 2020 at 10:44:40PM +0100, Daniel Gustafsson wrote:\n> In doing that I realized that there is another hunk in this patch for fixing up\n> logical decoding multi-insert support, which is independent of the patch in\n> question here so I split it off. It's another issue which cause no harm at all\n> today, but fails as soon as someone tries to perform multi inserts into the\n> catalog.\n\nYep. I was wondering how we should do that for some time, but I was\nnot able to come back to it.\n\n+ /*\n+ * CONTAINS_NEW_TUPLE will always be set unless the multi_insert was\n+ * performed for a catalog. If it is a catalog, return immediately as\n+ * there is nothing to logically decode.\n+ */\n+ if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))\n+ return;\nHmm, OK. Consistency with DecodeInsert() is a good idea here, so\ncount me in regarding the way your patch handles the problem. I would\nbe fine to apply that part but, Andres, perhaps you would prefer\ntaking care of it yourself?\n--\nMichael",
"msg_date": "Fri, 28 Feb 2020 17:24:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 05:24:29PM +0900, Michael Paquier wrote:\n> + /*\n> + * CONTAINS_NEW_TUPLE will always be set unless the multi_insert was\n> + * performed for a catalog. If it is a catalog, return immediately as\n> + * there is nothing to logically decode.\n> + */\n> + if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))\n> + return;\n> Hmm, OK. Consistency with DecodeInsert() is a good idea here, so\n> count me in regarding the way your patch handles the problem. I would\n> be fine to apply that part but, Andres, perhaps you would prefer\n> taking care of it yourself?\n\nOkay, I have applied this part.\n\nOne comment removal is missed in heapam.c (my fault, then).\n\n+ * TODO this is defined in copy.c, if we want to use this to limit the number\n+ * of slots in this patch, we need to figure out where to put it.\n+ */\n+#define MAX_BUFFERED_BYTES 65535\nThe use-case is different than copy.c, so aren't you looking at a\nseparate variable to handle that?\n\n+reset_object_addresses(ObjectAddresses *addrs)\n+{\n+ if (addrs->numrefs == 0)\n+ return;\nOr just use an assert?\n\nsrc/backend/commands/tablecmds.c: /* attribute.attacl is handled by\nInsertPgAttributeTuple */\nsrc/backend/catalog/heap.c: * This is a variant of\nInsertPgAttributeTuple() which dynamically allocates\nThose two references are not true anymore as your patch removes\nInsertPgAttributeTuple() and replaces it by the plural flavor.\n\n+/*\n+ * InsertPgAttributeTuples\n+ * Construct and insert multiple tuples in pg_attribute.\n *\n- * Caller has already opened and locked pg_attribute. new_attribute is the\n- * attribute to insert. attcacheoff is always initialized to -1, attacl and\n- * attoptions are always initialized to NULL.\n- *\n- * indstate is the index state for CatalogTupleInsertWithInfo. It can be\n- * passed as NULL, in which case we'll fetch the necessary info. 
(Don't do\n- * this when inserting multiple attributes, because it's a tad more\n- * expensive.)\n+ * This is a variant of InsertPgAttributeTuple() which dynamically allocates\n+ * space for multiple tuples. Having two so similar functions is a kludge, but\n+ * for now it's a TODO to make it less terrible.\nAnd those comments largely need to remain around.\n\n- attr = TupleDescAttr(tupdesc, i);\n- /* Fill in the correct relation OID */\n- attr->attrelid = new_rel_oid;\n- /* Make sure this is OK, too */\n- attr->attstattarget = -1;\n-\n- InsertPgAttributeTuple(rel, attr, indstate);\nattstattarget handling is inconsistent here, no?\nInsertPgAttributeTuples() does not touch this part, though your code\nupdates the attribute's attrelid.\n\n- referenced.classId = RelationRelationId;\n- referenced.objectId = heapRelationId;\n- referenced.objectSubId = indexInfo->ii_IndexAttrNumbers[i];\n-\n- recordDependencyOn(&myself, &referenced, DEPENDENCY_AUTO);\n-\n+ add_object_address(OCLASS_CLASS, heapRelationId,\n+ indexInfo->ii_IndexAttrNumbers[i],\n+ refobjs);\nNot convinced that we have to publish add_object_address() in the\nheaders, because we have already add_exact_object_address which is\nable to do the same job. All code paths touched by your patch involve\nalready ObjectAddress objects, so switching to _exact_ creates less\ncode diffs.\n\n+ if (ntuples == DEPEND_TUPLE_BUF)\n+ {\n+ CatalogTuplesMultiInsertWithInfo(sdepRel, slot, ntuples, indstate);\n+ ntuples = 0;\n+ }\nMaybe this is a sufficient argument that we had better consider the\ntemplate copy as part of a separate patch, handled once the basics is\nin place. It is not really obvious if 32 is a good thing, or not.\n\n+ /* TODO is nreferenced a reasonable allocation of slots? 
*/\n+ slot = palloc(sizeof(TupleTableSlot *) * nreferenced);\nI guess so.\n\n /* construct new attribute's pg_attribute entry */\n+ oldcontext = MemoryContextSwitchTo(CurTransactionContext);\nThis needs a comment as this change is not obvious for the reader.\nAnd actually, why? \n--\nMichael",
"msg_date": "Mon, 2 Mar 2020 11:06:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 2 Mar 2020, at 03:06, Michael Paquier <michael@paquier.xyz> wrote:\n\nThanks a lot for another round of review, much appreciated!\n\n> On Fri, Feb 28, 2020 at 05:24:29PM +0900, Michael Paquier wrote:\n>> + /*\n>> + * CONTAINS_NEW_TUPLE will always be set unless the multi_insert was\n>> + * performed for a catalog. If it is a catalog, return immediately as\n>> + * there is nothing to logically decode.\n>> + */\n>> + if (!(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE))\n>> + return;\n>> Hmm, OK. Consistency with DecodeInsert() is a good idea here, so\n>> count me in regarding the way your patch handles the problem. I would\n>> be fine to apply that part but, Andres, perhaps you would prefer\n>> taking care of it yourself?\n> \n> Okay, I have applied this part.\n\nThanks!\n\n> +#define MAX_BUFFERED_BYTES 65535\n> The use-case is different than copy.c, so aren't you looking at a\n> separate variable to handle that?\n\nRenamed to indicate current usecase.\n\n> +reset_object_addresses(ObjectAddresses *addrs)\n> +{\n> + if (addrs->numrefs == 0)\n> + return;\n> Or just use an assert?\n\nI don't see why we should prohibit calling reset_object_addresses so strongly,\nit's nonsensical but is it wrong enough to Assert on? 
I can change it if you\nfeel it should be an assertion, but have left it for now.\n\n> src/backend/commands/tablecmds.c: /* attribute.attacl is handled by\n> InsertPgAttributeTuple */\n> src/backend/catalog/heap.c: * This is a variant of\n> InsertPgAttributeTuple() which dynamically allocates\n> Those two references are not true anymore as your patch removes\n> InsertPgAttributeTuple() and replaces it by the plural flavor.\n\nFixed.\n\n> +/*\n> + * InsertPgAttributeTuples\n> ...\n> And those comments largely need to remain around.\n\nFixed.\n\n> attstattarget handling is inconsistent here, no?\n\nIndeed it was, fixed.\n\n> Not convinced that we have to publish add_object_address() in the\n> headers, because we have already add_exact_object_address which is\n> able to do the same job. All code paths touched by your patch involve\n> already ObjectAddress objects, so switching to _exact_ creates less\n> code diffs.\n\nGood point, using _exact will make the changes smaller at the cost of additions\nbeing larger. I do prefer the add_object_address API since it makes code more\nreadable IMO, but there is clear value in not exposing more functions. I've\nchanged to using add_exact_object_address in the attached version.\n\n> + if (ntuples == DEPEND_TUPLE_BUF)\n> + {\n> + CatalogTuplesMultiInsertWithInfo(sdepRel, slot, ntuples, indstate);\n> + ntuples = 0;\n> + }\n> Maybe this is a sufficient argument that we had better consider the\n> template copy as part of a separate patch, handled once the basics is\n> in place. It is not really obvious if 32 is a good thing, or not.\n\nFixed by instead avoiding the copying altogether and creating virtual tuples.\nAndres commented on this upthread but I had missed fixing the code to do that,\nso better late than never.\n\n> /* construct new attribute's pg_attribute entry */\n> + oldcontext = MemoryContextSwitchTo(CurTransactionContext);\n> This needs a comment as this change is not obvious for the reader.\n> And actually, why? 
\n\nThats a good question, this was a leftover from a different version the code\nwhich I abandoned, but I missed removing the context handling. Sorry about the\nsloppyness there. Removed.\n\nThe v9 patch attached addresses, I believe, all comments made to date. I tried\nto read through earlier reviews too to ensure I hadn't missed anything so I\nhope I've covered all findings so far.\n\ncheers ./daniel",
"msg_date": "Wed, 4 Mar 2020 23:16:20 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 4 Mar 2020, at 23:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> The v9 patch attached addresses, I believe, all comments made to date. I tried\n> to read through earlier reviews too to ensure I hadn't missed anything so I\n> hope I've covered all findings so far.\n\nAttached is a rebased version which was updated to handle the changes for op\nclass parameters introduced in 911e70207703799605.\n\ncheers ./daniel",
"msg_date": "Thu, 25 Jun 2020 09:38:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Thu, Jun 25, 2020 at 09:38:23AM +0200, Daniel Gustafsson wrote:\n> Attached is a rebased version which was updated to handle the changes for op\n> class parameters introduced in 911e70207703799605.\n\nThanks for the updated version.\n\nWhile re-reading the code, I got cold feet with the changes done in\nrecordDependencyOn(). Sure, we could do as you propose, but that does\nnot have to be part of this patch I think, aimed at switching more\ncatalogs to use multi inserts, and it just feels a duplicate of\nrecordMultipleDependencies(), with the same comments copied all over\nthe place, etc.\n\nMAX_TEMPLATE_BYTES in pg_shdepend.c needs a comment to explain that\nthis is to cap the number of slots used in\ncopyTemplateDependencies() for pg_shdepend.\n\nNot much a fan of the changes in GenerateTypeDependencies(),\nparticularly the use of refobjs[8], capped to the number of items from\ntypeForm. If we add new members I think that this would break\neasily without us actually noticing that it broke. The use of\nObjectAddressSet() is a good idea though, but it does not feel\nconsistent if you don't the same coding rule to typbasetype,\ntypcollation or typelem. I am also thinking to split this part of the\ncleanup in a first independent patch.\n\npg_constraint.c, pg_operator.c, extension.c and pg_aggregate.c were\nusing ObjectAddressSubSet() with subset set to 0 when registering a\ndependency. It is simpler to just use ObjectAddressSet(). As this\nupdates the way dependencies are tracked and recorded, that's better\nif kept in the main patch.\n\n+ /* TODO is nreferenced a reasonable allocation of slots? */\n+ slot = palloc(sizeof(TupleTableSlot *) * nreferenced);\nIt seems to me that we could just apply the same rule as for\npg_attribute and pg_shdepend, no?\n\nCatalogTupleInsertWithInfo() becomes mostly unused with this patch,\nits only caller being now LOs. 
Just noticing, I'd rather not remove\nit for now.\n\nThe attached includes a bunch of modifications I have done while going\nthrough the patch (I indend to split and apply the changes of\npg_type.c separately first, just lacked of time now to send a proper\nsplit), and there is the number of slots for pg_depend insertions that\nstill needs to be addressed. On top of that pgindent has not been run\nyet. That's all I have for today, overall the patch is taking a\ncommittable shape :)\n--\nMichael",
"msg_date": "Fri, 26 Jun 2020 17:11:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 26 Jun 2020, at 10:11, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jun 25, 2020 at 09:38:23AM +0200, Daniel Gustafsson wrote:\n>> Attached is a rebased version which was updated to handle the changes for op\n>> class parameters introduced in 911e70207703799605.\n> \n> Thanks for the updated version.\n\nThanks for reviewing!\n\n> While re-reading the code, I got cold feet with the changes done in\n> recordDependencyOn(). Sure, we could do as you propose, but that does\n> not have to be part of this patch I think, aimed at switching more\n> catalogs to use multi inserts, and it just feels a duplicate of\n> recordMultipleDependencies(), with the same comments copied all over\n> the place, etc.\n\nFair enough, I can take that to another patch later in the cycle.\n\n> MAX_TEMPLATE_BYTES in pg_shdepend.c needs a comment to explain that\n> this is to cap the number of slots used in\n> copyTemplateDependencies() for pg_shdepend.\n\nAgreed, +1 on the proposed wording.\n\n> Not much a fan of the changes in GenerateTypeDependencies(),\n> particularly the use of refobjs[8], capped to the number of items from\n> typeForm. If we add new members I think that this would break\n> easily without us actually noticing that it broke. \n\nYeah, thats not good, it's better to leave that out.\n\n> The use of\n> ObjectAddressSet() is a good idea though, but it does not feel\n> consistent if you don't the same coding rule to typbasetype,\n> typcollation or typelem. I am also thinking to split this part of the\n> cleanup in a first independent patch.\n\n+1 on splitting into a separate patch.\n\n> pg_constraint.c, pg_operator.c, extension.c and pg_aggregate.c were\n> using ObjectAddressSubSet() with subset set to 0 when registering a\n> dependency. 
It is simpler to just use ObjectAddressSet().\n\nFair enough, either way, I don't have strong opinions.\n\n> As this\n> updates the way dependencies are tracked and recorded, that's better\n> if kept in the main patch.\n\nAgreed.\n\n> + /* TODO is nreferenced a reasonable allocation of slots? */\n> + slot = palloc(sizeof(TupleTableSlot *) * nreferenced);\n> It seems to me that we could just apply the same rule as for\n> pg_attribute and pg_shdepend, no?\n\nI think so, I see no reason not to.\n\n> CatalogTupleInsertWithInfo() becomes mostly unused with this patch,\n> its only caller being now LOs. Just noticing, I'd rather not remove\n> it for now.\n\nAgreed, let's not bite off that too here, there's enough to chew on.\n\n> The attached includes a bunch of modifications I have done while going\n> through the patch (I indend to split and apply the changes of\n> pg_type.c separately first, just lacked of time now to send a proper\n> split), and there is the number of slots for pg_depend insertions that\n> still needs to be addressed. On top of that pgindent has not been run\n> yet. That's all I have for today, overall the patch is taking a\n> committable shape :)\n\nI like it, thanks for hacking on it. I will take another look at it later\ntoday when back at my laptop.\n\ncheers ./daniel\n\n\n\n",
"msg_date": "Fri, 26 Jun 2020 14:26:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Fri, Jun 26, 2020 at 02:26:50PM +0200, Daniel Gustafsson wrote:\n> On 26 Jun 2020, at 10:11, Michael Paquier <michael@paquier.xyz> wrote:\n>> + /* TODO is nreferenced a reasonable allocation of slots? */\n>> + slot = palloc(sizeof(TupleTableSlot *) * nreferenced);\n>> It seems to me that we could just apply the same rule as for\n>> pg_attribute and pg_shdepend, no?\n> \n> I think so, I see no reason not to.\n\nI was actually looking a patch to potentially support REINDEX for\npartitioned table, and the CONCURRENTLY case may need this patch,\nstill if a lot of dependencies are switched at once it may be a\nproblem, so it is better to cap it. One thing though is that we may\nfinish by allocating more slots than what's necessary if some\ndependencies are pinned, but using multi-inserts would be a gain\nanyway, and the patch does not insert in more slots than needed.\n\n> I like it, thanks for hacking on it. I will take another look at it later\n> today when back at my laptop.\n\nThanks. I have been able to apply the independent part of pg_type.c\nas of 68de144.\n\nAttached is a rebased set, with more splitting work done after a new\nround of review. 0001 is more refactoring to use more\nObjectAddress[Sub]Set() where we can, leading to some more cleanup:\n 5 files changed, 43 insertions(+), 120 deletions(-)\n\nIn this round, I got confused with the way ObjectAddress items are\nassigned, assuming that we have to use the same dependency type for a\nbunch of dependencies to attach. Using add_exact_object_address() is\nfine for this purpose, but this also makes me wonder if we should try\nto think harder about this interface so as we would be able to insert\na bunch of dependency tuples with multiple types of dependencies\nhandled. So this has made me remove reset_object_addresses() from the\npatch series, as it was used when dependency types were mixed up. We\ncould also discuss that separately, but that's not strongly mandatory\nhere. 
\n\nThere are however cases where it is straight-forward to gather\nand insert multiple records, like in InsertExtensionTuple() (as does\nalready tsearchcmds.c), which is what 0002 does. An opposite example\nis StorePartitionKey(), where there is a mix of normal and internal\ndependencies, so I have removed it for now.\n\n0003 is the main patch, where I have capped the number of slots used\nfor pg_depend similarly what is done for pg_shdepend and\npg_attribute to flush tuples in batches of 64kB.\nExecDropSingleTupleTableSlot() was also called for an incorrect number\nof slots when it came to pg_shdepend. I was thinking if it could be\npossible to do more consolidation between the three places where we\ncalculate the number of slots to use, but that would also mean to have\nmore tuple slot dependency moving around, which is not great. Anyway,\nthis leaves in patch 0003 only the multi-INSERT logic, without the\npieces that manipulated the dependency recording and insertions (we\nstill have three ObjectAddress[Sub]Set calls in heap.c but the same\narea of the code is reworked for attribute insertions).\n\n0001 and 0002 are committable and independent pieces, while 0003 still\nneeds more attention. One thing I also wanted to do with 0003 is\nmeasure the difference in WAL produced (say pg_shdepend when using the\nregression database as template) to have an idea of the gain.\n--\nMichael",
"msg_date": "Mon, 29 Jun 2020 15:57:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 29 Jun 2020, at 08:57, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jun 26, 2020 at 02:26:50PM +0200, Daniel Gustafsson wrote:\n>> On 26 Jun 2020, at 10:11, Michael Paquier <michael@paquier.xyz> wrote:\n>>> + /* TODO is nreferenced a reasonable allocation of slots? */\n>>> + slot = palloc(sizeof(TupleTableSlot *) * nreferenced);\n>>> It seems to me that we could just apply the same rule as for\n>>> pg_attribute and pg_shdepend, no?\n>> \n>> I think so, I see no reason not to.\n> \n> I was actually looking a patch to potentially support REINDEX for\n> partitioned table, and the CONCURRENTLY case may need this patch,\n> still if a lot of dependencies are switched at once it may be a\n> problem, so it is better to cap it.\n\nAgreed.\n\n> One thing though is that we may finish by allocating more slots than what's\n> necessary if some dependencies are pinned, but using multi-inserts would be a\n> gain anyway, and the patch does not insert in more slots than needed.\n\nI was playing around with a few approaches around this when I initially wrote\nthis, but I was unable to find any way to calculate the correct number of slots\nwhich wasn't significantly more expensive than the extra allocations.\n\n> Attached is a rebased set, with more splitting work done after a new\n> round of review. 0001 is more refactoring to use more\n> ObjectAddress[Sub]Set() where we can, leading to some more cleanup:\n> 5 files changed, 43 insertions(+), 120 deletions(-)\n\nThis patch seems quite straightforward, and in the case of the loops in for\nexample CreateConstraintEntry() makes the code a lot more readable. +1 for\napplying 0001.\n\n> In this round, I got confused with the way ObjectAddress items are\n> assigned, assuming that we have to use the same dependency type for a\n> bunch of dependencies to attach. 
Using add_exact_object_address() is\n> fine for this purpose, but this also makes me wonder if we should try\n> to think harder about this interface so as we would be able to insert\n> a bunch of dependency tuples with multiple types of dependencies\n> handled. So this has made me remove reset_object_addresses() from the\n> patch series, as it was used when dependency types were mixed up. We\n> could also discuss that separately, but that's not strongly mandatory\n> here. \n\nOk, once the final state of this patchset is known I can take a stab at\nrecording multiple dependencies with different behaviors as a separate\npatchset.\n\n> There are however cases where it is straight-forward to gather and insert\n> multiple records, like in InsertExtensionTuple() (as does already\n> tsearchcmds.c), which is what 0002 does.\n\n\n+1 on 0002 as well.\n\n> An opposite example is StorePartitionKey(), where there is a mix of normal and\n> internal dependencies, so I have removed it for now.\n\n\nI don't think it makes the code any worse off to handle the NORMAL dependencies\nseparate here, but MVV.\n\n> ExecDropSingleTupleTableSlot() was also called for an incorrect number\n> of slots when it came to pg_shdepend.\n\nHmm, don't you mean in heap.c:InsertPgAttributeTuples()?\n\n> I was thinking if it could be possible to do more consolidation between the\n> three places where we calculate the number of slots to use, but that would also\n> mean to have more tuple slot dependency moving around, which is not great.\n\n\nIf we do, we need to keep the cap consistent across all callers, else we'll end\nup with an API without an abstraction to make it worth more than saving a few\nlines of quite simple to read code. Currently this is the case, but that might\nnot always hold, so not sure it if it's worth it.\n\n0001 + 0002 + 0003 pass tests on my machine and nothing sticks out.\n\ncheers ./daniel\n\n\n",
"msg_date": "Tue, 30 Jun 2020 14:25:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Tue, Jun 30, 2020 at 02:25:07PM +0200, Daniel Gustafsson wrote:\n> Ok, once the final state of this patchset is known I can take a stab at\n> recording multiple dependencies with different behaviors as a separate\n> patchset.\n\nThanks. I have applied 0001 and 0002 today, in a reversed order\nactually.\n\n> If we do, we need to keep the cap consistent across all callers, else we'll end\n> up with an API without an abstraction to make it worth more than saving a few\n> lines of quite simple to read code. Currently this is the case, but that might\n> not always hold, so not sure it if it's worth it.\n\nI am not sure either, still it looks worth thinking about it.\nAttached is a rebased version of the last patch. What this currently\nholds is just the switch to heap_multi_insert() for the three catalogs\npg_attribute, pg_depend and pg_shdepend. One point that looks worth\ndebating about is to how much to cap the data inserted at once. This\nuses 64kB for all three, with a number of slots chosen based on the\nsize of each record, similarly to what we do for COPY.\n--\nMichael",
"msg_date": "Wed, 1 Jul 2020 18:24:18 +0900",
"msg_from": "michael@paquier.xyz",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Wed, Jul 01, 2020 at 06:24:18PM +0900, Michael Paquier wrote:\n> I am not sure either, still it looks worth thinking about it.\n> Attached is a rebased version of the last patch. What this currently\n> holds is just the switch to heap_multi_insert() for the three catalogs\n> pg_attribute, pg_depend and pg_shdepend. One point that looks worth\n> debating is how much to cap the data inserted at once. This\n> uses 64kB for all three, with a number of slots chosen based on the\n> size of each record, similarly to what we do for COPY.\n\nI got an extra round of review done for this patch. One spot was\nmissed in heap_multi_insert() for a comment listing the catalogs not using\nmulti inserts. After some consideration, I think that using 64kB as a\nbase number to calculate the number of slots should be fine, similarly\nto COPY.\n\nWhile on it, I have done some measurements to see the difference in\nWAL produced and get an idea of the gain. For example, this function\nwould create one table with a wanted number of attributes:\nCREATE OR REPLACE FUNCTION create_cols(tabname text, num_cols int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' int';\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\n\nOn HEAD, with a table that has 1300 attributes, this leads to 563kB of \nWAL produced. With the patch, we get down to 505kB. That's an\nextreme case of course, but that's a nice gain.\n\nA second test, after creating a database from a template that has\nroughly 10k entries in pg_shdepend (10k empty tables actually), showed\na reduction from 2158kB to 1762kB in WAL.\n\nFinally comes the third catalog, pg_depend, and there is one thing\nthat makes me itchy about this part. 
We do a lot of useless work\nfor the allocation and destruction of the slots when there are pinned\ndependencies, and there can be a lot of them. Just by running the\nmain regression tests, it is easy to see that in 0003 we still do a\nlot of calls of recordMultipleDependencies() for one single\ndependency, and that most of these are actually pinned. So we finish\nby doing a lot of slot manipulation to insert nothing at the end,\ncontrary to the counterparts with pg_shdepend and pg_attribute. In\nshort, I think that for now it would be fine to commit a patch that\ndoes the multi-INSERT optimization for pg_attribute and pg_shdepend,\nbut that we need more careful work for pg_depend. For example we\ncould go through all the dependencies first and recalculate the number\nof slots to use depending on what is pinned or not, but this would\nmake sense actually when more dependencies are inserted at once in\nmore code paths, mostly for ALTER TABLE. So this needs more\nconsideration IMO.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 29 Jul 2020 17:32:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
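The slot-sizing heuristic Michael describes above (a 64kB budget divided by the per-record size, as COPY does) can be sketched roughly like this; the constant and function names are illustrative only, not the actual identifiers from the patch:

```python
# Rough sketch (not PostgreSQL source) of the slot-count heuristic the
# thread discusses: cap the batched catalog inserts at 64kB and derive
# the number of tuple slots from the per-record size, as COPY does.

MAX_CATALOG_MULTI_INSERT_BYTES = 64 * 1024  # the 64kB cap mentioned above

def slots_for_catalog(tuple_size: int, ntuples: int) -> int:
    """Number of slots to allocate for a multi-insert batch."""
    by_size = max(1, MAX_CATALOG_MULTI_INSERT_BYTES // tuple_size)
    # Never allocate more slots than there are tuples to insert.
    return min(by_size, ntuples)

# e.g. with a ~140-byte record the 64kB budget allows a few hundred
# slots, but a 3-column table still only needs 3.
print(slots_for_catalog(140, 1300))  # 468
print(slots_for_catalog(140, 3))     # 3
```

The point of the cap is that a pathological table (say, 1300 attributes) batches its catalog rows in bounded memory instead of allocating one slot per row.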
{
"msg_contents": "> On 29 Jul 2020, at 10:32, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jul 01, 2020 at 06:24:18PM +0900, Michael Paquier wrote:\n>> I am not sure either, still it looks worth thinking about it.\n>> Attached is a rebased version of the last patch. What this currently\n>> holds is just the switch to heap_multi_insert() for the three catalogs\n>> pg_attribute, pg_depend and pg_shdepend. One point that looks worth\n>> debating about is to how much to cap the data inserted at once. This\n>> uses 64kB for all three, with a number of slots chosen based on the\n>> size of each record, similarly to what we do for COPY.\n> \n> I got an extra round of review done for this patch. \n\nThanks!\n\n> While on it, I have done some measurements to see the difference in\n> WAL produced and get an idea of the gain.\n\n> On HEAD, with a table that has 1300 attributes, this leads to 563kB of \n> WAL produced. With the patch, we get down to 505kB. That's an\n> extreme case of course, but that's nice a nice gain.\n> \n> A second test, after creating a database from a template that has\n> roughly 10k entries in pg_shdepend (10k empty tables actually), showed\n> a reduction from 2158kB to 1762kB in WAL.\n\nExtreme cases for sure, but more importantly, there should be no cases when\nthis would cause an increase wrt the status quo.\n\n> Finally comes the third catalog, pg_depend, and there is one thing\n> that makes me itching about this part. We do a lot of useless work\n> for the allocation and destruction of the slots when there are pinned\n> dependencies, and there can be a lot of them. Just by running the\n> main regression tests, it is easy to see that in 0003 we still do a\n> lot of calls of recordMultipleDependencies() for one single\n> dependency, and that most of these are actually pinned. So we finish\n> by doing a lot of slot manipulation to insert nothing at the end,\n> contrary to the counterparts with pg_shdepend and pg_attribute. 
\n\nMaybe it'd be worth pre-computing by a first pass which tracks pinned objects\nin a bitmap; with a second pass which then knows how many and which to insert\ninto slots?\n\n> In\n> short, I think that for now it would be fine to commit a patch that\n> does the multi-INSERT optimization for pg_attribute and pg_shdepend,\n> but that we need more careful work for pg_depend.\n\nFair enough, let's break out pg_depend and I'll have another go at that.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 29 Jul 2020 23:34:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Wed, Jul 29, 2020 at 11:34:07PM +0200, Daniel Gustafsson wrote:\n> Extreme cases for sure, but more importantly, there should be no cases when\n> this would cause an increase wrt the status quo.\n\nYep.\n\n> Maybe it'd be worth pre-computing by a first pass which tracks pinned objects\n> in a bitmap; with a second pass which then knows how many and which to insert\n> into slots?\n\nOr it could be possible to just rebuild a new list of dependencies\nbefore insertion into the catalog. No objections to a bitmap, any\napproach would be fine here as long as there is a first pass on the\nitem list.\n\n> Fair enough, let's break out pg_depend and I'll have another go at that.\n\nThanks. Attached is a polished version of the patch that I intend to\ncommit for pg_attribute and pg_shdepend. Let's look again at\npg_depend later, as there are also links with the handling of\ndependencies for ALTER TABLE mainly.\n--\nMichael",
"msg_date": "Thu, 30 Jul 2020 10:28:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
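The two-pass idea being agreed on here can be sketched as follows; `plan_dependency_insert`, `is_pinned`, and the record size are all hypothetical names and values, not code from the patch:

```python
# Illustrative sketch of the two-pass approach for pg_depend discussed
# above: a first pass filters out pinned dependencies, and only the
# remainder determines how many slots to set up, so an all-pinned batch
# allocates nothing.

def plan_dependency_insert(deps, is_pinned, max_bytes=64 * 1024, rec_size=48):
    to_insert = [d for d in deps if not is_pinned(d)]  # first pass
    if not to_insert:
        return [], 0  # nothing to insert: skip slot setup entirely
    # the second pass would fill and flush this many slots
    nslots = min(len(to_insert), max(1, max_bytes // rec_size))
    return to_insert, nslots

deps = ["pinned_type", "user_table", "pinned_proc"]
print(plan_dependency_insert(deps, lambda d: d.startswith("pinned")))
```

This captures the complaint in the thread: without the first pass, a call recording one pinned dependency still pays for slot allocation and destruction only to insert nothing.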
{
"msg_contents": "> On 30 Jul 2020, at 03:28, Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jul 29, 2020 at 11:34:07PM +0200, Daniel Gustafsson wrote:\n\n>> Fair enough, let's break out pg_depend and I'll have another go at that.\n> \n> Thanks. Attached is a polished version of the patch that I intend to\n> commit for pg_attribute and pg_shdepend. Let's look again at\n> pg_depend later, as there are also links with the handling of\n> dependencies for ALTER TABLE mainly.\n\nLooks good, thanks. Let's close the CF entry with this and open a new one for the\npg_depend part when that's done.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 30 Jul 2020 23:23:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "On Thu, Jul 30, 2020 at 11:23:38PM +0200, Daniel Gustafsson wrote:\n> Looks good, thanks. Let's close the CF entry with this and open a new for the\n> pg_depend part when that's done.\n\nI have applied the patch, thanks.\n\nAnd actually, I have found just afterwards that CREATE DATABASE gets\nseverely impacted by the number of slots initialized when copying the\ntemplate dependencies if there are few of them. The fix is as simple\nas delaying the initialization of the slots until we know they will be\nused. In my initial tests, I was using fsync = off, so I missed that.\nSorry about that. The attached fixes this regression by delaying the\nslot initialization until we know that it will be used. This does not\nmatter for pg_attribute as we know in advance the number of attributes\nto insert.\n--\nMichael",
"msg_date": "Fri, 31 Jul 2020 11:41:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "> On 31 Jul 2020, at 04:42, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Jul 30, 2020 at 11:23:38PM +0200, Daniel Gustafsson wrote:\n>> Looks good, thanks. Let's close the CF entry with this and open a new for the\n>> pg_depend part when that's done.\n> \n> I have applied the patch, thanks.\n> \n> And actually, I have found just after that CREATE DATABASE gets\n> severely impacted by the number of slots initialized when copying the\n> template dependencies if there are few of them. The fix is as simple\n> as delaying the initialization of the slots once we know they will be\n> used. In my initial tests, I was using fsync = off, so I missed that.\n> Sorry about that. The attached fixes this regression by delaying the\n> slot initialization until we know that it will be used. This does not\n> matter for pg_attribute as we know in advance the number of attributes\n> to insert.\n\nRight, that makes sense. Sorry for missing that, and thanks for fixing.\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 31 Jul 2020 09:44:27 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-30 23:23:38 +0200, Daniel Gustafsson wrote:\n> > On 30 Jul 2020, at 03:28, Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Jul 29, 2020 at 11:34:07PM +0200, Daniel Gustafsson wrote:\n> \n> >> Fair enough, let's break out pg_depend and I'll have another go at that.\n> > \n> > Thanks. Attached is a polished version of the patch that I intend to\n> > commit for pg_attribute and pg_shdepend. Let's look again at\n> > pg_depend later, as there are also links with the handling of\n> > dependencies for ALTER TABLE mainly.\n> \n> Looks good, thanks.\n\nNice work!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 31 Jul 2020 08:51:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Ought to use heap_multi_insert() for pg_attribute/depend\n insertions?"
}
] |
[
{
"msg_contents": "One of the remarkably common user errors with pg_restore is users\nleaving off the -d option. (We get cases of it regularly on the IRC\nchannel, including one just now which prompted me to finally propose\nthis)\n\nI propose we add a new option: --convert-to-text or some such name, and\nthen make pg_restore throw a usage error if neither -d nor the new\noption is given.\n\nOpinions?\n\n(Yes, it will break the scripts of anyone who is currently scripting\npg_restore to output SQL text. How many people do that?)\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Wed, 13 Feb 2019 22:56:04 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Em qua, 13 de fev de 2019 às 19:56, Andrew Gierth\n<andrew@tao11.riddles.org.uk> escreveu:\n> One of the remarkably common user errors with pg_restore is users\n> leaving off the -d option. (We get cases of it regularly on the IRC\n> channel, including one just now which prompted me to finally propose\n> this)\n>\nI'm not sure it is a common error. If you want to restore schema\nand/or data it is natural to specify the database name (or\nat least PGDATABASE).\n\n> I propose we add a new option: --convert-to-text or some such name, and\n> then make pg_restore throw a usage error if neither -d nor the new\n> option is given.\n>\nHowever, I agree that pg_restore to stdout if -d wasn't specified is\nnot a good default. The current behavior is the same as \"-f -\"\n(however, pg_restore doesn't allow - meaning stdout). Shouldn't it\nerror out if neither -d nor -f was specified? If we go down this road,\nthe -f option should allow - (stdout) as parameter.\n\n> (Yes, it will break the scripts of anyone who is currently scripting\n> pg_restore to output SQL text. How many people do that?)\n>\nI use pg_restore to stdout a lot but it wouldn't bother me to specify an\noption to get it (such as pg_restore -Fc -f - /tmp/foo.dmp).\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 13 Feb 2019 21:31:17 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> Em qua, 13 de fev de 2019 às 19:56, Andrew Gierth\n> <andrew@tao11.riddles.org.uk> escreveu:\n>> I propose we add a new option: --convert-to-text or some such name, and\n>> then make pg_restore throw a usage error if neither -d nor the new\n>> option is given.\n\n> However, I agree that pg_restore to stdout if -d wasn't specified is\n> not a good default. The current behavior is the same as \"-f -\"\n> (however, pg_restore doesn't allow - meaning stdout). Isn't it the\n> case to error out if -d or -f wasn't specified? If we go to this road,\n> -f option should allow - (stdout) as parameter.\n\nI won't take a position on whether it's okay to break backwards\ncompatibility here (although I can think of some people who are\nlikely to complain ;-)). But if we decide to do it, I like\nEuler's suggestion for how to do it. A separate --convert-to-text\nswitch seems kind of ugly, plus if that's the approach it'd be hard\nto write scripts that work with different pg_restore versions.\n\nThe idea of cross-version-compatible scripts suggests that we\nmight consider back-patching the addition of \"-f -\", though not\nthe change that says you must write it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 13 Feb 2019 20:11:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "On 14/02/2019 01.31, Euler Taveira wrote:\n> Em qua, 13 de fev de 2019 às 19:56, Andrew Gierth\n> <andrew@tao11.riddles.org.uk> escreveu:\n>> I propose we add a new option: --convert-to-text or some such name, and\n>> then make pg_restore throw a usage error if neither -d nor the new\n>> option is given.\n>>\n> However, I agree that pg_restore to stdout if -d wasn't specified is\n> not a good default. The current behavior is the same as \"-f -\"\n> (however, pg_restore doesn't allow - meaning stdout). Isn't it the\n> case to error out if -d or -f wasn't specified? If we go to this road,\n> -f option should allow - (stdout) as parameter.\n\nAgreed, \"-f -\" would be acceptable. I use pg_restore to stdout a lot, \nbut almost always manually and would hate to have to remember and type \n\"--convert-to-text\".\n\nAndreas\n\n",
"msg_date": "Fri, 15 Feb 2019 02:41:13 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Em qui, 14 de fev de 2019 às 22:41, Andreas Karlsson\n<andreas@proxel.se> escreveu:\n> Agreed, \"-f -\" would be acceptable. I use pg_restore to stdout a lot,\n> but almost always manually and would have to have to remember and type\n> \"--convert-to-text\".\n>\nSince no one has stepped up, I took a stab at it. It will prohibit\nstandard output unless '-f -' is specified. The -l option also has the\nsame restriction.\n\nIt breaks backward compatibility and, as Tom suggested, a variant of\nthis patch (without docs) should be applied to back branches.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Mon, 18 Feb 2019 19:17:45 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
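The destination rule this thread converges on can be sketched as follows; this is a rough Python model of the intended behavior, not pg_restore's actual C code, and `choose_destination` is a made-up name:

```python
# Rough model of the rule under discussion: error out unless a
# destination is given via -d, -f (where "-" means stdout), or -l.

def choose_destination(dbname=None, filename=None, list_toc=False):
    if dbname:
        return ("database", dbname)
    if filename:
        return ("stdout", None) if filename == "-" else ("file", filename)
    if list_toc:
        return ("stdout", None)  # -l keeps printing the TOC to stdout
    raise SystemExit("one of -d/--dbname and -f/--file must be specified")

print(choose_destination(filename="-"))      # ('stdout', None)
print(choose_destination(dbname="postgres"))
```

The key change from the old behavior is the final branch: with no destination at all, the tool errors out instead of silently dumping SQL text to stdout.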
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> Since no one has stepped up, I took a stab at it. It will prohibit\n> standard output unless '-f -' be specified. -l option also has the\n> same restriction.\n\nHm, don't really see the need to break -l usage here.\n\nPls add to next CF, if you didn't already.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 17:21:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Em seg, 18 de fev de 2019 às 19:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> Euler Taveira <euler@timbira.com.br> writes:\n> > Since no one has stepped up, I took a stab at it. It will prohibit\n> > standard output unless '-f -' be specified. -l option also has the\n> > same restriction.\n>\n> Hm, don't really see the need to break -l usage here.\n>\nAfter thinking about it, revert it.\n\n> Pls add to next CF, if you didn't already.\n>\nDone.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Tue, 19 Feb 2019 17:19:46 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Hi,\r\n\r\nOn Tue, Feb 19, 2019 at 8:20 PM, Euler Taveira wrote:\r\n> Em seg, 18 de fev de 2019 às 19:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\r\n> >\r\n> > Euler Taveira <euler@timbira.com.br> writes:\r\n> > > Since no one has stepped up, I took a stab at it. It will prohibit\r\n> > > standard output unless '-f -' be specified. -l option also has the\r\n> > > same restriction.\r\n> >\r\n> > Hm, don't really see the need to break -l usage here.\r\n> >\r\n> After thinking about it, revert it.\r\n> \r\n> > Pls add to next CF, if you didn't already.\r\n> >\r\n> Done.\r\n\r\nI saw the patch.\r\n\r\nIs there no need to rewrite the Description in the Doc to state that we should specify either the -d or -f option?\r\n(and also it might be better to note that if the -l option is given, neither -d nor -f is needed.)\r\n\r\n\r\nI also have a simple question about the code.\r\n\r\nI thought the below if-else condition\r\n\r\n+\tif (filename && strcmp(filename, \"-\") == 0)\r\n+\t\tfn = fileno(stdout);\r\n+\telse if (filename)\r\n \t\tfn = -1;\r\n else if (AH->FH)\r\n\r\ncan also be written in the form below.\r\n\r\n if (filename)\r\n {\r\n if(strcmp(filename, \"-\") == 0)\r\n fn = fileno(stdout);\r\n else\r\n fn = -1;\r\n }\r\n else if (AH->FH)\r\n\r\nI think the former one looks pretty, but which one is preferred?\r\n\r\n--\r\nYoshikazu Imai\r\n\r\n",
"msg_date": "Thu, 28 Feb 2019 02:48:22 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "\n\nOn Thu, 28 Feb 2019, Imai, Yoshikazu wrote:\n\n> Is there no need to rewrite the Description in the Doc to state we should specify either -d or -f option?\n> (and also it might be better to write if -l option is given, neither -d nor -f option isn't necessarily needed.)\n\nSince the default part of the text was removed, looks ok to me.\n\n\n> I also have the simple question in the code.\n>\n> I thought the below if-else condition\n>\n> +\tif (filename && strcmp(filename, \"-\") == 0)\n> +\t\tfn = fileno(stdout);\n> +\telse if (filename)\n> \t\tfn = -1;\n> else if (AH->FH)\n>\n> can also be written by the form below.\n>\n> if (filename)\n> {\n> if(strcmp(filename, \"-\") == 0)\n> fn = fileno(stdout);\n> else\n> fn = -1;\n> }\n> else if (AH->FH)\n>\n> I think the former one looks like pretty, but which one is preffered?\n\nAside from the above question, I tested the code against an up-to-date \nrepository. It compiled, worked as expected and passed all tests.\n\n--\nJose Arthur Benetasso Villanova\n\n",
"msg_date": "Wed, 6 Mar 2019 07:58:14 -0300 (-03)",
"msg_from": "\"=?ISO-8859-15?Q?Jos=E9_Arthur_Benetasso_Villanova?=\"\n <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Hi Jose,\r\n\r\nSorry for my late reply.\r\n\r\nOn Wed, Mar 6, 2019 at 10:58 AM, José Arthur Benetasso Villanova wrote:\r\n> On Thu, 28 Feb 2019, Imai, Yoshikazu wrote:\r\n> \r\n> > Is there no need to rewrite the Description in the Doc to state we should\r\n> specify either -d or -f option?\r\n> > (and also it might be better to write if -l option is given, neither\r\n> > -d nor -f option isn't necessarily needed.)\r\n> \r\n> Since the default part of text was removed, looks ok to me.\r\n\r\nAh, yeah, after looking again, I also think it's ok.\r\n\r\n> > I also have the simple question in the code.\r\n> >\r\n> > I thought the below if-else condition\r\n> >\r\n> > +\tif (filename && strcmp(filename, \"-\") == 0)\r\n> > +\t\tfn = fileno(stdout);\r\n> > +\telse if (filename)\r\n> > \t\tfn = -1;\r\n> > else if (AH->FH)\r\n> >\r\n> > can also be written by the form below.\r\n> >\r\n> > if (filename)\r\n> > {\r\n> > if(strcmp(filename, \"-\") == 0)\r\n> > fn = fileno(stdout);\r\n> > else\r\n> > fn = -1;\r\n> > }\r\n> > else if (AH->FH)\r\n> >\r\n> > I think the former one looks like pretty, but which one is preffered?\r\n> \r\n> Aside the above question, I tested the code against a up-to-date\r\n> repository. It compiled, worked as expected and passed all tests.\r\n\r\nIt can still be applied to HEAD according to cfbot.\r\n\r\n\r\nUpon committing this, we should keep in mind that this patch breaks backwards compatibility, but I haven't seen any complaints so far. If there are no objections, I will set this patch to ready for committer.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Fri, 15 Mar 2019 06:20:10 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "Em qua, 27 de fev de 2019 às 23:48, Imai, Yoshikazu\n<imai.yoshikazu@jp.fujitsu.com> escreveu:\n>\n> Is there no need to rewrite the Description in the Doc to state we should specify either -d or -f option?\n> (and also it might be better to write if -l option is given, neither -d nor -f option isn't necessarily needed.)\n>\nI don't think so. The description is already there (see \"pg_restore\ncan operate in two modes...\"). I left -l as is which means that\n'pg_restore -l foo.dump' dumps to standard output and 'pg_restore -f -\n-l foo.dump' has the same behavior).\n\n> I think the former one looks like pretty, but which one is preffered?\n>\nI don't have a style preference but decided to change to your\nsuggestion. New version attached.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Sat, 16 Mar 2019 19:54:30 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "On Sat, 16 Mar 2019, Euler Taveira wrote:\n\n>> I think the former one looks like pretty, but which one is preffered?\n>>\n> I don't have a style preference but decided to change to your\n> suggestion. New version attached.\n>\n\nAgain, the patch compiles and works as expected.\n\n\n--\nJosé Arthur Benetasso Villanova",
"msg_date": "Sun, 17 Mar 2019 07:35:18 -0300 (-03)",
"msg_from": "\"=?ISO-8859-15?Q?Jos=E9_Arthur_Benetasso_Villanova?=\"\n <jose.arthur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 10:55 PM, Euler Taveira wrote:\r\n> > Is there no need to rewrite the Description in the Doc to state we should\r\n> specify either -d or -f option?\r\n> > (and also it might be better to write if -l option is given, neither\r\n> > -d nor -f option isn't necessarily needed.)\r\n> >\r\n> I don't think so. The description is already there (see \"pg_restore can\r\n> operate in two modes...\"). I left -l as is which means that 'pg_restore\r\n> -l foo.dump' dumps to standard output and 'pg_restore -f - -l foo.dump'\r\n> has the same behavior).\r\n\r\nAh, I understand it.\r\n\r\n> > I think the former one looks like pretty, but which one is preffered?\r\n> >\r\n> I don't have a style preference but decided to change to your suggestion.\r\n> New version attached.\r\n\r\nI checked it. It may be a trivial matter, so thanks for taking it into consideration.\r\n\r\n--\r\nYoshikazu Imai\r\n",
"msg_date": "Mon, 18 Mar 2019 00:21:50 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 6:20 AM, Imai, Yoshikazu wrote:\r\n> Upon committing this, we have to care this patch break backwards\r\n> compatibility, but I haven't seen any complaints so far. If there are\r\n> no objections, I will set this patch to ready for committer.\r\n\r\nJose had set this to ready for committer. Thanks.\r\n\r\n--\r\nYoshikazu Imai\r\n\r\n",
"msg_date": "Mon, 18 Mar 2019 00:25:45 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "I just pushed it with trivial changes:\n\n* rebased for cc8d41511721\n\n* changed wording in the error message\n\n* added a new test for the condition in t/001_basic.pl\n\n* Added the \"-\" in the --help line of -f.\n\n\nAndrew G. never confirmed that this change is sufficient to appease\nusers being confused by the previous behavior. I hope it is ...\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 17:03:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "\tJosé Arthur Benetasso Villanova wrote:\n\n> On Thu, 28 Feb 2019, Imai, Yoshikazu wrote:\n> \n> > Is there no need to rewrite the Description in the Doc to state we should specify either -d or -f option?\n> > (and also it might be better to write if -l option is given, neither -d nor -f option isn't necessarily needed.)\n> \n> Since the default part of text was removed, looks ok to me.\n\n[4 months later]\n\nWhile testing pg_restore on v12, I'm stumbling on this too.\npg_restore without argument fails like that:\n\n$ pg_restore\npg_restore: error: one of -d/--dbname and -f/--file must be specified\n\nBut that's not right since \"pg_restore -l dumpfile\" would work.\nI don't understand why the discussion seems to conclude that it's okay.\n\nAlso, in the doc at https://www.postgresql.org/docs/devel/app-pgrestore.html\nthe synopsis is\n\n pg_restore [connection-option...] [option...] [filename]\n\nso the invocation without argument seems possible while in fact\nit's not.\n\nOn a subjective note, I must say that I don't find the change to be\nhelpful.\nIn particular saying that --file must be specified leaves me with\nno idea what file it's talking about: the dumpfile or the output file?\nMy first inclination is to transform \"pg_restore dumpfile\" into\n\"pg_restore -f dumpfile\", which does nothing but wait\n(it waits for the dumpfile on the standard input, before\nI realize that it's not working and hit ^C. Fortunately it doesn't\noverwrite the dumpfile with an empty output).\n\nSo the change in v12 removes the standard output as default,\nbut does not remove the standard input as default.\nIsn't that weird?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:20:59 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "RE: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": ">>>>> \"Daniel\" == Daniel Verite <daniel@manitou-mail.org> writes:\n\n Daniel> While testing pg_restore on v12, I'm stumbling on this too.\n Daniel> pg_restore without argument fails like that:\n\n Daniel> $ pg_restore\n Daniel> pg_restore: error: one of -d/--dbname and -f/--file must be specified\n\nYeah, that's not good.\n\nHow about:\n\npg_restore: error: no destination specified for restore\npg_restore: error: use -d/--dbname to restore into a database, or -f/--file to output SQL text\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Wed, 12 Jun 2019 17:49:14 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "On 2019-Jun-12, Daniel Verite wrote:\n\n> While testing pg_restore on v12, I'm stumbling on this too.\n\nThanks for testing.\n\n> pg_restore without argument fails like that:\n> \n> $ pg_restore\n> pg_restore: error: one of -d/--dbname and -f/--file must be specified\n> \n> But that's not right since \"pg_restore -l dumpfile\" would work.\n\nSo you suggest that it should be \n\npg_restore: error: one of -d/--dbname, -f/--file and -l/--list must be specified\n?\n\n> Also, in the doc at https://www.postgresql.org/docs/devel/app-pgrestore.html\n> the synopsis is\n> \n> pg_restore [connection-option...] [option...] [filename]\n> \n> so the invocation without argument seems possible while in fact\n> it's not.\n\nSo you suggest that it should be \n pg_restore [connection-option...] { -d | -f | -l } [option...] [filename]\n?\n\nMaybe we should do that and split out the \"output destination options\"\nfrom other options in the list of options, to make this clearer; see\na proposal after my sig.\n\n\n> In particular saying that --file must be specified leaves me with\n> no idea what file it's talking about: the dumpfile or the output file?\n\nIf you want to submit a patch (for pg13) to rename --file to\n--output-file (and make --file an alias of that), you're welcome to, and\nendure the resulting discussion and possible rejection. I don't think\nwe're changing that at this point of pg12.\n\n> My first inclination is to transform \"pg_restore dumpfile\" into\n> \"pg_restore -f dumpfile\", which does nothing but wait\n> (it waits for the dumpfile on the standard input, before\n> I realize that it's not working and hit ^C. 
Fortunately it doesn't\n> overwrite the dumpfile with an empty output).\n\nWould you have it emit to stderr a message saying \"reading standard\ninput\" when it is?\n\n> So the change in v12 removes the standard output as default,\n> but does not remove the standard input as default.\n> Isn't that weird?\n\nI don't think they have the same surprise factor.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nUsage:\n pg_restore [connection-option...] { -d | -f | -l } [option...] [filename]\n\nGeneral options:\n -F, --format=c|d|t backup file format (should be automatic)\n -v, --verbose verbose mode\n -V, --version output version information, then exit\n -?, --help show this help, then exit\n\nOutput target options:\n -l, --list print summarized TOC of the archive\n -d, --dbname=NAME connect to database name\n -f, --file=FILENAME output file name (- for stdout)\n\nOptions controlling the restore:\n -a, --data-only restore only the data, no schema\n -c, --clean clean (drop) database objects before recreating\n ...\n\n\n",
"msg_date": "Wed, 12 Jun 2019 13:02:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
},
{
"msg_contents": "\tAlvaro Herrera wrote:\n\n> So you suggest that it should be \n> \n> pg_restore: error: one of -d/--dbname, -f/--file and -l/--list must be\n> specified\n> ?\n\nI'd suggest this minimal fix :\n\n(int argc, char **argv)\n\t/* Complain if neither -f nor -d was specified (except if dumping\nTOC) */\n\tif (!opts->dbname && !opts->filename && !opts->tocSummary)\n\t{\n-\t\tpg_log_error(\"one of -d/--dbname and -f/--file must be\nspecified\");\n+\t\tpg_log_error(\"-d/--dbname or -f/--file or -l/--list must be\nspecified\");\n+\t\tfprintf(stderr, _(\"Try \\\"%s --help\\\" for more\ninformation.\\n\"),\n+\t\t\t\tprogname);\n\t\texit_nicely(1);\n\t}\n\n> So you suggest that it should be \n> pg_restore [connection-option...] { -d | -f | -l } [option...] [filename]\n> ?\n\nLooking at the other commands, it doesn't seem that we use this form for\nany of those that require at least one argument, for instance:\n\n===\n$ ./pg_basebackup \npg_basebackup: error: no target directory specified\nTry \"pg_basebackup --help\" for more information.\n\n$ ./pg_basebackup --help\npg_basebackup takes a base backup of a running PostgreSQL server.\n\nUsage:\n pg_basebackup [OPTION]...\n\n\n$ ./pg_checksums \npg_checksums: error: no data directory specified\nTry \"pg_checksums --help\" for more information.\n\n$ ./pg_checksums --help\npg_checksums enables, disables, or verifies data checksums in a PostgreSQL\ndatabase cluster.\n\nUsage:\n pg_checksums [OPTION]... [DATADIR]\n\n\n$ ./pg_rewind \npg_rewind: error: no source specified (--source-pgdata or --source-server)\nTry \"pg_rewind --help\" for more information.\n\n$ ./pg_rewind --help\npg_rewind resynchronizes a PostgreSQL cluster with another copy of the\ncluster.\n\nUsage:\n pg_rewind [OPTION]...\n===\n\n\"pg_restore [OPTION]... 
[FILE]\" appears to be consistent with those, even\nwith the new condition that no option is an error, so it's probably okay.\n\n> Output target options:\n> -l, --list print summarized TOC of the archive\n> -d, --dbname=NAME connect to database name\n> -f, --file=FILENAME output file name (- for stdout)\n\nThat makes sense. I would put this section first, before\n\"General options\".\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 03 Jul 2019 18:43:46 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: proposal: pg_restore --convert-to-text"
}
] |
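The thread above debates how pg_restore should validate its output destination: one of -d/--dbname, -f/--file or -l/--list must be given, and -l alone is a legal invocation. A minimal Python sketch of that check follows, modeling Daniel's proposed fix; the function and argument names are invented for illustration and are not the actual pg_restore source.

```python
# Hypothetical model of the output-target validation discussed in the thread.
# pg_restore needs exactly one destination: a database (-d), an output file
# (-f), or a TOC listing (-l). Names here are illustrative only.

def check_output_target(dbname=None, filename=None, toc_summary=False):
    """Return the proposed error message when no destination is given, else None."""
    if not dbname and not filename and not toc_summary:
        return "-d/--dbname or -f/--file or -l/--list must be specified"
    return None

# "-l dumpfile" is a valid invocation, so it must not trigger the error:
assert check_output_target(toc_summary=True) is None
assert check_output_target(dbname="postgres") is None
assert check_output_target() == "-d/--dbname or -f/--file or -l/--list must be specified"
```

This mirrors the condition `!opts->dbname && !opts->filename && !opts->tocSummary` in Daniel's patch, with `-l/--list` added to the message so the listing mode is not reported as an error.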
[
{
"msg_contents": "Hi,\n\nIt would help the project to \"speed up partition planning\" [1] a bit if\ngrouping_planner didn't call query_planner directly. grouping_planner's\nmain role seems to be adding Path(s) for the \"top-level\" operations of the\nquery such as grouping, aggregation, etc. on top of Path(s) for scan/join\npaths produced by query_planner(). ISTM, scan/join paths themselves could\nvery well be generated *before* we get into grouping_planner, that is, by\ncalling query_planner before calling grouping_planner. Some of the\ntop-level processing code in grouping_planner depends on the information\nproduced by some code in the same function placed before where\nquery_planner is called, but we could share that information between\ngrouping_planner and its caller where that information would be generated.\n\nAttached patch shows one possible way that could be done.\n\nOver in [1], the premise of the one of the patches is that\ninheritance_planner gets slow as the number of children increases, because\nit invokes query_planner repeatedly (via grouping_planner) on the\ntranslated query tree for *all* target child relations. For partitioned\ntables, that also means that partition pruning cannot be used, making\nUPDATE vastly slower compared to SELECT. Based on that premise, the\npatch's solution is to invoke query_planner only once at the beginning by\npassing it the original query tree. That will generate scan Paths for all\ntarget and non-target base relations (partition pruning can be used to\nquickly determine target partitions) and join paths per target child\nrelation. Per-target-child join paths are generated by repeatedly running\nmake_rel_from_joinlist on translated joinlist wherein the top-parent\ntarget relation reference is replaced by those to individual child target\nrelations. So, query_planner now effectively generates N top-level\nscan/join RelOptInfos for N target child relations, which are tucked away\nin the top PlannerInfo. 
Back in inheritance_planner, grouping_planner is\ncalled to apply the final PathTarget to individual scan/join paths\ncollected above based on each target child relation's row type, but\nquery_planner is NOT called again during these grouping_planner\ninvocations. The way that's currently implemented by the patch seems a\nbit hacky, but if we refactor grouping_planner like I described above,\nthen there's no need for grouping_planner to behave specially for\ninheritance_planner (not minding the inherited_update argument).\n\nThoughts?\n\nThanks,\nAmit\n\n[1] https://commitfest.postgresql.org/22/1778/",
"msg_date": "Thu, 14 Feb 2019 10:30:31 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "grouping_planner refactoring"
}
] |
[
{
"msg_contents": "Hi, \n\nI found some \"CREATE TABLE ... AS ... \" syntaxes could not be used in ECPG. \n\n[PROBLEM]\nFirst, the ECPG command fails when the source code (*.pgc) has \"IF NOT EXISTS\". \n-------------------------------------------------------------------\nEXEC SQL CREATE TABLE IF NOT EXISTS test_cta AS SELECT * FROM test;\n-------------------------------------------------------------------\n\nSecond, the ECPG command succeeds when the source code (*.pgc) has the following embedded SQL. However, the generated C program has no \"WITH NO DATA\". \n------------------------------------------------------------------\nEXEC SQL CREATE TABLE test_cta AS SELECT * FROM test WITH NO DATA;\n------------------------------------------------------------------\n\n[Investigation]\nIn my investigation, parse.pl ignores the type CreateAsStmt of gram.y and CreateAsStmt is defined in ecpg.trailer. ECPG uses ecpg.trailer's CreateAsStmt. However, ecpg.trailer lacks some syntaxes. \nI feel ignoring the type CreateAsStmt of gram.y is strange. Seeing ecpg.trailer, it seems that ECPG wanted to output the message \"CREATE TABLE AS cannot specify INTO\", but is this needed now? In view of maintenance, ECPG should use not ecpg.trailer's definition but gram.y's one. \n\nI attached a patch for this and I will register it for the next CF. \n\nRegards, \nDaisuke Higuchi",
"msg_date": "Thu, 14 Feb 2019 07:39:50 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Higuchi-san,\n\n> I found some \"CREATE TABLE ... AS ... \" syntaxes could not be used in\n> ECPG. \n> ...\n> [Investigation]\n> In my investigation, parse.pl ignore type CreateAsStmt of gram.y and\n> CreateAsStmt is defined in ecpg.trailer. ECPG use ecpg.trailer's\n> CreateAsStmt. However, ecpg.trailer lacks some syntaxes. \n\nCorrect, the syntax of create as statement was changed and those\nchanges have not been added to ecpg.\n\n> I feel ignoring type CreateAsStmt of gram.y is strange. Seeing\n> ecpg.trailer, it seems that ECPG wanted to output the message \"CREATE\n> TABLE AS cannot specify INTO\" but is this needed now? In view of the\n> maintenance, ECPG should use not ecpg.trailer's definition but\n> gram.y's one. \n\nI beg to disagree, or I don't understand. Why would ecpg's changes to\nthe statement be wrong nowadays?\n\n> I attached the patch for this and I will register this for next CF. \n\nI think the patch does not work correctly. The statement\nCREATE TABLE a AS SELECT * INTO test FROM a;\nis accepted with your patch, but it is not accepted in current ecpg nor\nis it accepted by the backend when you execute it through ecpg. The\nwhole change of this rule has been made to make sure this syntax is not\naccepted.\n\nMichael\n\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Fri, 15 Feb 2019 14:45:45 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Hi, Meskes-san, \r\n\r\nThanks for your response! \r\n\r\n> I beg to disagree, or I don't understand. Why would ecpg's changes to \r\n> the statement be wrong nowadays?\r\n\r\nI might confuse you, but it does not mean that it is wrong to reject CREATE TABLE AS ... INTO ... syntax in ECPG. \r\n\r\nMy goal is to accept syntax which is currently rejected by ECPG. To realize that, I am considering following two ways:\r\n(a) new syntax of create as statement should be added to ECPG\r\n(b) make ECPG to use not ecpg.trailer but gram.y in the syntax of create as statement\r\n\r\nIn (a), we need to keep similar codes in both ecpg.trailer and gram.y. Also, if the syntax of create as statement will be changed in the future, there is a possibility that it will not be reflected in ECPG like this bug. Therefore, I thought that (b) was better and created a patch. And, in order to make it the simplest code, some SQL which is rejected in current ECPG is accepted in my patch's ECPG.\r\n\r\n> The statement CREATE TABLE a AS SELECT * INTO test FROM a; is accepted \r\n> with your patch, but it is not accepted in current ecpg nor is it accepted \r\n> by the backend when you execute it through ecpg. \r\n\r\nIndeed, CREATE TABLE a AS SELECT * INTO test FROM a; is accepted in my patch's ECPG, but the backend always rejects it; so which SQL should be rejected in both ECPG and the backend? The following inappropriate SQL is accepted in ECPG but rejected by the backend (I am wondering why only CREATE TABLE .. AS .. INTO is rejected and other inappropriate SQL are accepted in current ECPG. 
).\r\n- EXEC SQL delete from test where c1 = (select into c2 from test2);\r\n\r\nFrom the viewpoint of compatibility, if (b) is not good, I will consider (a) solution like following:\r\n\r\ndiff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer\r\n@@ -34,7 +34,14 @@ CreateAsStmt: CREATE OptTemp TABLE create_as_target AS {FoundInto = 0;} SelectSt\r\n if (FoundInto == 1)\r\n mmerror(PARSE_ERROR, ET_ERROR, \"CREATE TABLE AS cannot specify INTO\");\r\n \r\n- $$ = cat_str(6, mm_strdup(\"create\"), $2, mm_strdup(\"table\"), $4, mm_strdup(\"as\"), $7);\r\n+ $$ = cat_str(7, mm_strdup(\"create\"), $2, mm_strdup(\"table\"), $4, mm_strdup(\"as\"), $7, $8);\r\n+ }\r\n+ | CREATE OptTemp TABLE IF_P NOT EXISTS create_as_target AS {FoundInto = 0;} SelectStmt opt_with_data\r\n+ {\r\n+ if (FoundInto == 1)\r\n+ mmerror(PARSE_ERROR, ET_ERROR, \"CREATE TABLE AS cannot specify INTO\");\r\n+\r\n+ $$ = cat_str(7, mm_strdup(\"create\"), $2, mm_strdup(\"table if not exists\"), $7, mm_strdup(\"as\"), $10, $11);\r\n }\r\n ;\r\n\r\nI also want to hear your opinion. I will change my opinion flexibly. \r\n\r\nRegards, \r\nDaisuke, Higuchi\r\n\r\n",
"msg_date": "Mon, 18 Feb 2019 04:56:14 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Hi Higuchi-san,\n\n> My goal is to accept syntax which is currently rejected by ECPG. To\n> realize that, I am considering following two ways:\n> (a) new syntax of create as statement should be added to ECPG\n\nCorrect.\n\n> (b) make ECPG to use not ecpg.trailer but gram.y in the syntax of\n> create as statement\n\nI don't see how this would be possible to be honest.\n\n> In (a), we need to keep similar codes in both ecpg.trailer and\n> gram.y. Also, if the syntax of create as statement will be changed in\n> the future, there is a possibility that it will not be reflected in\n> ECPG like this bug. Therefore, I thought that (b) was better and\n> created a patch. And, in order to make it the simplest code, some SQL\n> which is rejected in current ECPG is accepted in my patch's ECPG.\n\nYes, I fully understand that. However, in doing so you accept\nstatements that the backend later on rejects. The sole reason for the\nbig ecpg grammar is to prevent those cases whenever possible.\n\n> Indeed, CREATE TABLE a AS SELECT * INTO test FROM a; is accepted in\n> my patch's ECPG, but the backend always reject, but which SQL should\n> be rejected in both ECPG and the backend? Following inappropriate SQL\n> are accepted in ECPG but rejected by the backend (I am wondering why\n> only CREATE TABLE .. AS .. INTO is rejected and other inappropriate\n> SQL are accepted in current ECPG. ).\n\nThat does sound like a bug to me. There may be cases where it is not\npossible to catch an invalid syntax for one reason or another. 
But I\nwould definitely go the extra mile to make the parsers as compatible as\npossible.\n\n> From the viewpoint of compatibility, if (b) is not good, I will\n> consider (a) solution like following:\n> \n> diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer\n> b/src/interfaces/ecpg/preproc/ecpg.trailer\n> @@ -34,7 +34,14 @@ CreateAsStmt: CREATE OptTemp TABLE\n> create_as_target AS {FoundInto = 0;} SelectSt\n> if (FoundInto == 1)\n> mmerror(PARSE_ERROR, ET_ERROR,\n> \"CREATE TABLE AS cannot specify INTO\");\n> \n> - $$ = cat_str(6, mm_strdup(\"create\"), $2,\n> mm_strdup(\"table\"), $4, mm_strdup(\"as\"), $7);\n> + $$ = cat_str(7, mm_strdup(\"create\"), $2,\n> mm_strdup(\"table\"), $4, mm_strdup(\"as\"), $7, $8);\n> + }\n> + | CREATE OptTemp TABLE IF_P NOT EXISTS\n> create_as_target AS {FoundInto = 0;} SelectStmt opt_with_data\n> + {\n> + if (FoundInto == 1)\n> + mmerror(PARSE_ERROR, ET_ERROR, \"CREATE\n> TABLE AS cannot specify INTO\");\n> +\n> + $$ = cat_str(7, mm_strdup(\"create\"), $2,\n> mm_strdup(\"table if not exists\"), $7, mm_strdup(\"as\"), $10, $11);\n> }\n> ;\n> \n> I also want to hear your opinion. I will change my opinion flexibly. \n\nI agree that this is the way to go. \n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Mon, 18 Feb 2019 09:23:25 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Hi, Meskes-san\r\n\r\n> Yes, I fully understand that. However, in doing so you accept \r\n> statements that the backend later on rejects. The sole reason for \r\n> the big ecpg grammar is to prevent those cases whenever possible.\r\n\r\nOk, I agree with you. \r\n\r\n> > I also want to hear your opinion. I will change my opinion flexibly. \r\n> I agree that this the way to go.\r\n\r\nI updated and attached the patch. As I showed in the previous post, in this version the \"IF NOT EXISTS\" keyword and a variable for \"WITH NO DATA\" are added to ecpg.trailer. \r\n\r\nRegards, \r\nDaisuke, Higuchi",
"msg_date": "Mon, 18 Feb 2019 09:19:29 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Hi Higuchi-san,\n\n> I updated and attached the patch. As I show in previous post, this\n> version is that \"IF NOT EXISTS\" keyword and variable for \"WITH NO\n> DATA\" are added to ecpg.trailer. \n\nThank you, committed.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Mon, 18 Feb 2019 12:58:09 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
},
{
"msg_contents": "Hi, Meskes-san\r\n\r\n> Thank you, committed.\r\n\r\nThank you!\r\nBy the way, I have found other bugs related to ECPG, so I will post them later. \r\n\r\nRegards, \r\nDaisuke, Higuchi \r\n\r\n",
"msg_date": "Mon, 18 Feb 2019 23:59:48 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Bug Fix] ECPG: could not use some CREATE TABLE AS syntax"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing the foreign keys referencing partitioned tables patch, I\nnoticed that the parentConstr argument of ATAddForeignConstraint is\nrendered useless by the following commit:\n\ncommit 0325d7a5957ba39a0dce90835ab54a08ab8bf762\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Fri Jan 18 14:49:40 2019 -0300\n\n Fix creation of duplicate foreign keys on partitions\n\nAbove commit added another function specialized for recursively adding a\ngiven foreign key constraint to partitions, so ATAddForeignConstraint is\nno longer called recursively.\n\nMaybe remove that argument in HEAD ? Attached a patch.\n\nThanks,\nAmit",
"msg_date": "Thu, 14 Feb 2019 18:47:42 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "useless argument of ATAddForeignKeyConstraint"
},
{
"msg_contents": "On 2019-Feb-14, Amit Langote wrote:\n\n> While reviewing the foreign keys referencing partitioned tables patch, I\n> noticed that the parentConstr argument of ATAddForeignConstraint is\n> rendered useless by the following commit:\n\n> Maybe remove that argument in HEAD ? Attached a patch.\n\nIndeed -- two years later this is still valid, so applied, with thanks!\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Investigación es lo que hago cuando no sé lo que estoy haciendo\"\n(Wernher von Braun)\n\n\n",
"msg_date": "Wed, 5 May 2021 12:33:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: useless argument of ATAddForeignKeyConstraint"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile hacking on pg_verify_checksums and looking at hexdumps, I noticed\nthat the TAP tests of pg_verify_checksums (and pg_basebackup from which\nit was copy-pasted) actually write \"305c305c[...]\" (i.e. literal\nbackslashes and number 0s) instead of \"000[...]\" into the to-be-\ncorrupted relfilenodes due to wrong quoting.\n\nThis also revealed a second bug in the pg_basebackup test suite where\nthe offset for the corruption in the second file was wrong, so it\nactually never got corrupted, and the tests only passed due to the above\ntwice-the-expected number of written bytes. The write() probably\noverflowed into an adjacent block so that the total number of corrupted\nblocks was as expected by accident. Oops, my bad, patch attached.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz",
"msg_date": "Thu, 14 Feb 2019 15:07:56 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "[Patch] checksumming-related buglets in\n pg_verify_checksums/pg_basebackup TAP tests"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 03:07:56PM +0100, Michael Banck wrote:\n> This also revealed a second bug in the pg_basebackup test suite where\n> the offset for the corruption in the second file was wrong, so it\n> actually never got corrupted, and the tests only passed due to the above\n> twice than expected number of written bytes. The write() probably\n> overflowed into an adjacent block so that the total number of corrupted\n> blocks was as expected by accident. Oops, my bad, patch attached.\n\nFixed and back-patched where appropriate, thanks! ee9e145 was a first\nshot for a fix in pg_basebackup tests, but it missed one seek()\ncall.\n--\nMichael",
"msg_date": "Mon, 18 Feb 2019 14:25:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] checksumming-related buglets in\n pg_verify_checksums/pg_basebackup TAP tests"
}
] |
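The quoting bug fixed above boils down to the difference between a literal backslash-plus-"0" character pair and an actual NUL byte: with the wrong quoting, the tests wrote the two-byte sequence `\` `0` (hex 5c 30) repeatedly instead of zero bytes, which is why "305c305c[...]" showed up in the hexdump (byte order depends on how the dump was rendered). The Perl pitfall can be reproduced with an analogous Python sketch:

```python
# Sketch of the TAP-test quoting bug: the two-character sequence
# backslash + "0" is not a NUL byte. In Perl this happens with
# single-quoted '\0'; the Python analog is an escaped backslash.

literal = b"\\0" * 4   # what the mis-quoted test actually wrote: \0\0\0\0
nul     = b"\0" * 8    # what the test intended to write: eight zero bytes

assert literal == b"\\0\\0\\0\\0"
assert literal.hex() == "5c305c305c305c30"  # backslashes and "0"s, not zeros
assert nul == b"\x00" * 8
assert len(literal) == len(nul) == 8        # same length, entirely different bytes
```

Because each mis-quoted `\0` expands to two bytes, the corruption also wrote twice as many bytes as intended, which is how the second (wrong-offset) bug stayed hidden.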
[
{
"msg_contents": "Hello,\n\nI'm trying to add support for specifying parameters when using a COPY\ncommand to Npgsql (.NET's Postgres provider):\nhttps://github.com/npgsql/npgsql/pull/2332\n\nI've used the extended query protocol to send the COPY command. When I send\na COPY command without parameters, the backend issues the\nappropriate CopyOutResponse/CopyInResponse/CopyData:\n\n> COPY (select generate_series(1, 5)) TO STDOUT\n\nWhen I add parameters, the backend will issue an ErrorResponse message\nafter issuing the ParseComplete and BindComplete messages:\n\n> COPY (select generate_series(1, $1)) TO STDOUT\n> Error: 42P02: there is no parameter $1\n\nThe owner of Npgsql confirmed that my use of the protocol seems correct\n(parameters going over the wire, etc) but Postgres doesn't seem to be\nresolving the parameters. Does Postgres support COPY with parameters?\n\nMore background on my use case: I'd like to be able to use COPY to\nefficiently generate a CSV from our database with parameters specified.\nFor example, generating a CSV of users recently created:\n\nCOPY (SELECT id, name, email FROM USERS where date_created > $1) TO STDOUT\nWITH (DELIMITER ',', FORMAT CSV, HEADER true, ENCODING 'UTF8')\n\nIf COPY doesn't support parameters, we're required to build the SELECT\nusing quote_literal() or format() with the L format specifier -- both of\nwhich are less safe than using a parameterized query when the parameter\ncomes from a user.\n\nThanks,\n\nAdrian Phinney",
"msg_date": "Thu, 14 Feb 2019 10:11:45 -0500",
"msg_from": "Adrian Phinney <adrian.phinney+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY support for parameters"
},
{
"msg_contents": "Adrian Phinney <adrian.phinney+postgres@gmail.com> writes:\n> Does Postgres support COPY with parameters?\n\nNo. In general you can only use parameters in DML statements\n(SELECT/INSERT/UPDATE/DELETE); utility statements don't cope,\nmainly because most of them lack expression eval capability\naltogether.\n\nPerhaps the special case of COPY from a SELECT could be made\nto allow parameters inside the SELECT, but I don't think\nanyone has tried.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 14 Feb 2019 10:22:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY support for parameters"
}
] |
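Since COPY rejects $n parameters (per Tom Lane's reply), the value has to be inlined with quote_literal()/format(...%L)-style quoting, which the original poster notes is more delicate than a real parameter. A rough Python analog of that quoting for plain strings is sketched below; the real quote_literal() also handles NULL and backslash-containing strings (E'' syntax), which this sketch deliberately omits.

```python
# Rough analog of PostgreSQL's quote_literal() for plain strings, to
# illustrate the manual quoting a parameterless COPY forces: every
# embedded single quote must be doubled before the value is inlined.

def quote_literal(value: str) -> str:
    """Wrap value in single quotes, doubling any embedded single quote."""
    return "'" + value.replace("'", "''") + "'"

date_created = "2019-02-14"
sql = (
    "COPY (SELECT id, name, email FROM users "
    f"WHERE date_created > {quote_literal(date_created)}) TO STDOUT"
)

assert quote_literal("O'Brien") == "'O''Brien'"
assert "date_created > '2019-02-14'" in sql
```

Getting this wrong for even one input is an injection hole, which is exactly why the thread calls this approach "less safe" than a bound parameter.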
[
{
"msg_contents": "In <26527.1549572789@sss.pgh.pa.us> I speculated about adding a\nfunction to objectaddress.c that would probe to see if an object\nwith a given ObjectAddress (still) exists. I started to implement\nthis, but soon noticed that objectaddress.c doesn't cover all the\nobject classes that dependency.c knows. This seems bad; is there\na reason for it? The omitted object classes are\n\nAccessMethodOperatorRelationId\nAccessMethodProcedureRelationId\nAttrDefaultRelationId\nDefaultAclRelationId\nPublicationRelRelationId\nUserMappingRelationId\n\nWhat's potentially a lot worse, the two subsystems do not agree\nas to the object class OID to be used for large objects:\ndependency.c has LargeObjectRelationId but what's in objectaddress.c\nis LargeObjectMetadataRelationId. How did we get to that, and why\nisn't it causing serious problems?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 14 Feb 2019 11:43:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Inconsistencies between dependency.c and objectaddress.c"
}
] |
[
{
"msg_contents": "Hello,\n\nIs there any interest in making autovacuum parameters available on a\ntablespace level in order to apply those to all vacuumable objects in the\ntablespace?\n\nWe have a set of tables running on ZFS, where autovacuum does almost no good\nto us (except for preventing anti-wraparound) due to the nature of ZFS (FS\nfragmentation caused by copy-on-write leads to sequential scans doing random\naccess) and the fact that our tables there are append-only. Initially, the\nteam in charge of the application just disabled autovacuum globally, but\nthat led to a huge system catalog bloat.\n\nAt present, we have to re-enable autovacuum globally and then disable it\nper-table using table storage parameters, but that is inelegant and requires\ndoing it once for existing tables and modifying the script that periodically\ncreates new ones (the whole system is a Postgres-based replacement of an\nElasticSearch cluster and we have to create new partitions regularly).\n\nGrouping tables by tablespaces for the purpose of autovacuum configuration\nseems natural, as tablespaces are often placed on other filesystems/devices\nthat may require changing how often autovacuum runs, making it less/more\naggressive depending on the I/O performance, or disabling it\naltogether as in my example above. Furthermore, given that we allow\ncost-based options per-tablespace, the infrastructure is already there and\nthe task is mostly to teach autovacuum to look at tablespaces in addition to\nthe relation storage options (in case of a conflict, relation options should\nalways take priority).\n\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Thu, 14 Feb 2019 17:56:17 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Per-tablespace autovacuum settings"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 17:56:17 +0100, Oleksii Kliukin wrote:\n> Is there any interest in making autovacuum parameters available on a\n> tablespace level in order to apply those to all vacuumable objects in the\n> tablespace?\n> \n> We have a set of tables running on ZFS, where autovacuum does almost no good\n> to us (except for preventing anti-wraparound) due to the nature of ZFS (FS\n> fragmentation caused by copy-on-write leads to sequential scans doing random\n> access) and the fact that our tables there are append-only. Initially, the\n> team in charge of the application just disabled autovacuum globally, but\n> that lead to a huge system catalog bloat.\n> \n> At present, we have to re-enable autovacuum globally and then disable it\n> per-table using table storage parameters, but that is inelegant and requires\n> doing it once for existing tables and modifying the script that periodically\n> creates new ones (the whole system is a Postgres-based replacement of an\n> ElasticSearch cluster and we have to create new partitions regularly).\n\nWon't that a) lead to periodic massive anti-wraparound sessions? b)\nprevent any use of index only scans?\n\nISTM you'd be better off running vacuum rarely, with large\nthresholds. That way it'd do most of the writes in one pass, hopefully\nleading to less fragmentation, and it'd set the visibilitymap bits to\nprevent further need to touch those. 
Furthermore, given that we allow\n> cost-based options per-tablespace the infrastructure is already there and\n> the task is mostly to teach autovacuum to look at tablespaces in addition to\n> the relation storage options (in case of a conflict, relation options should\n> always take priority).\n\nWhile I don't buy the reasoning above, I think this'd be useful for\nother cases.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Thu, 14 Feb 2019 09:17:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "Oleksii Kliukin <alexk@hintbits.com> writes:\n> Is there any interest in making autovacuum parameters available on a\n> tablespace level in order to apply those to all vacuumable objects in the\n> tablespace?\n\nI understand what you want to accomplish, and it doesn't seem\nunreasonable. But I just want to point out that the situation with\nvacuuming parameters is on the edge of unintelligible already; adding\nanother scope might push it over the edge. In particular there's no\nprincipled way to decide whether an autovacuum parameter set at an outer\nscope should override a plain-vacuum parameter set at a narrower scope.\nAnd it's really questionable which of database-wide and tablespace-wide\nshould be seen as a narrower scope in the first place.\n\nI don't know how to make this better, but I wish we'd take a step\nback and think about it rather than just accreting more and more\ncomplexity.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 14 Feb 2019 12:20:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n> \n> On 2019-02-14 17:56:17 +0100, Oleksii Kliukin wrote:\n>> Is there any interest in making autovacuum parameters available on a\n>> tablespace level in order to apply those to all vacuumable objects in the\n>> tablespace?\n>> \n>> We have a set of tables running on ZFS, where autovacuum does almost no good\n>> to us (except for preventing anti-wraparound) due to the nature of ZFS (FS\n>> fragmentation caused by copy-on-write leads to sequential scans doing random\n>> access) and the fact that our tables there are append-only. Initially, the\n>> team in charge of the application just disabled autovacuum globally, but\n>> that lead to a huge system catalog bloat.\n>> \n>> At present, we have to re-enable autovacuum globally and then disable it\n>> per-table using table storage parameters, but that is inelegant and requires\n>> doing it once for existing tables and modifying the script that periodically\n>> creates new ones (the whole system is a Postgres-based replacement of an\n>> ElasticSearch cluster and we have to create new partitions regularly).\n> \n> Won't that a) lead to periodic massive anti-wraparound sessions? b)\n> prevent any use of index only scans?\n\nThe wraparound is hardly an issue there, as the data is transient and only\nexist for 14 days (I think the entire date-based partition is dropped,\nthat’s how we ended up with pg_class catalog bloat). The index-only scan can\nbe an issue, although, IIRC, there is some manual vacuum that runs from time\nto time, perhaps following your advice below.\n\n> ISTM you'd be better off running vacuum rarely, with large\n> thresholds. That way it'd do most of the writes in one pass, hopefully\n> leading to less fragementation, and it'd set the visibilitymap bits to\n> prevent further need to touch those. 
By doing it only rarely, vacuum\n> should process pages sequentially, reducing the fragmentation.\n> \n> \n>> Grouping tables by tablespaces for the purpose of autovacuum configuration\n>> seems natural, as tablespaces are often placed on another filesystems/device\n>> that may require changing how often does autovacuum run, make it less/more\n>> aggressive depending on the I/O performance or require disabling it\n>> altogether as in my example above. Furthermore, given that we allow\n>> cost-based options per-tablespace the infrastructure is already there and\n>> the task is mostly to teach autovacuum to look at tablespaces in addition to\n>> the relation storage options (in case of a conflict, relation options should\n>> always take priority).\n> \n> While I don't buy the reasoning above, I think this'd be useful for\n> other cases.\n\nEven if we don’t want to disable autovacuum completely, we might want to\nmake it much less frequent by increasing the thresholds or costs/delays to\nreduce the I/O strain for a particular tablespace.\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Thu, 14 Feb 2019 19:14:49 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Oleksii Kliukin <alexk@hintbits.com> writes:\n>> Is there any interest in making autovacuum parameters available on a\n>> tablespace level in order to apply those to all vacuumable objects in the\n>> tablespace?\n> \n> I understand what you want to accomplish, and it doesn't seem\n> unreasonable. But I just want to point out that the situation with\n> vacuuming parameters is on the edge of unintelligible already; adding\n> another scope might push it over the edge. In particular there's no\n> principled way to decide whether an autovacuum parameter set at an outer\n> scope should override a plain-vacuum parameter set at a narrower scope.\n\nMy naive understanding is that vacuum and autovacuum should decide\nindependently which scope applies, coming from the most specific (per-table\nfor autovacuum, per-DB for vacuum) to the broader scopes, ending with\nconfiguration parameters at the outermost scope . Both *_cost_limit and\n*_cost_delay should be taken from the current vacuum scope only if effective\nautovacuum settings yield -1.\n\n> And it's really questionable which of database-wide and tablespace-wide\n> should be seen as a narrower scope in the first place.\n\nAFAIK we don’t allow setting autovacuum options per-database; neither I\nsuggest enabling plain vacuum to be configured per-tablespace; as a result,\nwe won’t be deciding between databases and tablespaces, unless we want to do\ncross-lookups from autovacuum to the outer scope of plain vacuum options\nbefore considering autovacuum’s own outer scope and I don’t see any reason\nto do that.\n\n> I don't know how to make this better, but I wish we'd take a step\n> back and think about it rather than just accreting more and more\n> complexity.\n\nI am willing to do the refactoring when necessary, any particular place in\nthe code that is indicative of the issue?\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Thu, 14 Feb 2019 20:46:38 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "Hello,\n\nOleksii Kliukin <alexk@hintbits.com> wrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> \n>> I don't know how to make this better, but I wish we'd take a step\n>> back and think about it rather than just accreting more and more\n>> complexity.\n> \n> I am willing to do the refactoring when necessary, any particular place in\n> the code that is indicative of the issue?\n\nI’ve managed to return to that and here’s the first iteration of the patch\nto add autovacuum parameters to tablespaces. I tried to make it as simple as\npossible and didn’t make any decisions I found questionable, opting to\ndiscuss them here instead. Some of them are probably linked to the kind of\nissues mentioned by Tom upthread.\n\nThings worth mentioning are:\n\n- Fallbacks to autovacuum parameters in another scope. Right now in the\nabsence of the per-table and per-tablespace autovacuum parameters the code\nuses the ones from the global scope. However, if only some of the reloptions\nare set on a per-table level (i.e. none of the autovacuum related ones), we\nassume defaults for the rest of reloptions without consulting the lower\nlevel (i.e .per-tablespace options). This is so because we don’t have the\nmechanism to tell whether the option is set to its default value (some of\nthem use -1 to request the fallback to the outer level, but for some it’s\nnot possible, i.e. autovacuum_enabled is just a boolean value).\n\n- There are no separate per-tablespace settings for TOAST tables. I couldn't\nfind a strong case for setting all TOAST autovacuum options in a tablespace\nto the same value that is distinct from the corresponding settings for the\nregular tables. The difficulty of implementing TOAST options lies in the\nfact that we strip the namespace part from the option name before storing it\nin reltoptions. Changing that would break compatibility with previous\nversions and require another step for pg_upgrade, I don’t think it is worth\nthe troubles. 
We could also come up with a separate set of tablespace options,\ni.e. prefixed with “toast_”, but that seems an ugly solution for the problem\nthat doesn’t seem real. As a result, if per-tablespace autovacuum options\nare set and there are no table-specific TOAST options, the TOAST table will\ninherit autovacuum options from the tablespace, rather than taking them from\nthe regular table it is attached to.\n\n- There are a few relatively recently introduced options\n(vacuum_index_cleanup, vacuum_truncate and\nvacuum_cleanup_index_scale_factor) that I haven’t incorporated into the\nper-tablespace options, as they are not part of autovacuum options and I see\nno use for setting them on a tablespace level. This can be changed easily if\npeople think otherwise.\n\nThe patch is attached. It has a few tests and no documentation, I will\nimprovise both once we get in agreement on how the end result should look.\n\n\nKind regards,\nOleksii Kliukin",
"msg_date": "Thu, 25 Apr 2019 18:35:59 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "On Thu, Apr 25, 2019 at 12:36 PM Oleksii Kliukin <alexk@hintbits.com> wrote:\n> - Fallbacks to autovacuum parameters in another scope. Right now in the\n> absence of the per-table and per-tablespace autovacuum parameters the code\n> uses the ones from the global scope. However, if only some of the reloptions\n> are set on a per-table level (i.e. none of the autovacuum related ones), we\n> assume defaults for the rest of reloptions without consulting the lower\n> level (i.e .per-tablespace options). This is so because we don’t have the\n> mechanism to tell whether the option is set to its default value (some of\n> them use -1 to request the fallback to the outer level, but for some it’s\n> not possible, i.e. autovacuum_enabled is just a boolean value).\n\nThat sounds like it's probably not acceptable?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 May 2019 13:01:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Per-tablespace autovacuum settings"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Apr 25, 2019 at 12:36 PM Oleksii Kliukin <alexk@hintbits.com> wrote:\n>> - Fallbacks to autovacuum parameters in another scope. Right now in the\n>> absence of the per-table and per-tablespace autovacuum parameters the code\n>> uses the ones from the global scope. However, if only some of the reloptions\n>> are set on a per-table level (i.e. none of the autovacuum related ones), we\n>> assume defaults for the rest of reloptions without consulting the lower\n>> level (i.e .per-tablespace options). This is so because we don’t have the\n>> mechanism to tell whether the option is set to its default value (some of\n>> them use -1 to request the fallback to the outer level, but for some it’s\n>> not possible, i.e. autovacuum_enabled is just a boolean value).\n> \n> That sounds like it's probably not acceptable?\n\nYes, I think it would be inconsistent. However, it looks like all the\noptions from AutoVacOpts other than autovacuum_enabled are set to -1 by\ndefault. This can be used to tell whether the option is set to its default\nvalue. For autovacuum_enabled we don’t care much: it’s true by default and\nit’s a safe choice (even if the global autovacuum is off, enabling per-table\nor per-tablespace one is a no-op).\n\nI will update the patch. \n\nCheers,\nOleksii\n\n",
"msg_date": "Mon, 6 May 2019 17:02:09 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Per-tablespace autovacuum settings"
}
]
[
{
"msg_contents": "Hi,\n\nAs last year [1], I'll try to summarize all commitfest items in 2019-03\nto see which I think could realistically be put into 12.\n\nGoing through all non bugfix CF entries. Here's the summary for the\nentries I could stomach today:\n\n\nRFC: ready for committer\nNR: needs review\nWOA: waiting on author.\n\n\n- pgbench - another attempt at tap test for time-related options\n\n NR. This was already around last year. I think it'd be fair to argue\n that there's not been a ton of push to get this committed.\n\n\n- Psql patch to show access methods info\n\n This adds \\ commands to display many properties of [index ]access methods.\n\n NR. This patch has gotten a fair bit of committer feedback via\n Alvaro. I personally am not particularly convinced this is\n functionally that many people are going to use.\n\n\n- Show size of partitioned table\n\n NR. There seems to have been plenty discussion over details. Feels\n like this ought to be committable for v12?\n\n\n- pgbench - add pseudo-random permutation function\n\n WOA. I'm not clear as to why we'd want to add this to pgbench. To\n revive a discussion from last year's thread, I feel like we're adding\n more code to pgbench than we can actually usefully use.\n\n\n- libpq host/hostaddr consistency\n\n NR. I think the patches in this needs a few more committer eyes. It's\n possible we just should fix the documentation, or go further and\n change the behaviour. Feels like without more senior attention,\n this'll not be resolved.\n\n- pg_dump multi VALUES INSERT\n\n NR. There seems to be some tentative agreement, excepting Tom, that\n we probably want this feature. There has been plenty review /\n improvements.\n\n Seems like it ought to be possible to get this into v12.\n\n- libpq trace log\n\n NR. There seems to be considerable debate about what exactly this\n feature should do, and the code hasn't yet seen a lot of review. 
I\n think we ought to just target v13 here, there seems to be some\n agreement that there's a problem, just not exactly what the solution\n is.\n\n Andres: punt to v13.\n\n\n- pg_dumpall --exclude-database option\n\n RFC, and author is committer.\n\n\n- Add sqlstate output mode to VERBOSITY\n\n RFC, there seems to be agreement.\n\n\n- DECLARE STATEMENT syntax support in ECPG\n\n NR. There seems to be some tentative agreement that this is\n desirable. But the patch was only recently (2018-12-16) revived, and\n hasn't yet gotten a ton of review. I pinged Michael Meskes, to make\n him aware of this work.\n\n Andres: punt to v13.\n\n\n- A new data type 'bytea' for ECPG host variable\n\n NR: This seems to be much closer to being ready than the\n above. Michael has done a few review cycles. Not sure if it'll be\n done by 12, but it seems to have a good chance.\n\n\n- \\describe: verbose commands in psql\n\n NR: This seems like a relatively large change, and only submitted\n 2019-01-24. Per our policy to not add nontrivial work to the last CF,\n I think we should not consider this a candidate for v12.\n\n Andres: punt to v13.\n\n\n- documenting signal handling with readme\n\n WOA: I'm very unclear as to why this is something worth documenting in\n this manner. Right now I'm not clear what the audience of this is\n supposed to be.\n\n\n- First SVG graphic\n\n NR: My reading of the discussion makes this look like we'll probably\n have graphics in v12's docs. Neat.\n\n\n- Update INSTALL file\n\n WOA: It seems we've not really come to much of a conclusion what this\n ought to contain.\n\n\n- Make installcheck-world in a clean environment\n\n NR: I don't feel we really have agreement on what we want here. 
Thus\n I'm doubtful this is likely to be resolvable in short order.\n\n Andres: lightly in favor of punting to v13\n\n\n- Avoid creation of the free space map for small tables\n\n NR: the majority of this patch has been committed, I assume the\n remaining pieces will too.\n\n\n- Adding a TAP test checking data consistency on standby with\n minRecoveryPoint\n\n NR: Seems like it ought to be committable, Andrew Gierth did provide\n some feedback.\n\n\n- idle-in-transaction timeout error does not give a hint\n\n NR: Seems trivial enough.\n\n\n- commontype and commontypearray polymorphic types\n\n NR: To me this seems like a potentially large change, with user\n visible impact. As it was only submitted in December, and is clearly\n not yet reviewed much, I think we ought to punt on this for v12.\n\n Andres: punt to v13\n\n\n- EXPLAIN with information about modified GUC values\n\n NR: The patch seems to be getting closer to completion, but I'm not\n sure how many actually agree that we want this.\n\n Andres: aim for v12, but we probably should discuss soon whether we\n actually want this.\n\n\n- Include all columns in default names for foreign key constraints.\n\n WOA: This patch has been submitted to the last CF first. I think it's\n straddling the line after which we should just refuse that pretty\n closely. Not sure.\n\n\n- Shared-memory based stats collector\n\n WOA: I think this patch has quite some promise and solve quite a few\n issues, and it has been worked on for a while. 
But at the same time\n the code isn't that close to being committable.\n\n Andres: unless there's a new version cleaning up review comments PDQ,\n I think we're unfortunately have to punt this to v13.\n\n\n- timeout parameters in libpq\n\n NR: This doesn't yet seem terribly well reviewed and designed.\n\n Andres: +0.5 for punting to v13\n\n\n- Log a sample of transactions\n\n WOA: Seems like a useful enough feature to me, but there were a few\n other senior community members that didn't quite agree. Issues in the\n patch seem resolvable in time for v12. I'm wondering if this doesn't\n need a more radical approach to avoid the overhead.\n\n\n- monitoring CREATE INDEX [CONCURRENTLY]\n\n NR: I think there's agreement on the desirability of the feature. But\n while the patch has been submitted to CF 2019-01 that seems to\n have been somewhat of a placeholder entry. OTOH, it's a committer's\n project, so we can give more leeway there.\n\n\n- pg_stat_statements should notice FOR UPDATE clauses\n\n NR: This seems like to get in, given how sipmle it is. Some quibbles\n about the appropriate approach aside.\n\n\nOk, my flight's about to land. So that's it for this round.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20180301110344.kyp3wejoxp2ipler%40alap3.anarazel.de\n\n",
"msg_date": "Thu, 14 Feb 2019 12:37:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "2019-03 CF Summary / Review - Tranche #1"
},
{
"msg_contents": "On Thu, Feb 14, 2019 at 12:37:52PM -0800, Andres Freund wrote:\n> - Adding a TAP test checking data consistency on standby with\n> minRecoveryPoint\n> \n> NR: Seems like it ought to be committable, Andrew Gierth did provide\n> some feedback.\n\nI am rather happy of the shape of the test, and was just waiting from\nAndrew for a last confirmation. If there are no objections, I could\ncommit it as well.\n--\nMichael",
"msg_date": "Fri, 15 Feb 2019 17:17:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #1"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 2:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> - Avoid creation of the free space map for small tables\n>\n> NR: the majority of this patch has been committed,\n>\n\nThis is a correct assessment.\n\n> I assume the\n> remaining pieces will too.\n>\n\nYes, I am busy these days with something else, but I will surely get\nback to reviewing/committing the remaining stuff for PG12.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sat, 16 Feb 2019 08:06:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #1"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-14 12:37:52 -0800, Andres Freund wrote:\n> - pg_stat_statements should notice FOR UPDATE clauses\n>\n> NR: This seems like to get in, given how sipmle it is. Some quibbles\n> about the appropriate approach aside.\n>\n>\n> Ok, my flight's about to land. So that's it for this round.\n\n\n- Protect syscache from bloating with negative cache entries\n\n WOA: I think unless the feature is drastically scaled down in scope,\n there's no chance to get anything into v12.\n\n Andres: punt to v13, unless a smaller chunk can be split off\n\n\n- SERIALIZABLE with parallel query\n\n NR: This seems like it's pretty much committable, and the author is a\n committer these days.\n\n\n- Removing [Merge]Append nodes which contain a single subpath\n\n NR: I can't quite tell the state of this patch just by reading the\n thread. It's longstanding, and the code doesn't look terrible, but Tom\n appears to still be a bit unhappy.\n\n Andres: ???\n\n\n- verify ALTER TABLE SET NOT NULL by valid constraints\n\n NR: Seems like a pretty useful feature. The code isn't very\n invasive. The patch has been lingering around for a while. We should\n merge this.\n\n\n- Reduce amount of WAL generated by CREATE INDEX for gist, gin and\n sp-gist\n\n NR: This unfortunately has barely gotten any review so far, and has\n had a number of issues authors have discovered themselves. It's a bit\n sad that a useful patch has gotten this little review, but I think\n it's probably a stretch to get it into v12 unless some senior\n reviewers show up.\n\n\n- GiST VACUUM\n\n NR: Has gotten a fair bit of review by Heikki, but there still seems\n to be a number of unresolved issues. 
Not sure if there's cycles to get\n this done unless Heikki has time.\n\n\n- libpq compression\n\n NR: While a very useful feature, the patch seems pretty raw, there's\n disagreements about code structure, and some cryptographic risks need\n to be discussed or at least better documented.\n\n Andres: punt to v13\n\n\n- Evaluate immutable functions during planning (in FROM clause)\n\n NR: As far as I can tell this CF entry should have been either WO or\n even rejected for the last two CFs. Even if the review feedback had\n been addressed, it seems there's very substantial architecture\n concerns that haven't been responded to.\n\n Andres: no chance for v12, CF entry should probably be closed.\n\n\n- Global shared meta cache\n\n NR: This is extremely invasive, and is more PoC work than anything\n else.\n\n Andres: punt to v13\n\n\n- Remove self join on a unique column\n\n NR: This still seems pretty raw, and there's not been a ton of\n detailed review (although plenty of more general discussion). I don't\n quite see how we can get this into shape for v12, although some review\n would be good to allow the feature to progress.\n\n Andres: punt to v13\n\n\n- Inline Common Table Expressions\n\n NR: I think it's likely that Tom will commit this soon, we're mostly\n debating syntax and similar things at this point (and man, I'm looking\n forward to this).\n\n\n- Autoprepare: implicitly replace literals with parameters and store\n generalized plan\n\n NR: I think there's no chance to get this into v12, given the state of\n the patch. There's not even agreement that we want this feature\n (although I think we can't avoid it for much longer), not to speak of\n agreement on the architecture.\n\n I think this needs a lot more attention to ever get anywhere.\n\n Andres: punt to v13\n\n\n- Tid scan improvements (ordering and range scan)\n\n NR: The patch has been through recent significant architectural\n changes, albeit to an architecture more similar to an earlier\n approach. 
There's not been meaningful review since. On the other hand,\n the patch isn't actually all that complex. Seems like a stretch to get\n into v12, but possible if e.g. Tom wants to pick it up.\n\n Andres: +0.5 for punting to v13\n\n\n- Block level parallel vacuum\n\n NR: Cool feature, but I don't think this has gotten even remotely\n enough review to be mergable into v12.\n\n Andres: punt to v13\n\n\n- Speed up planning with partitions\n\n NR: Important feature, but based on a skim of the discussion this\n doesn't seem ready for v12.\n\n Andres: punt to v13\n\n\n- Make nbtree keys unique by appending heap TID, suffix truncation\n\n NR: Important, seemingly carefully worked on feature. But it's\n *complicated* stuff, and there's only been a bit of design review by\n Heikki. The author's a committer. I don't feel qualified to judge\n this.\n\n\n- KNN for B-tree\n\n NR: The patch still seems to lack a bit of review. Possible but a\n stretch for v12. (While the thread is quite old, there've been\n substantial gaps where it wasn't worked on, so I don't think it's one\n of the really bad cases of not getting review.)\n\n Andres: +0.5 for punting to v13\n\n\n- New vacuum option to do only freezing\n\n NR: Seems simple enough. We probably can merge this.\n\n\n- Speed up transaction completion faster after many relations are\n accessed in a transaction\n\n NR: This has only been submitted 2019-02-12. While not a really\n complicated patch, it's also not trivial. Therefore I'd say this falls\n under our policy of not merging nontrivial patches first submitted to\n the last CF.\n\n Andres: punt to v13\n\n\n- SortSupport implementation on inet/cdir\n\n WOA: This is a substantial amount of code submitted first for the last\n CF.\n\n Andres: punt to v13\n\n\n- Referential Integrity Checks with Statement Level Triggers\n\n NR: This has only been submitted to this CF, and is a very substantial\n change. 
There has been no review as of yet, and the author\n acknowledges several shortcomings in the patch.\n\n Andres: punt to v13\n\n\n- postgres_fdw: Perform UPPERREL_ORDERED and UPPERREL_FINAL steps\n remotely\n\n WOA: This is a nontrivial change, and the design and review only\n started in late December. It's probably not realistic to target v12.\n\n Andres: punt to v13\n\n\n- Delay locking partitions during query execution\n\n NR: Important patch. Development has only started in December. Not a\n ton of code though.\n\n Andres: ???\n\n\n- Delay locking partitions during INSERT and UPDATE\n\n RFC: Looks reasonable enough, although there's some discussion related\n to increased deadlock risk. Re-raised that on the thread.\n\n\n- Prove IS NOT NULL inference for large arrays\n\n NR: No idea.\n\n\n- Detoast Compressed Datum Slice\n\n NR: Probably just can get committed close to as-is. Pinged Stephen,\n who mentioned he's interested in committing it.\n\n\n- Ordered Partitioned Table Scans\n\n NR: Patch has been through a few rounds, but probably needs a look\n soon by somebody else with a fair bit of planner experience. Might be\n doable.\n\n\n- schema variables, LET command\n\n NR: For a feature that's as user exposed as this, I don't think this\n has gotten even remotely enough review.\n\n Andres: punt to v13\n\n\n- block level PRAGMA\n\n NR: My reading of this thread is that the proposal is closer to being\n rejected than merged.\n\n Andres: reject or punt?\n\n\n- get rid of StdRdOptions, use individual binary reloptions\n representation for each relation kind instead\n\n NR: I'm not sure I understand what this going to buy us.\n\n Andres: ???\n\n\n- Track the next xid using 64 bits\n\n WOA: Can probably be merged, I posted a few relatively minor review\n comments, but I assume Thomas is going to merge an updated version.\n\n\n- Refactoring the checkpointer's fsync request queue\n\n NR: There's been some design issues raised (by your's truly, in\n person). 
And there's very likely not going to be an in-core user for\n this in v12 (neither slru-via-bufmgr or undo seem likely to get\n merged). So I'm not sure it's realistic to get this into v12, although\n it certainly seems doable if a bit of elbow grease is put into it.\n\n\n- Making WAL receiver startup rely on GUC context for primary_conninfo\n and primary_slot_name\n\n NR: I think this should be rejected. As noted in the thread, this\n doesn't actually buy us much, and has some potential costs once we\n make primary_conninfo PGC_SIGHUP.\n\n Andres: Reject\n\n\n- Respect client-initiated CopyDone during logical streaming replication\n\n NR: I don't think this is ready.\n\n Andres: punt to v13\n\n\n- logical decoding of two-phase transactions\n\n WOA: This is probably not going to be ready by v13, although I think\n it could become so if somebody senior really started working on it.\n\n Andres: punt to v13 :(\n\n\n- logical streaming for large in-progress transactions\n\n WOA: Tomas, at the dev meeting in Brussels, said he doesn't believe\n this is going to be ready for v12.\n\n Andres: punt to v13 :(\n\n\n- Restricting maximum keep segments by repslots\n\n WOA: This seems to need substantial polishing before being\n ready. Might be doable before v13, but the author also is involved in\n numerous other patches needing work...\n\n\n- Synchronous replay mode for avoiding stale reads on hot standbys\n\n NR: There's some disagreement about the desirability of the feature,\n but plenty people signalled they want it. Seems like it ought to get\n merged at some point, there's been review (but more probably wouldn't\n hurt).\n\n\n- Copy function for logical replication slots\n\n WOA: Probably can be committed, was briefly marked RFC, but I found\n some issues (which should be easy enough to rectify).\n\n\n- pg_rewind: options to use restore_command from recovery.conf or\n command line\n\n WOA: Was previously marked as RFC, but I don't see how it is. 
Possibly\n can be finished, but does require a good bit more work.\n\n\n- create and use subscription for nonsuperuser\n\n NR: This seems to need a good bit more work. I'm a bit doubtful this\n can be finished for v12.\n\n Andres: +0.25 for punting to v13\n\n\n- online change primary_conninfo\n\n WOA: Needs a bit more work, but probably can be finished for v12.\n\n\n- Remove deprecated exclusive backup mode\n\n NR: There clearly seems to be no concensus on making this change.\n\n Andres: punt to v13 or something\n\n\n- Add timeline to partial WAL segments\n\n WOA: Seems to need a good bit more work, and touches sensitive bits.\n\n Andres: +0.5 for punting to v13\n\n\n- Synchronizing slots from primary to standby\n\n NR: This clearly is just a POC at this point.\n\n Andres: punt to v13\n\n\n- pg_hba.conf : new auth option : clientcert=verify-full\n\n NR: this should probably be RFC, as it was before needing to be\n rebased. Looks like it should just get merged.\n\n\n- GSSAPI encryption support\n\n NR: Seems Stephen is pondering committing this. Not quite sure I like\n the way it's integreated in fe-secure/be-secure.\n\n\n- multivariate MCV lists and histograms\n\n WOA: Seems the MCV bits might be realistic for v12, but the histogram\n part not?\n\n\n- Push aggregation down to base relations and joins\n\n NR: Seems like this needs a few more review, and is probably not quite\n going to be ready for v12. But it'd probably need some attention by\n Tom for the author to be able to move forward.\n\n Andres: push to v13\n\n\n- Pluggable storage API\n\n NR: I'm biased... I plan to push substantial portions of this\n feature. There's a few later features that I'm not sure are going to\n be ready (e.g. 
doing trigger lookups using a snapshot).\n\n\n- Custom compression methods\n\n WOA: Hm.\n\n Andres: I think we need to make a call whether we actually want this,\n rather than just continuing to punt this forward.\n\n\n- BRIN bloom and multi-minmax indexes\n\n NR: Unfortunately this doesn't seem to have gotten meaningful review\n in the last year :(\n\n\n- SQL/JSON: jsonpath\n\n WOA: This seems to need substantial further work before being\n committable\n\n Andres: +0.5 for punting to v13\n\n\n(okay, breaking open a bottle of wine here)\n\n\n- Add enum relation option type\n\n RFC: I think Alvaro probably can commit this? He's edited a few\n versions of the patch, and set the target version to 12.\n\n\n- Covering GiST indexes\n\n RFC\n\n\n- amcheck verification for GiST\n\n WOA: Some locking changes are apparently needed, possibly a bit too\n much to fix up for v12?\n\n\n- DNS SRV support for LDAP authentication\n\n NR: Looks like Thomas should just merge this\n\n\n- Add Hook Functions for Disk Quota Extension\n\n NR: This is in the early design stages, rather than realistically\n targeting v12.\n\n Andres: punt to v13\n\n\n- Implement NULL-related checks in object address functions to prevent cache lookup errors, take two\n\n NR: Seems like this should go into 12, although it'd be good if Alvaro\n could take another peek before Michael pushes.\n\n\n- Triggers on Materialized Views\n\n NR: I think we need to provide useful feedback whether we actually\n want this feature. But either way, it's not v12 material.\n\n Andres: punt to v13, discuss whether to reject\n\n\n- Ltree syntax improvement\n\n NR: Given this is a nontrivial patch, and was submitted 2019-01-29,\n it's clearly not v12 material.\n\n Andres: punt to v13\n\n\n- Skip table truncation at VACUUM (should be: allow to disable\n truncations via a reloptions)\n\n NR: The feature is near trivial, and avoids significant problems in\n hot standby environments. 
It seems to need some language lawyering.\n\n\n- SQL/JSON: functions\n\n NR: Dependant on jsonpath which I have a hard time seeing in v12. And\n it's barely reviewed (and still contains exciting PG_CATCH games that\n I warned about at the tail end of v11...).\n\n Andres: punt to v13\n\n\n- SQL/JSON: JSON_TABLE\n\n NR: Depends on previous, no review.\n\n Andres: punt to v13\n\n\n- chained transactions\n\n NR: Looks like it ought to be committable\n\n\n- conflict handling for COPY FROM\n\n NR: Clearly not ready for v12, the whole business with requiring a log\n file doesn't seem acceptable.\n\n Andres: punt to v13\n\n\n- FETCH FIRST clause PERCENT option\n\n WOA: Clearly not ready for v12.\n\n Andres: punt to v13\n\n\n- ALTER TABLE on system catalogs\n\n NR: I still think this is the wrong approach, but I can also live with\n Peter hacking this up. But I think a call got to be made at some\n point, rather than schlepping this around continuously.\n\n\n- ATTACH/DETACH PARTITION CONCURRENTLY\n\n NR: Seems to be getting closer to being mergable.\n\n\n- FETCH FIRST clause WITH TIES option\n\n NR: Doesn't quite seem ready, insufficient tests, new code probably\n should be in separate function. But possibly could be fixed up for\n v12?\n\n\n- support VARIADIC arg for least/greatest functions\n\n NR: There's debate whether we want this feature. Tom argues, and I'm\n inclined to agree, that this should rather be separate array specific\n functions. Pavel's position is that that's a separate thing, but I'm\n not sure I agree.\n\n Andres: reject?\n\n\n- Temporary materialized views\n\n NR: Some infrastructure work is needed before this can go in. Not sure\n if that can be finished for v12?\n\n\n- insensitive/non-deterministic collations\n\n NR: Peter has stated that he's targeting v12, but I'm not sure this\n had enough review? 
But it's not *that* complicated...\n\n\n- Log10 and hyperbolic functions for SQL:2016 compliance\n\n  NR: Seems simple enough, we should just merge this.\n\n\n- pg_upgrade: Pass -j option down to vacuumdb\n\n  NR: Seems simple enough, we should just merge this.\n\n\n- Support huge_pages on AIX\n\n  NR: Probably can be merged.\n\n\n\n\nThere are a few patches that the authors, rather than others, have marked as\ntargeting v13, or where authors consented to that. I don't see a need to\ngo through those here.\n\n- Generic type subscripting\n- Transactions involving multiple postgres foreign servers\n- Undo logs\n- Undo worker and transaction rollback\n- Index Skip Scan\n- SERIALIZABLE on standby servers\n- Advanced partition matching for partition-wise join\n\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 15 Feb 2019 21:45:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Hi\n\n\n\n> - block level PRAGMA\n>\n> NR: My reading of this thread is that the proposal is closer to being\n> rejected than merged.\n>\n> Andres: reject or punt?\n>\n>\nThis patch is very simple and makes strong sense for users of\nplpgsql_check. At the moment, only plpgsql_check's users can get some\nadvantage from it. But if we implement autonomous transactions, and I hope\nthis feature will be implemented, then this code can be used for an\nOracle PL/SQL syntax compatible implementation. There is not any\ndisadvantage - it is clean, and compatible with Ada and PL/SQL. I\nimplemented just block level PRAGMA, and there was not any disagreement.\n\nRegards\n\nPavel",
"msg_date": "Sat, 16 Feb 2019 07:02:50 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 9:45 PM Andres Freund <andres@anarazel.de> wrote:\n> - Make nbtree keys unique by appending heap TID, suffix truncation\n>\n> NR: Important, seemingly carefully worked on feature. But it's\n> *complicated* stuff, and there's only been a bit of design review by\n> Heikki. The author's a committer. I don't feel qualified to judge\n> this.\n\nI think that's fair. The best way of understanding the risks as a\nnon-expert is to think of the patch as having two distinct components:\n\n1. Make nbtree entries unique + add suffix truncation -- the basic mechanism.\n\n2. Make more intelligent decisions about where to split pages, to work\nwith suffix truncation, while still mostly caring about the balance of\nfree space among each half of the split, and caring about a number of\nother new concerns besides these two.\n\nYou're right that some parts of the patch are very complicated, but\nthose are all contained in the second component. That has been the\nmain focus of Heikki's review, by far. This second component is\nconcerned with picking a split point that is already known to be legal\nbased on the *existing* criteria. If there were bugs here, they could\nnot result in data corruption. The worst outcome I can conceive of is\nthat an index would be bloated in a new and novel way. It would be\npossible to correct that in a point release without breaking on-disk\ncompatibility. That would be painful, certainly, but it's far from the\nworst outcome.\n\nGranted, there are also one or two subtle things in the first, more\ncritical component, but these are also the things that were\nestablished earliest and have received the most testing. And, amcheck\nis now capable of doing point lookups using the same code paths as\nindex scans (calling _bt_search()) to relocate each and every tuple on\nthe leaf level, starting from the root. The first component does not\nchange anything about how crash recovery or VACUUM works, either. 
It's\nall about how keys compare, and how new pivot tuples are generated --\nit's mostly about the key space, while changing very little about the\nphysical on-disk representation. (It builds on the on-disk\nrepresentation changes added in Postgres 11, for INCLUDE indexes.)\n\n> - SortSupport implementation on inet/cidr\n>\n> WOA: This is a substantial amount of code submitted first for the last\n> CF.\n>\n> Andres: punt to v13\n\nI was kind of involved here. I think that it's fair to punt, based on\nthe rule about submitting a big patch to the last CF.\n\n> - amcheck verification for GiST\n>\n> WOA: Some locking changes are apparently needed, possibly a bit too\n> much to fix up for v12?\n\nI had hoped that Andrey Borodin would get back to me on this soon. It\ndoes still seem unsettled.\n\n> - insensitive/non-deterministic collations\n>\n> NR: Peter has stated that he's targeting v12, but I'm not sure this\n> had enough review? But it's not *that* complicated...\n\nI could help out here.\n\n--\nPeter Geoghegan\n\n",
"msg_date": "Fri, 15 Feb 2019 22:52:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> - Inline Common Table Expressions\n\n> NR: I think it's likely that Tom will commit this soon, we're mostly\n> debating syntax and similar things at this point (and man, I'm looking\n> forward to this).\n\nUnless there are a bunch of votes real soon against the [NOT] MATERIALIZED\nsyntax, I'm going to commit it that way. \"Real soon\" means \"probably\ntomorrow\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Feb 2019 02:05:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 09:45:26PM -0800, Andres Freund wrote:\n> - Making WAL receiver startup rely on GUC context for primary_conninfo\n> and primary_slot_name\n> \n> NR: I think this should be rejected. As noted in the thread, this\n> doesn't actually buy us much, and has some potential costs once we\n> make primary_conninfo PGC_SIGHUP.\n> \n> Andres: Reject\n\nI am not surprised by your opinion here, you stated it clearly on the\nthread :) I have just marked it as rejected, as I don't have the\nenergy to fight for it.\n\n> - online change primary_conninfo\n> \n> WOA: Needs a bit more work, but probably can be finished for v12.\n\nYep, agreed.\n\n> - Add timeline to partial WAL segments\n> \n> WOA: Seems to need a good bit more work, and touches sensitive bits.\n> \n> Andres: +0.5 for punting to v13\n\nThe problem is not that easy, particularly to make things consistent\nbetween the backend and pg_receivewal.\n\n> - Implement NULL-related checks in object address functions to prevent cache lookup errors, take two\n> \n> NR: Seems like this should go into 12, although it'd be good if Alvaro\n> could take another peek before Michael pushes.\n\nI would prefer if Alvaro has an extra look at what I am proposing as\nmost of the stuff in objectaddress.c is originally his.\n\n> - Temporary materialized views\n> \n> NR: Some infrastructure work is needed before this can go in. Not sure\n> if that can be finished for v12?\n\nI would suggest to move that to v13. Relation creation done by CTAS\nneeds to be refactored first, and my take is that we could have this\nrefactoring part for v12, moving the core portion of the proposal to\nthe next dev cycle.\n--\nMichael",
"msg_date": "Sun, 17 Feb 2019 00:01:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 8:45 AM Andres Freund <andres@anarazel.de> wrote:\n> - SQL/JSON: jsonpath\n>\n> WOA: This seems to need substantial further work before being\n> committable\n>\n> Andres: +0.5 for punting to v13\n\nI'm going to post an updated patchset next week. All the issues\nhighlighted will be resolved there. So, let's not decide this too\nearly.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Sat, 16 Feb 2019 22:22:32 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Hi,\n\nOn February 16, 2019 11:22:32 AM PST, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n>On Sat, Feb 16, 2019 at 8:45 AM Andres Freund <andres@anarazel.de>\n>wrote:\n>> - SQL/JSON: jsonpath\n>>\n>> WOA: This seems to need substantial further work before being\n>> committable\n>>\n>> Andres: +0.5 for punting to v13\n>\n>I'm going to post updated patchset next week. All the issues\n>highlighted will be resolved there. So, let's don't decide this too\n>early.\n\nWell, given that the patches still have a lot of the same issues complained about a year ago, where people said we should try to get them into v11, I'm not sure that that's a realistic goal. Jsonb was a success, but also held up the release by several months.\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Sat, 16 Feb 2019 12:31:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 11:31 PM Andres Freund <andres@anarazel.de> wrote:\n> On February 16, 2019 11:22:32 AM PST, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> >On Sat, Feb 16, 2019 at 8:45 AM Andres Freund <andres@anarazel.de>\n> >wrote:\n> >> - SQL/JSON: jsonpath\n> >>\n> >> WOA: This seems to need substantial further work before being\n> >> committable\n> >>\n> >> Andres: +0.5 for punting to v13\n> >\n> >I'm going to post updated patchset next week. All the issues\n> >highlighted will be resolved there. So, let's don't decide this too\n> >early.\n>\n> Well, given that the patches still have a lot of the same issues complained about a year ago, where people said we should try to get them into v11, I'm not sure that that's a realistic goal.\n\nI'm sorry, a year ago I didn't understand this issue correctly.\nOtherwise, I would have pushed people to do something more productive\nduring this year.\n\nIf the solution I'm going to post next week isn't good enough, there\nis a backup plan. We can wipe out error suppression completely. Then\nwe implement a smaller part of the standard, but can still get\nsomething very useful into core.\n\n> Jsonb was a success, but also held up the release by several months.\n\nI'm not asking to commit the patchset in its current shape and hold up\nthe release because of it. I'm just saying there is still time before\nthe feature freeze. So, let's not doom the patchset too early.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Sun, 17 Feb 2019 01:09:11 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 1:09 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Sat, Feb 16, 2019 at 11:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > On February 16, 2019 11:22:32 AM PST, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> > >On Sat, Feb 16, 2019 at 8:45 AM Andres Freund <andres@anarazel.de>\n> > >wrote:\n> > >> - SQL/JSON: jsonpath\n> > >>\n> > >> WOA: This seems to need substantial further work before being\n> > >> committable\n> > >>\n> > >> Andres: +0.5 for punting to v13\n> > >\n> > >I'm going to post updated patchset next week. All the issues\n> > >highlighted will be resolved there. So, let's don't decide this too\n> > >early.\n> >\n> > Well, given that the patches still have a lot of the same issues complained about a year ago, where people said we should try to get them into v11, I'm not sure that that's a realistic goal.\n>\n> I'm sorry, a year ago I didn't understand this issue correctly.\n> Otherwise, I would push people to do something more productive during\n> this year.\n>\n> If solution I'm going to post next week wouldn't be good enough, there\n> is a backup plan. We can wipe out error suppression completely. Then\n> we implement less part of standard, but still can get something very\n> useful into core.\n\nTo be more clear. In both options I'm NOT proposing to commit any\nPG_TRY/PG_CATCH anymore :)\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Sun, 17 Feb 2019 01:17:06 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "> - Prove IS NOT NULL inference for large arrays\n>\n> NR: No idea.\n\nAs the fairly new author of this patch, my perspective is that this\npatch got quite a bit of review, albeit without a formal \"yes or no\"\nresponse.\n\nI'm obviously interested in getting it committed, and I believe it's a\nfairly simple patch to look at (though since it's in predtest it probably\nrequires extra brain cycles to be careful we don't make spurious\nassumptions in the optimizer). It's also an obvious performance win\nfor queries that can use partial indexes, with almost no additional\noptimizer overhead.\n\nBut I'm also interested in feedback on how patches like this work in\nthe review process -- particularly when a patch has gotten a fair amount\nof discussion/cleanup but no final word. As a new patch author, a lot of\nunderstanding how this works feels very much like \"learn as\nyou go\", but since there's not a lot of information directly written\non it I figured asking explicitly is the best way to learn the process\nbetter.\n\nThanks,\nJames Coleman\n\n",
"msg_date": "Sun, 17 Feb 2019 14:44:24 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n>> - Prove IS NOT NULL inference for large arrays\n>> \n>> NR: No idea.\n\n> As the fairly new author of this patch, my perspective is that this\n> patch got quite a bit of review, albeit without a formal \"yes or no\"\n> response.\n\nFWIW, I ran out of time for that patch during the January 'fest, but\nI think it still has a good shot at getting committed during March.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 18:30:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "From: Andres Freund [mailto:andres@anarazel.de]\n> - Protect syscache from bloating with negative cache entries\n> \n> WOA: I think unless the feature is drastically scaled down in scope,\n> there's no chance to get anything into v12.\n> \n> Andres: punt to v13, unless a smaller chunk can be split off\n\nAt a glance, the patch set looks big, divided into 5 patch files, but the actual size may be much smaller after merging those files and excluding 004 (statistics views). I'm reviewing this. I hope this won't be given up yet...\n\n\n> - Speed up planning with partitions\n> \n> NR: Important feature, but based on a skim of the discussion this\n> doesn't seem ready for v12.\n> \n> Andres: punt to v13\n\n> - Speed up transaction completion faster after many relations are\n> accessed in a transaction\n> \n> NR: This has only been submitted 2019-02-12. While not a really\n> complicated patch, it's also not trivial. Therefore I'd say this falls\n> under our policy of not merging nontrivial patches first submitted to\n> the last CF.\n> \n> Andres: punt to v13\n\nI hope these will be continued in PG 12, because they will make PostgreSQL comparable to a commercial database in OLTP workloads. I'd like to hear from Amit and David on the feasibility.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Mon, 18 Feb 2019 02:13:19 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Hi,\n\nOn 2019/02/18 11:13, Tsunakawa, Takayuki wrote:\n> From: Andres Freund [mailto:andres@anarazel.de]\n>> - Speed up planning with partitions\n>>\n>> NR: Important feature, but based on a skim of the discussion this\n>> doesn't seem ready for v12.\n>>\n>> Andres: punt to v13\n> \n>> - Speed up transaction completion faster after many relations are\n>> accessed in a transaction\n>>\n>> NR: This has only been submitted 2019-02-12. While not a really\n>> complicated patch, it's also not trivial. Therefore I'd say this falls\n>> under our policy of not merging nontrivial patches first submitted to\n>> the last CF.\n>>\n>> Andres: punt to v13\n> \n> I hope these will be continued in PG 12, because these will make PostgreSQL comparable to a commercial database in OLTP workloads. I'd like to hear from Amit and David on the feasibility.\n\nAs far as the first one is concerned (speed up planning with partitions),\nalthough there's no new functionality [1], there is quite a bit of code\nchurn affecting somewhat complex logic of inheritance planning. As I\nmentioned in the FOSDEM developer meeting's patch triage discussion,\nthere's a chance of moving this forward if Tom has time to look at some\nportions of these patches. David's and Imai-san's review over the past\nfew months has been very helpful to get the patches to the state they are\nin now.\n\nAs for the 2nd one, while I can say it's really helpful for workloads with\nmany partitions, I have to admit I can't give an expert opinion on the\ncode changes. Thank you for working on it.\n\nThanks,\nAmit\n\n[1] it does enable hash partition pruning for update/delete queries\nthough, which is a new feature\n\n\n",
"msg_date": "Mon, 18 Feb 2019 11:49:31 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "(2019/02/16 14:45), Andres Freund wrote:\n> - postgres_fdw: Perform UPPERREL_ORDERED and UPPERREL_FINAL steps\n> remotely\n>\n> WOA: This is a nontrivial change, and the design and review only\n> started in late December. It's probably not realistic to target v12.\n>\n> Andres: punt to v13\n\nI also think this needs more reviews, but I don't think it's unrealistic \nto target v12, because 1) the patch is actually not that large (at least \nin the latest version, most of the changes are in regression tests), and \n2) IMO the patch is rather straightforward.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 18 Feb 2019 12:40:49 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On Mon, 18 Feb 2019 at 15:50, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> On 2019/02/18 11:13, Tsunakawa, Takayuki wrote:\n> > From: Andres Freund [mailto:andres@anarazel.de]\n> >> - Speed up planning with partitions\n> >>\n> >> NR: Important feature, but based on a skim of the discussion this\n> >> doesn't seem ready for v12.\n> >>\n> >> Andres: punt to v13\n> >\n> >> - Speed up transaction completion faster after many relations are\n> >> accessed in a transaction\n> >>\n> >> NR: This has only been submitted 2019-02-12. While not a really\n> >> complicated patch, it's also not trivial. Therefore I'd say this falls\n> >> under our policy of not merging nontrivial patches first submitted to\n> >> the last CF.\n> >>\n> >> Andres: punt to v13\n> >\n> > I hope these will be continued in PG 12, because these will make PostgreSQL comparable to a commercial database in OLTP workloads. I'd like to hear from Amit and David on the feasibility.\n>\n> As far as the first one is concerned (speed up planning with partitions),\n> although there's no new functionality [1], there is quite a bit of code\n> churn affecting somewhat complex logic of inheritance planning.\n\nI think we need to treat each patch in that series individually. Last\ntime I looked, the first 1 or 2 patches looked very close. I know\nthere's some tricky stuff in later patches in the series, but I don't\nthink that should stop earlier patches being committed for PG12.\nHowever, I'd say that if the entire thing is to make PG12 then we'll\nneed to start ticking off earlier patches pretty soon, likely before\nMarch.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 18:15:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On 2/16/19 7:45 AM, Andres Freund wrote:\n\n> - Add timeline to partial WAL segments\n> \n> WOA: Seems to need a good bit more work, and touches sensitive bits.\n> \n> Andres: +0.5 for punting to v13\nI have labelled this patch v13 and I'll push it as soon as the CF app \nallows me to do so.\n\nI think this is important but there is a lot of tricky work to be done \nin pg_receivewal and it's best not to rush that. Michael and I have an \nagreement in principle but ...\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Mon, 18 Feb 2019 18:59:22 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 18:59:22 +0200, David Steele wrote:\n> On 2/16/19 7:45 AM, Andres Freund wrote:\n> \n> > - Add timeline to partial WAL segments\n> > \n> > WOA: Seems to need a good bit more work, and touches sensitive bits.\n> > \n> > Andres: +0.5 for punting to v13\n> I have labelled this patch v13 and I'll push it as soon as the CF app allows\n> me to do so.\n> \n> I think this is important but there is a lot of tricky work to be done in\n> pg_receivewal and it's best not to rush that. Michael and I have an\n> agreement in principle but ...\n\nFWIW, given that we can now filter by targeting v12 (although it'd be\nnice if it were possible to filter by v12 or NULL), I don't think\nthere's any urgency to push to the next CF immediately.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 09:03:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On 2/18/19 7:03 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-18 18:59:22 +0200, David Steele wrote:\n>> On 2/16/19 7:45 AM, Andres Freund wrote:\n>>\n>>> - Add timeline to partial WAL segments\n>>>\n>>> WOA: Seems to need a good bit more work, and touches sensitive bits.\n>>>\n>>> Andres: +0.5 for punting to v13\n>> I have labelled this patch v13 and I'll push it as soon as the CF app allows\n>> me to do so.\n>>\n>> I think this is important but there is a lot of tricky work to be done in\n>> pg_receivewal and it's best not to rush that. Michael and I have an\n>> agreement in principle but ...\n> \n> FWIW, given that we can now filter by targeting v12 (although it'd be\n> nice if it were possible to filter by v12 or NULL), I don't think\n> there's any urgency to push to the next CF immediately.\n\nAgreed. This new feature helps a lot.\n\nEven so, I'll push it when I can to get it out of my hair, as it were. \nI'll be spending a lot of time looking at that list next month.\n\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Mon, 18 Feb 2019 19:07:29 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> Even so, I'll push it when I can to get it out of my hair, as it were. \n> I'll be spending a lot of time look at that list next month.\n\nCan't you do it now? The status summary line already shows one\npatch as having been pushed to the next CF.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 12:37:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On 2019-02-18 12:37:27 -0500, Tom Lane wrote:\n> David Steele <david@pgmasters.net> writes:\n> > Even so, I'll push it when I can to get it out of my hair, as it were. \n> > I'll be spending a lot of time look at that list next month.\n> \n> Can't you do it now? The status summary line already shows one\n> patch as having been pushed to the next CF.\n\nIt's CF app nannyism. One can't move a patch to the next CF that's\nwaiting-on-author. I've complained about that a number of times, but...\n\n",
"msg_date": "Mon, 18 Feb 2019 09:40:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-18 12:37:27 -0500, Tom Lane wrote:\n>> Can't you do it now? The status summary line already shows one\n>> patch as having been pushed to the next CF.\n\n> It's CF app nannyism. One can't move a patch to the next CF that's\n> waiting-on-author. I've complained about that a number of times, but...\n\nSo change it to another state, push it, change it again.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 12:43:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "\nOn 2/18/19 12:43 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-02-18 12:37:27 -0500, Tom Lane wrote:\n>>> Can't you do it now? The status summary line already shows one\n>>> patch as having been pushed to the next CF.\n>> It's CF app nannyism. One can't move a patch to the next CF that's\n>> waiting-on-author. I've complained about that a number of times, but...\n> So change it to another state, push it, change it again.\n>\n> \t\t\t\n\n\nI'm with Andres. I found this annoying six months ago.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Feb 2019 18:17:55 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 2/18/19 12:43 PM, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> It's CF app nannyism. One can't move a patch to the next CF that's\n>>> waiting-on-author. I've complained about that a number of times, but...\n\n>> So change it to another state, push it, change it again.\n\n> I'm with Andres. I found this annoying six months ago.\n\nOh, I agree the restriction is stupid. I'm just pointing out that\nit can be gotten around.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 18:23:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On 2/19/19 1:23 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 2/18/19 12:43 PM, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> It's CF app nannyism. One can't move a patch to the next CF that's\n>>>> waiting-on-author. I've complained about that a number of times, but...\n> \n>>> So change it to another state, push it, change it again.\n> \n>> I'm with Andres. I found this annoying six months ago.\n> \n> Oh, I agree the restriction is stupid. I'm just pointing out that\n> it can be gotten around.\n\nOK, yeah, that worked. For some reason I was having trouble moving \nthings out of the 2019-01 CF last month but this time it worked just \nfine. I think more than one was marked as open then, not sure.\n\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Tue, 19 Feb 2019 07:25:44 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "(2019/02/18 12:40), Etsuro Fujita wrote:\n> (2019/02/16 14:45), Andres Freund wrote:\n>> - postgres_fdw: Perform UPPERREL_ORDERED and UPPERREL_FINAL steps\n>> remotely\n>>\n>> WOA: This is a nontrivial change, and the design and review only\n>> started in late December. It's probably not realistic to target v12.\n>>\n>> Andres: punt to v13\n>\n> I also think this needs more reviews, but I don't think it's unrealistic\n> to target v12, because 1) the patch is actually not that large (at least\n> in the latest version, most of the changes are in regression tests), and\n> 2) IMO the patch is rather straightforward.\n\nThere seems to be no objections, so I marked this as targeting v12.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 20 Feb 2019 21:30:20 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
},
{
"msg_contents": "On 16.02.2019 8:45, Andres Freund wrote:\n> - pg_rewind: options to use restore_command from recovery.conf or\n> command line\n>\n> WOA: Was previously marked as RFC, but I don't see how it is. Possibly\n> can be finished, but does require a good bit more work.\n\nJust sent a new version of the patch to the thread [1], which removes all \nunnecessary complexity. I am willing to address any new issues during \nthe 2019-03 CF.\n\n[1] \nhttps://www.postgresql.org/message-id/c9cfabce-8fb6-493f-68ec-e0a72d957bf4%40postgrespro.ru\n\n\nThanks\n\n-- \nAlexey Kondratov\n\n\n\n",
"msg_date": "Wed, 20 Feb 2019 16:23:47 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF Summary / Review - Tranche #2"
}
]
[
{
"msg_contents": "As mentioned in the nearby thread, I think there is another oversight in\nthe cost estimation for aggregate pushdown paths in postgres_fdw, IIUC.\n When costing an aggregate pushdown path using local statistics, we\nre-use the estimated costs of implementing the underlying scan/join\nrelation, cached in the relation's PgFdwRelationInfo (ie,\nrel_startup_cost and rel_total_cost). These costs wouldn't yet\ncontain the costs of evaluating the final scan/join target, since tlist\nreplacement by apply_scanjoin_target_to_paths() is performed afterwards.\n So I think we need to adjust these costs so that the tlist eval costs\nare included, but ISTM that estimate_path_cost_size() forgot to do so.\nAttached is a patch for fixing this issue.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 15 Feb 2019 20:19:50 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: another oddity in costing aggregate pushdown paths"
},
{
"msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n\n> As mentioned in the near thread, I think there is another oversight in\n> the cost estimation for aggregate pushdown paths in postgres_fdw, IIUC.\n> When costing an aggregate pushdown path using local statistics, we\n> re-use the estimated costs of implementing the underlying scan/join\n> relation, cached in the relation's PgFdwRelationInfo (ie,\n> rel_startup_cost and rel_total_cost). Since these costs wouldn't yet\n> contain the costs of evaluating the final scan/join target, as tlist\n> replacement by apply_scanjoin_target_to_paths() is performed afterwards.\n> So I think we need to adjust these costs so that the tlist eval costs\n> are included, but ISTM that estimate_path_cost_size() forgot to do so.\n> Attached is a patch for fixing this issue.\n\nI think the following comment in apply_scanjoin_target_to_paths() should\nmention that FDWs rely on the new value of reltarget.\n\n\t/*\n\t * Update the reltarget. This may not be strictly necessary in all cases,\n\t * but it is at least necessary when create_append_path() gets called\n\t * below directly or indirectly, since that function uses the reltarget as\n\t * the pathtarget for the resulting path. It seems like a good idea to do\n\t * it unconditionally.\n\t */\n\trel->reltarget = llast_node(PathTarget, scanjoin_targets);\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n",
"msg_date": "Fri, 22 Feb 2019 15:10:22 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: another oddity in costing aggregate pushdown paths"
},
{
"msg_contents": "(2019/02/22 23:10), Antonin Houska wrote:\n> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> wrote:\n>> As mentioned in the near thread, I think there is another oversight in\n>> the cost estimation for aggregate pushdown paths in postgres_fdw, IIUC.\n>> When costing an aggregate pushdown path using local statistics, we\n>> re-use the estimated costs of implementing the underlying scan/join\n>> relation, cached in the relation's PgFdwRelationInfo (ie,\n>> rel_startup_cost and rel_total_cost). Since these costs wouldn't yet\n>> contain the costs of evaluating the final scan/join target, as tlist\n>> replacement by apply_scanjoin_target_to_paths() is performed afterwards.\n>> So I think we need to adjust these costs so that the tlist eval costs\n>> are included, but ISTM that estimate_path_cost_size() forgot to do so.\n>> Attached is a patch for fixing this issue.\n>\n> I think the following comment in apply_scanjoin_target_to_paths() should\n> mention that FDWs rely on the new value of reltarget.\n>\n> \t/*\n> \t * Update the reltarget. This may not be strictly necessary in all cases,\n> \t * but it is at least necessary when create_append_path() gets called\n> \t * below directly or indirectly, since that function uses the reltarget as\n> \t * the pathtarget for the resulting path. It seems like a good idea to do\n> \t * it unconditionally.\n> \t */\n> \trel->reltarget = llast_node(PathTarget, scanjoin_targets);\n\nAgreed. How about mentioning that like the attached? In addition, I \nadded another assertion to estimate_path_cost_size() in that patch.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 25 Feb 2019 19:59:01 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: another oddity in costing aggregate pushdown paths"
},
{
"msg_contents": "(2019/02/25 19:59), Etsuro Fujita wrote:\n> (2019/02/22 23:10), Antonin Houska wrote:\n>> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> wrote:\n>>> As mentioned in the near thread, I think there is another oversight in\n>>> the cost estimation for aggregate pushdown paths in postgres_fdw, IIUC.\n>>> When costing an aggregate pushdown path using local statistics, we\n>>> re-use the estimated costs of implementing the underlying scan/join\n>>> relation, cached in the relation's PgFdwRelationInfo (ie,\n>>> rel_startup_cost and rel_total_cost). Since these costs wouldn't yet\n>>> contain the costs of evaluating the final scan/join target, as tlist\n>>> replacement by apply_scanjoin_target_to_paths() is performed afterwards.\n>>> So I think we need to adjust these costs so that the tlist eval costs\n>>> are included, but ISTM that estimate_path_cost_size() forgot to do so.\n>>> Attached is a patch for fixing this issue.\n>>\n>> I think the following comment in apply_scanjoin_target_to_paths() should\n>> mention that FDWs rely on the new value of reltarget.\n>>\n>> /*\n>> * Update the reltarget. This may not be strictly necessary in all cases,\n>> * but it is at least necessary when create_append_path() gets called\n>> * below directly or indirectly, since that function uses the reltarget as\n>> * the pathtarget for the resulting path. It seems like a good idea to do\n>> * it unconditionally.\n>> */\n>> rel->reltarget = llast_node(PathTarget, scanjoin_targets);\n>\n> Agreed. How about mentioning that like the attached? In addition, I\n> added another assertion to estimate_path_cost_size() in that patch.\n\nThis doesn't get applied cleanly after commit 1d33858406. Here is a \nrebased version of the patch. I also modified the comments a little \nbit. If there are no objections from Antonin or anyone else, I'll \ncommit the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 08 May 2019 12:42:51 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: another oddity in costing aggregate pushdown paths"
},
{
"msg_contents": "On Wed, May 8, 2019 at 12:45 PM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> This doesn't get applied cleanly after commit 1d33858406. Here is a\n> rebased version of the patch. I also modified the comments a little\n> bit. If there are no objections from Antonin or anyone else, I'll\n> commit the patch.\n\nPushed. Thanks for reviewing, Antonin!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 9 May 2019 18:52:38 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: another oddity in costing aggregate pushdown paths"
}
] |
[
{
"msg_contents": "We removed channel binding from PG 11 in August of 2018 because we were\nconcerned about downgrade attacks. Are there any plans to enable it for\nPG 12?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 15 Feb 2019 16:17:07 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Channel binding"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 04:17:07PM -0500, Bruce Momjian wrote:\n> We removed channel binding from PG 11 in August of 2018 because we were\n> concerned about downgrade attacks. Are there any plans to enable it for\n> PG 12?\n\nThe original implementation of channel binding for SCRAM included\nsupport for two channel binding types: tls-unique and\ntls-server-end-point. The original implementation also had a\nconnection parameter called scram_channel_binding to control the\nchannel binding type to use or to disable it.\n\nWhat has been removed via 7729113 are tls-unique and the libpq\nparameter, and we still have basic channel binding support. The\nreason behind that is that tls-unique's future is uncertain as of TLS\n1.3, and that tls-server-end-point will still be supported. This also\nsimplified the protocol, as it is no longer necessary to let the client\ndecide which channel binding type to use.\n\nDowngrade attacks at the protocol level are something different though, as\nit is possible to trick libpq into lowering the initial level of\nauthentication wanted (say from SCRAM to MD5, or even worse from MD5\nto trust). What we need here is an additional frontend facility to allow\na client to authorize only a subset of authentication protocols. With\nwhat's in v11 and HEAD, any driver speaking the Postgres protocol can\nimplement that.\n--\nMichael",
"msg_date": "Sat, 16 Feb 2019 10:12:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Channel binding"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 10:12:19AM +0900, Michael Paquier wrote:\n> On Fri, Feb 15, 2019 at 04:17:07PM -0500, Bruce Momjian wrote:\n> > We removed channel binding from PG 11 in August of 2018 because we were\n> > concerned about downgrade attacks. Are there any plans to enable it for\n> > PG 12?\n> \n> The original implementation of channel binding for SCRAM has included\n> support for two channel binding types: tls-unique and\n> tls-server-end-point. The original implementation also had a\n> connection parameter called scram_channel_binding to control the\n> channel binding type to use or to disable it.\n> \n> What has been removed via 7729113 are tls-unique and the libpq\n> parameter, and we still have basic channel binding support. The\n> reasons behind that is that tls-unique future is uncertain as of TLS\n> 1.3, and that tls-server-end-point will still be supported. This also\n> simplified the protocol as it is not necessary to let the client\n> decide which channel binding to use.\n\nWell, my point was that this feature was considered to be very\nimportant for PG 11, but for some reason there has been no advancement\nof this feature for PG 12.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Fri, 15 Feb 2019 22:21:12 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Channel binding"
},
{
"msg_contents": "On Fri, Feb 15, 2019 at 10:21:12PM -0500, Bruce Momjian wrote:\n> Well, my point was that this features was considered to be very\n> important for PG 11, but for some reason there has been no advancement\n> of this features for PG 12.\n\nYeah, it is unfortunate that we have not seen patches or\nconcrete proposals for a proper libpq patch about that.\n--\nMichael",
"msg_date": "Sat, 16 Feb 2019 23:55:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Channel binding"
}
] |
[
{
"msg_contents": "I saw this error once last week while stress testing to reproduce earlier bugs,\nbut tentatively thought it was a downstream symptom of those bugs (since\nfixed), and now wanted to check that #15585 and others were no longer\nreproducible. Unfortunately I got this error while running same test case [2]\nas for previous bug ('could not attach').\n\n2019-02-14 23:40:41.611 MST [32287] ERROR: cannot unpin a segment that is not pinned\n\nOn commit faf132449c0cafd31fe9f14bbf29ca0318a89058 (REL_11_STABLE including\nboth of last week's post-11.2 DSA patches), I reproduced twice, once within\n~2.5 hours, once within 30min.\n\nI'm not able to reproduce on master running overnight and now 16+hours.\n\nSee also:\n\npg11.1: dsa_area could not attach to segment - resolved by commit 6c0fb941892519ad6b8873e99c4001404fb9a128\n [1] https://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\npg11.1: dsa_area could not attach to segment:\n [2] https://www.postgresql.org/message-id/20190211040132.GV31721%40telsasoft.com\nBUG #15585: infinite DynamicSharedMemoryControlLock waiting in parallel query\n [3] https://www.postgresql.org/message-id/flat/15585-324ff6a93a18da46%40postgresql.org\ndsa_allocate() faliure - resolved by commit 7215efdc005e694ec93678a6203dbfc714d12809 (also doesn't affect master)\n [4] https://www.postgresql.org/message-id/flat/CAMAYy4%2Bw3NTBM5JLWFi8twhWK4%3Dk_5L4nV5%2BbYDSPu8r4b97Zg%40mail.gmail.com\n\n#0 0x00007f96a589e277 in raise () from /lib64/libc.so.6\n#1 0x00007f96a589f968 in abort () from /lib64/libc.so.6\n#2 0x000000000088b6f7 in errfinish (dummy=dummy@entry=0) at elog.c:555\n#3 0x000000000088eee8 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0xa52cb0 \"cannot unpin a segment that is not pinned\") at elog.c:1376\n#4 0x00000000007578b2 in dsm_unpin_segment (handle=1780672242) at dsm.c:914\n#5 0x00000000008aee15 in dsa_release_in_place (place=0x7f96a6991640) at dsa.c:618\n#6 0x00000000007571a9 in dsm_detach 
(seg=0x2470f78) at dsm.c:731\n#7 0x0000000000509233 in DestroyParallelContext (pcxt=0x24c18c0) at parallel.c:900\n#8 0x000000000062db65 in ExecParallelCleanup (pei=0x25aacf8) at execParallel.c:1154\n#9 0x0000000000640588 in ExecShutdownGather (node=node@entry=0x2549b60) at nodeGather.c:406\n#10 0x0000000000630208 in ExecShutdownNode (node=0x2549b60) at execProcnode.c:767\n#11 0x000000000067724f in planstate_tree_walker (planstate=planstate@entry=0x2549a48, walker=walker@entry=0x6301c0 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:3739\n#12 0x00000000006301dd in ExecShutdownNode (node=0x2549a48) at execProcnode.c:749\n#13 0x000000000067724f in planstate_tree_walker (planstate=planstate@entry=0x25495d0, walker=walker@entry=0x6301c0 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:3739\n#14 0x00000000006301dd in ExecShutdownNode (node=0x25495d0) at execProcnode.c:749\n#15 0x000000000067724f in planstate_tree_walker (planstate=planstate@entry=0x25494b8, walker=walker@entry=0x6301c0 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:3739\n#16 0x00000000006301dd in ExecShutdownNode (node=0x25494b8) at execProcnode.c:749\n#17 0x000000000067724f in planstate_tree_walker (planstate=planstate@entry=0x25492a8, walker=walker@entry=0x6301c0 <ExecShutdownNode>, context=context@entry=0x0) at nodeFuncs.c:3739\n#18 0x00000000006301dd in ExecShutdownNode (node=node@entry=0x25492a8) at execProcnode.c:749\n#19 0x0000000000628fbd in ExecutePlan (execute_once=<optimized out>, dest=0xd96e60 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, \n planstate=0x25492a8, estate=0x2549038) at execMain.c:1787\n#20 standard_ExecutorRun (queryDesc=0x2576990, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#21 0x00000000005c635f in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2579820, into=into@entry=0x0, es=es@entry=0x2529170, \n 
queryString=queryString@entry=0x243f348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, planduration=planduration@entry=0x7fff1048afa0) at explain.c:535\n#22 0x00000000005c665f in ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x2529170, \n queryString=0x243f348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=0x0, queryEnv=0x0) at explain.c:371\n#23 0x00000000005c6bbe in ExplainQuery (pstate=pstate@entry=0x2461bd8, stmt=stmt@entry=0x24f87a8, \n queryString=queryString@entry=0x243f348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x2461b40) at explain.c:254\n#24 0x0000000000782a5d in standard_ProcessUtility (pstmt=0x24f8930, \n queryString=0x243f348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2461b40, completionTag=0x7fff1048b160 \"\") at utility.c:675\n#25 0x000000000077fcc6 in PortalRunUtility (portal=0x24a4bd8, pstmt=0x24f8930, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x2461b40, completionTag=0x7fff1048b160 \"\") at pquery.c:1178\n#26 0x0000000000780b03 in FillPortalStore (portal=portal@entry=0x24a4bd8, isTopLevel=isTopLevel@entry=true) at pquery.c:1038\n#27 
0x0000000000781638 in PortalRun (portal=portal@entry=0x24a4bd8, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x2512490, \n#28 0x000000000077d27f in exec_simple_query (\n query_string=0x243f348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"...) at postgres.c:1145\n#29 0x000000000077e5b2 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x24691b8, dbname=0x24690b0 \"postgres\", username=<optimized out>) at postgres.c:4182\n#30 0x000000000047cdf3 in BackendRun (port=0x24613a0) at postmaster.c:4361\n#31 BackendStartup (port=0x24613a0) at postmaster.c:4033\n#32 ServerLoop () at postmaster.c:1706\n#33 0x0000000000706449 in PostmasterMain (argc=argc@entry=20, argv=argv@entry=0x2439ba0) at postmaster.c:1379\n#34 0x000000000047ddd1 in main (argc=20, argv=0x2439ba0) at main.c:228\n\n",
"msg_date": "Fri, 15 Feb 2019 20:38:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 3:38 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I saw this error once last week while stress testing to reproduce earlier bugs,\n> but tentatively thought it was a downstream symptom of those bugs (since\n> fixed), and now wanted to check that #15585 and others were no longer\n> reproducible. Unfortunately I got this error while running same test case [2]\n> as for previous bug ('could not attach').\n>\n> 2019-02-14 23:40:41.611 MST [32287] ERROR: cannot unpin a segment that is not pinned\n>\n> On commit faf132449c0cafd31fe9f14bbf29ca0318a89058 (REL_11_STABLE including\n> both of last week's post-11.2 DSA patches), I reproduced twice, once within\n> ~2.5 hours, once within 30min.\n>\n> I'm not able to reproduce on master running overnight and now 16+hours.\n\nOh, I think I know why: dsm_unpin_segment() contains another variant\nof the race fixed by 6c0fb941 (that was for dsm_attach() being\nconfused by segments with the same handle that are concurrently going\naway, but dsm_unpin_segment() does a handle lookup too, so it can be\nconfused by the same phenomenon). Untested, but the fix is probably:\n\ndiff --git a/src/backend/storage/ipc/dsm.c b/src/backend/storage/ipc/dsm.c\nindex cfbebeb31d..23ccc59f13 100644\n--- a/src/backend/storage/ipc/dsm.c\n+++ b/src/backend/storage/ipc/dsm.c\n@@ -844,8 +844,8 @@ dsm_unpin_segment(dsm_handle handle)\n LWLockAcquire(DynamicSharedMemoryControlLock, LW_EXCLUSIVE);\n for (i = 0; i < dsm_control->nitems; ++i)\n {\n- /* Skip unused slots. */\n- if (dsm_control->item[i].refcnt == 0)\n+ /* Skip unused slots and segments that are concurrently going away. */\n+ if (dsm_control->item[i].refcnt <= 1)\n continue;\n\n /* If we've found our handle, we can stop searching. */\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Sat, 16 Feb 2019 17:08:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 09:16:01PM +1300, Thomas Munro wrote:\n> On Sat, Feb 16, 2019 at 5:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Thanks, will leave it spinning overnight.\n\nNo errors in ~36 hours (126 CPU-hrs), so that seems to work. Thanks.\n\n> > Do you know if any of the others should also be changed ?\n> \n> Good question, let me double check...\n> \n> > $ grep 'refcnt == 0' src/backend/storage/ipc/dsm.c\n> > if (refcnt == 0)\n> \n> That's dsm_cleanup_using_control_segment() and runs when starting up\n> before any workers can be running to clean up after a preceding crash,\n> so it's OK (if it's 1, meaning we crashed while that slot was going\n> away, we'll try to destroy it again, which is correct). Good.\n> \n> > if (dsm_control->item[i].refcnt == 0)\n> \n> That's dsm_postmaster_shutdown(), similar but at shutdown time, run by\n> the postmaster, and it errs on the side of trying to destroy. Good.\n> \n> > if (dsm_control->item[i].refcnt == 0)\n> \n> That's dsm_create(), and it's looking specifically for a free slot,\n> and that's 0 only, it'll step over used/active (refcnt > 1) and\n> used/going-away (refcnt == 1). Good.\n\n",
"msg_date": "Sun, 17 Feb 2019 13:41:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 01:41:45PM -0600, Justin Pryzby wrote:\n> On Sat, Feb 16, 2019 at 09:16:01PM +1300, Thomas Munro wrote:\n> > On Sat, Feb 16, 2019 at 5:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Thanks, will leave it spinning overnight.\n> \n> No errors in ~36 hours (126 CPU-hrs), so that seems to work. Thanks.\n\nActually...\n\nOn killing the postmaster having completed this stress test, one of the\nbackends was left running and didn't die on its own. It did die gracefully\nwhen I killed the backend or the client.\n\nI was able to repeat the result, on first try, but took numerous attempts to\nrepeat the 2nd and 3rd time to save pg_stat_activity.\n\nIs there some issue regarding dsm_postmaster_shutdown ?\n\n[pryzbyj@database postgresql]$ ps -wwf --ppid 31656\nUID PID PPID C STIME TTY TIME CMD\npryzbyj 4512 31656 1 13:00 ? 00:00:00 postgres: pryzbyj postgres [local] EXPLAIN\npryzbyj 31657 31656 0 12:59 ? 00:00:00 postgres: logger \npryzbyj 31659 31656 0 12:59 ? 00:00:00 postgres: checkpointer \npryzbyj 31662 31656 0 12:59 ? 00:00:00 postgres: stats collector \npryzbyj 31785 31656 0 12:59 ? 
00:00:00 postgres: pryzbyj postgres [local] idle\n\ndatid | 13285\ndatname | postgres\npid | 4512\nusesysid | 10\nusename | pryzbyj\napplication_name | psql\nclient_addr | \nclient_hostname | \nclient_port | -1\nbackend_start | 2019-02-17 13:00:50.79285-07\nxact_start | 2019-02-17 13:00:50.797711-07\nquery_start | 2019-02-17 13:00:50.797711-07\nstate_change | 2019-02-17 13:00:50.797713-07\nwait_event_type | IPC\nwait_event | ExecuteGather\nstate | active\nbackend_xid | \nbackend_xmin | 1569\nquery | explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS typ\nes FROM queued_alters qa JOIN pg_attribute colpar ON to_regclass(qa.parent)=colpar.attrelid AND colpar.attnum>0 AND NOT colpar.attisdropped JOIN (SELECT *, attrelid::regclass::text AS child FROM pg_attribute) colcld \nON to_regclass(qa.child) =colcld.attrelid AND colcld.attnum>0 AND NOT colcld.attisdropped WHERE colcld.attname=colpar.attname AND colpar.atttypid!=colcld.atttypid GROUP BY 1,2 ORDER BY parent LIKE 'unused%', regexp_r\neplace(colcld.child, '.*_((([0-9]{4}_[0-9]{2})_[0-9]{2})|(([0-9]{6})([0-9]{2})?))$', '\\3\\5') DESC, regexp_replace(colcld.child, '.*_', '') DESC LIMIT 1\nbackend_type | client backend\n\n#0 0x00007fe131637163 in __epoll_wait_nocancel () from /lib64/libc.so.6\n#1 0x0000000000758d26 in WaitEventSetWaitBlock (nevents=1, occurred_events=0x7ffde16775a0, cur_timeout=-1, set=0x7fe132640e50) at latch.c:1048\n#2 WaitEventSetWait (set=set@entry=0x7fe132640e50, timeout=timeout@entry=-1, occurred_events=occurred_events@entry=0x7ffde16775a0, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=134217731)\n at latch.c:1000\n#3 0x00000000007591c2 in WaitLatchOrSocket (latch=0x7fe12a7591b4, wakeEvents=wakeEvents@entry=1, sock=sock@entry=-1, timeout=-1, timeout@entry=0, wait_event_info=wait_event_info@entry=134217731) at latch.c:385\n#4 
0x00000000007592a0 in WaitLatch (latch=<optimized out>, wakeEvents=wakeEvents@entry=1, timeout=timeout@entry=0, wait_event_info=wait_event_info@entry=134217731) at latch.c:339\n#5 0x00000000006401e2 in gather_readnext (gatherstate=<optimized out>) at nodeGather.c:367\n#6 gather_getnext (gatherstate=0x2af1f70) at nodeGather.c:256\n#7 ExecGather (pstate=0x2af1f70) at nodeGather.c:207\n#8 0x0000000000630188 in ExecProcNodeInstr (node=0x2af1f70) at execProcnode.c:461\n#9 0x0000000000653506 in ExecProcNode (node=0x2af1f70) at ../../../src/include/executor/executor.h:247\n#10 ExecSort (pstate=0x2af1e58) at nodeSort.c:107\n#11 0x0000000000630188 in ExecProcNodeInstr (node=0x2af1e58) at execProcnode.c:461\n#12 0x0000000000638a89 in ExecProcNode (node=0x2af1e58) at ../../../src/include/executor/executor.h:247\n#13 fetch_input_tuple (aggstate=aggstate@entry=0x2af19e0) at nodeAgg.c:406\n#14 0x000000000063a6b0 in agg_retrieve_direct (aggstate=0x2af19e0) at nodeAgg.c:1740\n#15 ExecAgg (pstate=0x2af19e0) at nodeAgg.c:1555\n#16 0x0000000000630188 in ExecProcNodeInstr (node=0x2af19e0) at execProcnode.c:461\n#17 0x0000000000653506 in ExecProcNode (node=0x2af19e0) at ../../../src/include/executor/executor.h:247\n#18 ExecSort (pstate=0x2af18c8) at nodeSort.c:107\n#19 0x0000000000630188 in ExecProcNodeInstr (node=0x2af18c8) at execProcnode.c:461\n#20 0x00000000006498e1 in ExecProcNode (node=0x2af18c8) at ../../../src/include/executor/executor.h:247\n#21 ExecLimit (pstate=0x2af16b8) at nodeLimit.c:95\n#22 0x0000000000630188 in ExecProcNodeInstr (node=0x2af16b8) at execProcnode.c:461\n#23 0x0000000000628eda in ExecProcNode (node=0x2af16b8) at ../../../src/include/executor/executor.h:247\n#24 ExecutePlan (execute_once=<optimized out>, dest=0xd96e60 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x2af16b8, \n estate=0x2af1448) at execMain.c:1723\n#25 standard_ExecutorRun (queryDesc=0x2b1eda0, 
direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#26 0x00000000005c635f in ExplainOnePlan (plannedstmt=plannedstmt@entry=0x2b21c30, into=into@entry=0x0, es=es@entry=0x2ad1580, \n queryString=queryString@entry=0x29e7348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, planduration=planduration@entry=0x7ffde1677970) at explain.c:535\n#27 0x00000000005c665f in ExplainOneQuery (query=<optimized out>, cursorOptions=<optimized out>, into=0x0, es=0x2ad1580, \n queryString=0x29e7348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=0x0, queryEnv=0x0) at explain.c:371\n#28 0x00000000005c6bbe in ExplainQuery (pstate=pstate@entry=0x2a09bd8, stmt=stmt@entry=0x2aa0bb8, \n queryString=queryString@entry=0x29e7348 \"explain analyze SELECT colcld.child c, parent p, array_agg(colpar.attname::text ORDER BY colpar.attnum) cols, array_agg(format_type(colpar.atttypid, colpar.atttypmod) ORDER BY colpar.attnum) AS types \"..., params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x2a09b40) at explain.c:254\n#29 0x0000000000782a1d in standard_ProcessUtility (pstmt=0x2aa0d40, \n\n\n",
"msg_date": "Sun, 17 Feb 2019 14:07:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 9:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Feb 17, 2019 at 01:41:45PM -0600, Justin Pryzby wrote:\n> > On Sat, Feb 16, 2019 at 09:16:01PM +1300, Thomas Munro wrote:\n> > > On Sat, Feb 16, 2019 at 5:31 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Thanks, will leave it spinning overnight.\n> >\n> > No errors in ~36 hours (126 CPU-hrs), so that seems to work. Thanks.\n\nGreat news. I will commit that.\n\n> Actually...\n>\n> On killing the postmaster having completed this stress test, one of the\n> backends was left running and didn't die on its own. It did die gracefully\n> when I killed the backend or the client.\n>\n> I was able to repeat the result, on first try, but took numerous attempts to\n> repeat the 2nd and 3rd time to save pg_stat_activity.\n>\n> Is there some issue regarding dsm_postmaster_shutdown ?\n\nHuh. What exactly do you mean by \"killing the postmaster\"? If you\nmean SIGKILL or something, one problem with 11 is that\ngather_readnext() doesn't respond to postmaster death. I fixed that\n(and every similar place) in master with commit cfdf4dc4fc9, like so:\n\n- WaitLatch(MyLatch, WL_LATCH_SET, 0,\nWAIT_EVENT_EXECUTE_GATHER);\n+ (void) WaitLatch(MyLatch, WL_LATCH_SET |\nWL_EXIT_ON_PM_DEATH, 0,\n+\nWAIT_EVENT_EXECUTE_GATHER);\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Mon, 18 Feb 2019 09:26:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 09:26:53AM +1300, Thomas Munro wrote:\n> On Mon, Feb 18, 2019 at 9:07 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Actually...\n> >\n> > On killing the postmaster having completed this stress test, one of the\n> > backends was left running and didn't die on its own. It did die gracefully\n> > when I killed the backend or the client.\n> >\n> > I was able to repeat the result, on first try, but took numerous attempts to\n> > repeat the 2nd and 3rd time to save pg_stat_activity.\n> >\n> > Is there some issue regarding dsm_postmaster_shutdown ?\n> \n> Huh. What exactly do you mean by \"killing the postmaster\"? If you\n> mean SIGKILL or something, one problem with 11 is that\n\nI mean unqualified /bin/kill which is kill -TERM (-15 in linux).\n\nI gather (pun acknowledged but not intended) you mean \"one problem with PG v11\"\nand not \"one problem with kill -11\" (which is what I first thought, although I\nwas somehow confusing kill -9, and since I don't know why anyone would ever\nwant to manually send SIGSEGV).\n\nI think you're suggesting that's a known issue with v11, so nothing to do.\n\nThanks,\nJustin\n\n",
"msg_date": "Sun, 17 Feb 2019 14:35:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 9:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Feb 18, 2019 at 09:26:53AM +1300, Thomas Munro wrote:\n> > Huh. What exactly do you mean by \"killing the postmaster\"? If you\n> > mean SIGKILL or something, one problem with 11 is that\n>\n> I mean unqualified /bin/kill which is kill -TERM (-15 in linux).\n>\n> I gather (pun acknowledged but not intended) you mean \"one problem with PG v11\"\n> and not \"one problem with kill -11\" (which is what I first thought, although I\n> was somehow confusing kill -9, and since I don't know why anyone would ever\n> want to manually send SIGSEGV).\n>\n> I think you're suggesting that's a known issue with v11, so nothing to do.\n\nYeah. I suppose we should probably consider back-patching a fix for that.\n\nI've pushed the second DSM fix. Here is a summary of the errors you\n(and others) have reported, for the benefit of people searching the\narchives. I will give the master commit IDs, but the fixes should be\nin 11.3 and 10.8.\n\n1. \"dsa_allocate could not find %zu free pages\": freepage.c, fixed in 7215efdc.\n2. \"dsa_area could not attach to segment\": dsm.c, fixed in 6c0fb941.\n3. \"cannot unpin a segment that is not pinned\": dsm.c, fixed in 0b55aaac.\n\nThat resolves all the bugs I'm currently aware of in this area.\n(There is still the question of whether we should change the way DSA\ncleans up to avoid a self-deadlock on recursive error, but that needs\nsome more thought.)\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n",
"msg_date": "Mon, 18 Feb 2019 10:26:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 09:26:53AM +1300, Thomas Munro wrote:\n> Huh. What exactly do you mean by \"killing the postmaster\"? If you\n> mean SIGKILL or something, one problem with 11 is that\n> gather_readnext() doesn't respond to postmaster death. I fixed that\n> (and every similar place) in master with commit cfdf4dc4fc9, like so:\n\nOn Mon, Feb 18, 2019 at 10:26:12AM +1300, Thomas Munro wrote:\n> Yeah. I suppose we should probably consider back-patching a fix for that.\n\nIt hasn't been an issue for us, but that seems like a restart hazard. Who\nknows what all the distros' initscripts do, or how thin a layer they are around\npg_ctl or kill, but you risk waiting indefinitely for postmaster and its gather\nbackend/s to die, all the while rejecting new clients with 'the database system\nis shutting down'.\n\n+1\n\nJustin\n\n",
"msg_date": "Sun, 17 Feb 2019 16:36:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 10:26 AM Thomas Munro\n<thomas.munro@enterprisedb.com> wrote:\n> 1. \"dsa_allocate could not find %zu free pages\": freepage.c, fixed in 7215efdc.\n> 2. \"dsa_area could not attach to segment\": dsm.c, fixed in 6c0fb941.\n> 3. \"cannot unpin a segment that is not pinned\": dsm.c, fixed in 0b55aaac.\n>\n> That resolves all the bugs I'm currently aware of in this area.\n\nBleugh. After that optimistic statement, Justin reminded me of a\none-off segfault report hiding over on the pgsql-performance list[1].\nThat was buried in a bunch of discussion of bug #1 in the above list,\nwhich is now fixed. However, the segfault report was never directly\naddressed.\n\nAfter thinking really hard and doubling down on coffee, I think I know\nhow it happened -- actually it's something I have mentioned once\nbefore as a minor defect, but I hadn't connected all the dots. Bug #1\nabove was occurring occasionally in Jakub's workload, and there is a\npath through the code that would result in plain old allocation\nfailure instead of raising FATAL error #1, and a place where failure\nto allocate can return InvalidDsaPointer instead of raising an \"out of\nmemory\" error so that calling code could finish up eating a null\npointer that it wasn't expecting (that's user controllable, using the\nDSA_ALLOC_NO_OOM flag, but that one place didn't respect it). Here is\na patch that fixes those problems.\n\n[1] https://postgr.es/m/CAJk1zg3ZXhDsFg7tQGJ3ZD6N9dp%2BQ1_DU2N3%3Ds3Ywb-u6Lhc5A%40mail.gmail.com\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com",
"msg_date": "Mon, 18 Feb 2019 13:24:50 +1300",
"msg_from": "Thomas Munro <thomas.munro@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: REL_11_STABLE: dsm.c - cannot unpin a segment that is not pinned"
}
] |
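For readers landing on this thread from the archives: the "eating a null pointer" failure mode Thomas describes came from an allocation path that could hand back InvalidDsaPointer even when the caller had not asked for DSA_ALLOC_NO_OOM. Below is a toy Python model of the intended contract — the flag and pointer names come from the thread, but the classes and functions are invented for illustration and are not the dsa.c code:

```python
# Toy model (NOT the dsa.c implementation) of the allocation contract
# discussed above: with DSA_ALLOC_NO_OOM the caller opts into a null
# result and must check it; without the flag, allocation failure must
# raise rather than return a null handle the caller will dereference.

DSA_ALLOC_NO_OOM = 0x01
INVALID_DSA_POINTER = 0  # stand-in for InvalidDsaPointer

def dsa_allocate_extended(area, size, flags):
    """Return a handle, or INVALID_DSA_POINTER only when the caller
    passed DSA_ALLOC_NO_OOM; otherwise raise on failure.  The bug was
    one path that skipped the raise and returned the invalid pointer
    unconditionally."""
    handle = area.try_allocate(size)
    if handle == INVALID_DSA_POINTER and not (flags & DSA_ALLOC_NO_OOM):
        raise MemoryError("out of memory")
    return handle

class ExhaustedArea:
    """A hypothetical area whose segments are full: every allocation fails."""
    def try_allocate(self, size):
        return INVALID_DSA_POINTER
```

With this contract, callers that never pass the flag can safely assume a valid handle on return, which is exactly the assumption the unpatched path violated.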
[
{
"msg_contents": "Hi,\n\nWhen explaining a query, I think knowing the actual rows and pages in addition\nto the operation type (e.g. seqscan) would be enough to calculate the actual\ncost. The actual cost metric could be useful when we want to look into how off\nis the planner's estimation, and the correlation between time and cost. Would\nit be a feature worth considering?\n\nThank you,\nDonald Dong\n",
"msg_date": "Sat, 16 Feb 2019 15:10:44 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Actual Cost"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 03:10:44PM -0800, Donald Dong wrote:\n> Hi,\n> \n> When explaining a query, I think knowing the actual rows and pages\n> in addition to the operation type (e.g seqscan) would be enough to\n> calculate the actual cost. The actual cost metric could be useful\n> when we want to look into how off is the planner's estimation, and\n> the correlation between time and cost. Would it be a feature worth\n> considering?\n\nAs someone not volunteering to do any of the work, I think it'd be a\nnice thing to have. How large an effort would you guess it would be\nto build a proof of concept?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 17 Feb 2019 03:40:05 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On 2/17/19 3:40 AM, David Fetter wrote:\n> On Sat, Feb 16, 2019 at 03:10:44PM -0800, Donald Dong wrote:\n>> Hi,\n>>\n>> When explaining a query, I think knowing the actual rows and pages\n>> in addition to the operation type (e.g seqscan) would be enough to\n>> calculate the actual cost. The actual cost metric could be useful\n>> when we want to look into how off is the planner's estimation, and\n>> the correlation between time and cost. Would it be a feature worth\n>> considering?\n> \n> As someone not volunteering to do any of the work, I think it'd be a\n> nice thing to have. How large an effort would you guess it would be\n> to build a proof of concept?\n> \n\nI don't quite understand what is meant by \"actual cost metric\" and/or\nhow is that different from running EXPLAIN ANALYZE.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 03:44:25 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:\n> \n> On 2/17/19 3:40 AM, David Fetter wrote:\n>> \n>> As someone not volunteering to do any of the work, I think it'd be a\n>> nice thing to have. How large an effort would you guess it would be\n>> to build a proof of concept?\n> \n> I don't quite understand what is meant by \"actual cost metric\" and/or\n> how is that different from running EXPLAIN ANALYZE.\n\nHere is an example:\n\nHash Join (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500 time=209.820..1168.831 rows=47 loops=3)\n\nNow we have the actual time. Time can have a high variance (a change\nin system load, or just noise), but I think the actual cost would be\nless likely to change due to external factors.\n\nOn 2/17/19 3:40 AM, David Fetter wrote:\n> On Sat, Feb 16, 2019 at 03:10:44PM -0800, Donald Dong wrote:\n>> Hi,\n>> \n>> When explaining a query, I think knowing the actual rows and pages\n>> in addition to the operation type (e.g seqscan) would be enough to\n>> calculate the actual cost. The actual cost metric could be useful\n>> when we want to look into how off is the planner's estimation, and\n>> the correlation between time and cost. Would it be a feature worth\n>> considering?\n> \n> As someone not volunteering to do any of the work, I think it'd be a\n> nice thing to have. How large an effort would you guess it would be\n> to build a proof of concept?\n\nIntuitively it does not feel very complicated to me, but I think the\ninterface when we're planning (PlannerInfo, RelOptInfo) is different\nfrom the interface when we're explaining a query (QueryDesc). Since\nI'm very new, if I'm doing it myself, it would probably take many\niterations to get to the right point. But still, I'm happy to work on\na proof of concept if no one else wants to work on it.\n\nregards,\nDonald Dong\n",
"msg_date": "Sat, 16 Feb 2019 19:32:59 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:\n>> I don't quite understand what is meant by \"actual cost metric\" and/or\n>> how is that different from running EXPLAIN ANALYZE.\n\n> Here is an example:\n\n> Hash Join (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500 time=209.820..1168.831 rows=47 loops=3)\n\n> Now we have the actual time. Time can have a high variance (a change\n> in system load, or just noises), but I think the actual cost would be\n> less likely to change due to external factors.\n\nI'm with Tomas: you have not explained what you think those\nnumbers mean.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 00:31:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Feb 16, 2019, at 9:31 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Donald Dong <xdong@csumb.edu> writes:\n>> On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:\n>>> I don't quite understand what is meant by \"actual cost metric\" and/or\n>>> how is that different from running EXPLAIN ANALYZE.\n> \n>> Here is an example:\n> \n>> Hash Join (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500 time=209.820..1168.831 rows=47 loops=3)\n> \n>> Now we have the actual time. Time can have a high variance (a change\n>> in system load, or just noises), but I think the actual cost would be\n>> less likely to change due to external factors.\n> \n> I'm with Tomas: you have not explained what you think those\n> numbers mean.\n\nYeah, I was considering the actual cost to be the output of the cost\nmodel given the actual rows and pages after we instrument the\nexecution: plug in the values which are no longer estimations.\n\nFor a hash join, we could use the actual inner_rows_total to get the\nactual cost. For a seqscan, we can use the actual rows to get the\nactual CPU cost.\n\nregards,\nDonald Dong\n\n\n",
"msg_date": "Sat, 16 Feb 2019 22:45:13 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On 2/17/19 7:45 AM, Donald Dong wrote:\n> On Feb 16, 2019, at 9:31 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Donald Dong <xdong@csumb.edu> writes:\n>>> On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:\n>>>> I don't quite understand what is meant by \"actual cost metric\" and/or\n>>>> how is that different from running EXPLAIN ANALYZE.\n>>\n>>> Here is an example:\n>>\n>>> Hash Join (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500 time=209.820..1168.831 rows=47 loops=3)\n>>\n>>> Now we have the actual time. Time can have a high variance (a change\n>>> in system load, or just noises), but I think the actual cost would be\n>>> less likely to change due to external factors.\n>>\n>> I'm with Tomas: you have not explained what you think those\n>> numbers mean.\n> \n> Yeah, I was considering the actual cost to be the output of the cost\n> model given the actual rows and pages after we instrument the\n> execution: plug in the values which are no longer estimations.\n> \n> For a hash join, we could use the actual inner_rows_total to get the\n> actual cost. For a seqscan, we can use the actual rows to get the\n> actual CPU cost.\n> \n\nPerhaps I'm just too used to comparing the rows/pages directly, but I\ndon't quite see the benefit of having such \"actual cost\". Mostly because\nthe cost model is rather rough anyway.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 13:11:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Sat, Feb 16, 2019 at 10:33 PM Donald Dong <xdong@csumb.edu> wrote:\n\n> On Feb 16, 2019, at 6:44 PM, Tomas Vondra wrote:\n> >\n> > On 2/17/19 3:40 AM, David Fetter wrote:\n> >>\n> >> As someone not volunteering to do any of the work, I think it'd be a\n> >> nice thing to have. How large an effort would you guess it would be\n> >> to build a proof of concept?\n> >\n> > I don't quite understand what is meant by \"actual cost metric\" and/or\n> > how is that different from running EXPLAIN ANALYZE.\n>\n> Here is an example:\n>\n> Hash Join (cost=3.92..18545.70 rows=34 width=32) (actual cost=3.92..18500\n> time=209.820..1168.831 rows=47 loops=3)\n>\n> Now we have the actual time. Time can have a high variance (a change\n> in system load, or just noises), but I think the actual cost would be\n> less likely to change due to external factors.\n>\n\nI don't think there is any way to assign an actual cost. For example how\ndo you know if a buffer read was \"actually\" seq_page_cost or\nrandom_page_cost? And if there were a way, it too would have a high\nvariance.\n\nWhat would I find very useful is a verbosity option to get the cost\nestimates expressed as a multiplier of each *_cost parameter, rather than\njust as a scalar. And at the whole-query level, get an rusage report\nrather than just wall-clock duration. And if the HashAggregate node under\n\"explain analyze\" would report memory and bucket stats; and if\nthe Aggregate node would report...anything.\n\nCheers,\n\nJeff\n",
"msg_date": "Sun, 17 Feb 2019 11:29:56 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> What would I find very useful is a verbosity option to get the cost\n> estimates expressed as a multiplier of each *_cost parameter, rather than\n> just as a scalar.\n\nPerhaps, but refactoring to get that seems impractically invasive &\nexpensive, since e.g. index AM cost estimate functions would have to\nbe redefined, plus we'd have to carry around some kind of cost vector\nrather than single numbers for every Path ...\n\n> And at the whole-query level, get an rusage report\n> rather than just wall-clock duration.\n\nI'm sure you're aware of log_statement_stats and friends already.\nI agree though that that's not necessarily an optimal user interface.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 13:56:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Feb 17, 2019, at 10:56 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps, but refactoring to get that seems impractically invasive &\n> expensive, since e.g. index AM cost estimate functions would have to\n> be redefined, plus we'd have to carry around some kind of cost vector\n> rather than single numbers for every Path ...\n\nMaybe we could walk through the final plan tree and fill the expression? With another tree structure to hold the cost vectors.\n\nOn Feb 17, 2019, at 8:29 AM, Jeff Janes <jeff.janes@gmail.com> wrote:\n> I don't think there is any way to assign an actual cost. For example how do you know if a buffer read was \"actually\" seq_page_cost or random_page_cost? And if there were a way, it too would have a high variance.\n\nThanks for pointing that out! I think it's probably fine to use the same assumptions as the cost model? For example, we assume 3/4ths of accesses are sequential, 1/4th are not.\n\n> What would I find very useful is a verbosity option to get the cost estimates expressed as a multiplier of each *_cost parameter, rather than just as a scalar.\n\nYeah, such expression would also help if we want to plug in the actual values.\n\n\nOn Feb 17, 2019, at 4:11 AM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> Perhaps I'm just too used to comparing the rows/pages directly, but I\n> don't quite see the benefit of having such \"actual cost\". Mostly because\n> the cost model is rather rough anyway.\n\nYeah, I think the \"actual cost\" is only \"actual\" for the cost model - the cost it would output given the exact row/page number. Some articles/papers have shown row estimation is the primary reason for planners to go off, so showing the actual (or the updated assumption) might also be useful if people want to compare different plans and want to refer to a more accurate quantitative measure.\n\nregards,\nDonald Dong\n",
"msg_date": "Sun, 17 Feb 2019 11:05:39 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:29:56AM -0500, Jeff Janes wrote:\n> What would I find very useful is [...] an rusage report rather than just\n> wall-clock duration.\n\nMost of that's available;\n\n[pryzbyj@database ~]$ psql postgres -xtc \"SET client_min_messages=log; SET log_statement_stats=on\" -c 'SELECT max(i) FROM generate_series(1,999999)i'\nSET\nLOG: statement: SELECT max(i) FROM generate_series(1,999999)i\nLOG: QUERY STATISTICS\nDETAIL: ! system usage stats:\n! 0.186643 s user, 0.033622 s system, 0.220780 s elapsed\n! [0.187823 s user, 0.037162 s system total]\n! 61580 kB max resident size\n! 0/0 [0/0] filesystem blocks in/out\n! 0/7918 [0/9042] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/3 [2/3] voluntary/involuntary context switches\nmax | 999999\n\nSee also\nhttps://commitfest.postgresql.org/20/1691/ => 88bdbd3f746049834ae3cc972e6e650586ec3c9d\nhttps://www.postgresql.org/message-id/flat/7ffb9dbe-c76f-8ca3-12ee-7914ede872e6%40stormcloud9.net\nhttps://www.postgresql.org/docs/current/runtime-config-statistics.html#RUNTIME-CONFIG-STATISTICS-MONITOR\nmax RSS added in commit c039ba0716383ccaf88c9be1a7f0803a77823de1\n\nJustin\n\n",
"msg_date": "Sun, 17 Feb 2019 15:48:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Actual Cost"
},
{
"msg_contents": "On Feb 17, 2019, at 11:05 AM, Donald Dong <xdong@csumb.edu> wrote:\n> \n> On Feb 17, 2019, at 10:56 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps, but refactoring to get that seems impractically invasive &\n>> expensive, since e.g. index AM cost estimate functions would have to\n>> be redefined, plus we'd have to carry around some kind of cost vector\n>> rather than single numbers for every Path ...\n> \n> Maybe we could walk through the final plan tree and fill the expression? With another tree structure to hold the cost vectors.\n\nHere is a draft patch. I added a new structure called CostInfo to the Plan node. The CostInfo is added in create_plan, and the cost calculation is centered on CostInfo. Is this a reasonable approach?\n\nThank you,\nDonald Dong",
"msg_date": "Sun, 17 Feb 2019 18:27:13 -0800",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Actual Cost"
}
] |
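Donald's "actual cost" proposal in the thread above amounts to evaluating the planner's own cost formula a second time, with the instrumented row and page counts substituted for the estimates. A minimal sketch of that idea for the simplest node type, assuming only the well-known shape of sequential-scan costing (seq_page_cost * pages + cpu_tuple_cost * rows, with the default GUC values 1.0 and 0.01) and deliberately ignoring qual evaluation, parallelism, and per-tablespace cost settings — so the numbers are illustrative, not what cost_seqscan() would produce:

```python
# Hedged sketch of the "actual cost" idea for a sequential scan:
# run the same cost formula twice, once with the planner's estimated
# pages/rows and once with the figures EXPLAIN ANALYZE observed.
# seq_page_cost = 1.0 and cpu_tuple_cost = 0.01 are PostgreSQL's
# defaults; qual costs and parallelism are intentionally left out.

SEQ_PAGE_COST = 1.0
CPU_TUPLE_COST = 0.01

def seqscan_cost(pages, rows):
    disk_cost = SEQ_PAGE_COST * pages   # sequential page fetches
    cpu_cost = CPU_TUPLE_COST * rows    # per-tuple CPU charge
    return disk_cost + cpu_cost

# Same model, different inputs: the gap between the two numbers is the
# part of the estimation error attributable to wrong row/page counts.
estimated = seqscan_cost(pages=1000, rows=50000)  # planner's guesses
actual = seqscan_cost(pages=1200, rows=80000)     # instrumented figures
```

This also illustrates why the "actual cost" is only actual *for the cost model*: it removes the cardinality error but keeps every other modeling assumption, which is the point both Tomas and Jeff raise.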
[
{
"msg_contents": "Hi,\n\nI'm trying to use random_zipfian() for benchmarking of skewed data sets,\nand I ran head-first into an issue with rather excessive CPU costs.\n\nConsider an example like this:\n\n pgbench -i -s 10000 test\n\n pgbench -s 10000 -f zipf.sql -T 30 test\n\nwhere zipf.sql does this:\n\n \\SET id random_zipfian(1, 100000 * :scale, 0.1)\n BEGIN;\n SELECT * FROM pgbench_accounts WHERE aid = :id;\n END;\n\nUnfortunately, this produces something like this:\n\n transaction type: zipf.sql\n scaling factor: 10000\n query mode: simple\n number of clients: 1\n number of threads: 1\n duration: 30 s\n number of transactions actually processed: 1\n latency average = 43849.143 ms\n tps = 0.022805 (including connections establishing)\n tps = 0.022806 (excluding connections establishing)\n\nwhich is somewhat ... not great, I guess. This happens because\ngeneralizedHarmonicNumber() does this:\n\n\tfor (i = n; i > 1; i--)\n\t\tans += pow(i, -s);\n\nwhere n happens to be 1000000000 (range passed to random_zipfian), so\nthe loop takes quite a bit of time. So much that we only ever complete a\nsingle transaction, because this work happens in the context of the\nfirst transaction, and so it counts into the 30-second limit.\n\nThe docs actually do mention performance of this function:\n\n The function's performance is poor for parameter values close and\n above 1.0 and on a small range.\n\nBut that does not seem to cover the example I just posted, because 0.1\nis not particularly close to 1.0 (or above 1.0), and 1e9 values hardly\ncounts as \"small range\".\n\nI see this part of random_zipfian comes from \"Quickly Generating\nBillion-Record Synthetic Databases\" which deals with generating data\nsets, where wasting a bit of time is not a huge deal. But when running\nbenchmarks it matters much more. So maybe there's a disconnect here?\n\nInterestingly enough, I don't see this issue on values above 1.0, no\nmatter how close to 1.0 those are. 
Although the throughput seems lower\nthan with uniform access, so this part of random_zipfian is not quite\ncheap either.\n\nNow, I don't know what to do about this. Ideally, we'd have a faster\nalgorithm to generate zipfian distributions - I don't know if such thing\nexists though. Or maybe we could precompute the distribution first, not\ncounting it into the benchmark duration.\n\nBut I guess the very least we can/should do is improving the docs to\nmake it more obvious which cases are expected to be slow.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 01:30:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello Tomas,\n\n> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n> and I ran head-first into an issue with rather excessive CPU costs. \n> [...] This happens because generalizedHarmonicNumber() does this:\n>\n> \tfor (i = n; i > 1; i--)\n> \t\tans += pow(i, -s);\n>\n> where n happens to be 1000000000 (range passed to random_zipfian), so\n> the loop takes quite a bit of time.\n\nIf you find a better formula for the harmonic number, you are welcome \nand probably get your name on it:-)\n\n> So much that we only ever complete a\n> single transaction, because this work happens in the context of the\n> first transaction, and so it counts into the 30-second limit.\n\nThat is why the value is cached, so it is done once per size & value.\n\nIf you want skewed but not especially zipfian, use exponential which is \nquite cheap. Also zipfian with a > 1.0 parameter does not have to compute \nthe harmonic number, so it depends on the parameter.\n\nPlease also beware that non uniform keys are correlated (the more frequent \nare close values), which is somewhat non representative of what you would \nexpect in real life. This is why I submitted a pseudo-random permutation \nfunction, which alas does not get much momentum from committers.\n\n> The docs actually do mention performance of this function:\n>\n> The function's performance is poor for parameter values close and\n> above 1.0 and on a small range.\n>\n> But that does not seem to cover the example I just posted, because 0.1\n> is not particularly close to 1.0 (or above 1.0), and 1e9 values hardly\n> counts as \"small range\".\n\nYep. The warning is about the repeated cost of calling random_zipfian, \nwhich is not good when the parameter is close to and above 1.0. It is not \nabout the setup cost when the value is between 0 and 1. 
This could indeed \ndeserve a warning.\n\nNow if you do benchmarking on a database, probably you want to run it for \nhours to level out checkpointing and other background tasks, so the setup \ncost should be negligible, in the end...\n\n> I see this part of random_zipfian comes from \"Quickly Generating\n> Billion-Record Synthetic Databases\" which deals with generating data\n> sets, where wasting a bit of time is not a huge deal. But when running\n> benchmarks it matters much more. So maybe there's a disconnect here?\n\nHmmm. The first author of this paper got a Turing award:-) The disconnect \nis just that the setup cost is neglected, or computed offline.\n\n> Interestingly enough, I don't see this issue on values above 1.0, no\n> matter how close to 1.0 those are. Although the throughput seems lower\n> than with uniform access, so this part of random_zipfian is not quite\n> cheap either.\n\nIndeed. Pg provides an underlying uniform pseudo-random function, so \ngenerating uniform is cheap. Others need more or less expensive \ntransformations.\n\n> Now, I don't know what to do about this. Ideally, we'd have a faster\n> algorithm to generate zipfian distributions\n\nYou are welcome to find one, and get famous (hmmm... among some \nspecialized mathematicians at least:-) for it.\n\n> - I don't know if such thing exists though. Or maybe we could precompute \n> the distribution first, not counting it into the benchmark duration.\n\nCould be done, but this would require significant partial evaluation \nefforts and only work when the size and parameter are constants (although \nusing the function with variable parameters would be a very bad idea \nanyway).\n\n> But I guess the very least we can/should do is improving the docs to\n> make it more obvious which cases are expected to be slow.\n\nYep. Attached is a doc & comment improvement.\n\n-- \nFabien.",
"msg_date": "Sun, 17 Feb 2019 09:51:59 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
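Tomas's "precompute the distribution first" idea can be sketched outside pgbench. The following is a hedged Python illustration, not pgbench's C code: `generalized_harmonic` mirrors the quoted loop from generalizedHarmonicNumber() (with the i = 1 term folded into the sum rather than added afterwards), while the CDF builder and inverse-transform sampler are invented for illustration. The O(n) setup is paid once; each draw is then an O(log n) binary search.

```python
# Illustrative sketch of precomputing a zipfian distribution once and
# sampling cheaply afterwards.  NOT pgbench code; only the harmonic-number
# loop shape comes from the thread.
import bisect
import random

def generalized_harmonic(n, s):
    # O(n) setup: this is the loop whose cost, at n = 10^9 with s < 1,
    # swallows the whole benchmark duration in the report above.
    ans = 0.0
    for i in range(n, 0, -1):
        ans += i ** -s
    return ans

def zipf_cdf(n, s):
    # One-time setup: cumulative probabilities for ranks 1..n.
    h = generalized_harmonic(n, s)
    cdf, acc = [], 0.0
    for i in range(1, n + 1):
        acc += i ** -s / h
        cdf.append(acc)
    return cdf

def zipf_draw(cdf, rng):
    # Inverse-transform sampling: O(log n) per draw.  The clamp guards
    # against cdf[-1] landing slightly below 1.0 from rounding.
    u = rng.random()
    return min(bisect.bisect_left(cdf, u), len(cdf) - 1) + 1
```

Note the trade-off Fabien's reply implies: moving the setup out of the timed run does not make it free, and materializing the CDF costs O(n) memory, which is itself prohibitive at n = 10^9 — so this illustrates the shape of the idea rather than a drop-in fix.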
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n>> and I ran head-first into an issue with rather excessive CPU costs. \n\n> If you want skewed but not especially zipfian, use exponential which is \n> quite cheap. Also zipfian with a > 1.0 parameter does not have to compute \n> the harmonic number, so it depends in the parameter.\n\nMaybe we should drop support for parameter values < 1.0, then. The idea\nthat pgbench is doing something so expensive as to require caching seems\nflat-out insane from here. That cannot be seen as anything but a foot-gun\nfor unwary users. Under what circumstances would an informed user use\nthat random distribution rather than another far-cheaper-to-compute one?\n\n> ... This is why I submitted a pseudo-random permutation \n> function, which alas does not get much momentum from committers.\n\nTBH, I think pgbench is now much too complex; it does not need more\nfeatures, especially not ones that need large caveats in the docs.\n(What exactly is the point of having zipfian at all?)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Feb 2019 11:09:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:09:27AM -0500, Tom Lane wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n> >> and I ran head-first into an issue with rather excessive CPU costs. \n> \n> > If you want skewed but not especially zipfian, use exponential which is \n> > quite cheap. Also zipfian with a > 1.0 parameter does not have to compute \n> > the harmonic number, so it depends in the parameter.\n> \n> Maybe we should drop support for parameter values < 1.0, then. The idea\n> that pgbench is doing something so expensive as to require caching seems\n> flat-out insane from here. That cannot be seen as anything but a foot-gun\n> for unwary users. Under what circumstances would an informed user use\n> that random distribution rather than another far-cheaper-to-compute one?\n> \n> > ... This is why I submitted a pseudo-random permutation \n> > function, which alas does not get much momentum from committers.\n> \n> TBH, I think pgbench is now much too complex; it does not need more\n> features, especially not ones that need large caveats in the docs.\n> (What exactly is the point of having zipfian at all?)\n\nTaking a statistical perspective, Zipfian distributions violate some\nassumptions we make by assuming uniform distributions. This matters\nbecause Zipf-distributed data sets are quite common in real life.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 17 Feb 2019 18:33:45 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On 2/17/19 6:33 PM, David Fetter wrote:\n> On Sun, Feb 17, 2019 at 11:09:27AM -0500, Tom Lane wrote:\n>> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>>> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n>>>> and I ran head-first into an issue with rather excessive CPU costs. \n>>\n>>> If you want skewed but not especially zipfian, use exponential which is \n>>> quite cheap. Also zipfian with a > 1.0 parameter does not have to compute \n>>> the harmonic number, so it depends in the parameter.\n>>\n>> Maybe we should drop support for parameter values < 1.0, then. The idea\n>> that pgbench is doing something so expensive as to require caching seems\n>> flat-out insane from here. That cannot be seen as anything but a foot-gun\n>> for unwary users. Under what circumstances would an informed user use\n>> that random distribution rather than another far-cheaper-to-compute one?\n>>\n>>> ... This is why I submitted a pseudo-random permutation \n>>> function, which alas does not get much momentum from committers.\n>>\n>> TBH, I think pgbench is now much too complex; it does not need more\n>> features, especially not ones that need large caveats in the docs.\n>> (What exactly is the point of having zipfian at all?)\n> \n> Taking a statistical perspective, Zipfian distributions violate some\n> assumptions we make by assuming uniform distributions. This matters\n> because Zipf-distributed data sets are quite common in real life.\n> \n\nI don't think there's any disagreement about the value of non-uniform\ndistributions. The question is whether it has to be a zipfian one, when\nthe best algorithm we know about is this expensive in some cases? Or\nwould an exponential distribution be enough?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 23:02:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On 2/17/19 5:09 PM, Tom Lane wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n>>> and I ran head-first into an issue with rather excessive CPU costs. \n> \n>> If you want skewed but not especially zipfian, use exponential which is \n>> quite cheap. Also zipfian with a > 1.0 parameter does not have to compute \n>> the harmonic number, so it depends in the parameter.\n> \n> Maybe we should drop support for parameter values < 1.0, then. The idea\n> that pgbench is doing something so expensive as to require caching seems\n> flat-out insane from here.\n\nMaybe.\n\nIt's not quite clear to me why we support the two modes at all? We use\none algorithm for values < 1.0 and another one for values > 1.0, what's\nthe difference there? Are those distributions materially different?\n\nAlso, I wonder if just dropping support for parameters < 1.0 would be\nenough, because the docs say:\n\n The function's performance is poor for parameter values close and\n above 1.0 and on a small range.\n\nwhich seems to suggest it might be slow even for values > 1.0 in some\ncases. Not sure.\n\n> That cannot be seen as anything but a foot-gun\n> for unwary users. Under what circumstances would an informed user use\n> that random distribution rather than another far-cheaper-to-compute one?\n> \n>> ... This is why I submitted a pseudo-random permutation \n>> function, which alas does not get much momentum from committers.\n> \n> TBH, I think pgbench is now much too complex; it does not need more\n> features, especially not ones that need large caveats in the docs.\n> (What exactly is the point of having zipfian at all?)\n> \n\nI wonder about the growing complexity of pgbench too ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 17 Feb 2019 23:08:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 8:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH, I think pgbench is now much too complex; it does not need more\n> features, especially not ones that need large caveats in the docs.\n> (What exactly is the point of having zipfian at all?)\n\nI agree that pgbench is too complex, given its mandate and design.\nWhile I found Zipfian useful once or twice, I probably would have done\njust as well with an exponential distribution.\n\nI have been using BenchmarkSQL as a fair-use TPC-C implementation for\nmy indexing project, with great results. pgbench just isn't very\nuseful when validating the changes to B-Tree page splits that I\npropose, because the insertion pattern cannot be modeled\nprobabilistically. Besides, I really think that things like latency\ngraphs are table stakes for this kind of work, which BenchmarkSQL\noffers out of the box. It isn't practical to make pgbench into a\nframework, which is what I'd really like to see. There just isn't that\nmuch more than can be done there.\n\nBenchmarkSQL seems to have recently become abandonware, though it's\nabandonware that I rely on. OpenSCG were acquired by Amazon, and the\nBitbucket repository vanished without explanation. I would be very\npleased if Fabien or somebody else considered maintaining it going\nforward -- it still has a lot of rough edges, and still could stand to\nbe improved in a number of ways (I know that Fabien is interested in\nboth indexing and benchmarking, which is why I thought of him). TPC-C\nis a highly studied benchmark, and I'm sure that there is more we can\nlearn from it. I maintain a mirror of BenchmarkSQL here:\n\nhttps://github.com/petergeoghegan/benchmarksql/\n\nThere are at least 2 or 3 other fair-use implementations of TPC-C that\nwork with Postgres that I'm aware of, all of which seem to have\nseveral major problems. 
BenchmarkSQL is a solid basis for an\nexternally maintained, de facto standard Postgres benchmarking tool\nthat comes with \"batteries included\".\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Sun, 17 Feb 2019 14:41:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello Peter,\n\nMy 0.02€: I'm not quite interested in maintaining a tool for *one* \nbenchmark, whatever the benchmark, its standardness or quality.\n\nWhat I like in \"pgbench\" is that it is both versatile and simple so that \npeople can benchmark their own data with their own load and their own \nqueries by writing a few lines of trivial SQL and psql-like slash command \nand adjusting a few options, and extract meaningful statistics out of it.\n\nI've been, but not only me, improving it so that it keeps its usage \nsimplicity but provides key features so that anyone can write a simple but \nrealistic benchmark.\n\nThe key features needed for that, and which happen to be nearly all there \nnow are:\n - some expressions (thanks Roberts for the initial push)\n - non uniform random (ok, some are more expensive, too bad)\n however using non uniform random generates a correlation issue,\n hence the permutation function submission, which took time because\n this is a non trivial problem.\n - conditionals (\\if, taken from psql's implementation)\n - getting a result out and being able to do something with it\n (\\gset, and the associated \\cset that Tom does not like).\n - improved reporting (including around latency, per script/command/...)\n - realistic loads (--rate vs only pedal-to-the-metal runs, --latency-limit)\n\nI have not encountered other tools with this versatility and simplicity. \nThe TPC-C implementation you point out and others I have seen are \nstructurally targetted at TPC-C and nothing else. 
I do not care about \nTPC-C per se, I care about people being able to run relevant benchmarks \nwith minimal effort.\n\nI'm not planning to submit many things in the future (current: a \nstrict-tpcb implementation which is really a show case of the existing \nfeatures, faster server-side initialization, simple refactoring to \nsimplify/clarify the code structure here and there, maybe some stuff may \nmigrate to fe_utils if useful to psql), and review what other people find \nuseful because I know the code base quite well.\n\nI do think that the maintainability of the code has globally been improved \nrecently because (1) the process-based implementation has been dropped (2) \nthe FSA implementation makes the code easier to understand and check, \ncompared to the lengthy plenty-of-if many-variables function used \nbeforehand. Bugs have been identified and fixed.\n\n> I agree that pgbench is too complex, given its mandate and design.\n> While I found Zipfian useful once or twice, I probably would have done\n> just as well with an exponential distribution.\n\nYep, I agree that exponential is mostly okay for most practical \nbenchmarking uses, but some benchmark/people seem to really want zipf, so \nzipf and its intrinsic underlying complexity was submitted and finally \nincluded.\n\n> I have been using BenchmarkSQL as a fair-use TPC-C implementation for\n> my indexing project, with great results. pgbench just isn't very\n> useful when validating the changes to B-Tree page splits that I\n> propose, because the insertion pattern cannot be modeled\n> probabilistically.\n\nI do not understand the use case, and why pgbench could not be used for \nthis purpose.\n\n> Besides, I really think that things like latency graphs are table stakes \n> for this kind of work, which BenchmarkSQL offers out of the box. It \n> isn't practical to make pgbench into a framework, which is what I'd \n> really like to see. There just isn't that much more that can be done \n> there.\n\nYep. 
Pgbench only does \"simple stats\". I script around the per-second \nprogress output for graphical display and additional stats (e.g. a 5-number \nsummary…).\n\n-- \nFabien.",
"msg_date": "Tue, 19 Feb 2019 16:14:06 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:02:37PM +0100, Tomas Vondra wrote:\n> On 2/17/19 6:33 PM, David Fetter wrote:\n> > On Sun, Feb 17, 2019 at 11:09:27AM -0500, Tom Lane wrote:\n> >> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >>>> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n> >>>> and I ran head-first into an issue with rather excessive CPU costs. \n> >>\n> >>> If you want skewed but not especially zipfian, use exponential which is \n> >>> quite cheap. Also zipfian with a > 1.0 parameter does not have to compute \n> >>> the harmonic number, so it depends in the parameter.\n> >>\n> >> Maybe we should drop support for parameter values < 1.0, then. The idea\n> >> that pgbench is doing something so expensive as to require caching seems\n> >> flat-out insane from here. That cannot be seen as anything but a foot-gun\n> >> for unwary users. Under what circumstances would an informed user use\n> >> that random distribution rather than another far-cheaper-to-compute one?\n> >>\n> >>> ... This is why I submitted a pseudo-random permutation \n> >>> function, which alas does not get much momentum from committers.\n> >>\n> >> TBH, I think pgbench is now much too complex; it does not need more\n> >> features, especially not ones that need large caveats in the docs.\n> >> (What exactly is the point of having zipfian at all?)\n> > \n> > Taking a statistical perspective, Zipfian distributions violate some\n> > assumptions we make by assuming uniform distributions. This matters\n> > because Zipf-distributed data sets are quite common in real life.\n> > \n> \n> I don't think there's any disagreement about the value of non-uniform\n> distributions. The question is whether it has to be a zipfian one, when\n> the best algorithm we know about is this expensive in some cases? 
Or\n> would an exponential distribution be enough?\n\nI suppose that people who care about the difference between Zipf and\nexponential would appreciate having the former around to test.\n\nWhether pgbench should support this is a different question, and it's\nsounding a little like the answer to that one is \"no.\"\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Tue, 19 Feb 2019 19:03:03 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 7:14 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> What I like in \"pgbench\" is that it is both versatile and simple so that\n> people can benchmark their own data with their own load and their own\n> queries by writing a few lines of trivial SQL and psql-like slash command\n> and adjusting a few options, and extract meaningful statistics out of it.\n\nThat's also what I like about it. However, I don't think that pgbench\nis capable of helping me answer questions that are not relatively\nsimple. That is going to become less and less interesting over time.\n\n> I have not encountered other tools with this versatility and simplicity.\n> The TPC-C implementation you point out and others I have seen are\n> structurally targetted at TPC-C and nothing else. I do not care about\n> TPC-C per se, I care about people being able to run relevant benchmarks\n> with minimal effort.\n\nLots and lots of people care about TPC-C. Far more than care about\nTPC-B, which has been officially obsolete for a long time. I don't\ndoubt that there are some bad reasons for the interest that you see\nfrom vendors, but the TPC-C stuff has real merit (just read Jim Gray,\nwho you referenced in relation to the Zipfian generator). Lots of\nsmart people worked for a couple of years on the original\nspecification of TPC-C. There is a lot of papers on TPC-C. It *is*\ncomplicated in various ways, which is a good thing, as it approximates\na real-world workload, and exercises a bunch of code paths that TPC-B\ndoes not. TPC-A and TPC-B were early attempts, and managed to be\nbetter than nothing at a time when performance validation was not\nnearly as advanced as it is today.\n\n> > I have been using BenchmarkSQL as a fair-use TPC-C implementation for\n> > my indexing project, with great results. 
pgbench just isn't very\n> > useful when validating the changes to B-Tree page splits that I\n> > propose, because the insertion pattern cannot be modeled\n> > probabilistically.\n>\n> I do not understand the use case, and why pgbench could not be used for\n> this purpose.\n\nTPC-C is characterized by *localized* monotonically increasing\ninsertions in most of its indexes. By far the biggest index is the\norder lines table primary key, which is on '(ol_w_id, ol_d_id,\nol_o_id, ol_number)'. You get pathological performance with this\ncurrently, because you should really split at the point that new\nitems are inserted at, but we do a standard 50/50 page split. The left\nhalf of the page isn't inserted into again (except by rare non-HOT\nupdates), so you end up *reliably* wasting about half of all space in\nthe index.\n\nIOW, there are cases where we should behave like we're doing a\nrightmost page split (kind of), that don't happen to involve the\nrightmost page. The problem was described but not diagnosed in this\nblog post: https://www.commandprompt.com/blog/postgres_autovacuum_bloat_tpc-c/\n\nIf you had random insertions (or insertions that were characterized or\ndefined in terms of a probability distribution and range), then you\nwould not see this problem. Instead, you'd get something like 70%\nspace utilization -- not 50% utilization. I think that it would be\ndifficult if not impossible to reproduce the pathological performance\nwith pgbench, even though it's a totally realistic scenario. There\nneeds to be explicit overall ordering/phases across co-operating\nbackends, or backends that are in some sense localized (e.g.\nassociated with a particular warehouse inputting a particular order).\nTPC-C offers several variations of this same pathological case.\n\nThis is just an example. 
The point is that there is a lot to be said\nfor investing significant effort in coming up with a benchmark that is\na distillation of a real workload, with realistic though still kind of\nadversarial bottlenecks. I wouldn't have become aware of the page\nsplit problem without TPC-C, which suggests to me that the TPC people\nknow what they're doing. Also, there is an advantage to having\nsomething that is a known quantity, that enables comparisons across\nsystems.\n\nI also think that TPC-E is interesting, since it stresses OLTP systems\nin a way that is quite different to TPC-C. It's much more read-heavy,\nand has many more secondary indexes.\n\n> Yep. Pgbench only does \"simple stats\". I script around the per-second\n> progress output for graphical display and additional stats (eg 5 number\n> summary…).\n\nIt's far easier to spot regressions over time and other such surprises\nif you have latency graphs that break down latency by transaction.\nWhen you're benchmarking queries with joins, then you need to be\nvigilant of planner issues over time. The complexity has its pluses as\nwell as its minuses.\n\nI'm hardly in a position to tell you what to work on. I think that\nthere may be another perspective on this that you could take something\naway from, though.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Tue, 19 Feb 2019 16:48:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 10:52 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n> > I'm trying to use random_zipfian() for benchmarking of skewed data sets,\n> > and I ran head-first into an issue with rather excessive CPU costs.\n> > [...] This happens because generalizedHarmonicNumber() does this:\n> >\n> > for (i = n; i > 1; i--)\n> > ans += pow(i, -s);\n> >\n> > where n happens to be 1000000000 (range passed to random_zipfian), so\n> > the loop takes quite a bit of time.\n>\n> If you find a better formula for the harmonic number, you are welcome\n> and probably get your name on it:-)\n>\n\nThere are pretty good approximations for s > 1.0 using Riemann zeta\nfunction and Euler derived a formula for the s = 1 case.\n\nI also noticed that i is int in this function, but n is int64. That seems\nlike an oversight.\n\nRegards,\nAnts Aasma\n\nOn Sun, Feb 17, 2019 at 10:52 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:> I'm trying to use random_zipfian() for benchmarking of skewed data sets, \n> and I ran head-first into an issue with rather excessive CPU costs. \n> [...] This happens because generalizedHarmonicNumber() does this:\n>\n> for (i = n; i > 1; i--)\n> ans += pow(i, -s);\n>\n> where n happens to be 1000000000 (range passed to random_zipfian), so\n> the loop takes quite a bit of time.\n\nIf you find a better formula for the harmonic number, you are welcome \nand probably get your name on it:-)There are pretty good approximations for s > 1.0 using Riemann zeta function and Euler derived a formula for the s = 1 case.I also noticed that i is int in this function, but n is int64. That seems like an oversight.Regards,Ants Aasma",
"msg_date": "Fri, 22 Feb 2019 12:22:43 +0200",
"msg_from": "Ants Aasma <ants.aasma@eesti.ee>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\n\nOn 2/22/19 11:22 AM, Ants Aasma wrote:\n> On Sun, Feb 17, 2019 at 10:52 AM Fabien COELHO <coelho@cri.ensmp.fr\n> <mailto:coelho@cri.ensmp.fr>> wrote:\n> \n> > I'm trying to use random_zipfian() for benchmarking of skewed data\n> sets,\n> > and I ran head-first into an issue with rather excessive CPU costs.\n> > [...] This happens because generalizedHarmonicNumber() does this:\n> >\n> > for (i = n; i > 1; i--)\n> > ans += pow(i, -s);\n> >\n> > where n happens to be 1000000000 (range passed to random_zipfian), so\n> > the loop takes quite a bit of time.\n> \n> If you find a better formula for the harmonic number, you are welcome\n> and probably get your name on it:-)\n> \n> \n> There are pretty good approximations for s > 1.0 using Riemann zeta\n> function and Euler derived a formula for the s = 1 case.\n> \n\nI believe that's what random_zipfian() already uses, because for s > 1.0\nit refers to \"Non-Uniform Random Variate Generation\" by Luc Devroye, and\nthe text references the zeta function. Also, I have not observed serious\nissues with the s > 1.0 case (despite the docs seem to suggest there may\nbe some).\n\n> I also noticed that i is int in this function, but n is int64. That\n> seems like an oversight.\n> \n\nIndeed.\n\n> Regards,\n> Ants Aasma\n> \n> \n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Feb 2019 13:03:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\n>> There are pretty good approximations for s > 1.0 using Riemann zeta\n>> function and Euler derived a formula for the s = 1 case.\n>\n> I believe that's what random_zipfian() already uses, because for s > 1.0\n> it refers to \"Non-Uniform Random Variate Generation\" by Luc Devroye, and\n> the text references the zeta function.\n\nYep.\n\n> Also, I have not observed serious issues with the s > 1.0 case (despite \n> the docs seem to suggest there may be some).\n\nThe performance issue is for s > 1.0 and very close to 1.0, et \nthings like s = 1.000001\n\n>> I also noticed that i is int in this function, but n is int64. That\n>> seems like an oversight.\n\nIndeed, that is a bug!\n\nUsing it for a really int64 value would be very bad, though. Maybe there \nshould be an error if the value is too large, because calling pow billions \nof times is bad for the computer health.\n\n-- \nFabien.\n\n",
"msg_date": "Fri, 22 Feb 2019 17:10:02 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": ">>> I also noticed that i is int in this function, but n is int64. That \n>>> seems like an oversight.\n>\n> Indeed, that is a bug!\n\nHere is a v2 with hopefully better wording, comments and a fix for the bug \nyou pointed out.\n\n-- \nFabien.",
"msg_date": "Fri, 22 Feb 2019 17:17:40 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nFor whatever it is worth, the patch looks good to me.\r\n\r\nA minor nitpick would be to use a verb in the part:\r\n\r\n`cost when the parameter in (0, 1)`\r\n\r\nmaybe:\r\n\r\n`cost when the parameter's value is in (0, 1)` or similar.\r\n\r\nApart from that, I would suggest it that the patch could be moved to\r\nWaiting for Author state.",
"msg_date": "Wed, 13 Mar 2019 08:55:10 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@pm.me>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "> For whatever it is worth, the patch looks good to me.\n>\n> A minor nitpick would be to use a verb in the part:\n>\n> `cost when the parameter in (0, 1)`\n\n> maybe:\n>\n> `cost when the parameter's value is in (0, 1)` or similar.\n\nLooks ok.\n\n> Apart from that, I would suggest it that the patch could be moved to\n> Waiting for Author state.\n\nAttached an upated.\n\n-- \nFabien.",
"msg_date": "Wed, 13 Mar 2019 10:49:23 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nVersion 3 of the patch looks ready for committer.\r\n\r\nThank you for taking the time to code!\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 13 Mar 2019 09:56:40 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@pm.me>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> [ pgbench-zipf-doc-3.patch ]\n\nI started to look through this, and the more I looked the more unhappy\nI got that we're having this discussion at all. The zipfian support\nin pgbench is seriously over-engineered and under-documented. As an\nexample, I was flabbergasted to find out that the end-of-run summary\nstatistics now include this:\n\n /* Report zipfian cache overflow */\n for (i = 0; i < nthreads; i++)\n {\n totalCacheOverflows += threads[i].zipf_cache.overflowCount;\n }\n if (totalCacheOverflows > 0)\n {\n printf(\"zipfian cache array overflowed %d time(s)\\n\", totalCacheOverflows);\n }\n\nWhat is the point of that, and if there is a point, why is it nowhere\nmentioned in pgbench.sgml? What would a user do with this information,\nand how would they know what to do?\n\nI remain of the opinion that we ought to simply rip out support for\nzipfian with s < 1. It's not useful for benchmarking purposes to have\na random-number function with such poor computational properties.\nI think leaving it in there is just a foot-gun: we'd be a lot better\noff throwing an error that tells people to use some other distribution.\n\nOr if we do leave it in there, we for sure have to have documentation\nthat *actually* explains how to use it, which this patch still doesn't.\nThere's nothing suggesting that you'd better not use a large number of\ndifferent (n,s) combinations.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Mar 2019 13:01:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\nHello Tom,\n\n> I started to look through this, and the more I looked the more unhappy\n> I got that we're having this discussion at all. The zipfian support\n> in pgbench is seriously over-engineered and under-documented. As an\n> example, I was flabbergasted to find out that the end-of-run summary\n> statistics now include this:\n>\n> /* Report zipfian cache overflow */\n> for (i = 0; i < nthreads; i++)\n> {\n> totalCacheOverflows += threads[i].zipf_cache.overflowCount;\n> }\n> if (totalCacheOverflows > 0)\n> {\n> printf(\"zipfian cache array overflowed %d time(s)\\n\", totalCacheOverflows);\n> }\n>\n> What is the point of that, and if there is a point, why is it nowhere\n> mentioned in pgbench.sgml?\n\nIndeed, there should.\n\n> What would a user do with this information, and how would they know what \n> to do?\n\nSure, but it was unclear what to do. Extending the cache to avoid that \nwould look like over-engineering.\n\n> I remain of the opinion that we ought to simply rip out support for\n> zipfian with s < 1.\n\nSome people really want zipfian because it reflects their data access \npattern, possibly with s < 1.\n\nWe cannot helpt it: real life seems zipfianly distributed:-)\n\n> It's not useful for benchmarking purposes to have a random-number \n> function with such poor computational properties.\n\nThis is mostly a startup cost, the generation cost when a bench is running \nis reasonable. 
How to best implement the precomputation is an open \nquestion.\n\nAs a reviewer I was not thrilled by the cache stuff, but I had no better \nidea that would not fall under \"over-over engineering\" or the like.\n\nMaybe it could error out and say \"recompile me\", but then someone\nwould have said \"that is unacceptable\".\n\nMaybe it could auto extend the cache, but that is still more \nunnecessary over-engineering, IMHO.\n\nMaybe there could be some mandatory declarations or partial eval that \ncould precompute the needed parameters out/before the bench is started, \nwith a clear message \"precomputing stuff...\", but that would be over over \nover engineering again... and that would mean restricting random_zipfian \nparameters to near-constants, which would require some explaining, but \nmaybe it is an option. I guess that in the paper's original context, the \nparameters (s & n) are known before the bench is started, so that the \nneeded values are computed offline once and for all.\n\n> I think leaving it in there is just a foot-gun: we'd be a lot better off \n> throwing an error that tells people to use some other distribution.\n\nWhen s < 1, the startup cost is indeed a pain. However, it is a pain \nprescribed by a Turing Award.\n\n> Or if we do leave it in there, we for sure have to have documentation\n> that *actually* explains how to use it, which this patch still doesn't.\n\nI'm not sure what explanation there could be about how to use it: one calls \nthe function to obtain pseudo-random integers with the desired \ndistribution?\n\n> There's nothing suggesting that you'd better not use a large number of\n> different (n,s) combinations.\n\nIndeed, there is no caveat about this point, as noted above.\n\nPlease find an updated patch for the documentation, pointing out the \nexistence of the cache and advice not to overdo it.\n\nIt does not solve the underlying problem which raised your complaint, but \nat least it is documented.\n\n-- \nFabien.\n\n",
"msg_date": "Sat, 23 Mar 2019 18:44:35 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello again,\n\n>> I started to look through this, and the more I looked the more unhappy\n>> I got that we're having this discussion at all. The zipfian support\n>> in pgbench is seriously over-engineered and under-documented. As an\n>> example, I was flabbergasted to find out that the end-of-run summary\n>> statistics now include this:\n>>\n>> /* Report zipfian cache overflow */\n>> for (i = 0; i < nthreads; i++)\n>> {\n>> totalCacheOverflows += threads[i].zipf_cache.overflowCount;\n>> }\n>> if (totalCacheOverflows > 0)\n>> {\n>> printf(\"zipfian cache array overflowed %d time(s)\\n\", \n>> totalCacheOverflows);\n>> }\n>> \n>> What is the point of that, and if there is a point, why is it nowhere\n>> mentioned in pgbench.sgml?\n\nThe attached patch simplifies the code by erroring on cache overflow, \ninstead of the LRU replacement strategy and unhelpful final report. The \nabove lines are removed.\n\n-- \nFabien.",
"msg_date": "Sat, 23 Mar 2019 19:11:44 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": ">>> What is the point of that, and if there is a point, why is it nowhere\n>>> mentioned in pgbench.sgml?\n>\n> The attached patch simplifies the code by erroring on cache overflow, instead \n> of the LRU replacement strategy and unhelpful final report. The above lines \n> are removed.\n\nSame, but without the compiler warning about an unused variable. Sorry for \nthe noise.\n\n-- \nFabien.",
"msg_date": "Sat, 23 Mar 2019 19:45:33 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\n\nOn 3/23/19 7:45 PM, Fabien COELHO wrote:\n> \n>>>> What is the point of that, and if there is a point, why is it nowhere\n>>>> mentioned in pgbench.sgml?\n>>\n>> The attached patch simplifies the code by erroring on cache overflow,\n>> instead of the LRU replacement strategy and unhelpful final report.\n>> The above lines are removed.\n> \n\nEh? Do I understand correctly that pgbench might start failing after\nsome random amount of time, instead of reporting the overflow at the\nend? I'm not sure that's really an improvement ...\n\nWhy is the cache fixed-size at all?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 24 Mar 2019 04:06:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\n\nOn 3/23/19 6:44 PM, Fabien COELHO wrote:\n> \n> Hello Tom,\n> \n>> I started to look through this, and the more I looked the more unhappy\n>> I got that we're having this discussion at all. The zipfian support\n>> in pgbench is seriously over-engineered and under-documented. As an\n>> example, I was flabbergasted to find out that the end-of-run summary\n>> statistics now include this:\n>>\n>> /* Report zipfian cache overflow */\n>> for (i = 0; i < nthreads; i++)\n>> {\n>> totalCacheOverflows += threads[i].zipf_cache.overflowCount;\n>> }\n>> if (totalCacheOverflows > 0)\n>> {\n>> printf(\"zipfian cache array overflowed %d time(s)\\n\",\n>> totalCacheOverflows);\n>> }\n>>\n>> What is the point of that, and if there is a point, why is it nowhere\n>> mentioned in pgbench.sgml?\n> \n> Indeed, there should.\n> \n>> What would a user do with this information, and how would they know\n>> what to do?\n> \n> Sure, but it was unclear what to do. Extending the cache to avoid that\n> would look like over-engineering.\n> \n\nThat seems like a rather strange argument. What exactly is so complex on\nresizing the cache to quality as over-engineering?\n\nIf the choice is between reporting the failure to the user, and\naddressing the failure, surely the latter would be the default option?\nParticularly if the user can't really address the issue easily\n(recompiling psql is not very practical solution).\n\n>> I remain of the opinion that we ought to simply rip out support for\n>> zipfian with s < 1.\n> \n\n+1 to that\n\n> Some people really want zipfian because it reflects their data access\n> pattern, possibly with s < 1.\n> \n> We cannot helpt it: real life seems zipfianly distributed:-)\n> \n\nSure. But that hardly means we ought to provide algorithms that we know\nare not suitable for benchmarking tools, which I'd argue is this case.\n\nAlso, we have two algorithms for generating zipfian distributions. 
Why\nwouldn't it be sufficient to keep just one of them?\n\n>> It's not useful for benchmarking purposes to have a random-number\n>> function with such poor computational properties.\n> \n> This is mostly a startup cost, the generation cost when a bench is\n> running is reasonable. How to best implement the precomputation is an\n> open question.\n> \n\n... which means it's not a startup cost. IMHO this simply shows pgbench\ndoes not have the necessary infrastructure to provide this feature.\n\n> As a reviewer I was not thrilled by the cache stuff, but I had no better\n> idea that would not fall under \"over-over engineering\" or the like.\n> \n> Maybe it could error out and say \"recompile me\", but then someone\n> would have said \"that is unacceptable\".\n> \n> Maybe it could auto extend the cache, but that is still more unnecessary\n> over-engineering, IMHO.\n> \n\nI'm puzzled. Why would that be over-engineering?\n\n> Maybe a there could be some mandatory declarations or partial eval that\n> could precompute the needed parameters out/before the bench is started,\n> with a clear message \"precomputing stuff...\", but that would be over\n> over over engineering again... and that would mean restricting\n> random_zipfian parameters to near-constants, which would require some\n> explaining, but maybe it is an option. I guess that in the paper\n> original context, the parameters (s & n) are known before the bench is\n> started, so that the needed value are computed offline once and for all.\n> \n>> I think leaving it in there is just a foot-gun: we'd be a lot better\n>> off throwing an error that tells people to use some other distribution.\n> \n> When s < 1, the startup cost is indeed a pain. However, it is a pain\n> prescribed by a Turing Award.\n> \n\nThen we shouldn't have it, probably. 
"Or we should at least implement a\nproper startup phase, so that the costly precomputation is not included\nin the test interval.\n\n>> Or if we do leave it in there, we for sure have to have documentation\n>> that *actually* explains how to use it, which this patch still doesn't.\n> \n> I'm not sure what explaining there could be about how to use it: one\n> calls the function to obtain pseudo-random integers with the desired\n> distribution?\n> \n\nWell, I'd argue the current description \"performance is poor\" is not\nparticularly clear.\n\n>> There's nothing suggesting that you'd better not use a large number of\n>> different (n,s) combinations.\n> \n> Indeed, there is no caveat about this point, as noted above.\n> > Please find an updated patch for the documentation, pointing out the\n> existence of the cache and advice not to overdo it.\n> > It does not solve the underlying problem which raised your complaint,\n> but at least it is documented.\n> \n\nI think you forgot to attach the patch ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 24 Mar 2019 04:33:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "\n>>>>> What is the point of that, and if there is a point, why is it nowhere\n>>>>> mentioned in pgbench.sgml?\n>>>\n>>> The attached patch simplifies the code by erroring on cache overflow,\n>>> instead of the LRU replacement strategy and unhelpful final report.\n>>> The above lines are removed.\n>\n> Eh? Do I understand correctly that pgbench might start failing after\n> some random amount of time, instead of reporting the overflow at the\n> end?\n\nIndeed, that is what this patch would induce, although very probably within a \n*short* random amount of time.\n\n> I'm not sure that's really an improvement ...\n\nDepends. If the cache is full it means repeating the possibly expensive \nconstant computations, which looks like a very bad idea for the user \nanyway.\n\n> Why is the cache fixed-size at all?\n\nThe cache can diverge and the search is linear, so it does not seem a good \nidea to keep it for very long:\n\n \\set size random(100000, 1000000)\n \\set i random_zipfian(1, :size, ...)\n\nThe implementation only makes some sense if there are very few values \n(param & size pairs with param < 1) used in the whole script.\n\nTom is complaining of over engineering, and he has a point, so I'm trying \nto simplify (eg dropping LRU and erroring) for cases where the feature is \nnot really appropriate anyway.\n\n-- \nFabien.\n\n",
"msg_date": "Sun, 24 Mar 2019 15:12:58 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello Tomas,\n\n>>> What would a user do with this information, and how would they know\n>>> what to do?\n>>\n>> Sure, but it was unclear what to do. Extending the cache to avoid that \n>> would look like over-engineering.\n>\n> That seems like a rather strange argument. What exactly is so complex about\n> resizing the cache to qualify as over-engineering?\n\nBecause the cache size can diverge on a bad script, and the search is \ncurrently linear. Even with a log search it does not look attractive.\n\n> If the choice is between reporting the failure to the user, and\n> addressing the failure, surely the latter would be the default option?\n> Particularly if the user can't really address the issue easily\n> (recompiling psql is not a very practical solution).\n>\n>>> I remain of the opinion that we ought to simply rip out support for\n>>> zipfian with s < 1.\n>\n> +1 to that\n\nIf this is done, some people with zipfian distributions that currently \nwork might be unhappy.\n\n>> Some people really want zipfian because it reflects their data access\n>> pattern, possibly with s < 1.\n>>\n>> We cannot help it: real life seems zipfianly distributed:-)\n>\n> Sure. But that hardly means we ought to provide algorithms that we know\n> are not suitable for benchmarking tools, which I'd argue is this case.\n\nHmmm. It really depends on the values actually used.\n\n> Also, we have two algorithms for generating zipfian distributions. Why\n> wouldn't it be sufficient to keep just one of them?\n\nBecause the two methods work for different values of the parameter, so it \ndepends on the distribution which is targeted. If you want s < 1, the \nexclusion method does not help (precisely, the real \"infinite\" zipfian \ndistribution is not mathematically defined when s < 1 because the underlying \nsum diverges. 
"Having s < 1 can only be done on a finite set).\n\n>>> It's not useful for benchmarking purposes to have a random-number\n>>> function with such poor computational properties.\n>>\n>> This is mostly a startup cost, the generation cost when a bench is\n>> running is reasonable. How to best implement the precomputation is an\n>> open question.\n>\n> ... which means it's not a startup cost. IMHO this simply shows pgbench\n> does not have the necessary infrastructure to provide this feature.\n\nWell, a quick one has been proposed with a cache. Now I can imagine \nalternatives, but I'm wondering whether it is worth it.\n\nEg, restrict random_zipfian to more or less constant parameters when s < \n1, so that a partial evaluation could collect the needed pairs and perform \nthe pre-computations before the bench is started.\n\nNow, that looks like over-engineering, and then someone would complain of \nthe restrictions.\n\n>> As a reviewer I was not thrilled by the cache stuff, but I had no better\n>> idea that would not fall under \"over-over engineering\" or the like.\n>>\n>> Maybe it could error out and say \"recompile me\", but then someone\n>> would have said \"that is unacceptable\".\n>>\n>> Maybe it could auto extend the cache, but that is still more unnecessary\n>> over-engineering, IMHO.\n>\n> I'm puzzled. Why would that be over-engineering?\n\nAs stated above, cache size divergence and O(n) search. Even a log2(n) \nsearch would be bad, IMO.\n\n>>> I think leaving it in there is just a foot-gun: we'd be a lot better\n>>> off throwing an error that tells people to use some other distribution.\n>>\n>> When s < 1, the startup cost is indeed a pain. However, it is a pain\n>> prescribed by a Turing Award.\n>\n> Then we shouldn't have it, probably. Or we should at least implement a\n> proper startup phase, so that the costly precomputation is not included\n> in the test interval.\n\nThat can be done, but I'm not sure of a very elegant design. 
"And I was \njust the reviewer on this one.\n\n>>> Or if we do leave it in there, we for sure have to have documentation\n>>> that *actually* explains how to use it, which this patch still doesn't.\n>>\n>> I'm not sure what explaining there could be about how to use it: one\n>> calls the function to obtain pseudo-random integers with the desired\n>> distribution?\n>\n> Well, I'd argue the current description \"performance is poor\" is not\n> particularly clear.\n>\n>>> There's nothing suggesting that you'd better not use a large number of\n>>> different (n,s) combinations.\n>>\n>> Indeed, there is no caveat about this point, as noted above.\n>>> Please find an updated patch for the documentation, pointing out the\n>> existence of the cache and advice not to overdo it.\n>>> It does not solve the underlying problem which raised your complaint,\n>> but at least it is documented.\n>>\n>\n> I think you forgot to attach the patch ...\n\nIndeed, this is becoming a habit:-( Attached. Hopefully.\n\n-- \nFabien.",
"msg_date": "Sun, 24 Mar 2019 15:34:33 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello Tom & Tomas,\n\n>> If the choice is between reporting the failure to the user, and\n>> addressing the failure, surely the latter would be the default option?\n>> Particularly if the user can't really address the issue easily\n>> (recompiling psql is not a very practical solution).\n>> \n>>>> I remain of the opinion that we ought to simply rip out support for\n>>>> zipfian with s < 1.\n>> \n>> +1 to that\n>\n> If this is done, some people with zipfian distributions that currently \n> work might be unhappy.\n\nAfter giving it some thought, I think that this cannot be fully fixed for \n12.\n\nThe attached patch removes the code for param in (0, 1), and slightly \nimproves the documentation about the performance, if you want to proceed.\n\nFor s > 1, there is no such constraint, and it works fine, there is no \nreason to remove it.\n\nGiven the constraint of Jim Gray's approximated method for s in (0, 1), \nwhich really does zipfian for the first two integers and then uses an \nexponential approximation, the only approach is that the parameters must \nbe computed in a partial eval preparation phase before the bench code is \nrun. This means that only (mostly) constants would be allowed as \nparameters when s is in (0, 1), but I think that this is acceptable \nbecause anyway the method fundamentally requires it. I think that it can be \nimplemented reasonably well (meaning not too much code), but would \nrequire a few rounds of review if someone implements it (for a reminder, \nI was only the reviewer on this one). 
"An added benefit would be that the \nparameter cache could be shared between threads, which would be a good \nthing.\n\nThe other attached patch illustrates what I call poor performance \nfor stupid parameters (no point in doing zipfian on 2 integers…) :\n\n ./pgbench -T 3 -D n=2 -D s=1.01 -f zipf_perf.sql # 46981 tps\n ./pgbench -T 3 -D n=2 -D s=1.001 -f zipf_perf.sql # 6187 tps\n ./pgbench -T 3 -D n=2 -D s=1.0001 -f zipf_perf.sql # 710 tps\n\n ./pgbench -T 3 -D n=100 -D s=1.01 -f zipf_perf.sql # 142910 tps\n ./pgbench -T 3 -D n=100 -D s=1.001 -f zipf_perf.sql # 21214 tps\n ./pgbench -T 3 -D n=100 -D s=1.0001 -f zipf_perf.sql # 2466 tps\n\n ./pgbench -T 3 -D n=1000000 -D s=1.01 -f zipf_perf.sql # 376453 tps\n ./pgbench -T 3 -D n=1000000 -D s=1.001 -f zipf_perf.sql # 57441 tps\n ./pgbench -T 3 -D n=1000000 -D s=1.0001 -f zipf_perf.sql # 6780 tps\n\nMaybe the implementation could impose that s is at least 1.001 to avoid\nthe lower performance?\n\n-- \nFabien.",
"msg_date": "Sun, 24 Mar 2019 19:05:49 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>>>> I remain of the opinion that we ought to simply rip out support for\n>>>> zipfian with s < 1.\n\n>>> +1 to that\n\n>> If this is done, some people with zipfian distributions that currently \n>> work might be unhappy.\n\n> After giving it some thought, I think that this cannot be fully fixed for \n> 12.\n\nJust to clarify --- my complaint about \"over engineering\" referred to\nthe fact that a cache exists at all; fooling around with the overflow\nbehavior isn't really going to answer that.\n\nThe bigger picture here is that a benchmarking tool that contains its\nown performance surprises is not a nice thing to have. It's not very\nhard to imagine somebody wasting a lot of time trying to explain weird\nresults, only to find out that the cause is unstable performance of\nrandom_zipfian. Or worse, people might draw totally incorrect conclusions\nif they fail to drill down enough to realize that there's a problem in\npgbench rather than on the server side.\n\n> Given the constraint of Jim Gray's approximated method for s in (0, 1), \n> which really does zipfian for the first two integers and then uses an \n> exponential approximation, the only approach is that the parameters must \n> be computed in a partial eval preparation phase before the bench code is \n> run. This means that only (mostly) constants would be allowed as \n> parameters when s is in (0, 1), but I think that this is acceptable \n> because anyway the method fundamentally requires it.\n\nYeah, if we could store all the required harmonic numbers before the\ntest run timing starts, that would address the concern about stable\nperformance. 
"But I have to wonder whether zipfian with s < 1 is useful\nenough to justify so much code.\n\n> The other attached patch illustrates what I call poor performance \n> for stupid parameters (no point in doing zipfian on 2 integers…) :\n> ...\n> Maybe the implementation could impose that s is at least 1.001 to avoid\n> the lower performance?\n\nI was wondering about that too. It seems like it'd be a wise idea to\nfurther constrain s and/or n to ensure that the s > 1 code path doesn't do\nanything too awful ... unless someone comes up with a better implementation\nthat has stable performance without such constraints.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 24 Mar 2019 14:27:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Hello Tom,\n\n>>> If this is done, some people with zipfian distributions that currently\n>>> work might be unhappy.\n>\n>> After giving it some thought, I think that this cannot be fully fixed for\n>> 12.\n>\n> Just to clarify --- my complaint about \"over engineering\" referred to\n> the fact that a cache exists at all; fooling around with the overflow\n> behavior isn't really going to answer that.\n\nHaving some cache really makes sense for s < 1, AFAICS.\n\n> The bigger picture here is that a benchmarking tool that contains its\n> own performance surprises is not a nice thing to have.\n\nHmmm. It seems that it depends. Sometimes self-inflicted wounds are the \nsoldier's responsibility, sometimes they must be prevented.\n\n> It's not very hard to imagine somebody wasting a lot of time trying to \n> explain weird results, only to find out that the cause is unstable \n> performance of random_zipfian. Or worse, people might draw totally \n> incorrect conclusions if they fail to drill down enough to realize that \n> there's a problem in pgbench rather than on the server side.\n\nYep, benchmarking is tougher than it looks: it is very easy to get it \nwrong without knowing, whatever tool you use.\n\n>> Given the constraint of Jim Gray's approximated method for s in (0, 1),\n>> which really does zipfian for the first two integers and then uses an\n>> exponential approximation, the only approach is that the parameters must\n>> be computed in a partial eval preparation phase before the bench code is\n>> run. This means that only (mostly) constants would be allowed as\n>> parameters when s is in (0, 1), but I think that this is acceptable\n>> because anyway the method fundamentally requires it.\n>\n> Yeah, if we could store all the required harmonic numbers before the\n> test run timing starts, that would address the concern about stable\n> performance. 
"But I have to wonder whether zipfian with s < 1 is useful\n> enough to justify so much code.\n\nI do not know about that. I do not think that Jim Gray chose to invent an \napproximated method for s < 1 because he thought it was fun. I think he \ndid it because his benchmark data required it. If you need it, you need \nit. If you do not need it, you do not care.\n\n>> The other attached patch illustrates what I call poor performance\n>> for stupid parameters (no point in doing zipfian on 2 integers…) :\n>> ...\n>> Maybe the implementation could impose that s is at least 1.001 to avoid\n>> the lower performance?\n>\n> I was wondering about that too. It seems like it'd be a wise idea to\n> further constrain s and/or n to ensure that the s > 1 code path doesn't do\n> anything too awful ...\n\nYep. The attached version enforces s >= 1.001, which avoids the worst cost\nof iterating, according to my small tests.\n\n> unless someone comes up with a better implementation that has stable \n> performance without such constraints.\n\nHmmm…\n\n-- \nFabien.",
"msg_date": "Sun, 24 Mar 2019 21:21:45 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I was wondering about that too. It seems like it'd be a wise idea to\n>> further constrain s and/or n to ensure that the s > 1 code path doesn't do\n>> anything too awful ...\n\n> Yep. The attached version enforces s >= 1.001, which avoids the worst cost\n> of iterating, according to my small tests.\n\nSeems reasonable. Pushed with minor documentation editing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2019 17:39:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "On 2019-Apr-01, Tom Lane wrote:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >> I was wondering about that too. It seems like it'd be a wise idea to\n> >> further constrain s and/or n to ensure that the s > 1 code path doesn't do\n> >> anything too awful ...\n> \n> > Yep. The attached version enforces s >= 1.001, which avoids the worst cost\n> > of iterating, according to my small tests.\n> \n> Seems reasonable. Pushed with minor documentation editing.\n\nAh, so now we can get rid of the TState * being passed around\nseparately for expression execution, too?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 18:51:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-01, Tom Lane wrote:\n>> Seems reasonable. Pushed with minor documentation editing.\n\n> Ah, so now we can get rid of the TState * being passed around\n> separately for expression execution, too?\n\nI didn't really look for follow-on simplifications, but if there\nare some to be made, great. (That would be further evidence for\nmy opinion that this feature was overcomplicated ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2019 17:59:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "> Ah, so now we can get rid of the TState * being passed around\n> separately for expression execution, too?\n\nIndeed.\n\nI would have thought that the compiler would have warned if it is unused, \nbut because of the recursion it is uselessly used.\n\nOk, maybe the sentence above is not very clear. Attached a patch which \nsimplifies further.\n\n-- \nFabien.",
"msg_date": "Wed, 3 Apr 2019 21:54:54 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Ah, so now we can get rid of the TState * being passed around\n>> separately for expression execution, too?\n\n> Indeed.\n\nIndeed. Pushed with some additional cleanup.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Apr 2019 17:17:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU costs of random_zipfian in pgbench"
}
] |
[
{
"msg_contents": "Hello,\nWe've noticed that pgadmin 3.x / 4.x doesn't refresh server info like server version after restarting. It makes people confused.\nFor example,\n1. PgAdmin was running\n2. We upgraded postgres from 11.1 to 11.2\n3. PgAdmin was restarted\n4. We continued to see 11.1\n5. We must push \"Disconnect from server\" and connect again for updating server version.\n\nIt would be convenient if PgAdmin checked server info from time to time, at least, when PgAdmin was restarted.\n\nThanks!\n\n\n-- \nRegards,\nAndrew K.",
"msg_date": "Mon, 18 Feb 2019 09:20:10 +0300",
"msg_from": "=?UTF-8?B?QW5kcmV5IEtseWNoa292?= <aaklychkov@mail.ru>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UEdBZG1pbiA0IGRvbid0IHJlZnJlc2ggc2VydmVyIGluZm8gYWZ0ZXIgcmVz?=\n =?UTF-8?B?dGFydGluZw==?="
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 1:20 AM Andrey Klychkov <aaklychkov@mail.ru> wrote:\n> Hello,\n> We've noticed that pgadmin 3.x / 4.x doesn't refresh server info like server version after restarting. It makes people confused.\n> For example,\n> 1. PgAdmin was running\n> 2. We upgraded postgres from 11.1 to 11.2\n> 3. PgAdmin was restarted\n> 4. We continued to see 11.1\n> 5. We must push \"Disconnect from server\" and connect again for updating server version.\n>\n> It would be convenient if PgAdmin checked server info from time to time, at least, when PgAdmin was restarted.\n\nYou should report this on a pgAdmin mailing list rather than a\nPostgreSQL mailing list.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 11:41:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGAdmin 4 don't refresh server info after restarting"
},
{
"msg_contents": "> You should report this on a pgAdmin mailing list rather than a\n> PostgreSQL mailing list.\n\nHi, of course, I made a mistake. Thank you for pointing it out.\n\n\n>Tuesday, 19 February 2019, 19:41 +03:00 from Robert Haas <robertmhaas@gmail.com>:\n>\n>On Mon, Feb 18, 2019 at 1:20 AM Andrey Klychkov < aaklychkov@mail.ru > wrote:\n>> Hello,\n>> We've noticed that pgadmin 3.x / 4.x doesn't refresh server info like server version after restarting. It makes people confused.\n>> For example,\n>> 1. PgAdmin was running\n>> 2. We upgraded postgres from 11.1 to 11.2\n>> 3. PgAdmin was restarted\n>> 4. We continued to see 11.1\n>> 5. We must push \"Disconnect from server\" and connect again for updating server version.\n>>\n>> It would be convenient if PgAdmin checked server info from time to time, at least, when PgAdmin was restarted.\n>\n>You should report this on a pgAdmin mailing list rather than a\n>PostgreSQL mailing list.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company\n\n\n-- \nRegards,\nAndrew K.",
"msg_date": "Tue, 19 Feb 2019 23:39:54 +0300",
"msg_from": "=?UTF-8?B?QW5kcmV5IEtseWNoa292?= <aaklychkov@mail.ru>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFBHQWRtaW4gNCBkb24ndCByZWZyZXNoIHNlcnZlciBpbmZvIGFm?=\n =?UTF-8?B?dGVyIHJlc3RhcnRpbmc=?="
}
] |
[
{
"msg_contents": "I'm forking this thread from\nhttps://postgr.es/m/flat/20190202083822.GC32531@gust.leadboat.com, which\nreported a race condition involving the \"apparent wraparound\" safety check in\nSimpleLruTruncate():\n\nOn Wed, Feb 13, 2019 at 11:26:23PM -0800, Noah Misch wrote:\n> 1. The result of the test is valid only until we release the SLRU ControlLock,\n> which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate\n> segments for deletion. Once we release that lock, latest_page_number can\n> advance. This creates a TOCTOU race condition, allowing excess deletion:\n> \n> [local] test=# table trunc_clog_concurrency ;\n> ERROR: could not access status of transaction 2149484247\n> DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n\n> Fixes are available:\n\n> b. Arrange so only one backend runs vac_truncate_clog() at a time. Other than\n> AsyncCtl, every SLRU truncation appears in vac_truncate_clog(), in a\n> checkpoint, or in the startup process. Hence, also arrange for only one\n> backend to call SimpleLruTruncate(AsyncCtl) at a time.\n\nMore specifically, restrict vac_update_datfrozenxid() to one backend per\ndatabase, and restrict vac_truncate_clog() and asyncQueueAdvanceTail() to one\nbackend per cluster. This, attached, was rather straightforward.\n\nI wonder about performance in a database with millions of small relations,\nparticularly considering my intent to back-patch this. In such databases,\nvac_update_datfrozenxid() can be a major part of the VACUUM's cost. Two\nthings work in our favor. First, vac_update_datfrozenxid() runs once per\nVACUUM command, not once per relation. Second, Autovacuum has this logic:\n\n\t * ... we skip\n\t * this if (1) we found no work to do and (2) we skipped at least one\n\t * table due to concurrent autovacuum activity. 
In that case, the other\n\t * worker has already done it, or will do so when it finishes.\n\t */\n\tif (did_vacuum || !found_concurrent_worker)\n\t\tvac_update_datfrozenxid();\n\nThat makes me relatively unworried. I did consider some alternatives:\n\n- Run vac_update_datfrozenxid()'s pg_class scan before taking a lock. If we\n find the need for pg_database updates, take the lock and scan pg_class again\n to get final numbers. This doubles the work in cases that end up taking the\n lock, so I'm not betting on it being a net win.\n\n- Use LockWaiterCount() to skip vac_update_datfrozenxid() if some other\n backend is already waiting. This is similar to Autovacuum's\n found_concurrent_worker test. It is tempting. I'm not proposing it,\n because it changes the states possible when manual VACUUM completes. Today,\n you can predict post-VACUUM datfrozenxid from post-VACUUM relfrozenxid\n values. If manual VACUUM could skip vac_update_datfrozenxid() this way,\n datfrozenxid could lag until some concurrent VACUUM finishes.\n\nThanks,\nnm",
"msg_date": "Sun, 17 Feb 2019 23:31:03 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Sun, Feb 17, 2019 at 11:31:03PM -0800, Noah Misch wrote:\n> I'm forking this thread from\n> https://postgr.es/m/flat/20190202083822.GC32531@gust.leadboat.com, which\n> reported a race condition involving the \"apparent wraparound\" safety check in\n> SimpleLruTruncate():\n> \n> On Wed, Feb 13, 2019 at 11:26:23PM -0800, Noah Misch wrote:\n> > 1. The result of the test is valid only until we release the SLRU ControlLock,\n> > which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate\n> > segments for deletion. Once we release that lock, latest_page_number can\n> > advance. This creates a TOCTOU race condition, allowing excess deletion:\n> > \n> > [local] test=# table trunc_clog_concurrency ;\n> > ERROR: could not access status of transaction 2149484247\n> > DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n> \n> > Fixes are available:\n> \n> > b. Arrange so only one backend runs vac_truncate_clog() at a time. Other than\n> > AsyncCtl, every SLRU truncation appears in vac_truncate_clog(), in a\n> > checkpoint, or in the startup process. Hence, also arrange for only one\n> > backend to call SimpleLruTruncate(AsyncCtl) at a time.\n> \n> More specifically, restrict vac_update_datfrozenxid() to one backend per\n> database, and restrict vac_truncate_clog() and asyncQueueAdvanceTail() to one\n> backend per cluster. This, attached, was rather straightforward.\n\nRebased. The conflicts were limited to comments and documentation.\n\n> I wonder about performance in a database with millions of small relations,\n> particularly considering my intent to back-patch this. In such databases,\n> vac_update_datfrozenxid() can be a major part of the VACUUM's cost. Two\n> things work in our favor. First, vac_update_datfrozenxid() runs once per\n> VACUUM command, not once per relation. Second, Autovacuum has this logic:\n> \n> \t * ... 
we skip\n> \t * this if (1) we found no work to do and (2) we skipped at least one\n> \t * table due to concurrent autovacuum activity. In that case, the other\n> \t * worker has already done it, or will do so when it finishes.\n> \t */\n> \tif (did_vacuum || !found_concurrent_worker)\n> \t\tvac_update_datfrozenxid();\n> \n> That makes me relatively unworried. I did consider some alternatives:\n> \n> - Run vac_update_datfrozenxid()'s pg_class scan before taking a lock. If we\n> find the need for pg_database updates, take the lock and scan pg_class again\n> to get final numbers. This doubles the work in cases that end up taking the\n> lock, so I'm not betting on it being a net win.\n> \n> - Use LockWaiterCount() to skip vac_update_datfrozenxid() if some other\n> backend is already waiting. This is similar to Autovacuum's\n> found_concurrent_worker test. It is tempting. I'm not proposing it,\n> because it changes the states possible when manual VACUUM completes. Today,\n> you can predict post-VACUUM datfrozenxid from post-VACUUM relfrozenxid\n> values. If manual VACUUM could skip vac_update_datfrozenxid() this way,\n> datfrozenxid could lag until some concurrent VACUUM finishes.",
"msg_date": "Fri, 28 Jun 2019 10:06:28 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sun, Feb 17, 2019 at 11:31:03PM -0800, Noah Misch wrote:\n>>> b. Arrange so only one backend runs vac_truncate_clog() at a time. Other than\n>>> AsyncCtl, every SLRU truncation appears in vac_truncate_clog(), in a\n>>> checkpoint, or in the startup process. Hence, also arrange for only one\n>>> backend to call SimpleLruTruncate(AsyncCtl) at a time.\n\n>> More specifically, restrict vac_update_datfrozenxid() to one backend per\n>> database, and restrict vac_truncate_clog() and asyncQueueAdvanceTail() to one\n>> backend per cluster. This, attached, was rather straightforward.\n\n> Rebased. The conflicts were limited to comments and documentation.\n\nI tried to review this, along with your adjacent patch to adjust the\nsegment-roundoff logic, but I didn't get far. I see the point that\nthe roundoff might create problems when we are close to hitting\nwraparound. I do not, however, see why serializing vac_truncate_clog\nhelps. I'm pretty sure it was intentional that multiple backends\ncould run truncation directory scans concurrently, and I don't really\nwant to give that up if possible.\n\nAlso, if I understand the data-loss hazard properly, it's what you\nsaid in the other thread: the latest_page_number could advance after\nwe make our decision about what to truncate, and then maybe we could\ntruncate newly-added data. We surely don't want to lock out the\noperations that can advance last_page_number, so how does serializing\nvac_truncate_clog help?\n\nI wonder whether what we need to do is add some state in shared\nmemory saying \"it is only safe to create pages before here\", and\nmake SimpleLruZeroPage or its callers check that. The truncation\nlogic would advance that value, but only after completing a scan.\n(Oh ..., hmm, maybe the point of serializing truncations is to\nensure that we know that we can safely advance that?)\n\nCan you post whatever script you used to detect/reproduce the problem\nin the first place?\n\n\t\t\tregards, tom lane\n\nPS: also, re-reading this code, it's worrisome that we are not checking\nfor failure of the unlink calls. I think the idea was that it didn't\nreally matter because if we did re-use an existing file we'd just\nre-zero it; hence failing the truncate is an overreaction. But probably\nsome comments about that are in order.\n\n\n",
"msg_date": "Mon, 29 Jul 2019 12:58:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 12:58:17PM -0400, Tom Lane wrote:\n> > On Sun, Feb 17, 2019 at 11:31:03PM -0800, Noah Misch wrote:\n> >>> b. Arrange so only one backend runs vac_truncate_clog() at a time. Other than\n> >>> AsyncCtl, every SLRU truncation appears in vac_truncate_clog(), in a\n> >>> checkpoint, or in the startup process. Hence, also arrange for only one\n> >>> backend to call SimpleLruTruncate(AsyncCtl) at a time.\n> \n> >> More specifically, restrict vac_update_datfrozenxid() to one backend per\n> >> database, and restrict vac_truncate_clog() and asyncQueueAdvanceTail() to one\n> >> backend per cluster. This, attached, was rather straightforward.\n\n> I'm pretty sure it was intentional that multiple backends\n> could run truncation directory scans concurrently, and I don't really\n> want to give that up if possible.\n\nI want to avoid a design that requires painstaking concurrency analysis. Such\nanalysis is worth it for heap_update(), but vac_truncate_clog() isn't enough\nof a hot path. If there's a way to make vac_truncate_clog() easy to analyze\nand still somewhat concurrent, fair.\n\n> Also, if I understand the data-loss hazard properly, it's what you\n> said in the other thread: the latest_page_number could advance after\n> we make our decision about what to truncate, and then maybe we could\n> truncate newly-added data. We surely don't want to lock out the\n> operations that can advance last_page_number, so how does serializing\n> vac_truncate_clog help?\n> \n> I wonder whether what we need to do is add some state in shared\n> memory saying \"it is only safe to create pages before here\", and\n> make SimpleLruZeroPage or its callers check that. The truncation\n> logic would advance that value, but only after completing a scan.\n> (Oh ..., hmm, maybe the point of serializing truncations is to\n> ensure that we know that we can safely advance that?)\n\nvac_truncate_clog() already records \"it is only safe to create pages before\nhere\" in ShmemVariableCache->xidWrapLimit, which it updates after any unlinks.\nThe trouble comes when two vac_truncate_clog() run in parallel and you get a\nsequence of events like this:\n\nvac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\nvac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\nvac_truncate_clog() instance 1 unlinks segment ABCD\nvac_truncate_clog() instance 1 calls SetTransactionIdLimit()\nvac_truncate_clog() instance 1 finishes\nsome backend calls SimpleLruZeroPage(), creating segment ABCD\nvac_truncate_clog() instance 2 unlinks segment ABCD\n\nSerializing vac_truncate_clog() fixes that.\n\n> Can you post whatever script you used to detect/reproduce the problem\n> in the first place?\n\nI've attached it, but it's pretty hackish. Apply the patch on commit 7170268,\nmake sure your --bindir is in $PATH, copy \"conf-test-pg\" to your home\ndirectory, and run \"make trunc-check\". This incorporates xid-burn\nacceleration code that Jeff Janes shared with -hackers at some point.\n\n> PS: also, re-reading this code, it's worrisome that we are not checking\n> for failure of the unlink calls. I think the idea was that it didn't\n> really matter because if we did re-use an existing file we'd just\n> re-zero it; hence failing the truncate is an overreaction. But probably\n> some comments about that are in order.\n\nThat's my understanding. We'll zero any page before reusing it. Failing the\nwhole vac_truncate_clog() (and therefore not advancing the wrap limit) would\ndo more harm than the bit of wasted disk space. Still, a LOG or WARNING\nmessage would be fine, I think.\n\nThanks,\nnm",
"msg_date": "Wed, 31 Jul 2019 23:51:17 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 6:51 PM Noah Misch <noah@leadboat.com> wrote:\n> vac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\n> vac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\n> vac_truncate_clog() instance 1 unlinks segment ABCD\n> vac_truncate_clog() instance 1 calls SetTransactionIdLimit()\n> vac_truncate_clog() instance 1 finishes\n> some backend calls SimpleLruZeroPage(), creating segment ABCD\n> vac_truncate_clog() instance 2 unlinks segment ABCD\n>\n> Serializing vac_truncate_clog() fixes that.\n\nI've wondered before (in a -bugs thread[1] about unexplained pg_serial\nwraparound warnings) if we could map 64 bit xids to wide SLRU file\nnames that never wrap around and make this class of problem go away.\nUnfortunately multixacts would need 64 bit support too...\n\n[1] https://www.postgresql.org/message-id/flat/CAEBTBzuS-01t12GGVD6qCezce8EFD8aZ1V%2Bo_3BZ%3DbuVLQBtRg%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 4 Nov 2019 15:26:35 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Mon, Nov 04, 2019 at 03:26:35PM +1300, Thomas Munro wrote:\n> On Thu, Aug 1, 2019 at 6:51 PM Noah Misch <noah@leadboat.com> wrote:\n> > vac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\n> > vac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\n> > vac_truncate_clog() instance 1 unlinks segment ABCD\n> > vac_truncate_clog() instance 1 calls SetTransactionIdLimit()\n> > vac_truncate_clog() instance 1 finishes\n> > some backend calls SimpleLruZeroPage(), creating segment ABCD\n> > vac_truncate_clog() instance 2 unlinks segment ABCD\n> >\n> > Serializing vac_truncate_clog() fixes that.\n> \n> I've wondered before (in a -bugs thread[1] about unexplained pg_serial\n> wraparound warnings) if we could map 64 bit xids to wide SLRU file\n> names that never wrap around and make this class of problem go away.\n> Unfortunately multixacts would need 64 bit support too...\n> \n> [1] https://www.postgresql.org/message-id/flat/CAEBTBzuS-01t12GGVD6qCezce8EFD8aZ1V%2Bo_3BZ%3DbuVLQBtRg%40mail.gmail.com\n\nThat change sounds good to me.\n\n\n",
"msg_date": "Mon, 4 Nov 2019 15:43:09 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Mon, Nov 04, 2019 at 03:43:09PM -0800, Noah Misch wrote:\n> On Mon, Nov 04, 2019 at 03:26:35PM +1300, Thomas Munro wrote:\n> > On Thu, Aug 1, 2019 at 6:51 PM Noah Misch <noah@leadboat.com> wrote:\n> > > vac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\n> > > vac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\n> > > vac_truncate_clog() instance 1 unlinks segment ABCD\n> > > vac_truncate_clog() instance 1 calls SetTransactionIdLimit()\n> > > vac_truncate_clog() instance 1 finishes\n> > > some backend calls SimpleLruZeroPage(), creating segment ABCD\n> > > vac_truncate_clog() instance 2 unlinks segment ABCD\n> > >\n> > > Serializing vac_truncate_clog() fixes that.\n> > \n> > I've wondered before (in a -bugs thread[1] about unexplained pg_serial\n> > wraparound warnings) if we could map 64 bit xids to wide SLRU file\n> > names that never wrap around and make this class of problem go away.\n> > Unfortunately multixacts would need 64 bit support too...\n> > \n> > [1] https://www.postgresql.org/message-id/flat/CAEBTBzuS-01t12GGVD6qCezce8EFD8aZ1V%2Bo_3BZ%3DbuVLQBtRg%40mail.gmail.com\n> \n> That change sounds good to me.\n\nI do think $SUBJECT should move forward independent of that.\n\n\n",
"msg_date": "Fri, 15 Nov 2019 06:52:04 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "> On Wed, Jul 31, 2019 at 11:51:17PM -0700, Noah Misch wrote:\n\nHi,\n\n> > Also, if I understand the data-loss hazard properly, it's what you\n> > said in the other thread: the latest_page_number could advance after\n> > we make our decision about what to truncate, and then maybe we could\n> > truncate newly-added data. We surely don't want to lock out the\n> > operations that can advance last_page_number, so how does serializing\n> > vac_truncate_clog help?\n> >\n> > I wonder whether what we need to do is add some state in shared\n> > memory saying \"it is only safe to create pages before here\", and\n> > make SimpleLruZeroPage or its callers check that. The truncation\n> > logic would advance that value, but only after completing a scan.\n> > (Oh ..., hmm, maybe the point of serializing truncations is to\n> > ensure that we know that we can safely advance that?)\n>\n> vac_truncate_clog() already records \"it is only safe to create pages before\n> here\" in ShmemVariableCache->xidWrapLimit, which it updates after any unlinks.\n> The trouble comes when two vac_truncate_clog() run in parallel and you get a\n> sequence of events like this:\n>\n> vac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\n> vac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\n> vac_truncate_clog() instance 1 unlinks segment ABCD\n> vac_truncate_clog() instance 1 calls SetTransactionIdLimit()\n> vac_truncate_clog() instance 1 finishes\n> some backend calls SimpleLruZeroPage(), creating segment ABCD\n> vac_truncate_clog() instance 2 unlinks segment ABCD\n>\n> Serializing vac_truncate_clog() fixes that.\n\nI'm probably missing something, so just wanted to clarify. Do I\nunderstand correctly, that thread [1] and this one are independent, and\nit is assumed in the scenario above that we're at the end of XID space,\nbut not necessarily having rounding issues? I'm a bit confused, since\nthe reproduce script does something about cutoffPage, and I'm not sure\nif it's important or not.\n\n> > Can you post whatever script you used to detect/reproduce the problem\n> > in the first place?\n>\n> I've attached it, but it's pretty hackish. Apply the patch on commit 7170268,\n> make sure your --bindir is in $PATH, copy \"conf-test-pg\" to your home\n> directory, and run \"make trunc-check\". This incorporates xid-burn\n> acceleration code that Jeff Janes shared with -hackers at some point.\n\nI've tried to use it to understand the problem better, but somehow\ncouldn't reproduce via suggested script. I've applied it to 7170268\n(tried also apply rebased to the master with the same results) with the\nconf-test-pg in place, but after going through all steps there are no\nerrors like:\n\n ERROR: could not access status of transaction 2149484247\n DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n\nIs there anything extra one needs to do to reproduce the problem, maybe\nadjust delays or something?\n\n[1]: https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n\n\n",
"msg_date": "Sun, 17 Nov 2019 12:56:52 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Sun, Nov 17, 2019 at 12:56:52PM +0100, Dmitry Dolgov wrote:\n> > On Wed, Jul 31, 2019 at 11:51:17PM -0700, Noah Misch wrote:\n> > > Also, if I understand the data-loss hazard properly, it's what you\n> > > said in the other thread: the latest_page_number could advance after\n> > > we make our decision about what to truncate, and then maybe we could\n> > > truncate newly-added data. We surely don't want to lock out the\n> > > operations that can advance last_page_number, so how does serializing\n> > > vac_truncate_clog help?\n> > >\n> > > I wonder whether what we need to do is add some state in shared\n> > > memory saying \"it is only safe to create pages before here\", and\n> > > make SimpleLruZeroPage or its callers check that. The truncation\n> > > logic would advance that value, but only after completing a scan.\n> > > (Oh ..., hmm, maybe the point of serializing truncations is to\n> > > ensure that we know that we can safely advance that?)\n> >\n> > vac_truncate_clog() already records \"it is only safe to create pages before\n> > here\" in ShmemVariableCache->xidWrapLimit, which it updates after any unlinks.\n> > The trouble comes when two vac_truncate_clog() run in parallel and you get a\n> > sequence of events like this:\n> >\n> > vac_truncate_clog() instance 1 starts, considers segment ABCD eligible to unlink\n> > vac_truncate_clog() instance 2 starts, considers segment ABCD eligible to unlink\n> > vac_truncate_clog() instance 1 unlinks segment ABCD\n> > vac_truncate_clog() instance 1 calls SetTransactionIdLimit()\n> > vac_truncate_clog() instance 1 finishes\n> > some backend calls SimpleLruZeroPage(), creating segment ABCD\n> > vac_truncate_clog() instance 2 unlinks segment ABCD\n> >\n> > Serializing vac_truncate_clog() fixes that.\n> \n> I'm probably missing something, so just wanted to clarify. Do I\n> understand correctly, that thread [1] and this one are independent, and\n> it is assumed in the scenario above that we're at the end of XID space,\n> but not necessarily having rounding issues? I'm a bit confused, since\n> the reproduce script does something about cutoffPage, and I'm not sure\n> if it's important or not.\n\nI think the repro recipe contained an early fix for the other thread's bug.\nWhile they're independent in principle, I likely never reproduced this bug\nwithout having a fix in place for the other thread's bug. The bug in the\nother thread was easier to hit.\n\n> > > Can you post whatever script you used to detect/reproduce the problem\n> > > in the first place?\n> >\n> > I've attached it, but it's pretty hackish. Apply the patch on commit 7170268,\n> > make sure your --bindir is in $PATH, copy \"conf-test-pg\" to your home\n> > directory, and run \"make trunc-check\". This incorporates xid-burn\n> > acceleration code that Jeff Janes shared with -hackers at some point.\n> \n> I've tried to use it to understand the problem better, but somehow\n> couldn't reproduce via suggested script. I've applied it to 7170268\n> (tried also apply rebased to the master with the same results) with the\n> conf-test-pg in place, but after going through all steps there are no\n> errors like:\n\nThat is unfortunate.\n\n> Is there anything extra one needs to do to reproduce the problem, maybe\n> adjust delays or something?\n\nIt wouldn't surprise me. I did all my testing on one or two systems; the\nhard-coded delays suffice there, but I'm sure there exist systems needing\ndifferent delays.\n\nThough I did reproduce this bug, I'm motivated by the abstract problem more\nthan any particular way to reproduce it. Commit 996d273 inspired me; by\nremoving a GetCurrentTransactionId(), it allowed the global xmin to advance at\ntimes it previously could not. That subtly changed the concurrency\npossibilities. I think safe, parallel SimpleLruTruncate() is difficult to\nmaintain and helps too rarely to justify such maintenance. That's why I\npropose eliminating the concurrency.\n\n> [1]: https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n\n\n",
"msg_date": "Sun, 17 Nov 2019 22:14:26 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "> On Sun, Nov 17, 2019 at 10:14:26PM -0800, Noah Misch wrote:\n>\n> Though I did reproduce this bug, I'm motivated by the abstract problem more\n> than any particular way to reproduce it. Commit 996d273 inspired me; by\n> removing a GetCurrentTransactionId(), it allowed the global xmin to advance at\n> times it previously could not. That subtly changed the concurrency\n> possibilities. I think safe, parallel SimpleLruTruncate() is difficult to\n> maintain and helps too rarely to justify such maintenance. That's why I\n> propose eliminating the concurrency.\n\nSure, I see the point and the possibility for the issue itself, but of\ncourse it's easier to reason about an issue I can reproduce :)\n\n> I wonder about performance in a database with millions of small relations,\n> particularly considering my intent to back-patch this. In such databases,\n> vac_update_datfrozenxid() can be a major part of the VACUUM's cost. Two\n> things work in our favor. First, vac_update_datfrozenxid() runs once per\n> VACUUM command, not once per relation. Second, Autovacuum has this logic:\n>\n> \t * ... we skip\n> \t * this if (1) we found no work to do and (2) we skipped at least one\n> \t * table due to concurrent autovacuum activity. In that case, the other\n> \t * worker has already done it, or will do so when it finishes.\n> \t */\n> \tif (did_vacuum || !found_concurrent_worker)\n> \t\tvac_update_datfrozenxid();\n>\n> That makes me relatively unworried. I did consider some alternatives:\n\nBtw, I've performed few experiments with parallel vacuuming of 10^4\nsmall tables that are taking some small inserts, the results look like\nthis:\n\n    # with patch\n    # funclatency -u bin/postgres:vac_update_datfrozenxid\n\n     usecs               : count     distribution\n         0 -> 1          : 0        |                                        |\n         2 -> 3          : 0        |                                        |\n         4 -> 7          : 0        |                                        |\n         8 -> 15         : 0        |                                        |\n        16 -> 31         : 0        |                                        |\n        32 -> 63         : 0        |                                        |\n        64 -> 127        : 0        |                                        |\n       128 -> 255        : 0        |                                        |\n       256 -> 511        : 0        |                                        |\n       512 -> 1023       : 3        |***                                     |\n      1024 -> 2047       : 38       |****************************************|\n      2048 -> 4095       : 15       |***************                         |\n      4096 -> 8191       : 15       |***************                         |\n      8192 -> 16383      : 2        |**                                      |\n\n    # without patch\n    # funclatency -u bin/postgres:vac_update_datfrozenxid\n\n     usecs               : count     distribution\n         0 -> 1          : 0        |                                        |\n         2 -> 3          : 0        |                                        |\n         4 -> 7          : 0        |                                        |\n         8 -> 15         : 0        |                                        |\n        16 -> 31         : 0        |                                        |\n        32 -> 63         : 0        |                                        |\n        64 -> 127        : 0        |                                        |\n       128 -> 255        : 0        |                                        |\n       256 -> 511        : 0        |                                        |\n       512 -> 1023       : 5        |****                                    |\n      1024 -> 2047       : 49       |****************************************|\n      2048 -> 4095       : 11       |********                                |\n      4096 -> 8191       : 5        |****                                    |\n      8192 -> 16383      : 1        |                                        |\n\nIn general it seems that latency tends to be a bit higher, but I don't\nthink it's significant.\n\n\n",
"msg_date": "Fri, 22 Nov 2019 16:32:22 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "> On Sun, Nov 17, 2019 at 10:14:26PM -0800, Noah Misch wrote:\n>\n> > I'm probably missing something, so just wanted to clarify. Do I\n> > understand correctly, that thread [1] and this one are independent, and\n> > it is assumed in the scenario above that we're at the end of XID space,\n> > but not necessarily having rounding issues? I'm a bit confused, since\n> > the reproduce script does something about cutoffPage, and I'm not sure\n> > if it's important or not.\n>\n> I think the repro recipe contained an early fix for the other thread's bug.\n> While they're independent in principle, I likely never reproduced this bug\n> without having a fix in place for the other thread's bug. The bug in the\n> other thread was easier to hit.\n\nJust to clarify, since the CF item for this patch was withdrawn\nrecently. Does it mean that eventually the thread [1] covers this one\ntoo?\n\n[1]: https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 16:34:33 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 04:34:33PM +0100, Dmitry Dolgov wrote:\n> > On Sun, Nov 17, 2019 at 10:14:26PM -0800, Noah Misch wrote:\n> > > I'm probably missing something, so just wanted to clarify. Do I\n> > > understand correctly, that thread [1] and this one are independent, and\n> > > it is assumed in the scenario above that we're at the end of XID space,\n> > > but not necessarily having rounding issues? I'm a bit confused, since\n> > > the reproduce script does something about cutoffPage, and I'm not sure\n> > > if it's important or not.\n> >\n> > I think the repro recipe contained an early fix for the other thread's bug.\n> > While they're independent in principle, I likely never reproduced this bug\n> > without having a fix in place for the other thread's bug. The bug in the\n> > other thread was easier to hit.\n> \n> Just to clarify, since the CF item for this patch was withdrawn\n> recently. Does it mean that eventually the thread [1] covers this one\n> too?\n> \n> [1]: https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n\nI withdrew $SUBJECT because, if someone reviews one of my patches, I want it\nto be the one you cite in [1]. I plan not to commit [1] without a Ready for\nCommitter, and I plan not to commit $SUBJECT before committing [1]. I would\nbe willing to commit $SUBJECT without getting a review, however.\n\n\n",
"msg_date": "Fri, 31 Jan 2020 00:42:13 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 10:06:28AM -0700, Noah Misch wrote:\n> On Sun, Feb 17, 2019 at 11:31:03PM -0800, Noah Misch wrote:\n> > I'm forking this thread from\n> > https://postgr.es/m/flat/20190202083822.GC32531@gust.leadboat.com, which\n> > reported a race condition involving the \"apparent wraparound\" safety check in\n> > SimpleLruTruncate():\n> > \n> > On Wed, Feb 13, 2019 at 11:26:23PM -0800, Noah Misch wrote:\n> > > 1. The result of the test is valid only until we release the SLRU ControlLock,\n> > > which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate\n> > > segments for deletion. Once we release that lock, latest_page_number can\n> > > advance. This creates a TOCTOU race condition, allowing excess deletion:\n> > > \n> > > [local] test=# table trunc_clog_concurrency ;\n> > > ERROR: could not access status of transaction 2149484247\n> > > DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n> > \n> > > Fixes are available:\n> > \n> > > b. Arrange so only one backend runs vac_truncate_clog() at a time. Other than\n> > > AsyncCtl, every SLRU truncation appears in vac_truncate_clog(), in a\n> > > checkpoint, or in the startup process. Hence, also arrange for only one\n> > > backend to call SimpleLruTruncate(AsyncCtl) at a time.\n> > \n> > More specifically, restrict vac_update_datfrozenxid() to one backend per\n> > database, and restrict vac_truncate_clog() and asyncQueueAdvanceTail() to one\n> > backend per cluster. This, attached, was rather straightforward.\n> \n> Rebased. The conflicts were limited to comments and documentation.\n\nRebased, most notably over the lock renames of commit 5da1493. In\nLockTagTypeNames, I did s/frozen IDs/frozenid/ for consistency with that\ncommit having done s/speculative token/spectoken/.",
"msg_date": "Mon, 25 May 2020 13:34:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 12:42:13AM -0800, Noah Misch wrote:\n> On Thu, Jan 30, 2020 at 04:34:33PM +0100, Dmitry Dolgov wrote:\n> > Just to clarify, since the CF item for this patch was withdrawn\n> > recently. Does it mean that eventually the thread [1] covers this one\n> > too?\n> > \n> > [1]: https://www.postgresql.org/message-id/flat/20190202083822.GC32531%40gust.leadboat.com\n> \n> I withdrew $SUBJECT because, if someone reviews one of my patches, I want it\n> to be the one you cite in [1]. I plan not to commit [1] without a Ready for\n> Committer, and I plan not to commit $SUBJECT before committing [1]. I would\n> be willing to commit $SUBJECT without getting a review, however.\n\nAfter further reflection, I plan to push $SUBJECT shortly after 2020-08-13,\nnot waiting for [1] to change status. Reasons:\n\n- While I put significant weight on the fact that I couldn't reproduce\n $SUBJECT problems without first fixing [1], I'm no longer confident of that\n representing real-world experiences. Reproducing [1] absolutely requires a\n close approach to a wrap limit, but $SUBJECT problems might not.\n\n- Adding locks won't change correct functional behavior to incorrect\n functional behavior.\n\n- By pushing at that particular time, the fix ordinarily will appear in v13.0\n before appearing in a back branch release. If problematic contention arises\n quickly in the field, that's a more-comfortable way to discover it.\n\n\n",
"msg_date": "Sat, 1 Aug 2020 23:11:42 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: SimpleLruTruncate() mutual exclusion"
}
] |
[
{
"msg_contents": "I propose the attached patch to replace the CACHEX_elog() macros with a\nsingle varargs version.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 18 Feb 2019 13:56:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Use varargs macro for CACHEDEBUG"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 13:56:52 +0100, Peter Eisentraut wrote:\n> I propose the attached patch to replace the CACHEX_elog() macros with a\n> single varargs version.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 08:29:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Use varargs macro for CACHEDEBUG"
}
] |
[
{
"msg_contents": "\nOver on our shiny new PDF builder at\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=alabio&dt=2019-02-18%2012%3A32%3A08&stg=make-pdfs> \nFOP is complaining about a bunch of unresolved ID references.\n\nIt appears that these are due to title elements having id tags. At\n<http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n\n When adding an |id| or |xml:id| attribute, put it on the element\n itself, not the |title|.\n\nSo maybe we need to fix those up?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Feb 2019 10:12:42 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "FOP warnings about id attributes in title tags"
},
{
"msg_contents": "On 2019-02-18 16:12, Andrew Dunstan wrote:\n> Over on our shiny new PDF builder at\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=alabio&dt=2019-02-18%2012%3A32%3A08&stg=make-pdfs> \n> FOP is complaining about a bunch of unresolved ID references.\n\nYes, this is an ancient issue. The issue is at the level of FO\nprocessing, nothing we are doing wrong in the DocBook markup.\n\n> It appears that these are due to title elements having id tags. At\n> <http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n> \n> When adding an |id| or |xml:id| attribute, put it on the element\n> itself, not the |title|.\n> \n> So maybe we need to fix those up?\n\nYou can't just remove the ids, since some of them are referenced from\nelsewhere.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 16:37:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: FOP warnings about id attributes in title tags"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> Over on our shiny new PDF builder at\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=alabio&dt=2019-02-18%2012%3A32%3A08&stg=make-pdfs> \n> FOP is complaining about a bunch of unresolved ID references.\n> It appears that these are due to title elements having id tags. At\n> <http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n> When adding an |id| or |xml:id| attribute, put it on the element\n> itself, not the |title|.\n> So maybe we need to fix those up?\n\nHm. I think the places where we did that were so that links to a section\ncould use the section title text as the link text (rather than \"Section\nx.y.z\", or whatever randomness you get for an unnumbered section). Maybe\nthere's another way to do that, or we can think of something that reads\nwell enough without such special pushups. The current arrangement is\ndefinitely a hangover from the old doc toolchain, so very possibly there's\na better way.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 10:38:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FOP warnings about id attributes in title tags"
},
{
"msg_contents": "On 2019-02-18 16:37, Peter Eisentraut wrote:\n>> It appears that these are due to title elements having id tags. At\n>> <http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n>>\n>> When adding an |id| or |xml:id| attribute, put it on the element\n>> itself, not the |title|.\n>>\n>> So maybe we need to fix those up?\n> \n> You can't just remove the ids, since some of them are referenced from\n> elsewhere.\n\nHere was a discussion on getting rid of them:\nhttps://www.postgresql.org/message-id/flat/4a60dfc3-061b-01c4-2b86-279d3a612fd2%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 17:07:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: FOP warnings about id attributes in title tags"
},
{
"msg_contents": "\nOn 2/19/19 11:07 AM, Peter Eisentraut wrote:\n> On 2019-02-18 16:37, Peter Eisentraut wrote:\n>>> It appears that these are due to title elements having id tags. At\n>>> <http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n>>>\n>>> When adding an |id| or |xml:id| attribute, put it on the element\n>>> itself, not the |title|.\n>>>\n>>> So maybe we need to fix those up?\n>> You can't just remove the ids, since some of them are referenced from\n>> elsewhere.\n> Here was a discussion on getting rid of them:\n> https://www.postgresql.org/message-id/flat/4a60dfc3-061b-01c4-2b86-279d3a612fd2%402ndquadrant.com\n>\n\n\n\nYeah,\n\n\nI did some experimentation, and found that removing the id on the title\ntag, and the corresponding endterm attributes, and adding an xreflabel\nto the linkend object seemed to have the desired effect. Not yet tested\nwith FOP but this looks like a good direction.\n\n\nTest case:\n\n\ndiff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml\nindex b51b3a2564..432495e522 100644\n--- a/doc/src/sgml/ref/alter_collation.sgml\n+++ b/doc/src/sgml/ref/alter_collation.sgml\n@@ -93,16 +93,15 @@ ALTER COLLATION <replaceable>name</replaceable> SET SCHEMA <replaceable>new_sche\n <listitem>\n <para>\n Update the collation's version.\n- See <xref linkend=\"sql-altercollation-notes\"\n- endterm=\"sql-altercollation-notes-title\"/> below.\n+ See <xref linkend=\"sql-altercollation-notes\"/> below.\n </para>\n </listitem>\n </varlistentry>\n </variablelist>\n </refsect1>\n \n- <refsect1 id=\"sql-altercollation-notes\">\n- <title id=\"sql-altercollation-notes-title\">Notes</title>\n+ <refsect1 id=\"sql-altercollation-notes\" xreflabel=\"Notes\">\n+ <title>Notes</title>\n \n <para>\n When using collations provided by the ICU library, the ICU-specific version\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Feb 2019 09:57:33 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: FOP warnings about id attributes in title tags"
},
{
"msg_contents": "On 2/22/19 9:57 AM, Andrew Dunstan wrote:\n> On 2/19/19 11:07 AM, Peter Eisentraut wrote:\n>> On 2019-02-18 16:37, Peter Eisentraut wrote:\n>>>> It appears that these are due to title elements having id tags. At\n>>>> <http://www.sagehill.net/docbookxsl/CrossRefs.html> is says:\n>>>>\n>>>> When adding an |id| or |xml:id| attribute, put it on the element\n>>>> itself, not the |title|.\n>>>>\n>>>> So maybe we need to fix those up?\n>>> You can't just remove the ids, since some of them are referenced from\n>>> elsewhere.\n>> Here was a discussion on getting rid of them:\n>> https://www.postgresql.org/message-id/flat/4a60dfc3-061b-01c4-2b86-279d3a612fd2%402ndquadrant.com\n>>\n>\n>\n> Yeah,\n>\n>\n> I did some experimentation, and found that removing the id on the title\n> tag, and the corresponding endterm attributes, and adding an xreflabel\n> to the linkend object seemed to have the desired effect. Not yet tested\n> with FOP but this looks like a good direction.\n>\n>\n> Test case:\n>\n>\n> diff --git a/doc/src/sgml/ref/alter_collation.sgml b/doc/src/sgml/ref/alter_collation.sgml\n> index b51b3a2564..432495e522 100644\n> --- a/doc/src/sgml/ref/alter_collation.sgml\n> +++ b/doc/src/sgml/ref/alter_collation.sgml\n> @@ -93,16 +93,15 @@ ALTER COLLATION <replaceable>name</replaceable> SET SCHEMA <replaceable>new_sche\n> <listitem>\n> <para>\n> Update the collation's version.\n> - See <xref linkend=\"sql-altercollation-notes\"\n> - endterm=\"sql-altercollation-notes-title\"/> below.\n> + See <xref linkend=\"sql-altercollation-notes\"/> below.\n> </para>\n> </listitem>\n> </varlistentry>\n> </variablelist>\n> </refsect1>\n> \n> - <refsect1 id=\"sql-altercollation-notes\">\n> - <title id=\"sql-altercollation-notes-title\">Notes</title>\n> + <refsect1 id=\"sql-altercollation-notes\" xreflabel=\"Notes\">\n> + <title>Notes</title>\n> \n> <para>\n> When using collations provided by the ICU library, the ICU-specific version\n\n\nThis worked reasonably well in 
most cases, but not in the cases where\nthere was formatting in the title text. So I adopted a different\napproach which wrapped the title text in a phrase tag and put the id on\nthat tag instead of on the title tag itself. The documentation seems to\nsuggest that supplying a place to put an id tag around a small piece of\ntext is largely the purpose of the phrase tag. Anyway, this worked. It\nalso has the upside that we're not duplicating the title text.\n\n\nAt the same time I removed some apparently pointless id tags on a\nhandful of refname objects.\n\n\nGiven these two changes the PDFs build free of warnings about unresolved\nID references.\n\n\nSome of these titles with id attributes are not actually referred to\nanywhere, but that seems reasonably harmless.\n\n\npatch attached.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 23 Feb 2019 09:47:54 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: FOP warnings about id attributes in title tags"
}
] |
[
{
"msg_contents": "I propose to add an equivalent to unconstify() for volatile. When\nworking on this, I picked the name unvolatize() mostly as a joke, but it\nappears it's a real word. Other ideas?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 18 Feb 2019 16:39:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn February 18, 2019 7:39:25 AM PST, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>I propose to add an equivalent to unconstify() for volatile. When\n>working on this, I picked the name unvolatize() mostly as a joke, but\n>it\n>appears it's a real word. Other ideas?\n\nShouldn't we just remove just about all those use of volatile? Basically the only valid use these days is on sigsetjmp call sites.\n\nAndres\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Mon, 18 Feb 2019 07:42:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I propose to add an equivalent to unconstify() for volatile. When\n> working on this, I picked the name unvolatize() mostly as a joke, but it\n> appears it's a real word. Other ideas?\n\nUmm ... wouldn't this amount to papering over actual bugs? I can\nthink of legitimate reasons to cast away const, but casting away\nvolatile seems pretty dangerous, and not something to encourage\nby making it notationally easy.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 10:43:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 10:43:50 -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > I propose to add an equivalent to unconstify() for volatile. When\n> > working on this, I picked the name unvolatize() mostly as a joke, but it\n> > appears it's a real word. Other ideas?\n> \n> Umm ... wouldn't this amount to papering over actual bugs? I can\n> think of legitimate reasons to cast away const, but casting away\n> volatile seems pretty dangerous, and not something to encourage\n> by making it notationally easy.\n\nMost of those seem to be cases where volatile was just to make sigsetjmp\nsafe. There's plently legitimate cases where we need to cast volatile\naway in those, e.g. because the variable needs to be passed to memcpy.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 08:32:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 18/02/2019 16:43, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I propose to add an equivalent to unconstify() for volatile. When\n>> working on this, I picked the name unvolatize() mostly as a joke, but it\n>> appears it's a real word. Other ideas?\n> \n> Umm ... wouldn't this amount to papering over actual bugs? I can\n> think of legitimate reasons to cast away const, but casting away\n> volatile seems pretty dangerous, and not something to encourage\n> by making it notationally easy.\n> \n\nI'd argue that it's not making it easier to do but rather easier to spot\n(vs normal type casting) which is IMO a good thing from patch review\nperspective.\n\n-- \n Petr Jelinek http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 18 Feb 2019 21:18:42 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Petr Jelinek <petr.jelinek@2ndquadrant.com> writes:\n> On 18/02/2019 16:43, Tom Lane wrote:\n>> Umm ... wouldn't this amount to papering over actual bugs?\n\n> I'd argue that it's not making it easier to do but rather easier to spot\n> (vs normal type casting) which is IMO a good thing from patch review\n> perspective.\n\nYeah, fair point. As Peter noted about unconstify, these could be\nviewed as TODO markers.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 15:20:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 16:39:25 +0100, Peter Eisentraut wrote:\n> diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c\n> index 7da337d11f..fa7d72ef76 100644\n> --- a/src/backend/storage/ipc/latch.c\n> +++ b/src/backend/storage/ipc/latch.c\n> @@ -381,7 +381,7 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,\n> \n> \tif (wakeEvents & WL_LATCH_SET)\n> \t\tAddWaitEventToSet(set, WL_LATCH_SET, PGINVALID_SOCKET,\n> -\t\t\t\t\t\t (Latch *) latch, NULL);\n> +\t\t\t\t\t\t unvolatize(Latch *, latch), NULL);\n> \n> \t/* Postmaster-managed callers must handle postmaster death somehow. */\n> \tAssert(!IsUnderPostmaster ||\n\nISTM this one should rather be solved by removing all volatiles from\nlatch.[ch]. As that's a cross-process concern we can't rely on it\nanyway (and have placed barriers a few years back to allay concerns /\nbugs due to reordering).\n\n\n> diff --git a/src/backend/storage/ipc/pmsignal.c b/src/backend/storage/ipc/pmsignal.c\n> index d707993bf6..48f4311464 100644\n> --- a/src/backend/storage/ipc/pmsignal.c\n> +++ b/src/backend/storage/ipc/pmsignal.c\n> @@ -134,7 +134,7 @@ PMSignalShmemInit(void)\n> \n> \tif (!found)\n> \t{\n> -\t\tMemSet(PMSignalState, 0, PMSignalShmemSize());\n> +\t\tMemSet(unvolatize(PMSignalData *, PMSignalState), 0, PMSignalShmemSize());\n> \t\tPMSignalState->num_child_flags = MaxLivePostmasterChildren();\n> \t}\n> }\n\nSame. Did you put an type assertion into MemSet(), or how did you\ndiscover this one as needing to be changed?\n\n.oO(We really ought to remove MemSet()).\n\n",
"msg_date": "Mon, 18 Feb 2019 12:25:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 2019-02-18 21:25, Andres Freund wrote:\n> ISTM this one should rather be solved by removing all volatiles from\n> latch.[ch]. As that's a cross-process concern we can't rely on it\n> anyway (and have placed barriers a few years back to allay concerns /\n> bugs due to reordering).\n\nAren't the volatiles there so that Latch variables can be set from\nsignal handlers?\n\n>> diff --git a/src/backend/storage/ipc/pmsignal.c b/src/backend/storage/ipc/pmsignal.c\n>> index d707993bf6..48f4311464 100644\n>> --- a/src/backend/storage/ipc/pmsignal.c\n>> +++ b/src/backend/storage/ipc/pmsignal.c\n>> @@ -134,7 +134,7 @@ PMSignalShmemInit(void)\n>> \n>> \tif (!found)\n>> \t{\n>> -\t\tMemSet(PMSignalState, 0, PMSignalShmemSize());\n>> +\t\tMemSet(unvolatize(PMSignalData *, PMSignalState), 0, PMSignalShmemSize());\n>> \t\tPMSignalState->num_child_flags = MaxLivePostmasterChildren();\n>> \t}\n>> }\n> \n> Same. Did you put an type assertion into MemSet(), or how did you\n> discover this one as needing to be changed?\n\nBuild with -Wcast-qual, which warns for this because MemSet() does a\n(void *) cast.\n\n> .oO(We really ought to remove MemSet()).\n\nyeah\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 16:00:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn February 19, 2019 7:00:58 AM PST, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>On 2019-02-18 21:25, Andres Freund wrote:\n>> ISTM this one should rather be solved by removing all volatiles from\n>> latch.[ch]. As that's a cross-process concern we can't rely on it\n>> anyway (and have placed barriers a few years back to allay concerns /\n>> bugs due to reordering).\n>\n>Aren't the volatiles there so that Latch variables can be set from\n>signal handlers?\n\nWhy would they be required, given we have an explicit compiler & memory barrier? That forces the compiler to spill the writes to memory already, even in a signal handler. And conversely the reading side is similarly forced - if not we have a bug in multi core systems - to read the variable from memory after a barrier.\n\nThe real reason why variables commonly need to be volatile when used in signal handlers is not the signal handler side, but the normal code flow side. Without using explicit care the variable's value can be \"cached\"in a register, and a change not noticed. But as volatiles aren't sufficient for cross process safety, latches need more anyway.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Tue, 19 Feb 2019 08:09:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The real reason why variables commonly need to be volatile when used in\n> signal handlers is not the signal handler side, but the normal code flow\n> side.\n\nYeah, exactly. You have not explained why it'd be safe to ignore that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 11:48:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 11:48:16 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The real reason why variables commonly need to be volatile when used in\n> > signal handlers is not the signal handler side, but the normal code flow\n> > side.\n> \n> Yeah, exactly. You have not explained why it'd be safe to ignore that.\n\nBecause SetLatch() is careful to have a pg_memory_barrier() before\ntouching shared state and conversely so are ResetLatch() (and\nWaitEventSetWait(), which already has no volatiles). And if we've got\nthis wrong they aren't safe for shared latches, because volatiles don't\nenforce meaningful ordering on weakly ordered architectures.\n\nBut even if we were to decide we'd want to keep a volatile in SetLatch()\n- which I think really would only serve to hide bugs - that'd not mean\nit's a good idea to keep it on all the other functions in latch.c.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 09:02:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 2019-02-19 18:02, Andres Freund wrote:\n> Because SetLatch() is careful to have a pg_memory_barrier() before\n> touching shared state and conversely so are ResetLatch() (and\n> WaitEventSetWait(), which already has no volatiles). And if we've got\n> this wrong they aren't safe for shared latches, because volatiles don't\n> enforce meaningful ordering on weakly ordered architectures.\n\nThat makes sense.\n\n> But even if we were to decide we'd want to keep a volatile in SetLatch()\n> - which I think really would only serve to hide bugs - that'd not mean\n> it's a good idea to keep it on all the other functions in latch.c.\n\nWhat is even the meaning of having a volatile Latch * argument on a\nfunction when the actual latch variable (MyLatch) isn't volatile? That\nwould just enforce certain constraints on the compiler inside that\nfunction but not on the overall program, right?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Feb 2019 12:38:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-22 12:38:35 +0100, Peter Eisentraut wrote:\n> On 2019-02-19 18:02, Andres Freund wrote:\n> > But even if we were to decide we'd want to keep a volatile in SetLatch()\n> > - which I think really would only serve to hide bugs - that'd not mean\n> > it's a good idea to keep it on all the other functions in latch.c.\n> \n> What is even the meaning of having a volatile Latch * argument on a\n> function when the actual latch variable (MyLatch) isn't volatile? That\n> would just enforce certain constraints on the compiler inside that\n> function but not on the overall program, right?\n\nRight. But we should ever look/write into the contents of a latch\noutside of latch.c, so I don't think that'd really be a problem, even if\nwe relied on volatiles.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Feb 2019 12:31:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 2019-02-22 21:31, Andres Freund wrote:\n> On 2019-02-22 12:38:35 +0100, Peter Eisentraut wrote:\n>> On 2019-02-19 18:02, Andres Freund wrote:\n>>> But even if we were to decide we'd want to keep a volatile in SetLatch()\n>>> - which I think really would only serve to hide bugs - that'd not mean\n>>> it's a good idea to keep it on all the other functions in latch.c.\n\n> Right. But we should ever look/write into the contents of a latch\n> outside of latch.c, so I don't think that'd really be a problem, even if\n> we relied on volatiles.\n\nSo how about this patch?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 25 Feb 2019 09:29:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 2019-02-25 09:29, Peter Eisentraut wrote:\n> On 2019-02-22 21:31, Andres Freund wrote:\n>> On 2019-02-22 12:38:35 +0100, Peter Eisentraut wrote:\n>>> On 2019-02-19 18:02, Andres Freund wrote:\n>>>> But even if we were to decide we'd want to keep a volatile in SetLatch()\n>>>> - which I think really would only serve to hide bugs - that'd not mean\n>>>> it's a good idea to keep it on all the other functions in latch.c.\n> \n>> Right. But we should ever look/write into the contents of a latch\n>> outside of latch.c, so I don't think that'd really be a problem, even if\n>> we relied on volatiles.\n> \n> So how about this patch?\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 4 Mar 2019 11:36:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: unconstify equivalent for volatile"
},
{
"msg_contents": "On 2019-02-18 16:42, Andres Freund wrote:\n> On February 18, 2019 7:39:25 AM PST, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> I propose to add an equivalent to unconstify() for volatile. When\n>> working on this, I picked the name unvolatize() mostly as a joke, but\n>> it\n>> appears it's a real word. Other ideas?\n> \n> Shouldn't we just remove just about all those use of volatile? Basically the only valid use these days is on sigsetjmp call sites.\n\nSo, after some recent cleanups and another one attached here, we're now\ndown to 1.5 uses of this. (0.5 because the hunk in pmsignal.c doesn't\nhave a cast right now, but it would need one if s/MemSet/memset/.)\nThose seem legitimate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 19 Mar 2019 11:52:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: unconstify equivalent for volatile"
}
] |
[
{
"msg_contents": "HI,\n\nI realized that 'boolean' and 'bool' are mixed as SQL data type in the\ndocumentation. Here is the quick check result;\n\n$ git grep -c \"<type>bool</type>\" doc\ndoc/src/sgml/bki.sgml:1\ndoc/src/sgml/btree-gin.sgml:1\ndoc/src/sgml/catalogs.sgml:88\ndoc/src/sgml/datatype.sgml:1\ndoc/src/sgml/ecpg.sgml:2\ndoc/src/sgml/func.sgml:20\ndoc/src/sgml/gin.sgml:2\ndoc/src/sgml/monitoring.sgml:1\ndoc/src/sgml/plpython.sgml:1\ndoc/src/sgml/xfunc.sgml:1\ndoc/src/sgml/xml2.sgml:2\n\n $ git grep -c \"<type>boolean</type>\" doc\ndoc/src/sgml/adminpack.sgml:2\ndoc/src/sgml/auto-explain.sgml:6\ndoc/src/sgml/catalogs.sgml:23\ndoc/src/sgml/config.sgml:93\ndoc/src/sgml/cube.sgml:10\ndoc/src/sgml/datatype.sgml:7\ndoc/src/sgml/ecpg.sgml:2\ndoc/src/sgml/extend.sgml:2\ndoc/src/sgml/func.sgml:102\ndoc/src/sgml/hstore.sgml:2\ndoc/src/sgml/information_schema.sgml:1\ndoc/src/sgml/intarray.sgml:5\ndoc/src/sgml/isn.sgml:3\ndoc/src/sgml/json.sgml:2\ndoc/src/sgml/ltree.sgml:18\ndoc/src/sgml/monitoring.sgml:2\ndoc/src/sgml/pgbuffercache.sgml:1\ndoc/src/sgml/pgprewarm.sgml:1\ndoc/src/sgml/pgrowlocks.sgml:1\ndoc/src/sgml/pgstatstatements.sgml:2\ndoc/src/sgml/pgtrgm.sgml:5\ndoc/src/sgml/plperl.sgml:1\ndoc/src/sgml/plpgsql.sgml:1\ndoc/src/sgml/plpython.sgml:2\ndoc/src/sgml/queries.sgml:1\ndoc/src/sgml/ref/alter_subscription.sgml:2\ndoc/src/sgml/ref/alter_view.sgml:1\ndoc/src/sgml/ref/copy.sgml:1\ndoc/src/sgml/ref/create_cast.sgml:1\ndoc/src/sgml/ref/create_policy.sgml:2\ndoc/src/sgml/ref/create_rule.sgml:1\ndoc/src/sgml/ref/create_subscription.sgml:4\ndoc/src/sgml/ref/create_table.sgml:2\ndoc/src/sgml/ref/create_type.sgml:1\ndoc/src/sgml/ref/create_view.sgml:1\ndoc/src/sgml/ref/delete.sgml:1\ndoc/src/sgml/ref/insert.sgml:1\ndoc/src/sgml/ref/select.sgml:2\ndoc/src/sgml/ref/update.sgml:1\ndoc/src/sgml/sepgsql.sgml:2\ndoc/src/sgml/spgist.sgml:1\ndoc/src/sgml/syntax.sgml:1\ndoc/src/sgml/typeconv.sgml:2\ndoc/src/sgml/xfunc.sgml:1\ndoc/src/sgml/xindex.sgml:1\ndoc/src/sgml/xoper.sgml:2
\n\nAFAICS there seems not to be explicit rules and policies for usage of\n'boolean' and 'bool'. We use 'bool' for colum data type of almost\nsystem catalogs but use 'boolean' for some catalogs (pg_pltemplate and\npg_policy). The same is true for functions and system views. Is it\nbetter to unify them into 'boolean' for consistency and so as not\nunnecessarily confuse users? FYI the name of boolean type is 'boolean'\nin the standard.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 19 Feb 2019 00:56:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "boolean and bool in documentation"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 12:56:19AM +0900, Masahiko Sawada wrote:\n> AFAICS there seems not to be explicit rules and policies for usage of\n> 'boolean' and 'bool'. We use 'bool' for colum data type of almost\n> system catalogs but use 'boolean' for some catalogs (pg_pltemplate and\n> pg_policy). The same is true for functions and system views. Is it\n> better to unify them into 'boolean' for consistency and so as not\n> unnecessarily confuse users? FYI the name of boolean type is 'boolean'\n> in the standard.\n\nYes, I think so.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Thu, 21 Feb 2019 19:13:13 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: boolean and bool in documentation"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Feb 19, 2019 at 12:56:19AM +0900, Masahiko Sawada wrote:\n>> AFAICS there seems not to be explicit rules and policies for usage of\n>> 'boolean' and 'bool'. We use 'bool' for colum data type of almost\n>> system catalogs but use 'boolean' for some catalogs (pg_pltemplate and\n>> pg_policy). The same is true for functions and system views. Is it\n>> better to unify them into 'boolean' for consistency and so as not\n>> unnecessarily confuse users? FYI the name of boolean type is 'boolean'\n>> in the standard.\n\n> Yes, I think so.\n\nFWIW, I'm not excited about this. We accept \"bool\" and \"boolean\"\ninterchangeably, and it does not seem like an improvement for the\ndocs to use only one form. By that argument, somebody should go\nthrough the docs and nuke every usage of \"::\" in favor of\nSQL-standard CAST(...) notation, every usage of \"float8\"\nin favor of DOUBLE PRECISION, every usage of \"timestamptz\" in\nfavor of the long form, etc etc.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Feb 2019 19:30:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: boolean and bool in documentation"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 7:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I'm not excited about this. We accept \"bool\" and \"boolean\"\n> interchangeably, and it does not seem like an improvement for the\n> docs to use only one form. By that argument, somebody should go\n> through the docs and nuke every usage of \"::\" in favor of\n> SQL-standard CAST(...) notation, every usage of \"float8\"\n> in favor of DOUBLE PRECISION, every usage of \"timestamptz\" in\n> favor of the long form, etc etc.\n\nI'm not terribly excited about it either, but mostly because it seems\nlike a lot of churn for a minimal gain, and it'll be consistent for\nabout 6 months before somebody re-introduces a conflicting usage. I\ndo not, on the other hand, believe that there's no point in being\nconsistent about anything unless we're consistent about everything;\nthat's a straw man.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 10:26:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: boolean and bool in documentation"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Feb 21, 2019 at 7:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> FWIW, I'm not excited about this. We accept \"bool\" and \"boolean\"\n>> interchangeably, and it does not seem like an improvement for the\n>> docs to use only one form. By that argument, somebody should go\n>> through the docs and nuke every usage of \"::\" in favor of\n>> SQL-standard CAST(...) notation, every usage of \"float8\"\n>> in favor of DOUBLE PRECISION, every usage of \"timestamptz\" in\n>> favor of the long form, etc etc.\n\n> I'm not terribly excited about it either, but mostly because it seems\n> like a lot of churn for a minimal gain, and it'll be consistent for\n> about 6 months before somebody re-introduces a conflicting usage.\n\nYeah, that last is a really good point.\n\n> I do not, on the other hand, believe that there's no point in being\n> consistent about anything unless we're consistent about everything;\n> that's a straw man.\n\nThat wasn't my argument; rather, I was saying that if someone presents\na patch for s/bool/boolean/g and we accept it, then logically we should\nalso accept patches for any of these other cases as well. I doubt that\nwe would, if only because of the carpal-tunnel angle. Does that mean\nour policy is \"we'll be consistent as long as it doesn't add too many\ncharacters\"? Ugh.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 12:45:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: boolean and bool in documentation"
}
] |
[
{
"msg_contents": "Hello,\n\nWe have an app that copies some metrics between predefined tables on the\nsource and destination clusters, truncating the table at the source\nafterward.\n\nThe app locks both the source and the destination table at the beginning and\nthen, once copy concludes, prepares a transaction on each of the affected\nhosts and commits it at last. The naming schema for the prepared transaction\ninvolves the source and the destination table; while it doesn’t guarantee\nthe uniqueness if multiple copy processes run on the same host, the\npossibility of that happening should be prevented by the locks held on\ntables involved.\n\nOr so we thought. From time to time, our cron jobs reported a transaction\nidentifier already in use. After digging into this, my colleague Ildar Musin\nhas produced a simple pgbench script to reproduce the issue:\n\n----------------------------------------------------------------\n$ cat custom.sql\nbegin;\nlock abc in exclusive mode;\ninsert into abc values ((random() * 1000)::int);\n\nprepare transaction 'test transaction';\ncommit prepared 'test transaction’;\n----------------------------------------------------------------\n\npgbench launched with:\n\npgbench -T 20 -c 10 -f custom.sql postgres\n\n(after creating the table abc) produces a number of complaints:\n\nERROR: transaction identifier \"test transaction\" is already in use. \n\nI could easily reproduce that on Linux, but not Mac OS, with both Postgres\n10.7 and 11.2.\n\nThat looks like a race condition to me. What happens is that another\ntransaction with the name identical to the running one can start and proceed\nto the prepare phase while the original one commits, failing at last instead\nof waiting for the original one to finish.\n\nBy looking at the source code of FinishPreparedTransaction() I can see the\nRemoveGXact() call, which removes the prepared transaction from\nTwoPhaseState->prepXacts. 
It is placed at the very end of the function,\nafter the post-commit callbacks that clear out the locks held by the\ntransaction. Those callbacks are not guarded by the TwoPhaseStateLock,\nresulting in a possibility for a concurrent session to proceed with\nMarkAsPreparing after acquiring the locks released by them.\n\nI couldn’t find any documentation on the expected outcome in cases like\nthis, so I assume it might not be a bug, but an undocumented behavior.\n\nShould I go about and produce a patch to put a note in the description of\ncommit prepared, or is there any interest in changing the behavior to avoid\nsuch conflicts altogether (perhaps by holding the lock throughout the\ncleanup phase)?\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Mon, 18 Feb 2019 17:05:13 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 05:05:13PM +0100, Oleksii Kliukin wrote:\n> That looks like a race condition to me. What happens is that another\n> transaction with the name identical to the running one can start and proceed\n> to the prepare phase while the original one commits, failing at last instead\n> of waiting for the original one to finish.\n\nIt took me 50 clients and a bit more than 20 seconds, but I have been\nable to reproduce the problem with one error. Thanks for the\nreproducer. This is indeed a race condition with 2PC.\n\n> By looking at the source code of FinishPreparedTransaction() I can see the\n> RemoveGXact() call, which removes the prepared transaction from\n> TwoPhaseState->prepXacts. It is placed at the very end of the function,\n> after the post-commit callbacks that clear out the locks held by the\n> transaction. Those callbacks are not guarded by the TwoPhaseStateLock,\n> resulting in a possibility for a concurrent session to proceed will\n> MarkAsPreparing after acquiring the locks released by them.\n\nHm, I see. Taking a breakpoint just after ProcessRecords() or putting\na sleep there makes the issue plain. The same issue can happen with\npredicate locks.\n\n> I couldn’t find any documentation on the expected outcome in cases like\n> this, so I assume it might not be a bug, but an undocumented behavior.\n\nIf you run two transactions in parallel using your script, the second\ntransaction would wait at LOCK time until the first transaction\nreleases its locks with the COMMIT PREPARED.\n\n> Should I go about and produce a patch to put a note in the description of\n> commit prepared, or is there any interest in changing the behavior to avoid\n> such conflicts altogether (perhaps by holding the lock throughout the\n> cleanup phase)?\n\nThat's a bug, let's fix it. 
I agree with your suggestion that we had\nbetter not release the 2PC lock using the callbacks for COMMIT\nPREPARED or ROLLBACK PREPARED until the shared memory state is\ncleared. At the same time, I think that we should be smarter in the\nordering of the actions: we care about predicate locks here, but not\nabout the on-disk file removal and the stat counters. One trick is\nthat the post-commit callback calls TwoPhaseGetDummyProc() which would\ntry to take TwoPhaseStateLock which needs to be applied so we need\nsome extra logic to not take a lock in this case. From what I can see\nthis is older than 9.4 as the removal of the GXACT entry in shared\nmemory and the post-commit hooks are out of sync for a long time :(\n\nAttached is a patch that fixes the problem for me. Please note the\nsafety net in TwoPhaseGetGXact().\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 12:26:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On 2019-Feb-19, Michael Paquier wrote:\n\n> diff --git a/src/include/access/twophase.h b/src/include/access/twophase.h\n> index 6228b091d8..2dcd08e9fa 100644\n> --- a/src/include/access/twophase.h\n> +++ b/src/include/access/twophase.h\n> @@ -34,7 +34,7 @@ extern void TwoPhaseShmemInit(void);\n> extern void AtAbort_Twophase(void);\n> extern void PostPrepare_Twophase(void);\n> \n> -extern PGPROC *TwoPhaseGetDummyProc(TransactionId xid);\n> +extern PGPROC *TwoPhaseGetDummyProc(TransactionId xid, bool lock_held);\n> extern BackendId TwoPhaseGetDummyBackendId(TransactionId xid);\n> \n> extern GlobalTransaction MarkAsPreparing(TransactionId xid, const char *gid,\n\nHmm, ABI break ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 01:07:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 01:07:06AM -0300, Alvaro Herrera wrote:\n> On 2019-Feb-19, Michael Paquier wrote:\n>> extern GlobalTransaction MarkAsPreparing(TransactionId xid, const char *gid,\n> \n> Hmm, ABI break ...\n\nWell, sure. I always post patches for HEAD first. And I was actually\nwondering if that's worth back-patching per the odds of facing the\nerror and seeing how old it is.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 13:44:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "\n\nOn 19.02.2019 7:44, Michael Paquier wrote:\n> On Tue, Feb 19, 2019 at 01:07:06AM -0300, Alvaro Herrera wrote:\n>> On 2019-Feb-19, Michael Paquier wrote:\n>>> extern GlobalTransaction MarkAsPreparing(TransactionId xid, const char *gid,\n>> Hmm, ABI break ...\n> Well, sure. I always post patches for HEAD first. And I was actually\n> wondering if that's worth back-patching per the odds of facing the\n> error and seeing how old it is.\n> --\n> Michael\n\nMaybe I missed something, but why is it not possible just to move the\nremoval of the 2PC GXact to before releasing the transaction locks:\n\ndiff --git a/src/backend/access/transam/twophase.c \nb/src/backend/access/transam/twophase.c\nindex 9a8a6bb..574d28b 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -1560,17 +1560,6 @@ FinishPreparedTransaction(const char *gid, bool \nisCommit)\n        if (hdr->initfileinval)\n                RelationCacheInitFilePostInvalidate();\n\n-       /* And now do the callbacks */\n-       if (isCommit)\n-               ProcessRecords(bufptr, xid, twophase_postcommit_callbacks);\n-       else\n-               ProcessRecords(bufptr, xid, twophase_postabort_callbacks);\n-\n-       PredicateLockTwoPhaseFinish(xid, isCommit);\n-\n-       /* Count the prepared xact as committed or aborted */\n-       AtEOXact_PgStat(isCommit);\n-\n        /*\n         * And now we can clean up any files we may have left.\n         */\n@@ -1582,6 +1571,17 @@ FinishPreparedTransaction(const char *gid, bool \nisCommit)\n        LWLockRelease(TwoPhaseStateLock);\n        MyLockedGxact = NULL;\n\n+       /* And now do the callbacks */\n+       if (isCommit)\n+               ProcessRecords(bufptr, xid, twophase_postcommit_callbacks);\n+       else\n+               ProcessRecords(bufptr, xid, twophase_postabort_callbacks);\n+\n+       PredicateLockTwoPhaseFinish(xid, isCommit);\n+\n+       /* Count the prepared xact as committed or aborted */\n
+       AtEOXact_PgStat(isCommit);\n+\n        RESUME_INTERRUPTS();\n\n        pfree(buf);\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 19 Feb 2019 12:26:04 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 12:26:04PM +0300, Konstantin Knizhnik wrote:\n> Maybe I missed something, but why is it not possible just to move the\n> removal of the 2PC GXact to before releasing the transaction locks:\n\nBecause we need to keep the 2PC reference in shared memory when\nreleasing the locks before removing the physical file. The 2PC test\nsuite for recovery has good coverage, and I imagine that these would blow\nup with your suggestion (no time to test now, please see\n009_twophase.pl).\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 18:57:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "Hi,\n\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Feb 18, 2019 at 05:05:13PM +0100, Oleksii Kliukin wrote:\n>> That looks like a race condition to me. What happens is that another\n>> transaction with the name identical to the running one can start and proceed\n>> to the prepare phase while the original one commits, failing at last instead\n>> of waiting for the original one to finish.\n> \n> It took me 50 clients and a bit more than 20 seconds, but I have been\n> able to reproduce the problem with one error. Thanks for the\n> reproducer. This is indeed a race condition with 2PC.\n\nYes, as a race condition I could consistently reproduce it on one system,\nbut not on another. On my macbook it never manifested in an error; however,\non one occasion I saw sessions waiting on a lock and an orphaned record in\npg_prepared_xacts. I couldn’t find any evidence in the logs or elsewhere of\nwhat happened to that session, so I just assumed it was, perhaps, killed\nby the test cleanup.\n\n>> By looking at the source code of FinishPreparedTransaction() I can see the\n>> RemoveGXact() call, which removes the prepared transaction from\n>> TwoPhaseState->prepXacts. It is placed at the very end of the function,\n>> after the post-commit callbacks that clear out the locks held by the\n>> transaction. Those callbacks are not guarded by the TwoPhaseStateLock,\n>> resulting in a possibility for a concurrent session to proceed will\n>> MarkAsPreparing after acquiring the locks released by them.\n> \n> Hm, I see. Taking a breakpoint just after ProcessRecords() or putting\n
> a sleep there makes the issue plain. The same issue can happen with\n> predicate locks.\n\nI did verify this on my part by putting a TwoPhaseStateLock right before the\nline saying \"We assume it's safe to do this without taking\nTwoPhaseStateLock\" and observing that the errors had stopped over multiple\nruns.\n\n>> I couldn’t find any documentation on the expected outcome in cases like\n>> this, so I assume it might not be a bug, but an undocumented behavior.\n> \n> If you run two transactions in parallel using your script, the second\n> transaction would wait at LOCK time until the first transaction\n> releases its locks with the COMMIT PREPARED.\n\nThat is the desired outcome, right?\n\n> \n>> Should I go about and produce a patch to put a note in the description of\n>> commit prepared, or is there any interest in changing the behavior to avoid\n>> such conflicts altogether (perhaps by holding the lock throughout the\n>> cleanup phase)?\n> \n> That's a bug, let's fix it. I agree with your suggestion that we had\n> better not release the 2PC lock using the callbacks for COMMIT\n> PREPARED or ROLLBACK PREPARED until the shared memory state is\n> cleared. At the same time, I think that we should be smarter in the\n> ordering of the actions: we care about predicate locks here, but not\n> about the on-disk file removal and the stat counters. One trick is\n> that the post-commit callback calls TwoPhaseGetDummyProc() which would\n> try to take TwoPhaseStateLock which needs to be applied so we need\n> some extra logic to not take a lock in this case. From what I can see\n> this is older than 9.4 as the removal of the GXACT entry in shared\n> memory and the post-commit hooks are out of sync for a long time :(\n> \n> Attached is a patch that fixes the problem for me. Please note the\n> safety net in TwoPhaseGetGXact().\n\n
The approach looks good to me. Surprisingly, I saw no stalled backends\nbecause of the double acquisition of the lock at TwoPhaseGetGXact once I put a\nsimple TwoPhaseStateLock right before the \"gxact->valid = false\" line; I\nwill test your patch and post the outcome.\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Tue, 19 Feb 2019 10:59:33 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 10:59:33AM +0100, Oleksii Kliukin wrote:\n> Michael Paquier <michael@paquier.xyz> wrote:\n>> If you run two transactions in parallel using your script, the second\n>> transaction would wait at LOCK time until the first transaction\n>> releases its locks with the COMMIT PREPARED.\n> \n> That is the desired outcome, right?\n\nYes, that is the correct one in my opinion, and we should not have GID\nconflicts when running the scenario you provided upthread.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 20:54:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "Hi,\n\nOleksii Kliukin <alexk@hintbits.com> wrote:\n> \n> The approach looks good to me. Surprisingly, I saw no stalled backends\n> because of the double acquisition of lock at TwoPhaseGetGXact once I put a\n> simple TwoPhaseStateLock right before the \"gxact->valid = false” line; I\n> will test your patch and post the outcome.\n\nI gave it a spin on the same VM host as shown to constantly reproduce the\nissue and observed neither 'identifier already in use' nor any locking\nissues over a few dozens of runs, so it looks good to me.\n\nThat was HEAD, but since FinishPreparedTransaction seems to be identical\nthere and on the back branches it should work for PG 10 and 11 as well.\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Tue, 19 Feb 2019 20:17:14 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 08:17:14PM +0100, Oleksii Kliukin wrote:\n> I gave it a spin on the same VM host as shown to constantly reproduce the\n> issue and observed neither 'identifier already in use' nor any locking\n> issues over a few dozens of runs, so it looks good to me.\n\nThanks for the confirmation. I'll try to wrap up this one then.\n\n> That was HEAD, but since FinishPreparedTransaction seems to be identical\n> there and on the back branches it should work for PG 10 and 11 as well.\n\nThe issue exists since two-phase commit has been added, down to 8.1 if\nmy read of the code is right, so it took 14 years to be found.\nCongrats. I have also manually tested that the code is broken down to\n9.4 though.\n\nIt is good that you stress-test 2PC the way you do, and that's the\nsecond bug you have spotted since the duplicate XIDs in the standby\nsnapshots which are now lazily discarded at recovery. Now, it seems\nto me that the potential ABI breakage argument (which can be solved by\nintroducing an extra routine, which means more conflicts to handle\nwhen back-patching 2PC stuff), and the time it took to find the issue\nare pointing out that we should patch only HEAD. In the event of a\nback-patch, on top of the rest we would need to rework RemoveGXact so\nas it does not take a LWLock on two-phase state data in 10 and\nupwards, so that would be invasive, and we have way less automated\ntesting in those versions..\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 09:39:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 12:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 18, 2019 at 05:05:13PM +0100, Oleksii Kliukin wrote:\n> > That looks like a race condition to me. What happens is that another\n> > transaction with the name identical to the running one can start and proceed\n> > to the prepare phase while the original one commits, failing at last instead\n> > of waiting for the original one to finish.\n>\n> It took me 50 clients and a bit more than 20 seconds, but I have been\n> able to reproduce the problem with one error. Thanks for the\n> reproducer. This is indeed a race condition with 2PC.\n>\n> > By looking at the source code of FinishPreparedTransaction() I can see the\n> > RemoveGXact() call, which removes the prepared transaction from\n> > TwoPhaseState->prepXacts. It is placed at the very end of the function,\n> > after the post-commit callbacks that clear out the locks held by the\n> > transaction. Those callbacks are not guarded by the TwoPhaseStateLock,\n> > resulting in a possibility for a concurrent session to proceed will\n> > MarkAsPreparing after acquiring the locks released by them.\n>\n> Hm, I see. Taking a breakpoint just after ProcessRecords() or putting\n> a sleep there makes the issue plain. The same issue can happen with\n> predicate locks.\n>\n> > I couldn’t find any documentation on the expected outcome in cases like\n> > this, so I assume it might not be a bug, but an undocumented behavior.\n>\n> If you run two transactions in parallel using your script, the second\n> transaction would wait at LOCK time until the first transaction\n> releases its locks with the COMMIT PREPARED.\n>\n> > Should I go about and produce a patch to put a note in the description of\n> > commit prepared, or is there any interest in changing the behavior to avoid\n> > such conflicts altogether (perhaps by holding the lock throughout the\n> > cleanup phase)?\n>\n> That's a bug, let's fix it. 
I agree with your suggestion that we had\n> better not release the 2PC lock using the callbacks for COMMIT\n> PREPARED or ROLLBACK PREPARED until the shared memory state is\n> cleared. At the same time, I think that we should be smarter in the\n> ordering of the actions: we care about predicate locks here, but not\n> about the on-disk file removal and the stat counters. One trick is\n> that the post-commit callback calls TwoPhaseGetDummyProc() which would\n> try to take TwoPhaseStateLock which needs to be applied so we need\n> some extra logic to not take a lock in this case. From what I can see\n> this is older than 9.4 as the removal of the GXACT entry in shared\n> memory and the post-commit hooks are out of sync for a long time :(\n>\n> Attached is a patch that fixes the problem for me. Please note the\n> safety net in TwoPhaseGetGXact().\n\nThank you for working on this.\n\n@@ -811,6 +811,9 @@ TwoPhaseGetGXact(TransactionId xid)\n static TransactionId cached_xid = InvalidTransactionId;\n static GlobalTransaction cached_gxact = NULL;\n\n+ Assert(!lock_held ||\n+ LWLockHeldByMeInMode(TwoPhaseStateLock, LW_EXCLUSIVE));\n+\n /*\n * During a recovery, COMMIT PREPARED, or ABORT PREPARED,\nwe'll be called\n * repeatedly for the same XID. 
We can save work with a simple cache.\n@@ -818,7 +821,8 @@ TwoPhaseGetGXact(TransactionId xid)\n if (xid == cached_xid)\n return cached_gxact;\n\n- LWLockAcquire(TwoPhaseStateLock, LW_SHARED);\n+ if (!lock_held)\n+ LWLockAcquire(TwoPhaseStateLock, LW_SHARED);\n\n for (i = 0; i < TwoPhaseState->numPrepXacts; i++)\n {\n\nIt seems strange to me: why do we always require an exclusive lock\nhere even though we acquire only a shared lock?\n\n-----\n@@ -854,7 +859,7 @@\n BackendId\n TwoPhaseGetDummyBackendId(TransactionId xid)\n {\n- GlobalTransaction gxact = TwoPhaseGetGXact(xid);\n+ GlobalTransaction gxact = TwoPhaseGetGXact(xid, false);\n\n return gxact->dummyBackendId;\n }\n\nTwoPhaseGetDummyBackendId() is called by\nmultixact_twophase_postcommit() which is called while holding\nTwoPhaseStateLock in exclusive mode in FinishPreparedTransaction().\nSince we cache the global transaction when we call\nlock_twophase_commit(), I couldn't reproduce the issue, but it seems to have\nthe potential for a locking problem.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Wed, 20 Feb 2019 15:14:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 03:14:07PM +0900, Masahiko Sawada wrote:\n> @@ -811,6 +811,9 @@ TwoPhaseGetGXact(TransactionId xid)\n> static TransactionId cached_xid = InvalidTransactionId;\n> static GlobalTransaction cached_gxact = NULL;\n> \n> + Assert(!lock_held ||\n> + LWLockHeldByMeInMode(TwoPhaseStateLock, LW_EXCLUSIVE));\n> +\n> \n> It seems strange to me: why do we always require an exclusive lock\n> here even though we acquire only a shared lock?\n\nThanks for looking at the patch. LWLockHeldByMe() is fine here. It\nseems that I had a brain fade.\n\n> TwoPhaseGetDummyBackendId() is called by\n> multixact_twophase_postcommit() which is called while holding\n> TwoPhaseStateLock in exclusive mode in FinishPreparedTransaction().\n> Since we cache the global transaction when we call\n> lock_twophase_commit(), I couldn't reproduce the issue, but it seems to have\n> the potential for a locking problem.\n\nYou are right: this could cause a deadlock problem, but it actually\ncannot be triggered now because the repetitive calls of\nTwoPhaseGetGXact() cause the resulting XID to be cached, so the lock\nends up never being taken, but that could become broken in the\nfuture. From a consistency point of view, adding the same option to\nTwoPhaseGetGXact() and TwoPhaseGetDummyBackendId() to control the lock\nacquisition is much better as well. Please note that the recovery\ntests stress multixact post-commit callbacks already, so you can see\nthe caching behavior with that one:\nBEGIN;\nCREATE TABLE t_009_tbl2 (id int, msg text);\nSAVEPOINT s1;\nINSERT INTO t_009_tbl2 VALUES (27, 'issued to stuff');\nPREPARE TRANSACTION 'xact_009_13';\nCHECKPOINT;\nCOMMIT PREPARED 'xact_009_13';\n\nThe assertion really needs to be before the cached XID is checked\nthough.\n\nAttached is an updated patch. Thanks for the feedback.\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 17:22:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> wrote:\n\n> \n> Attached is an updated patch. Thanks for the feedback. \n\n@@ -1755,7 +1755,7 @@ void\n multixact_twophase_recover(TransactionId xid, uint16 info,\n void *recdata, uint32 len)\n {\n- BackendId dummyBackendId = TwoPhaseGetDummyBackendId(xid);\n+ BackendId dummyBackendId = TwoPhaseGetDummyBackendId(xid, true);\n MultiXactId oldestMember;\n \nRecoverPreparedTransactions calls ProcessRecords with\ntwophase_recover_callbacks right after releasing the TwoPhaseStateLock, so I\nthink lock_held should be false here (matching the similar call of\nTwoPhaseGetDummyProc at lock_twophase_recover).\n\nSince the patch touches TwoPhaseGetDummyBackendId, let’s fix the comment\nheader to mention it instead of TwoPhaseGetDummyProc\n\n> Now, it seems\n> to me that the potential ABI breakage argument (which can be solved by\n> introducing an extra routine, which means more conflicts to handle\n> when back-patching 2PC stuff), and the time it took to find the issue\n> are pointing out that we should patch only HEAD.\n\nSounds reasonable.\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Wed, 20 Feb 2019 11:50:42 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 11:50:42AM +0100, Oleksii Kliukin wrote:\n> RecoverPreparedTransactions calls ProcessRecords with\n> twophase_recover_callbacks right after releasing the TwoPhaseStateLock, so I\n> think lock_held should be false here (matching the similar call of\n> TwoPhaseGetDummyProc at lock_twophase_recover).\n\nIndeed. That would be a bad idea. We don't actually stress this\nroutine in 009_twophase.pl or the assertion I added would have blown\nup immediately. So I propose to close the gap, and the updated patch\nattached adds dedicated tests causing the problem you spotted to be\ntriggered (soft and hard restarts with 2PC transactions including\nDDLs). The previous version of the patch and those tests cause the\nassertion to blow up, failing the tests.\n\n> Since the patch touches TwoPhaseGetDummyBackendId, let’s fix the comment\n> header to mention it instead of TwoPhaseGetDummyProc\n\nNot sure I understand the issue you are pointing out here. The patch\nadds an extra argument to both TwoPhaseGetDummyProc() and\nTwoPhaseGetDummyBackendId(), and the headers of both functions\ndocument the new argument.\n\nOne extra trick I have been using for testing this patch is the\nfollowing:\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -816,13 +816,6 @@ TwoPhaseGetGXact(TransactionId xid, bool lock_held)\n\n Assert(!lock_held || LWLockHeldByMe(TwoPhaseStateLock));\n\n- /*\n- * During a recovery, COMMIT PREPARED, or ABORT PREPARED, we'll be called\n- * repeatedly for the same XID. We can save work with a simple cache.\n- */\n- if (xid == cached_xid)\n- return cached_gxact;\n\nThis has the advantage to check more aggressively for lock conflicts,\ncausing the tests to deadlock if the flag is not correctly set in the\nworst case scenario.\n--\nMichael",
"msg_date": "Thu, 21 Feb 2019 14:54:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "Hi,\n\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Feb 20, 2019 at 11:50:42AM +0100, Oleksii Kliukin wrote:\n>> RecoverPreparedTransactions calls ProcessRecords with\n>> twophase_recover_callbacks right after releasing the TwoPhaseStateLock, so I\n>> think lock_held should be false here (matching the similar call of\n>> TwoPhaseGetDummyProc at lock_twophase_recover).\n> \n> Indeed. That would be a bad idea. We don't actually stress this\n> routine in 009_twophase.pl or the assertion I added would have blown\n> up immediately. So I propose to close the gap, and the updated patch\n> attached adds dedicated tests causing the problem you spotted to be\n> triggered (soft and hard restarts with 2PC transactions including\n> DDLs). The previous version of the patch and those tests cause the\n> assertion to blow up, failing the tests.\n\nThank you for updating the patch and sorry for the delay, it looks good to\nme, the tests pass on my system.\n\n>> Since the patch touches TwoPhaseGetDummyBackendId, let’s fix the comment\n>> header to mention it instead of TwoPhaseGetDummyProc\n> \n> Not sure I understand the issue you are pointing out here. 
The patch\n> adds an extra argument to both TwoPhaseGetDummyProc() and\n> TwoPhaseGetDummyBackendId(), and the headers of both functions\n> document the new argument.\n\nJust this:\n\n@@ -844,17 +851,18 @@ TwoPhaseGetGXact(TransactionId xid)\n }\n\n /*\n- * TwoPhaseGetDummyProc\n+ * TwoPhaseGetDummyBackendId\n * Get the dummy backend ID for prepared transaction specified by XID\n\n> \n> One extra trick I have been using for testing this patch is the\n> following:\n> --- a/src/backend/access/transam/twophase.c\n> +++ b/src/backend/access/transam/twophase.c\n> @@ -816,13 +816,6 @@ TwoPhaseGetGXact(TransactionId xid, bool lock_held)\n> \n> Assert(!lock_held || LWLockHeldByMe(TwoPhaseStateLock));\n> \n> - /*\n> - * During a recovery, COMMIT PREPARED, or ABORT PREPARED, we'll be called\n> - * repeatedly for the same XID. We can save work with a simple cache.\n> - */\n> - if (xid == cached_xid)\n> - return cached_gxact;\n> \n> This has the advantage to check more aggressively for lock conflicts,\n> causing the tests to deadlock if the flag is not correctly set in the\n> worst case scenario.\n\nNice, thank you. I ran all my tests with it and found no further issues.\n\nRegards,\nOleksii Kliukin\n\n\n",
"msg_date": "Fri, 22 Feb 2019 12:17:01 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 12:17:01PM +0100, Oleksii Kliukin wrote:\n> Thank you for updating the patch and sorry for the delay, it looks good to\n> me, the tests pass on my system.\n\nThanks. I am still looking at this patch an extra time, so this may\ntake at most a couple of days from my side. For now I have committed\nthe test portion, which is independently useful, as recovery of\nmultixact post-commit callbacks was never stressed before.\n\n> Just this:\n> \n> @@ -844,17 +851,18 @@ TwoPhaseGetGXact(TransactionId xid)\n> }\n> \n> /*\n> - * TwoPhaseGetDummyProc\n> + * TwoPhaseGetDummyBackendId\n> * Get the dummy backend ID for prepared transaction specified by XID\n\nI have not been paying much attention to this one, good catch. This\nis unrelated to the patch of this thread so I have fixed it\nseparately.\n--\nMichael",
"msg_date": "Sat, 23 Feb 2019 08:44:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 08:44:43AM +0900, Michael Paquier wrote:\n> Thanks. I am still looking at this patch an extra time, so this may\n> take at most a couple of days from my side. For now I have committed\n> the test portion, which is independently useful and caused recovery of\n> multixact post-commit callbacks to never be stressed.\n\nDone. I have spent some time today looking at the performance of the\npatch, designing a worst-case scenario to see how much bloat this adds\nin COMMIT PREPARED by running across many sessions 2PC transactions\ntaking SHARE locks across many tables, as done in the script attached.\n\nThis test case runs across multiple sessions:\nBEGIN;\nSELECT lock_tables($NUM_TABLES);\nPREPARE TRANSACTION 't_$i';\nCOMMIT PREPARED 't_$i';\n\nlock_tables() is able to take a lock on a set of tables, bloating the\nnumber of locks registered in the 2PC transaction, still those do not\nconflict, so it gives an idea of the extra impact of holding\nTwoPhaseStateLock longer. The script also includes a function to\ncreate thousands of tables easily, and can be controlled with a couple\nof parameters:\n- number of tables to use, which will be locked.\n- number of sessions.\n- run time of pgbench.\n--\nMichael",
"msg_date": "Mon, 25 Feb 2019 14:28:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 02:28:23PM +0900, Michael Paquier wrote:\n> Done. I have spent some time today looking at the performance of the\n> patch, designing a worst-case scenario to see how much bloat this adds\n> in COMMIT PREPARED by running across many sessions 2PC transactions\n> taking SHARE locks across many tables, as done in the script attached.\n\nAnd of course I forgot the script, which is now attached.\n--\nMichael",
"msg_date": "Mon, 25 Feb 2019 14:30:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Prepared transaction releasing locks before deregistering its GID"
}
] |
[
{
"msg_contents": "Hello all,\n\nI'm working on a foreign data wrapper that uses INSERT, and I noticed some\nodd behaviour. If I insert just one row, the\nTupleDesc->attr[0]->attname.data has the column names in it. However, in a\nmulti-row string, all those are empty strings:\n\nI added this debugging code to BeginForeignInsert in\nhttps://bitbucket.org/adunstan/blackhole_fdw on postgres 10.\n\nint i;\n FormData_pg_attribute *attr;\n TupleDesc tupleDesc;\n\n tupleDesc = slot->tts_tupleDescriptor;\n\n for (i = 0; i < tupleDesc -> natts; i++) {\n attr = tupleDesc->attrs[i];\n elog(WARNING, \"found column '%s'\", attr->attname.data);\n }\n\nNow with a single row insert, this works as you'd expect:\n\nliz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world');\nWARNING: found column 'key'\nWARNING: found column 'value'\nINSERT 0 1\n\nBut with a multi-row, all the column names are empty:\nliz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world'),\n('goodmorning', 'world');\nWARNING: found column ''\nWARNING: found column ''\nWARNING: found column ''\nWARNING: found column ''\nINSERT 0 2\n\nIt doesn't seem unreasonable to me that this data wouldn't be duplicated,\nbut there's no mention of how I would go about retriving these column names\nfor my individual rows, and most foreign data wrappers I can find are\nwrite-only.\n\nAm I missing something obvious? Is this a bug?\n\nThanks,\n\nLiz",
"msg_date": "Mon, 18 Feb 2019 14:34:43 -0500",
"msg_from": "Liz Frost <web@stillinbeta.com>",
"msg_from_op": true,
"msg_subject": "Missing Column names with multi-insert"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-18 14:34:43 -0500, Liz Frost wrote:\n> I'm working on a foreign data wrapper that uses INSERT, and I noticed some\n> odd behaviour. If I insert just one row, the\n> TupleDesc->attr[0]->attname.data has the column names in it. However, in a\n> multi-row string, all those are empty strings:\n> \n> I added this debugging code to BeginForeignInsert in\n> https://bitbucket.org/adunstan/blackhole_fdw on postgres 10.\n> \n> int i;\n> FormData_pg_attribute *attr;\n> TupleDesc tupleDesc;\n> \n> tupleDesc = slot->tts_tupleDescriptor;\n> \n> for (i = 0; i < tupleDesc -> natts; i++) {\n> attr = tupleDesc->attrs[i];\n> elog(WARNING, \"found column '%s'\", attr->attname.data);\n> }\n> \n> Now with a single row insert, this works as you'd expect:\n> \n> liz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world');\n> WARNING: found column 'key'\n> WARNING: found column 'value'\n> INSERT 0 1\n> \n> But with a multi-row, all the column names are empty:\n> liz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world'),\n> ('goodmorning', 'world');\n> WARNING: found column ''\n> WARNING: found column ''\n> WARNING: found column ''\n> WARNING: found column ''\n> INSERT 0 2\n> \n> It doesn't seem unreasonable to me that this data wouldn't be duplicated,\n> but there's no mention of how I would go about retriving these column names\n> for my individual rows\n\nI think you might be looking at the wrong tuple descriptor. You ought to\nlook at the tuple descriptor from the target relation, not the one from\nthe input slot. 
It's more or less an accident / efficiency hack that\nthe slot in the first case actually carries the column names.\n\nThe callback should have a ResultRelInfo as a parameter, I think\nsomething like\n Relation rel = resultRelInfo->ri_RelationDesc;\n TupleDesc tupdesc = RelationGetDescr(rel);\n\nought to give you the tuple descriptor you want.\n\n\n> , and most foreign data wrappers I can find are write-only.\n\nDid you mean read-only? If not, I'm unfortunately not following...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 18 Feb 2019 11:47:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Missing Column names with multi-insert"
},
{
"msg_contents": "Liz Frost <web@stillinbeta.com> writes:\n> I'm working on a foreign data wrapper that uses INSERT, and I noticed some\n> odd behaviour. If I insert just one row, the\n> TupleDesc->attr[0]->attname.data has the column names in it. However, in a\n> multi-row string, all those are empty strings:\n\nThere's not really any expectation that those be valid info during\nplanning. The parser has a substantially different code path for\nsingle-row INSERT than multi-row (mostly to minimize overhead for the\nsingle-row case), and probably somewhere in there is the reason why\nthese happen to show up as valid in that case.\n\nIf you want column names, the eref field of RTEs is usually the right\nplace to look.\n\n> Am I missing something obvious? Is this a bug?\n\nI don't think it's a bug.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 14:49:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing Column names with multi-insert"
},
{
"msg_contents": "\nOn 2/18/19 2:34 PM, Liz Frost wrote:\n> Hello all,\n>\n> I'm working on a foreign data wrapper that uses INSERT, and I noticed\n> some odd behaviour. If I insert just one row, the\n> TupleDesc->attr[0]->attname.data has the column names in it. However,\n> in a multi-row string, all those are empty strings: \n>\n> I added this debugging code to BeginForeignInsert\n> in https://bitbucket.org/adunstan/blackhole_fdw on postgres 10.\n>\n> int i;\n> FormData_pg_attribute *attr;\n> TupleDesc tupleDesc;\n>\n> tupleDesc = slot->tts_tupleDescriptor;\n>\n> for (i = 0; i < tupleDesc -> natts; i++) {\n> attr = tupleDesc->attrs[i];\n> elog(WARNING, \"found column '%s'\", attr->attname.data);\n> }\n>\n> Now with a single row insert, this works as you'd expect:\n>\n> liz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world');\n> WARNING: found column 'key'\n> WARNING: found column 'value'\n> INSERT 0 1\n>\n> But with a multi-row, all the column names are empty:\n> liz=# INSERT INTO bhtable (key, value) VALUES ('hello', 'world'),\n> ('goodmorning', 'world');\n> WARNING: found column ''\n> WARNING: found column ''\n> WARNING: found column ''\n> WARNING: found column ''\n> INSERT 0 2\n>\n> It doesn't seem unreasonable to me that this data wouldn't be\n> duplicated, but there's no mention of how I would go about retriving\n> these column names for my individual rows, and most foreign data\n> wrappers I can find are write-only.\n>\n>\n\n\nThere are numerous writable FDWs, including postgres_fdw in contrib, and\na whole lot more listed at <https://wiki.postgresql.org/wiki/Fdw> That\nshould supply you with plenty of example code.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 18 Feb 2019 17:50:20 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing Column names with multi-insert"
}
]
[
{
"msg_contents": "Hello\n\nI just came across a crash while debugging some corrupted system\ncatalogs; pg_identify_object_as_address fails to cope with some NULL\ninput, causing a crash. Attached patch fixes it. Naturally, the output\narray will contain NULL members in the output, but that's better than\ncrashing ...\n\n(The first hunk is purely speculative; I haven't seen anything that\nrequires that. The actual fix is in the other hunks. But seems better\nto be defensive.)\n\nThe crash can be reproduced thusly\n\ncreate function f() returns int language plpgsql as $$ begin return 1; end; $$;\nupdate pg_proc set pronamespace = 9999 where proname = 'f' returning oid \\gset\nselect * from pg_identify_object_as_address('pg_proc'::regclass, :oid, 0);\n\nAfter the patch, the last line returns:\n\n type | object_names | object_args \n----------+--------------+-------------\n function | {NULL,f} | {}\n\nwhere the NULL obviously corresponds to the bogus pg_namespace OID being\nreferenced.\n\nThe patch is on 9.6. I checked 10 and it applies fine there. My\nintention is to apply to all branches since 9.5.\n\n-- \nÁlvaro Herrera PostgreSQL Expert, https://www.2ndQuadrant.com/",
"msg_date": "Mon, 18 Feb 2019 17:27:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "crash in pg_identify_object_as_address"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I just came across a crash while debugging some corrupted system\n> catalogs; pg_identify_object_as_address fails to cope with some NULL\n> input, causing a crash. Attached patch fixes it. Naturally, the output\n> array will contain NULL members in the output, but that's better than\n> crashing ...\n\nHm, does this overlap with Paquier's much-delayed patch in\nhttps://commitfest.postgresql.org/22/1947/\n?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 17:13:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: crash in pg_identify_object_as_address"
},
{
"msg_contents": "On Mon, Feb 18, 2019 at 05:13:27PM -0500, Tom Lane wrote:\n> Hm, does this overlap with Paquier's much-delayed patch in\n> https://commitfest.postgresql.org/22/1947/\n\nIt partially overlaps, still my patch set would crash as well in that\ncase. Treating object_names the same way as object_args sounds good\nto me, as well as making strlist_to_textarray smarter to treat NULL\ninputs.\n--\nMichael",
"msg_date": "Tue, 19 Feb 2019 11:21:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: crash in pg_identify_object_as_address"
},
{
"msg_contents": "Pushed, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 11:27:23 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: crash in pg_identify_object_as_address"
}
]
[
{
"msg_contents": "Hi,\n\n\nIn the PostgreSQL Documentation, both ECPG PREPARE and SQL statement PREPARE can be used in ECPG [1].\nHowever, SQL statement PREPARE does not work.\n\nI wrote the source code as follows.\n\n<test_app.pgc>\n============================\n EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id = $1;\n EXEC SQL EXECUTE test_prep (2);\n============================\n\nPostgreSQL 11.2 ECPG produced following code.\n\n<test_app.c>\n============================\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"test_prep\\\" ( int ) as \\\" select id from test_table where id = $1 \\\"\", ECPGt_EOIT, ECPGt_EORT);\n#line 16 \"test_app.pgc\"\n\nif (sqlca.sqlcode < 0) error_exit ( );}\n#line 16 \"test_app.pgc\"\n\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, \"test_prep\", ECPGt_EOIT, ECPGt_EORT);\n#line 17 \"test_app.pgc\"\n\nif (sqlca.sqlcode < 0) error_exit ( );}\n#line 17 \"test_app.pgc\"\n============================\n\n\nThere are following problems.\n\n(1)\nWhen I run this program, it failed with \"PostgreSQL error : -202[too few arguments on line 16]\".\nThe reason is ECPGdo has no argument though prepare statement has \"$1\".\n\n(2)\nI want to execute test_prep (2), but ECPGst_execute does not have argument.\n\n\nCan SQL statement PREPARE be really used in ECPG?\n\n\n[1] - https://www.postgresql.org/docs/11/ecpg-sql-prepare.html\n\n\nRegards,\nRyohei Takahashi\n\n\n",
"msg_date": "Tue, 19 Feb 2019 01:04:46 +0000",
"msg_from": "\"Takahashi, Ryohei\" <r.takahashi_2@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi\n\nI think SQL statement PREPARE *without* parameter is supported,\nbut one with parameter is not supported (or has some fatal bugs).\n\nBecause route for SQL statement PREPARE (line-1837 of preproc.y) always has output an invalid SQL statement and\nthere is no regression test for SQL statement PREPARE.\n\n[preproc.y]\n 1832 | PrepareStmt\n 1833 {\n 1834 if ($1.type == NULL || strlen($1.type) == 0)\n 1835 output_prepare_statement($1.name, $1.stmt);\n 1836 else\n 1837 output_statement(cat_str(5, mm_strdup(\"prepare\"), $1.name, $1.type, mm_strdup(\"as\"), $1.stmt), 0, ECPGst_normal);\n 1838 }\n\nThe next is log of ECPGdebug() and PQtrace() for the following statement.\n\n exec sql prepare st(int) as select col1 from foo;\n\n[14968]: ecpg_execute on line 17: query: prepare \"st\" ( int ) as \" select 1 \"; with 0 parameter(s) on connection conn\nTo backend> Msg Q\nTo backend> \"prepare \"st\" ( int ) as \" select 1 \"\"\nTo backend> Msg complete, length 42\n2019-02-19 06:23:30.429 UTC [14969] ERROR: syntax error at or near \"\" select 1 \"\" at character 25\n2019-02-19 06:23:30.429 UTC [14969] STATEMENT: prepare \"st\" ( int ) as \" select 1 \"\n\n\nRegards\nRyo Matsumura\n\n",
"msg_date": "Tue, 19 Feb 2019 06:58:51 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "> I think SQL statement PREPARE *without* parameter is supported,\n> but one with parameter is not supported (or has some fatal bugs).\n\nIt surely should be supported.\n\n>> I wrote the source code as follows.\n>> <test_app.pgc>\n>> ============================\n>> EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where\nid = $1;\n>> EXEC SQL EXECUTE test_prep (2);\n>> ============================\n\nPlease try this instead:\n\nEXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id\n= $1;\nEXEC SQL EXECUTE test_prep using 2;\n\nThis should work. \n\nAnd yes, it does look like a bug to me, or better like changes in the\nbackend that were not synced to ecpg.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Tue, 19 Feb 2019 13:14:31 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san,\r\n\r\n\r\nThank you for your replying.\r\n\r\n\r\n> Please try this instead:\r\n> \r\n> EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id\r\n> = $1;\r\n> EXEC SQL EXECUTE test_prep using 2;\r\n> \r\n> This should work.\r\n\r\n\r\nI tried as follows.\r\n\r\n<test_app.pgc>\r\n============================\r\n EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id = $1;\r\n EXEC SQL EXECUTE test_prep using 2;\r\n============================\r\n\r\n<test_app.c>\r\n============================\r\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"test_prep\\\" ( int ) as \\\" select id from test_table where id = $1 \\\"\", ECPGt_EOIT, ECPGt_EORT);\r\n#line 16 \"test_app.pgc\"\r\n\r\nif (sqlca.sqlcode < 0) error_exit ( );}\r\n#line 16 \"test_app.pgc\"\r\n\r\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, \"test_prep\",\r\n ECPGt_const,\"2\",(long)1,(long)1,strlen(\"2\"),\r\n ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);\r\n#line 17 \"test_app.pgc\"\r\n\r\nif (sqlca.sqlcode < 0) error_exit ( );}\r\n#line 17 \"test_app.pgc\"\r\n============================\r\n\r\n\r\nUnfortunately, this does not work.\r\nECPGst_execute seems good, but prepare statement is the same as my first post.\r\nIt fails with \"PostgreSQL error : -202[too few arguments on line 16]\".\r\n\r\nThis error is caused by following source code.\r\n\r\n\r\n<execute.c ecpg_build_params()>\r\n\t/* Check if there are unmatched things left. 
*/\r\n\tif (next_insert(stmt->command, position, stmt->questionmarks, std_strings) >= 0)\r\n\t{\r\n\t\tecpg_raise(stmt->lineno, ECPG_TOO_FEW_ARGUMENTS,\r\n\t\t\t\t ECPG_SQLSTATE_USING_CLAUSE_DOES_NOT_MATCH_PARAMETERS, NULL);\r\n\t\tecpg_free_params(stmt, false);\r\n\t\treturn false;\r\n\t}\r\n\r\n<execute.c next_insert()>\r\n\t\t\tif (text[p] == '$' && isdigit((unsigned char) text[p + 1]))\r\n\t\t\t{\r\n\t\t\t\t/* this can be either a dollar quote or a variable */\r\n\t\t\t\tint\t\t\ti;\r\n\r\n\t\t\t\tfor (i = p + 1; isdigit((unsigned char) text[i]); i++)\r\n\t\t\t\t\t /* empty loop body */ ;\r\n\t\t\t\tif (!isalpha((unsigned char) text[i]) &&\r\n\t\t\t\t\tisascii((unsigned char) text[i]) &&text[i] != '_')\r\n\t\t\t\t\t/* not dollar delimited quote */\r\n\t\t\t\t\treturn p;\r\n\t\t\t}\r\n\r\n\r\nI think next_insert() should ignore \"$n\" in the case of SQL statement PREPARE.\r\n\r\n\r\nIn addition, we should fix following, right?\r\n\r\n(1)\r\nAs Matsumura-san wrote, ECPG should not produce '\"' for SQL statement PREPARE.\r\n\r\n(2)\r\nECPG should produce argument for execute statement such as \"EXEC SQL EXECUTE test_prep (2);\"\r\n\r\n\r\n\r\nRegards,\r\nRyohei Takahashi\r\n\r\n",
"msg_date": "Wed, 20 Feb 2019 01:53:22 +0000",
"msg_from": "\"Takahashi, Ryohei\" <r.takahashi_2@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi,\r\n\r\nMaybe, there is no work-around.\r\n\r\n\r\nFor supporting it, there are two steps.\r\nstep1. fix for PREPARE.\r\nstep2. fix for EXECUTE.\r\n\r\n\r\nAbout step1, there are two ways.\r\nI want to choose Idea-2.\r\n\r\nIdea-1.\r\necpglib prepares Oids of type listed in PREPARE statement for 5th argument of PQprepare().\r\nBut it's difficult because ecpg has to know Oids of type.\r\n# Just an idea, create an Oid list in parsing.\r\n\r\nIdea-2.\r\nUse ECPGdo with whole PREPARE statement. In this way, there is no problem about Oid type because backend resolves it.\r\nI think the current implementation may aim to it.\r\n\r\nIf we choose Idea-2, we should make a valid SQL-command (remove double quotation) and avoid any processing about prep_type_clause and PreparableStmt except for parsing.\r\nOne such processing is the check of the number of parameters that caused the error.\r\nIt may take time, but it's easier than Idea-1.\r\n\r\nIs the direction of fixing good?\r\n\r\nAbout step2, there is the work-around pointed out by Meskes-san.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Wed, 20 Feb 2019 10:40:26 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Matsumura-san,\n\n> Maybe, there is no work-around.\n\nDid you analyze the bug? Do you know where it comes from?\n\n> For supporting it, there are two steps.\n\nCould you please start with explaining where you see the problem? I'm\nactually not sure what you are trying to do here.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Wed, 20 Feb 2019 12:41:17 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Takahashi-san,\n\n> I tried as follows.\n> ...\n> Unfortunately, this does not work.\n> ECPGst_execute seems good, but prepare statement is the same as my\n> first post.\n\nAh right, my bad. The workaround should have been:\n\nEXEC SQL PREPARE test_prep from \"SELECT id from test_table where id =\n$1\";\nEXEC SQL EXECUTE test_prep using 2;\n\n> It fails with \"PostgreSQL error : -202[too few arguments on line\n> 16]\".\n> \n> This error is caused by following source code.\n> ...\n> I think next_insert() should ignore \"$n\" in the case of SQL statement\n> PREPARE.\n\nActually I am not so sure.\n\n> In addition, we should fix following, right?\n> \n> (1)\n> As Matsumura-san wrote, ECPG should not produce '\"' for SQL statement\n> PREPARE.\n\nWhy's that?\n\n> (2)\n> ECPG should produce argument for execute statement such as \"EXEC SQL\n> EXECUTE test_prep (2);\"\n\nCorrect.\n\nAs for the PREPARE statement itself, could you try the attached small\npatch please.\n\nThis seems to create the correct ECPGPrepare call, but I have not yet\ntested it for many use cases.\n\nThanks.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL",
"msg_date": "Wed, 20 Feb 2019 12:47:00 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Takahashi-san,\n\n> EXEC SQL EXECUTE test_prep (2);\n\nCould you please also verify for me if this works correctly if you use\na variable instead of the const? As in:\n\nEXEC SQL BEGIN DECLARE SECTION;\nint i=2;\nEXEC SQL END DECLARE SECTION;\n...\nEXEC SQL EXECUTE test_prep (:i);\n\nI guess the problem is that constants in this subtree are move to the\noutput literally instead treated as parameters.\n\nThanks.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Wed, 20 Feb 2019 13:17:12 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san,\r\n\r\n\r\n> Ah right, my bad. The workaround should have been:\r\n\r\nThank you. It works.\r\n\r\n\r\n> As for the PREPARE statement itself, could you try the attached small\r\n> patch please.\r\n\r\nIt works well for my statement\r\n\"EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id = $1;\".\r\n\r\nHowever, since data type information is not used, it does not work well\r\nfor prepare statements which need data type information such as \r\n\"EXEC SQL PREPARE test_prep (int, int) AS SELECT $1 + $2;\".\r\n\r\nIt fails with \"PostgreSQL error : -400[operator is not unique: unknown + unknown on line 20]\".\r\n\r\n(Of course, \"EXEC SQL PREPARE test_prep AS SELECT $1::int + $2::int;\" works well.)\r\n\r\n\r\n> Could you please also verify for me if this works correctly if you use\r\n> a variable instead of the const? As in:\r\n\r\n> EXEC SQL BEGIN DECLARE SECTION;\r\n> int i=2;\r\n> EXEC SQL END DECLARE SECTION;\r\n> ...\r\n> EXEC SQL EXECUTE test_prep (:i);\r\n\r\nIt also works.\r\n(Actually, I wrote \"EXEC SQL EXECUTE test_prep (:i) INTO :ID;\".)\r\n\r\nECPG produced as follows.\r\n\r\n============================\r\nECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, \"test_prep\",\r\n ECPGt_int,&(i),(long)1,(long)1,sizeof(int),\r\n ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT,\r\n ECPGt_int,&(ID),(long)1,(long)1,sizeof(int),\r\n ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);\r\n============================\r\n\r\n\r\nRegards,\r\nRyohei Takahashi\r\n\r\n",
"msg_date": "Thu, 21 Feb 2019 05:41:51 +0000",
"msg_from": "\"Takahashi, Ryohei\" <r.takahashi_2@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Takahashi-san,\n\n> It works well for my statement\n> \n> \"EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where\n> id = $1;\".\n> \n> However, since data type information is not used, it does not works\n> well\n> for prepare statements which need data type information such as \n> \"EXEC SQL PREPARE test_prep (int, int) AS SELECT $1 + $2;\".\n\nYes, that was to be expected. This is what Matsumura-san was trying to\nfix. However, I'm not sure yet which of his ideas is the best. \n\n> It also works.\n> (Actually, I wrote \"EXEC SQL EXECUTE test_prep (:i) INTO :ID;\".)\n\nOk, thanks. That means the parser has to handle the list of execute\narguments differently, which in hindsight is pretty obvious. \n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Thu, 21 Feb 2019 10:02:40 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\n> Did you analyze the bug? Do you know where it comes from?\r\n\r\nAt first, I show the flow of Prepare statement without AS clause and\r\nthe flow of Prepare statement with AS clause but without parameter list.\r\n\r\n[preproc/preproc.y]\r\n 1832 | PrepareStmt\r\n 1834 if ($1.type == NULL || strlen($1.type) == 0)\r\n 1835 output_prepare_statement($1.name, $1.stmt);\r\n\r\n[preproc/output.c]\r\n168 output_prepare_statement(char *name, char *stmt)\r\n169 {\r\n170 fprintf(base_yyout, \"{ ECPGprepare(__LINE__, %s, %d, \", connection ? connection : \"NULL\", questionmarks);\r\n171 output_escaped_str(name, true);\r\n172 fputs(\", \", base_yyout);\r\n173 output_escaped_str(stmt, true);\r\n174 fputs(\");\", base_yyout);\r\n\r\nIt makes the following C-program and it can work.\r\n\r\n /* exec sql prepare st as select 1; */\r\n ECPGprepare(__LINE__, NULL, 0, \"st\", \" select 1 \");\r\n\r\n /* exec sql prepare st from \"select 1\"; */\r\n ECPGprepare(__LINE__, NULL, 0, \"st\", \"select 1\");\r\n\r\n /* exec sql prepare st from \"select ?\"; */\r\n ECPGprepare(__LINE__, NULL, 0, \"st\", \"select ?\");\r\n\r\necpglib processes as the following:\r\n\r\n[ecpglib/prepare.c]\r\n174 ECPGprepare(int lineno, const char *connection_name, const bool questionmarks,\r\n175 const char *name, const char *variable)\r\n199 this = ecpg_find_prepared_statement(name, con, &prev);\r\n200 if (this && !deallocate_one(lineno, ECPG_COMPAT_PGSQL, con, prev, this))\r\n201 return false;\r\n203 return prepare_common(lineno, con, name, variable);\r\n\r\n[ecpglib/prepare.c]\r\n115 prepare_common(int lineno, struct connection *con, const char *name, const char *variable)\r\n135 stmt->lineno = lineno;\r\n136 stmt->connection = con;\r\n137 stmt->command = ecpg_strdup(variable, lineno);\r\n138 stmt->inlist = stmt->outlist = NULL;\r\n141 replace_variables(&(stmt->command), lineno);\r\n144 this->name = ecpg_strdup(name, lineno);\r\n145 this->stmt = stmt;\r\n148 query = 
PQprepare(stmt->connection->connection, name, stmt->command, 0, NULL);\r\n\r\nThe following is log of PQtrace().\r\n To backend> Msg P\r\n To backend> \"st\"\r\n To backend> \"select $1\"\r\n To backend (2#)> 0\r\n [6215]: prepare_common on line 21: name st; query: \"select $1\"\r\n\r\nAn important point of the route is that it calls PQprepare() and PQprepare()\r\nneeds type-Oid list. (Idea-1) If we fix for Prepare statement with AS clause and\r\nwith parameter list to walk through the route, preprocessor must parse the parameter list and\r\npreprocessor or ecpglib must make type-Oid list. I think it's difficult.\r\nEspecially, I wonder if it can treat user defined type and complex structure type.\r\n\r\n\r\nAt second, I show the flow of Prepare statement with AS clause.\r\n\r\n 1836 else\r\n 1837 output_statement(cat_str(5, mm_strdup(\"prepare\"), $1.name, $1.type, mm_strdup(\"as\"), $1.stmt), 0, ECPGst_normal);\r\n\r\nIt makes the following C-program, but it cannot work because AS clause is double quoted.\r\nSo there is no work-around for this route.\r\n\r\n /* exec sql prepare st(int) as select $1; */\r\n ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"st\\\" ( int ) as \\\" select $1 \\\"\", ECPGt_EOIT, ECPGt_EORT);\r\n\r\nWhen it runs, the following error is occured.\r\n [5895]: raising sqlcode -202 on line 20: too few arguments on line 20\r\n SQL error: too few arguments on line 20\r\n\r\nThe following may be expected.\r\n ECPGdo(__LINE__, 0 , 1, NULL, 0, ECPGst_normal, \"prepare st ( int ) as select $1 \", ECPGt_EOIT, ECPGt_EORT);\r\n\r\nEven if the above C-program is made, another error is occured.\r\nThe error is occured in the following flow.\r\n\r\n[ecpglib/execute.c]\r\n1196 ecpg_build_params(struct statement *stmt)\r\n1214 var = stmt->inlist;\r\n1215 while (var)\r\n\r\n ecpg_store_input(var--->tobeinserted)\r\n\r\n1393 if ((position = next_insert(stmt->command, position, stmt->questionmarks, std_strings) + 1) == 0)\r\n\r\n1411 if 
(var->type == ECPGt_char_variable)\r\n1413 int ph_len = (stmt->command[position] == '?') ? strlen(\"?\") : strlen(\"$1\");\r\n1415 if (!insert_tobeinserted(position, ph_len, stmt, tobeinserted))\r\n\r\n1428 else if (stmt->command[position] == '0')\r\n1430 if (!insert_tobeinserted(position, 2, stmt, tobeinserted))\r\n\r\n1437 else\r\n1468 if (stmt->command[position] == '?')\r\n1480 snprintf(tobeinserted, buffersize, \"$%d\", counter++);\r\n1474 if (!(tobeinserted = (char *) ecpg_alloc(buffersize, stmt->lineno)))\r\n\r\n1492 var = var->next;\r\n1493 }\r\n\r\n1495 /* Check if there are unmatched things left. */\r\n1496 if (next_insert(stmt->command, position, stmt->questionmarks, std_strings) >= 0)\r\n1497 {\r\n1498 ecpg_raise(stmt->lineno, ECPG_TOO_FEW_ARGUMENTS,\r\n1499 ECPG_SQLSTATE_USING_CLAUSE_DOES_NOT_MATCH_PARAMETERS, NULL);\r\n *** The above is raised. ***\r\n\r\nThe checking (line-1495) is meaningless for AS clause.\r\nIt checks if all $0 is replaced to literal and all ? is replaced to $[0-9]* by insert_tobeinserted(),\r\nbut it always fails because $[0-9]* in AS clause are not replaced (and should not be replaced).\r\nI don't search if there is other similar case. It is Idea-2.\r\n\r\nWhat is ECPGt_char_variable?\r\n[preproc.y]\r\n 65 static struct ECPGtype ecpg_query = {ECPGt_char_variable, NULL, NULL, NULL, {NULL}, 0};\r\n15333 ECPGCursorStmt: DECLARE cursor_name cursor_options CURSOR opt_hold FOR prepared_name\r\n15367 thisquery->type = &ecpg_query;\r\n15381 add_variable_to_head(&(this->argsinsert), thisquery, &no_indicator);\r\n\r\nWhat is $0?\r\n In ECPG, the followings can be specified by host variable.\r\n - cursor name\r\n - value of ALTER SYSTEM SET statement\r\n e.g. ALTER SYSTEM SET aaaa = $1\r\n - fetch counter\r\n e.g. 
FETCH ABSOLUTE count\r\n\r\n Basically, ECPG-preprocessor changes the host variables to $[0-9]* and adds\r\n variables to arguments of ECPGdo, and ecpglib calls PQexecParams(stmt, vars).\r\n In case of the above, they cannot be passed to vars of PQexecParams() because \r\n backend cannot accept them.\r\n So ecpg_build_params() replace $0 to literal.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 22 Feb 2019 08:49:10 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\nI made a mistake.\r\n\r\n> The checking (line-1495) is meaningless for AS clause.\r\n> It checks if all $0 is replaced to literal and all ? is replaced to $[0-9]* by insert_tobeinserted(),\r\n> but it always fails because $[0-9]* in AS clause are not replaced (and should not be replaced).\r\n> I don't search if there is other similar case. It is Idea-2.\r\n\r\nIt checks if a number of variables equals a number of $* after replacing $0 and ?.\r\nIt always fails because there is no variable for $* in AS clause.\r\nWe should skip AS clause at the checking.\r\n\r\n\r\nUmm... The skipping seems to be not easy too.\r\n\r\nnext_insert(char *text, int pos, bool questionmarks, bool std_strings)\r\n{\r\n pos = get_pos_of_as_clause(text); <-- parse text in ecpglib???\r\n for (; text[p] != '\\0'; p++)\r\n if(is_prepare_statement(stmt) && invalid_pos(pos))\r\n break;\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 22 Feb 2019 09:59:26 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\r\n\r\n\r\nThank you for your explaining.\r\n\r\n> An important point of the route is that it calls PQprepare() and PQprepare()\r\n> needs type-Oid list. (Idea-1) If we fix for Prepare statement with AS clause and\r\n> with parameter list to walk through the route, preprocessor must parse the parameter list and\r\n> preprocessor or ecpglib must make type-Oid list. I think it's difficult.\r\n> Especially, I wonder if it can treat user defined type and complex structure type.\r\n\r\nI agree.\r\nIn the case of standard types, ECPG can get oids from pg_type.h.\r\nHowever, in the case of user defined types, ECPG needs to access pg_type table and it is overhead.\r\n\r\nTherefore, the second idea seems better.\r\n\r\n\r\nBy the way, should we support prepare statement like following?\r\n(I think yes.)\r\n\r\n============================\r\nEXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where id = :ID or id =$1;\r\n============================\r\n\r\nCurrent ECPG produces following code.\r\n\r\n============================\r\nECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"test_prep\\\" ( int ) as \\\" select id from test_table where id = $1 or id = $1 \\\"\",\r\n ECPGt_int,&(ID),(long)1,(long)1,sizeof(int),\r\n ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT, ECPGt_EORT);\r\n============================\r\n\r\n\r\nIn this case, both \":ID\" and \"$1\" in the original statement are converted to \"$1\" and ECPGdo() cannot distinguish them.\r\nTherefore, ECPG should produce different code.\r\n\r\nFor example, \r\n- ECPG convert \":ID\" to \"$1\" and \"$1\" in the original statement to \"$$1\"\r\n- next_insert() do not check \"$$1\"\r\n- ECPGdo() reconvert \"$$1\" to \"$1\"\r\n\r\n\r\n\r\nRegards,\r\nRyohei Takahashi\r\n\r\n\r\n",
"msg_date": "Tue, 26 Feb 2019 02:45:58 +0000",
"msg_from": "\"Takahashi, Ryohei\" <r.takahashi_2@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Takahashi-san\n\n> In the case of standard types, ECPG can get oids from pg_type.h.\n> However, in the case of user defined types, ECPG needs to access\n> pg_type table and it is overhead.\n\nThe overhead wouldn't be too bad. In fact it's already done, at least\nsometimes. Please check ecpg_is_type_an_array().\n\n> By the way, should we support prepare statement like following?\n> (I think yes.)\n\nIf the standard allows it, we want to be able to process it.\n\n> ============================\n> EXEC SQL PREPARE test_prep (int) AS SELECT id from test_table where\n> id = :ID or id =$1;\n> ============================\n> \n> Current ECPG produces following code.\n> \n> ============================\n> ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"test_prep\\\"\n> ( int ) as \\\" select id from test_table where id = $1 or id = $1\n> \\\"\",\n> ECPGt_int,&(ID),(long)1,(long)1,sizeof(int),\n> ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT,\n> ECPGt_EORT);\n> ============================\n> \n> \n> In this case, both \":ID\" and \"$1\" in the original statement are\n> converted to \"$1\" and ECPGdo() cannot distinguish them.\n> Therefore, ECPG should produce different code.\n\nI agree. It seems that stuff really broke over the years and nobody\nnoticed, sigh.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Tue, 26 Feb 2019 10:09:36 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san, Takahashi-san\r\n\r\n> If the standard allows it, we want to be able to process it.\r\n\r\nI will try to implement it with the Idea-2 that doesn't use PQprepare() and\r\nTakahasi-san's following idea.\r\n\r\n> For example, \r\n> - ECPG convert \":ID\" to \"$1\" and \"$1\" in the original statement to \"$$1\"\r\n> - next_insert() do not check \"$$1\"\r\n> - ECPGdo() reconvert \"$$1\" to \"$1\"\r\n\r\nBut I will probably be late because I don't understand parse.pl very well.\r\nI think that the following rule is made by parse.pl.\r\n\r\n\t PreparableStmt:\r\n\t SelectStmt\r\n\t {\r\n\t is_in_preparable_stmt = true; <--- I want to add it.\r\n\t $$ = $1;\r\n\t}\r\n\t| InsertStmt\r\n\t.....\r\n\r\nThe above variable is used in ecpg.trailer.\r\n\r\n\tecpg_param: PARAM {\r\n\t\tif(is_in_preparable_stmt)\r\n\t\t\t$$ = mm_strdup(replace_dollar_to_something());\r\n\t\telse\r\n\t\t\t $$ = make_name();\r\n\t } ;\r\n\r\n\r\nI will use @1 instend of $$1 because the replacing is almost same as the existing replacing function in ecpglib.\r\nIs it good?\r\n\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 1 Mar 2019 08:41:58 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> But I will probably be late because I don't understand parse.pl very\n> well.\n> I think that the following rule is made by parse.pl.\n> \n> \t PreparableStmt:\n> \t SelectStmt\n> \t {\n> \t is_in_preparable_stmt = true; <--- I want to add it.\n> \t $$ = $1;\n> \t}\n> \t| InsertStmt\n> \t.....\n\nThe only way to add this is by creating a replacement production for\nthis rule. parse.pl cannot do it itself. \n\n> I will use @1 instend of $$1 because the replacing is almost same as\n> the existing replacing function in ecpglib.\n> Is it good?\n\nI'd say so.\n\nThanks.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Fri, 01 Mar 2019 11:10:45 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\nThank you for your comment.\r\n\r\n> The only way to add this is by creating a replacement production for\r\n> this rule. parse.pl cannot do it itself.\r\n\r\nI will read README.parser, ecpg.addons, and *.pl to understand.\r\n\r\n> > I will use @1 instend of $$1 because the replacing is almost same as\r\n> > the existing replacing function in ecpglib.\r\n> > Is it good?\r\n> \r\n> I'd say so.\r\n\r\nI try it.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 1 Mar 2019 10:42:34 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> I will read README.parser, ecpg.addons, and *.pl to understand.\n\nFeel free to ask, when anything comes up.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Fri, 01 Mar 2019 12:41:36 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san\r\n\r\nI must use a midrule action like the following that works as expected.\r\nI wonder how to write the replacement to ecpg.addons.\r\nI think it's impossible, right? Please give me some advice.\r\n\r\n PrepareStmt:\r\nPREPARE prepared_name prep_type_clause AS\r\n\t{\r\n\t\tprepared_name = $2;\r\n\t\tprepare_type_clause = $3;\r\n\t\tis_in_preparable_stmt = true;\r\n\t}\r\n\tPreparableStmt\r\n\t{\r\n\t\t$$.name = prepared_name;\r\n\t\t$$.type = prepare_type_clause;\r\n\t\t$$.stmt = $6;\r\n\t\tis_in_preparable_stmt = false;\r\n\t}\r\n\t| PREPARE prepared_name FROM execstring\r\n\r\nRegards\r\nRyo Matsumura\r\n\r\n> -----Original Message-----\r\n> From: Michael Meskes [mailto:meskes@postgresql.org]\r\n> Sent: Friday, March 1, 2019 8:42 PM\r\n> To: Matsumura, Ryo/松村 量 <matsumura.ryo@jp.fujitsu.com>; Takahashi,\r\n> Ryohei/高橋 良平 <r.takahashi_2@jp.fujitsu.com>;\r\n> 'pgsql-hackers@postgresql.org' <pgsql-hackers@postgresql.org>\r\n> Subject: Re: SQL statement PREPARE does not work in ECPG\r\n> \r\n> Hi Matsumura-san,\r\n> \r\n> > I will read README.parser, ecpg.addons, and *.pl to understand.\r\n> \r\n> Feel free to ask, when anything comes up.\r\n> \r\n> Michael\r\n> --\r\n> Michael Meskes\r\n> Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\r\n> Meskes at (Debian|Postgresql) dot Org\r\n> Jabber: michael at xmpp dot meskes dot org\r\n> VfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\r\n> \r\n\r\n",
"msg_date": "Fri, 1 Mar 2019 12:30:01 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> I must use a midrule action like the following that works as\n> expected.\n> I wonder how to write the replacement to ecpg.addons.\n> I think it's impossible, right? Please give me some advice.\n\nYou are right, for this change you have to replace the whole rule. This\ncannot be done with an addon. Please see the attached for a way to do\nthis, untested though.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL",
"msg_date": "Fri, 01 Mar 2019 14:01:07 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san\r\n\r\n\r\nThank you for your advice.\r\n\r\nI attach a patch.\r\nI didn't add additional tests to regression yet.\r\n\r\n\r\nThe patch allows the following:\r\n\r\n exec sql prepare(int) as select $1;\r\n exec sql execute st(1) into :out;\r\n\r\n exec sql prepare(text, text) as select $1 || $2;\r\n exec sql execute st('aaa', 'bbb') into :out;\r\n\r\nBut it doesn't allow to use host variable in parameter clause of EXECUTE statement like the following.\r\nI'm afraid that it's not usefull. I will research the standard and other RDBMS.\r\nIf you have some information, please adivise to me.\r\n\r\n exec sql begin declare section;\r\n int var;\r\n exec sql end declare section;\r\n\r\n exec sql prepare(int) as select $1;\r\n exec sql execute st(:var) into :out;\r\n\r\n SQL error: bind message supplies 1 parameters, but prepared statement \"\" requires 0\r\n\r\n\r\n\r\nI explain about the patch.\r\n\r\n* PREPARE FROM or PREPARE AS without type clause\r\n It uses PQprepare(). It's not changed.\r\n\r\n [Preprocessor output]\r\n /* exec sql prepare st from \"select ?\"; */\r\n { ECPGprepare(__LINE__, NULL, 0, \"st\", \"select ?\");\r\n\r\n /* exec sql prepare st as select 1; */\r\n { ECPGprepare(__LINE__, NULL, 0, \"st\", \" select 1 \");\r\n\r\n\r\n* PREPARE AS with type clause\r\n It doesn't use PQprepare() but uses PQexecuteParams().\r\n\r\n [Preprocessor output]\r\n /* exec sql prepare st(text, text) as select $1 || '@2'; */\r\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"prepare \\\"st\\\" ( text , text ) as select @1 || '@2'\", ECPGt_EOIT, ECPGt_EORT);\r\n\r\n $1 in as clause is replaced by preprocessor at ecpg_param rule.\r\n @1 is replaced to $1 by ecpglib at end of ecpg_build_params().\r\n\r\n\r\n* EXECUTE without type clause\r\n It uses PQexecPrepared(). 
It's not changed.\r\n\r\n [Preprocessor output]\r\n /* exec sql execute st into :ovar using :var; */\r\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_execute, \"st\",\r\n ECPGt_int,&(var),(long)1,.....\r\n\r\n* EXECUTE with parameter clause\r\n It uses PQexecuteParams().\r\n\r\n [Preprocessor output]\r\n /* exec sql execute st('abcde') into :ovar_s; */\r\n { ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"execute \\\"st\\\" ( 'abcde' )\", ECPGt_EOIT,\r\n .....\r\n\r\nThis approach causes the above constraint that users cannot use host variables in parameter clause in EXECUTE statement\r\nbecause ecpglib sends 'P' message with \"execute \\\"st\\\" ($1)\" and sends 'B' one parameter, but backend always regards the number of parameters in EXECUTE statement as zero.\r\nI don't have any other idea...\r\n\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Mon, 4 Mar 2019 12:41:05 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi all,\n\n> ...\n> But it doesn't allow to use host variable in parameter clause of\n> EXECUTE statement like the following.\n> I'm afraid that it's not usefull. I will research the standard and\n> other RDBMS.\n> If you have some information, please adivise to me.\n\nThis also seems to be conflicting with\nbd7c95f0c1a38becffceb3ea7234d57167f6d4bf. If we want to keep this\ncommit in for the release, I think we need to get these things fixed. \n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Wed, 06 Mar 2019 13:18:32 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san\r\n\r\nThank you for your comment.\r\nI write three points in this mail.\r\n\r\n1.\r\n> This also seems to be conflicting with\r\n> bd7c95f0c1a38becffceb3ea7234d57167f6d4bf. If we want to keep this\r\n> commit in for the release, I think we need to get these things fixed. \r\n\r\nI understand it.\r\nMy idea is that add an argument for statement-name to ECPGdo() or\r\nadd a new function that is a wrapper of ECPGdo() and has the argument.\r\nThe argument is used for lookup related connection. Is it good?\r\n\r\n\r\n2.\r\n> I wrote:\r\n> But it doesn't allow to use host variable in parameter clause of EXECUTE statement like the following.\r\n\r\nI found a way to support host variables in parameter list of EXECUTE statement.\r\necpg_build_params() replace each parameter to string-formatted data that can be\r\ncreated by ecpg_store_input(). I will try it.\r\n\r\n\r\n3.\r\nI found a bug in my patch. Replacing $ to @ in AS clause is not good\r\nbecause @ is absolute value operator.\r\nTherefore, the replacing cannot accept valid statement like the following.\r\n\r\n exec sql prepare st(int) select $1 + @1; -- It equals to \"select $1 + 1\"\r\n\r\nI choose $$1 unavoidably.\r\nOther character seems to be used as any operator.\r\n\r\n\r\nP.S.\r\n- PREPARE with FROM is the standard for Embedded SQL.\r\n- PREPARE with AS is not defined in the standard.\r\n- PREPARE with AS clause is PostgreSQL style.\r\n- Oracle and MySQL support only the standard.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Thu, 7 Mar 2019 09:35:21 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san\r\ncc: Takahashi-san, Kuroda-san, Ideriha-san\r\n\r\nI attach a new patch. Please review it.\r\n\r\n Excuse:\r\n It doesn't include regression tests and pass them.\r\n Because I must reset all expected C program of regression.\r\n # I add an argument to ECPGdo().\r\n\r\nI explain the patch as follows:\r\n\r\n1. Specification\r\n It accepts the following .pgc.\r\n I confirmed it works well for AT clause.\r\n All results for st1 and st2 are same.\r\n\r\n exec sql prepare st0 as select 1;\r\n exec sql prepare st1(int,int) as select $1 + 5 + $2;\r\n exec sql prepare st2 from \"select ? + 5 + ?\";\r\n exec sql prepare st3(bytea) as select octet_length($1);\r\n \r\n exec sql execute st0 into :ovar;\r\n exec sql execute st1(:var1,:var2) into :ovar;\r\n exec sql execute st1(11, :var2) into :ovar;\r\n exec sql execute st2(:var1,:var2) into :ovar;\r\n exec sql execute st2(11, :var2) into :ovar;\r\n exec sql execute st1 into :ovar using :var1,:var2;\r\n exec sql execute st2 into :ovar using :var1,:var2;\r\n exec sql execute st3(:b) into :ovar;\r\n\r\n2. Behavior of ecpglib\r\n(1) PREPARE with AS clause\r\n Ecpglib sends the PREPARE statement to backend as is. (using PQexec).\r\n\r\n(2) EXECUTE with parameter list\r\n Ecpglib sends the EXECUTE statement as is (using PQexec), but all host variables in\r\n the list are converted to string-formatted and embedded into the EXECUTE statement.\r\n\r\n(3) PREPARE with FROM clause (not changed)\r\n Ecpglib sends 'P' libpq-message with statement (using PQprepare).\r\n\r\n(4) EXECUTE without parameter list (not changed)\r\n Ecpglib sends 'B' libpq-message with parameters. (using PQexecPrepared).\r\n\r\n\r\n3. Change of preprocessor\r\n\r\n - I add ECPGst_prepare and ECPGst_execnormal.\r\n ECPGst_prepare is only for (1) and ECPGst_execnormal is only for (2).\r\n # I think the names are not good.\r\n\r\n - I add one argument to ECPGdo(). 
It's for prepared statement name.\r\n\r\n\r\n4.\r\nI wonder whether I should merge (3) to (1) and (4) to (4) or not.\r\n\r\n\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Wed, 13 Mar 2019 11:55:12 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi all and thank you Matsumura-san.\n\n> Excuse:\n> It doesn't include regression tests and pass them.\n> Because I must reset all expected C program of regression.\n> # I add an argument to ECPGdo().\n\nSure, let's do this at the very end.\n\n> 1. Specification\n> It accepts the following .pgc.\n> I confirmed it works well for AT clause.\n> All results for st1 and st2 are same.\n\nI have a similar text case and can confirm that the output is the same\nfor both ways of preparing the statement.\n\n> 2. Behavior of ecpglib\n> (1) PREPARE with AS clause\n> Ecpglib sends the PREPARE statement to backend as is. (using\n> PQexec).\n> \n> (2) EXECUTE with parameter list\n> Ecpglib sends the EXECUTE statement as is (using PQexec), but all\n> host variables in\n> the list are converted to string-formatted and embedded into the\n> EXECUTE statement.\n> \n> (3) PREPARE with FROM clause (not changed)\n> Ecpglib sends 'P' libpq-message with statement (using PQprepare).\n> \n> (4) EXECUTE without parameter list (not changed)\n> Ecpglib sends 'B' libpq-message with parameters. (using\n> PQexecPrepared).\n> \n> \n> 3. Change of preprocessor\n> \n> - I add ECPGst_prepare and ECPGst_execnormal.\n> ECPGst_prepare is only for (1) and ECPGst_execnormal is only for\n> (2).\n> # I think the names are not good.\n> \n> - I add one argument to ECPGdo().p It's for prepared statement name.\n\nOne question though, why is the statement name always quoted? Do we\nreally need that? Seems to be more of a hassle than and advantage.\n\n> 4.\n> I wonder whether I should merge (3) to (1) and (4) to (4) or not.\n\nI would prefer to merge as much as possible, as I am afraid that if we\ndo not merge the approaches, we will run into issues later. There was a\nreason why we added PQprepare, but I do not remember it from the top of\nmy head. 
Need to check when I'm online again.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Fri, 15 Mar 2019 22:34:47 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\nThank you for your comment.\r\n\r\n> One question though, why is the statement name always quoted? Do we\r\n> really need that? Seems to be more of a hassle than and advantage.\r\n\r\nThe following can be accepted by preproc, ecpglib, libpq, and backend in previous versions.\r\n exec sql prepare \"st x\" from \"select 1\";\r\n exec sql execute \"st x\";\r\n\r\nThe above was preprocessed to the following.\r\n PQprepare(conn, \"\\\"st x\\\"\", \"select 1\");\r\n PQexec(conn, \"\\\"st x\\\"\");\r\n\r\nBy the way, the following also can be accepted.\r\n PQexecParams(conn, \"prepare \\\"st x\\\" ( int ) as select $1\", 0, NULL, NULL, NULL, NULL, 0);\r\n PQexecParams(conn, \"execute \\\"st x\\\"( 1 )\", 0, NULL, NULL, NULL, NULL, 0);\r\n\r\nTherefore, I think that the quoting statement name is needed in PREPARE/EXECUTE case, too.\r\n\r\n> I would prefer to merge as much as possible, as I am afraid that if we\r\n> do not merge the approaches, we will run into issues later. There was a\r\n> reason why we added PQprepare, but I do not remember it from the top of\r\n> my head. Need to check when I'm online again.\r\n\r\nI will also consider it.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Mon, 18 Mar 2019 07:31:51 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Matsumura-san,\n\n> Therefore, I think that the quoting statement name is needed in\n> PREPARE/EXECUTE case, too.\n\nI agree that we have to accept a quoted statement name and your\nobservations are correct of course, I am merely wondering if we need\nthe escaped quotes in the call to the ecpg functions or the libpq\nfunctions.\n\n> > I would prefer to merge as much as possible, as I am afraid that if\n> > we\n> > do not merge the approaches, we will run into issues later. There\n> > was a\n> > reason why we added PQprepare, but I do not remember it from the\n> > top of\n> > my head. Need to check when I'm online again.\n> \n> I will also consider it.\n\nThank you.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Sun, 24 Mar 2019 02:03:43 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\nI'm sorry for my slow reply.\r\n\r\n> I agree that we have to accept a quoted statement name and your\r\n> observations are correct of course, I am merely wondering if we need\r\n> the escaped quotes in the call to the ecpg functions or the libpq\r\n> functions.\r\n\r\nThe following allows to use statement name including white space not double-quoted statement name.\r\n\r\n exec sql prepare \"st1 x\" from \"select 1\";\r\n\r\n# I don't know whether the existing ECPG allows it intentionally or not.\r\n# In honestly, I think that it's not necessary to allow it.\r\n\r\nIf we also allow the statement name including white space in PREPRARE AS,\r\nwe have to make backend parser to scan it as IDENT.\r\nDouble-quoting is one way. There may be another way.\r\n\r\nIf we want to pass double-quoted statement name to backend through libpq,\r\npreprocessor have to escape it.\r\n\r\n> > I would prefer to merge as much as possible, as I am afraid that if\r\n> > we\r\n> > do not merge the approaches, we will run into issues later. There\r\n> > was a\r\n> > reason why we added PQprepare, but I do not remember it from the\r\n> > top of\r\n> > my head. Need to check when I'm online again.\r\n> \r\n> I will also consider it.\r\n\r\nI cannot think of anything.\r\nI may notice if I try to merge.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Mon, 1 Apr 2019 09:29:06 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> If we also allow the statement name including white space in PREPRARE\n> AS,\n> we have to make backend parser to scan it as IDENT.\n\nCorrect, without quoting would only work when using PQprepare() et al.\n\n> I cannot think of anything.\n> I may notice if I try to merge.\n\nThanks.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Mon, 01 Apr 2019 14:14:56 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi, Meskes-san\r\n\r\nI'm sorry for my long blank. I restarted.\r\n\r\nReview of previous discussion:\r\nI made a patch that makes ecpglib to send \"PREPARE st(typelist) AS PreparableStmt\"\r\nwith PQexec(), because the type resolution is difficult.\r\nI tried to merge PREPARE FROM that uses PQprepare() to the PREPARE AS.\r\nMeskes-san pointed that there may be a problem that PREPARE FROM cannot use PQexec().\r\n\r\n\r\nNow, I noticed a problem of the merging.\r\nTherefore, I will not change the existing implementation of PREPARE FROM.\r\n\r\nThe problem is:\r\nStatement name of PREPARE FROM can include double quote, because the statement name\r\nis sent by PQprepare() directly and backend doesn't parse it.\r\nIn other hand, the statement name of PREPARE AS cannot include double quote,\r\nbecause it is embedded into query and backend parser disallows it.\r\nThis is a specification of PostgreSQL's PREPARE AS.\r\n\r\nI defined the following specifications. Please review it.\r\n* ECPG can accept any valid PREPARE AS statement.\r\n* A char-type host variable can be used as the statement name of PREPARE AS,\r\n but its value is constrained by the specification of PREPARE AS.\r\n (e.g. the name cannot include double quotation.)\r\n* The above also allows the following. It's a bit strange but there is no reason\r\n for forbidding.\r\n prepare :st(type_list) as select $1\r\n* ECPG can accept EXECUTE statement with expression list that is allocated\r\n by both PREPARE FROM and PREPARE AS under the following constraints:\r\n - It must not include a using-clause.\r\n - The statement name must follow to the specification of PREPARE AS.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Sun, 5 May 2019 19:37:40 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> I defined the following specifications. Please review it.\n> \n> * ECPG can accept any valid PREPARE AS statement.\n> * A char-type host variable can be used as the statement name of\n> PREPARE AS,\n> but its value is constrained by the specification of PREPARE AS.\n> (e.g. the name cannot include double quotation.)\n> * The above also allows the following. It's a bit strange but there\n> is no reason\n> for forbidding.\n> prepare :st(type_list) as select $1\n> * ECPG can accept EXECUTE statement with expression list that is\n> allocated\n> by both PREPARE FROM and PREPARE AS under the following\n> constraints:\n> - It must not include a using-clause.\n> - The statement name must follow to the specification of PREPARE\n> AS.\n\nThis look very reasonable to me. I'm completely fine with this\nrestriction being placed on PREPARE FROM.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Mon, 06 May 2019 15:37:34 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\n> This look very reasonable to me. I'm completely fine with this\r\n> restriction being placed on PREPARE FROM.\r\n\r\nThank you. I start making a new patch.\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Tue, 7 May 2019 01:30:57 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Meskes-san\r\n\r\nThere are two points.\r\n\r\n(1)\r\nI attach a new patch. Please review it.\r\n\r\n - Preproc replaces any prepared_name to \"$0\" and changes it to an input-variable\r\n for PREARE with typelist and EXECUTE with paramlist.\r\n $0 is replaced in ecpg_build_params().\r\n It's enable not to change ECPGdo interface.\r\n - Host variables can be used in paramlist of EXECUTE.\r\n\r\n(2)\r\nI found some bugs (two types). I didn't fix them and only avoid bison error.\r\n\r\nType1. Bugs or intentional unsupported features.\r\n - EXPLAIN EXECUTE\r\n - CREATE TABLE AS with using clause\r\n\r\n e.g.\r\n EXPLAIN EXECUTE st; /* It has not been supported before.*/\r\n\r\n ExplainableStmt:\r\n ExecuteStmt\r\n {\r\n - $$ = $1;\r\n + $$ = $1.name; /* only work arround for bison error */\r\n }\r\n\r\nType2. In multi-bytes encoding environment, a part of character of message is cut off.\r\n\r\n It may be very difficult to fix. I pretend I didn't see it for a while.\r\n\r\n [ecpglib/error.c]\r\n snprintf(sqlca->sqlerrm.sqlerrmc, sizeof(sqlca->sqlerrm.sqlerrmc), \"%s on line %d\", message, line);\r\n sqlca->sqlerrm.sqlerrml = strlen(sqlca->sqlerrm.sqlerrmc);\r\n ecpg_log(\"raising sqlstate %.*s (sqlcode %ld): %s\\n\",\r\n (int) sizeof(sqlca->sqlstate), sqlca->sqlstate, sqlca->sqlcode, sqlca->sqlerrm.sqlerrmc);\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Tue, 7 May 2019 12:56:57 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Hi Matsumura-san,\n\n> (1)\n> I attach a new patch. Please review it.\n> ...\n\nThis looks good to me. It passes all my tests, too.\n\nThere were two minor issues, the regression test did not run and gcc\ncomplained about the indentation in ECPGprepare(). Both I easily fixed.\n\nUnless somebody objects I will commit it as soon as I have time at\nhand. Given that this patch also and mostly fixes some completely\nbroken old logic I'm tempted to do so despite us being pretty far in\nthe release cycle. Any objections?\n\n> (2)\n> I found some bugs (two types). I didn't fix them and only avoid bison\n> error.\n> \n> Type1. Bugs or intentional unsupported features.\n> - EXPLAIN EXECUTE\n> - CREATE TABLE AS with using clause\n> ...\n\nPlease send a patch. I'm on vacation and won't be able to spend time on\nthis for the next couple of weeks.\n\n> Type2. In multi-bytes encoding environment, a part of character of\n> message is cut off.\n> \n> It may be very difficult to fix. I pretend I didn't see it for a\n> while.\n> ...\n\nHmm, any suggestion?\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Tue, 21 May 2019 23:12:40 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> Unless somebody objects I will commit it as soon as I have time at\n> hand. Given that this patch also and mostly fixes some completely\n> broken old logic I'm tempted to do so despite us being pretty far in\n> the release cycle. Any objections?\n\nNone here. You might want to get it in in the next 12 hours or so\nso you don't have to rebase over a pgindent run.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 17:48:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "> None here. You might want to get it in in the next 12 hours or so\n> so you don't have to rebase over a pgindent run.\n\nThanks for the heads-up Tom, pushed.\n\nAnd thanks to Matsumura-san for the patch.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Wed, 22 May 2019 05:10:14 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "On Wed, May 22, 2019 at 05:10:14AM +0200, Michael Meskes wrote:\n> Thanks for the heads-up Tom, pushed.\n> \n> And thanks to Matsumura-san for the patch.\n\nThis patch seems to have little incidence on the stability, but FWIW I\nam not cool with the concept of asking for objections and commit a\npatch only 4 hours after-the-fact, particularly after feature freeze.\n--\nMichael",
"msg_date": "Wed, 22 May 2019 15:43:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "> This patch seems to have little incidence on the stability, but FWIW\n> I\n> am not cool with the concept of asking for objections and commit a\n> patch only 4 hours after-the-fact, particularly after feature freeze.\n\nThis was only done to beat the pg_indent run as Tom pointed out. I\nfigured worse case we can revert the patch if people object.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL",
"msg_date": "Wed, 22 May 2019 12:07:35 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> This patch seems to have little incidence on the stability, but FWIW I\n> am not cool with the concept of asking for objections and commit a\n> patch only 4 hours after-the-fact, particularly after feature freeze.\n\nFWIW, I figured it was okay since ECPG has essentially no impact on\nthe rest of the system. The motivation for having feature freeze is\nto get us to concentrate on stability, but any new bugs in ECPG\naren't going to affect the stability of anything else.\n\nAlso, I don't think it's that hard to look at this as a bug fix\nrather than a new feature. The general expectation is that ECPG\ncan parse any command the backend can --- that's why we went to\nall the trouble of automatically building its grammar from the\nbackend's. So I was surprised to hear that it didn't work on\nsome EXECUTE variants, and filling in that gap doesn't seem like a\n\"new feature\" to me. Note the lack of any documentation additions\nin the patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 09:32:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SQL statement PREPARE does not work in ECPG"
},
{
"msg_contents": "Meskes-san\r\n\r\n> This looks good to me. It passes all my tests, too.\r\n> \r\n> There were two minor issues, the regression test did not run and gcc\r\n> complained about the indentation in ECPGprepare(). Both I easily fixed.\r\n\r\nThank you so much !\r\n\r\n> > (2)\r\n> > I found some bugs (two types). I didn't fix them and only avoid bison\r\n> > error.\r\n> >\r\n> > Type1. Bugs or intentional unsupported features.\r\n> > - EXPLAIN EXECUTE\r\n> > - CREATE TABLE AS with using clause\r\n> > ...\r\n> \r\n> Please send a patch. I'm on vacation and won't be able to spend time on\r\n> this for the next couple of weeks.\r\n\r\nI begin to fix it. It may spend a while (1 or 2 week).\r\n\r\n\r\n> > Type2. In multi-bytes encoding environment, a part of character of\r\n> > message is cut off.\r\n> >\r\n> > It may be very difficult to fix. I pretend I didn't see it for a\r\n> > while.\r\n> > ...\r\n> \r\n> Hmm, any suggestion?\r\n\r\nI think that it's better to import length_in_encoding() defined in backend/utils/mb/mbutils.c into client side.\r\nJust an idea.\r\n\r\n\r\nRegards\r\nRyo Matsumura\r\n",
"msg_date": "Fri, 31 May 2019 06:55:19 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: SQL statement PREPARE does not work in ECPG"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI am switching to this new email address for PostgreSQL mailing lists. I\njust wanted you to know that I'm still reachable at my EnterpriseDB address\nand I'm still happily employed by EnterpriseDB, working on PostgreSQL!\nThis change is just to avoid sending automated corporate footer messages\nthat are intended for a different audience, in line with standard mailing\nlist etiquette.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 19 Feb 2019 15:54:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Change of email address"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 03:54:59PM +1300, Thomas Munro wrote:\n\n> This change is just to avoid sending automated corporate footer messages\n> that are intended for a different audience, in line with standard mailing\n> list etiquette.\n> \n> -- \n> Thomas Munro\n> https://enterprisedb.com\n\n^^\nI guess you better update that signature for gmail :-)\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n",
"msg_date": "Mon, 18 Feb 2019 21:54:28 -0800",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Change of email address"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 6:54 PM Shawn Debnath <sdn@amazon.com> wrote:\n> On Tue, Feb 19, 2019 at 03:54:59PM +1300, Thomas Munro wrote:\n> > This change is just to avoid sending automated corporate footer messages\n> > that are intended for a different audience, in line with standard mailing\n> > list etiquette.\n> >\n> > --\n> > Thomas Munro\n> > https://enterprisedb.com\n>\n> ^^\n> I guess you better update that signature for gmail :-)\n\nHah, no I hope that little tagline is OK, I just didn't think the\nlonger message we've recently added would be appreciated after the\nfirst few emails.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Tue, 19 Feb 2019 20:10:21 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Change of email address"
}
] |
[
{
"msg_contents": "Hi, \n\nI found ECPG's bug which SET xxx = DEFAULT statement could not be used. \n\n[PROBLEM]\nWhen the source code (*.pgc) has \"EXEC SQL set xxx = default;\", created c program by ECPG has no \" = default\". \nFor example, \"EXEC SQL set search_path = default;\" in *.pgc will be converted to \"{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, \"set search_path\", ECPGt_EOIT, ECPGt_EORT);}\" in c program.\n\n[Investigation]\ngram.y lacks \";\" in the end of section \"generic_set\", so preproc.y's syntax is broken. \n\nsrc/backend/parser/gram.y\n-------------------------------------------\ngeneric_set:\n\n | var_name '=' DEFAULT\n {\n VariableSetStmt *n = makeNode(VariableSetStmt);\n n->kind = VAR_SET_DEFAULT;\n n->name = $1;\n $$ = n;\n }\n\nset_rest_more: /* Generic SET syntaxes: */\n-------------------------------------------\n\nsrc/interfaces/ecpg/preproc/preproc.y\n-------------------------------------------\n generic_set:\n\n| var_name TO DEFAULT\n {\n $$ = cat_str(2,$1,mm_strdup(\"to default\"));\n}\n| var_name '=' DEFAULT set_rest_more:\n generic_set\n {\n $$ = $1;\n}\n-------------------------------------------\n\nI attached the patch which has \";\" in the end of section \"generic_set\" and regression. \n\nRegards, \nDaisuke, Higuchi",
"msg_date": "Tue, 19 Feb 2019 04:04:10 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com> writes:\n> [ missing semicolon in gram.y breaks ecpg parsing of same construct ]\n\nThat's pretty nasty. The fix in gram.y is certainly needed, but I'm\nunexcited by the regression test additions you propose. What I really\nwant to know is why a syntax error in gram.y wasn't detected by any\nof the tools we use, and whether we can do something about that.\nOtherwise the next bug of the same kind may go just as undetected;\nin fact, I've got little confidence there aren't other such omissions\nalready :-(\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Feb 2019 23:27:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "I wrote:\n> \"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com> writes:\n>> [ missing semicolon in gram.y breaks ecpg parsing of same construct ]\n\n> That's pretty nasty. The fix in gram.y is certainly needed, but I'm\n> unexcited by the regression test additions you propose. What I really\n> want to know is why a syntax error in gram.y wasn't detected by any\n> of the tools we use,\n\nUgh ... the Bison NEWS file has this:\n\n* Changes in version 1.875, 2003-01-01:\n ...\n - Semicolons are once again optional at the end of grammar rules.\n This reverts to the behavior of Bison 1.33 and earlier, and improves\n compatibility with Yacc.\n\nI'd remembered how we had to run around and insert semicolons to satisfy\nBison 1.3-something, and supposed that that restriction still held.\nBut it doesn't. It seems though that our conversion script for creating\npreproc.y depends on there being semicolons.\n\nI think we need to fix that script to either cope with missing semicolons,\nor at least complain about them. Too tired to look into how, right now.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 00:05:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "On Tue, 2019-02-19 at 00:05 -0500, Tom Lane wrote:\n> I wrote:\n> > \"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com> writes:\n> > > [ missing semicolon in gram.y breaks ecpg parsing of same\n> > > construct ]\n> > That's pretty nasty. The fix in gram.y is certainly needed, but\n> > I'm\n> > unexcited by the regression test additions you propose. What I\n> > really\n> > want to know is why a syntax error in gram.y wasn't detected by any\n> > of the tools we use,\n\nI'm actually surprised it only shows by one incorrectly working rule\nand did not mangle the whole file by combining to rules. I guess that's\nbecause bison now finds the end of the rule somehow even without the\nsemicolon.\n\n> But it doesn't. It seems though that our conversion script for\n> creating\n> preproc.y depends on there being semicolons.\n\nYes, it does. There has to be a way for the script to find the end of a\nrule and I wonder if bison's way can be used in such a simple perl\nscript too.\n\n> I think we need to fix that script to either cope with missing\n> semicolons,\n> or at least complain about them. Too tired to look into how, right\n> now.\n\nIf we can identify a missing semicolon we probably can also figure out\nwhere it had to be.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Tue, 19 Feb 2019 10:26:51 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Hi, \n\n> I think we need to fix that script to either cope with missing semicolons,\n> or at least complain about them. Too tired to look into how, right now.\n\nI attached the patch which cope with missing semicolons. \nPrevious parse.pl find semicolon and dump data to buffer. When attached patch's parse.pl find new tokens before finding a semicolon, it also dumps data to buffer.\n\nRegards, \nDaisuke, Higuchi",
"msg_date": "Tue, 19 Feb 2019 09:38:01 +0000",
"msg_from": "\"Higuchi, Daisuke\" <higuchi.daisuke@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Higuchi-san,\n\n> I attached the patch which cope with missing semicolons. \n> Previous parse.pl find semicolon and dump data to buffer. When\n> attached patch's parse.pl find new tokens before finding a semicolon,\n> it also dumps data to buffer.\n\nNow this seems to be much easier than I expected. Thanks. My first test\nshow two \"missing\" semicolons in gram.y. :)\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Tue, 19 Feb 2019 11:51:29 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Higuchi-san,\n\n> I attached the patch which cope with missing semicolons. \n> Previous parse.pl find semicolon and dump data to buffer. When\n> attached patch's parse.pl find new tokens before finding a semicolon,\n> it also dumps data to buffer.\n\nIt just occurred to me that check_rules.pl probably uses the same logic\nto identify each rule and thus needs to be changed, too. \n\nAlso, IIRC bison allows blanks between the symbol name and the colon,\nor in other words \"generic_set:\" is equal to \"generic_set :\". If this\nhappens after a \"missing\" semicolon I think your patch does not notice\nthe end of the rule.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Tue, 19 Feb 2019 12:21:22 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "\nOn 2/19/19 6:21 AM, Michael Meskes wrote:\n> Higuchi-san,\n>\n>> I attached the patch which cope with missing semicolons. \n>> Previous parse.pl find semicolon and dump data to buffer. When\n>> attached patch's parse.pl find new tokens before finding a semicolon,\n>> it also dumps data to buffer.\n> It just occurred to me that check_rules.pl probably uses the same logic\n> to identify each rule and thus needs to be changed, too. \n>\n> Also, IIRC bison allows blanks between the symbol name and the colon,\n> or in other words \"generic_set:\" is equal to \"generic_set :\". If this\n> happens after a \"missing\" semicolon I think your patch does not notice\n> the end of the rule.\n>\n\n\n\nYeah, it also seems too possibly liberal about where it matches a rule\nname. AUIU this should be the first token on a line; is that right?\nOTOH, it won't handle any case where an action block is not the last\nthing in the rule, since it only sets $fill_semicolon on seeing a\nclosing brace (on its own).\n\nI just looked at the bison manual at gnu.org and also at `info bison` on\nmy local machine, and couldn't see any reference to semicolons being\noptional at the end of a rule. Under the heading \"Syntax of Grammar\nRules\" it says this:\n\n A Bison grammar rule has the following general form:\n\n RESULT: COMPONENTS...;\n\nMaking it optional without putting that in the manual is just awful.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 19 Feb 2019 07:51:27 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I just looked at the bison manual at gnu.org and also at `info bison` on\n> my local machine, and couldn't see any reference to semicolons being\n> optional at the end of a rule. Under the heading \"Syntax of Grammar\n> Rules\" it says this:\n> A Bison grammar rule has the following general form:\n> RESULT: COMPONENTS...;\n> Making it optional without putting that in the manual is just awful.\n\nYeah. I wonder if they removed that info in 1.34 and failed to\nput it back in 1.875?\n\nAnyway, I'm of the opinion that omitting the semi is poor style\nand our tools should insist on it even if Bison does not. Thus,\nI think the correct fix is for the scripts to complain about a\nmissing semi, not cope.\n\nMy initial look at parse.pl last night left me feeling pretty\ndisheartened about its robustness in general --- for example,\nit looks like { } /* or */ inside a string literal or Bison\ncharacter token would break it completely, because it wouldn't\ndistinguish those cases from the same things outside a string.\nIt's just luck we haven't broken it yet (or, perhaps, we have\nand nobody exercised the relevant productions yet?).\n\nProbably, somebody who's a better Perl programmer than me\nought to take point on improving that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 09:29:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "\nOn 2/19/19 9:29 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I just looked at the bison manual at gnu.org and also at `info bison` on\n>> my local machine, and couldn't see any reference to semicolons being\n>> optional at the end of a rule. Under the heading \"Syntax of Grammar\n>> Rules\" it says this:\n>> A Bison grammar rule has the following general form:\n>> RESULT: COMPONENTS...;\n>> Making it optional without putting that in the manual is just awful.\n> Yeah. I wonder if they removed that info in 1.34 and failed to\n> put it back in 1.875?\n>\n> Anyway, I'm of the opinion that omitting the semi is poor style\n> and our tools should insist on it even if Bison does not. Thus,\n> I think the correct fix is for the scripts to complain about a\n> missing semi, not cope.\n\n\nYeah, agreed.\n\n\n\n>\n> My initial look at parse.pl last night left me feeling pretty\n> disheartened about its robustness in general --- for example,\n> it looks like { } /* or */ inside a string literal or Bison\n> character token would break it completely, because it wouldn't\n> distinguish those cases from the same things outside a string.\n> It's just luck we haven't broken it yet (or, perhaps, we have\n> and nobody exercised the relevant productions yet?).\n>\n> Probably, somebody who's a better Perl programmer than me\n> ought to take point on improving that.\n>\n> \t\t\t\n\n\nAgreed.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 19 Feb 2019 09:47:31 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 2/19/19 9:29 AM, Tom Lane wrote:\n>> Probably, somebody who's a better Perl programmer than me\n>> ought to take point on improving that.\n\n> Agreed.\n\nNot seeing any motion on this, here's a draft patch to make these\nscripts complain about missing semicolons. Against the current\ngram.y (which contains 2 such errors, as Michael noted) you\nget output like\n\n'/usr/bin/perl' ./parse.pl . < ../../../backend/parser/gram.y > preproc.y\nunterminated rule at ./parse.pl line 370, <> line 1469.\nmake: *** [preproc.y] Error 255\nmake: *** Deleting file `preproc.y'\n\nThat's not *super* friendly, but it does give you the right line number\nto look at in gram.y. We could adjust the script (and the Makefile)\nfurther so that the message would cite the gram.y filename, but I'm not\nsure if it's worth the trouble. Thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 22 Feb 2019 14:36:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "> Not seeing any motion on this, here's a draft patch to make these\n> scripts complain about missing semicolons. Against the current\n> gram.y (which contains 2 such errors, as Michael noted) you\n> get output like\n\nThanks Tom for looking into this. Are we agreed then that we want\ngram.y to have semicolons? \n\n> '/usr/bin/perl' ./parse.pl . < ../../../backend/parser/gram.y >\n> preproc.y\n> unterminated rule at ./parse.pl line 370, <> line 1469.\n> make: *** [preproc.y] Error 255\n> make: *** Deleting file `preproc.y'\n> \n> That's not *super* friendly, but it does give you the right line\n> number\n> to look at in gram.y. We could adjust the script (and the Makefile)\n> further so that the message would cite the gram.y filename, but I'm\n> not\n> sure if it's worth the trouble. Thoughts?\n\nIMO it's not worth it. We all know where the grammar is and that the\necpg tools only parse that one file. Why putting effort into writing it\ndown too?\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n",
"msg_date": "Sat, 23 Feb 2019 10:35:06 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n>> Not seeing any motion on this, here's a draft patch to make these\n>> scripts complain about missing semicolons. Against the current\n>> gram.y (which contains 2 such errors, as Michael noted) you\n>> get output like\n\n> Thanks Tom for looking into this. Are we agreed then that we want\n> gram.y to have semicolons? \n\nHearing no objections, I pushed it that way.\n\n>> That's not *super* friendly, but it does give you the right line\n>> number to look at in gram.y. We could adjust the script (and the Makefile)\n>> further so that the message would cite the gram.y filename, but I'm\n>> not sure if it's worth the trouble. Thoughts?\n\n> IMO it's not worth it. We all know where the grammar is and that the\n> ecpg tools only parse that one file. Why putting effort into writing it\n> down too?\n\nI did manage to fix the \"die\" commands so that you get something like\n\nunterminated rule at grammar line 1461\n\nwithout the extraneous detail about the script's internals.\nThat seems clear enough from here.\n\nI'm still disturbed by the scripts' ability to get fooled by\nbraces or comment markers within quoted strings. However, that's\nnot something I have good ideas about how to fix, and there's not\nany evidence that it's a live problem at the moment.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 24 Feb 2019 12:59:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Bug Fix] ECPG: could not use set xxx to default statement"
}
] |
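The check discussed in this thread — make the ecpg conversion tooling complain when a new grammar rule begins before the previous rule was closed with a semicolon — can be sketched as follows. This is a standalone Python illustration of the idea, not the actual logic of ecpg's `parse.pl` (which is Perl); the function name and the simplified rule syntax it accepts are assumptions for the example. Note that it shares the weakness Tom Lane points out later in the thread: braces or semicolons inside string literals would confuse it.

```python
import re

def find_unterminated_rules(grammar_text):
    """Scan a simplified bison grammar and report rule names whose body
    was not terminated with ';' before the next rule started.

    Illustrative sketch only -- not ecpg's real parse.pl logic."""
    unterminated = []
    current_rule = None
    terminated = True
    depth = 0  # brace depth: semicolons inside { action blocks } don't count
    lineno = 0
    for lineno, line in enumerate(grammar_text.splitlines(), 1):
        stripped = line.strip()
        # A rule starts with "name:" (or "name :") at brace depth zero.
        m = re.match(r'^([A-Za-z_][A-Za-z0-9_]*)\s*:', stripped)
        if depth == 0 and m:
            if current_rule is not None and not terminated:
                unterminated.append((current_rule, lineno))
            current_rule = m.group(1)
            terminated = False
        # Track braces so a ';' inside embedded C code is ignored.
        for ch in line:
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
            elif ch == ';' and depth == 0:
                terminated = True
    if current_rule is not None and not terminated:
        unterminated.append((current_rule, lineno))
    return unterminated
```

Run against the `generic_set` fragment quoted upthread, a scan like this flags the rule at the line where `set_rest_more:` begins, which is the behavior Tom's committed fix gives ("unterminated rule at grammar line N").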
[
{
"msg_contents": "Hello hackers,\n\nAs discussed in various threads, PostgreSQL-on-NFS is viewed with\nsuspicion. Perhaps others knew this already, but I first learned of\nthe specific mechanism (or at least one of them) for corruption from\nCraig Ringer's writing[1] about fsync() on Linux.\n\nThe problem is that close() and fsync() could report ENOSPC,\nindicating that your dirty data has been dropped from the Linux page\ncache, and then future fsync() operations could succeed as if nothing\nhappened. It's easy to see that happening[2].\n\nSince Craig's report, we committed a patch based on his PANIC\nproposal: we now panic on fsync() and close() failure. Recovering\nfrom the WAL may or may not be possible, but at no point will we allow\na checkpoint to retry and bogusly succeed.\n\nSo, does this mean we fixed the problems with NFS? Not sure, but I do\nsee a couple of problems (and they're problems Craig raised in his\nthread):\n\nThe first is practical. Running out of diskspace (or quota) is not\nall that rare (much more common that EIO from a dying disk, I'd\nguess), and definitely recoverable by an administrator: just create\nmore space. It would be really nice to avoid panicking for an\n*expected* condition.\n\nTo do that, I think we'd need to move the ENOSPC error back relation\nextension time (when we call pwrite()), as happens for local\nfilesystems. Luckily NFS 4.2 provides a mechanism to do that: the NFS\n4.2 ALLOCATE[3] command. To make this work, I think there are two\nsubproblems to solve:\n\n1. Figure out how to get the ALLOCATE command all the way through the\nstack from PostgreSQL to the remote NFS server, and know for sure that\nit really happened. 
On the Debian buster Linux 4.18 system I checked,\nfallocate() reports EOPNOTSUPP for fallocate(), and posix_fallocate()\nappears to succeed but it doesn't really do anything at all (though I\nunderstand that some versions sometimes write zeros to simulate\nallocation, which in this case would be equally useless as it doesn't\nreserve anything on an NFS server). We need the server and NFS client\nand libc to be of the right version and cooperate and tell us that\nthey have really truly reserved space, but there isn't currently a way\nas far as I can tell. How can we achieve that, without writing our\nown NFS client?\n\n2. Deal with the resulting performance suckage. Extending 8kb at a\ntime with synchronous network round trips won't fly.\n\nA theoretical question I thought of is whether there are any\ninterleavings of operations that allow a checkpoint to complete\nbogusly, while a concurrent close() in a regular backend fails with\nEIO for data that was included in the checkpoint, and panics. I\n*suspect* the answer is that every interleaving is safe for 4.16+\nkernels that report IO errors to every descriptor. In older kernels I\nwonder if there could be a schedule where an arbitrary backend eats\nthe error while closing, then the checkpointer calls fsync()\nsuccessfully and then logs a checkpoint, and then then the arbitrary\nbackend panics (too late). I suspect EIO on close() doesn't happen in\npractice on regular local filesystems, which is why I mention it in\nthe context of NFS, but I could be wrong about that.\n\nEverything I said above about NFS may also apply to CIFS, I dunno.\n\n[1] https://www.postgresql.org/message-id/flat/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAEepm%3D1FGo%3DACPKRmAxvb53mBwyVC%3DTDwTE0DMzkWjdbAYw7sw%40mail.gmail.com\n[3] https://tools.ietf.org/html/rfc7862#page-64\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Tue, 19 Feb 2019 20:03:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 2:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> How can we achieve that, without writing our\n> own NFS client?\n\n<dons crash helmet>\n\nInstead of writing our own NFS client, how about writing our own\nnetwork storage protocol? Imagine a stripped-down postmaster process\nrunning on the NFS server that essentially acts as a block server.\nThrough some sort of network chatter, it can exchange blocks with the\nreal postmaster running someplace else. The mini-protocol would\ncontain commands like READ_BLOCK, WRITE_BLOCK, EXTEND_RELATION,\nFSYNC_SEGMENT, etc. - basically whatever the relevant operations at\nthe smgr layer are. And the user would see the remote server as a\ntablespace mapped to a special smgr.\n\nAs compared with your proposal, this has both advantages and\ndisadvantages. The main advantage is that we aren't dependent on\nbeing able to make NFS behave in any particular way; indeed, this type\nof solution could be used not only to work around problems with NFS,\nbut also problems with any other network filesystem. We get to reuse\nall of the work we've done to try to make local operation reliable;\nthe remote server can run the same code that would be run locally\nwhenever the master tells it to do so. And you can even imagine\ntrying to push more work to the remote side in some future version of\nthe protocol. The main disadvantage is that it doesn't help unless\nyou can actually run software on the remote box. If the only access\nyou have to the remote side is that it exposes an NFS interface, then\nthis sort of thing is useless. And that's probably a pretty common\nscenario.\n\nSo that brings us back to your proposal. 
I don't know whether there's\nanyway of solving the problem you postulate: \"We need the server and\nNFS client and libc to be of the right version and cooperate and tell\nus that they have really truly reserved space.\" If there's not a set\nof APIs that can be used to make that happen, then I don't know how we\ncan ever solve this problem without writing our own client. Well, I\nguess we could submit patches to every libc in the world to add those\nAPIs. But that seems like a painful way forward.\n\nI'm kinda glad you're thinking about this problem because I think the\nunreliably of PostgreSQL on NFS is a real problem for users and kind\nof a black eye for the project. However, I am not sure that I see an\neasy solution in what you wrote, or in general.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 10:45:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 4:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Feb 19, 2019 at 2:03 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > How can we achieve that, without writing our\n> > own NFS client?\n>\n> <dons crash helmet>\n>\n\nYou'll need it :)\n\n\nInstead of writing our own NFS client, how about writing our own\n> network storage protocol? Imagine a stripped-down postmaster process\n> running on the NFS server that essentially acts as a block server.\n> Through some sort of network chatter, it can exchange blocks with the\n> real postmaster running someplace else. The mini-protocol would\n> contain commands like READ_BLOCK, WRITE_BLOCK, EXTEND_RELATION,\n> FSYNC_SEGMENT, etc. - basically whatever the relevant operations at\n> the smgr layer are. And the user would see the remote server as a\n> tablespace mapped to a special smgr.\n>\n> As compared with your proposal, this has both advantages and\n> disadvantages. The main advantage is that we aren't dependent on\n> being able to make NFS behave in any particular way; indeed, this type\n> of solution could be used not only to work around problems with NFS,\n> but also problems with any other network filesystem. We get to reuse\n> all of the work we've done to try to make local operation reliable;\n> the remote server can run the same code that would be run locally\n> whenever the master tells it to do so. And you can even imagine\n> trying to push more work to the remote side in some future version of\n> the protocol. The main disadvantage is that it doesn't help unless\n> you can actually run software on the remote box. If the only access\n> you have to the remote side is that it exposes an NFS interface, then\n> this sort of thing is useless. 
And that's probably a pretty common\n> scenario.\n>\n\nIn my experience, that covers approximately 100% of the usecase.\n\nThe only case I've run into people wanting to use postgres on NFS, the NFS\nserver is a big filer from netapp or hitachi or whomever. And you're not\ngoing to be able to run something like that on top of it.\n\nThere might be a use-case for the split that you mention, absolutely, but\nit's not going to solve the people-who-want-NFS situation. You'd solve more\nof that by having the middle layer speak \"raw device\" underneath and be\nable to sit on top of things like iSCSI (yes, really).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 19 Feb 2019 16:59:35 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On 2/19/19 10:59 AM, Magnus Hagander wrote:\n> On Tue, Feb 19, 2019 at 4:46 PM Robert Haas <robertmhaas@gmail.com\n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Tue, Feb 19, 2019 at 2:03 AM Thomas Munro <thomas.munro@gmail.com\n> <mailto:thomas.munro@gmail.com>> wrote:\n> > How can we achieve that, without writing our\n> > own NFS client?\n> \n> <dons crash helmet>\n> \n> \n> You'll need it :)\n> \n> \n> Instead of writing our own NFS client, how about writing our own\n> network storage protocol? Imagine a stripped-down postmaster process\n> running on the NFS server that essentially acts as a block server.\n> Through some sort of network chatter, it can exchange blocks with the\n> real postmaster running someplace else. The mini-protocol would\n> contain commands like READ_BLOCK, WRITE_BLOCK, EXTEND_RELATION,\n> FSYNC_SEGMENT, etc. - basically whatever the relevant operations at\n> the smgr layer are. And the user would see the remote server as a\n> tablespace mapped to a special smgr.\n> \n> As compared with your proposal, this has both advantages and\n> disadvantages. The main advantage is that we aren't dependent on\n> being able to make NFS behave in any particular way; indeed, this type\n> of solution could be used not only to work around problems with NFS,\n> but also problems with any other network filesystem. We get to reuse\n> all of the work we've done to try to make local operation reliable;\n> the remote server can run the same code that would be run locally\n> whenever the master tells it to do so. And you can even imagine\n> trying to push more work to the remote side in some future version of\n> the protocol. The main disadvantage is that it doesn't help unless\n> you can actually run software on the remote box. If the only access\n> you have to the remote side is that it exposes an NFS interface, then\n> this sort of thing is useless. 
And that's probably a pretty common\n> scenario.\n> \n> \n> In my experience, that covers approximately 100% of the usecase.\n> \n> The only case I've run into people wanting to use postgres on NFS, the\n> NFS server is a big filer from netapp or hitachi or whomever. And you're\n> not going to be able to run something like that on top of it.\n\n\nExactly my experience too.\n\n\n> There might be a use-case for the split that you mention, absolutely,\n> but it's not going to solve the people-who-want-NFS situation. You'd\n> solve more of that by having the middle layer speak \"raw device\"\n> underneath and be able to sit on top of things like iSCSI (yes, really).\n\nInteresting idea but sounds ambitious ;-)\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 19 Feb 2019 11:11:29 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Feb 19, 2019 at 2:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > How can we achieve that, without writing our\n> > own NFS client?\n> \n> <dons crash helmet>\n> \n> Instead of writing our own NFS client, how about writing our own\n> network storage protocol? Imagine a stripped-down postmaster process\n> running on the NFS server that essentially acts as a block server.\n> Through some sort of network chatter, it can exchange blocks with the\n> real postmaster running someplace else. The mini-protocol would\n> contain commands like READ_BLOCK, WRITE_BLOCK, EXTEND_RELATION,\n> FSYNC_SEGMENT, etc. - basically whatever the relevant operations at\n> the smgr layer are. And the user would see the remote server as a\n> tablespace mapped to a special smgr.\n\nIn reading this, I honestly thought somewhere along the way you'd say\n\"and then you have WAL, so just run a replica and forget this whole\nnetwork filesystem business.\"\n\nThe practical issue of WAL replay being single-process is an issue\nthough. It seems like your mini-protocol was going in a direction that\nwould have allowed multiple processes to be working between the PG\nsystem and the storage system concurrently, avoiding the single-threaded\nissue with WAL but also making it such that the replica wouldn't be able\nto be used for read-only queries (without some much larger changes\nhappening anyway). 
I'm not sure the use-case is big enough but it does\nseem to me that we're getting to a point where people are generating\nenough WAL with systems that they care an awful lot about that they\nmight be willing to forgo having the ability to perform read-only\nqueries on the replica as long as they know that they can flip traffic\nover to the replica without losing data.\n\nSo, what this all really boils down to is that I think this idea of a\ndifferent protocol that would allow PG to essentially replicate to a\nremote system, or possibly run entirely off of the remote system without\nany local storage, could be quite interesting in some situations.\n\nOn the other hand, I pretty much agree 100% with Magnus that the NFS\nuse-case is almost entirely because someone bought a big piece of\nhardware that talks NFS and no, you don't get to run whatever code you\nwant on it.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 19 Feb 2019 11:18:16 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 10:59 AM Magnus Hagander <magnus@hagander.net> wrote:\n> The only case I've run into people wanting to use postgres on NFS, the NFS server is a big filer from netapp or hitachi or whomever. And you're not going to be able to run something like that on top of it.\n\nYeah. :-(\n\nIt seems, however, we have no way of knowing to what extent that big\nfiler actually implements the latest NFS specs and does so correctly.\nAnd if it doesn't, and data goes down the tubes, people are going to\nblame PostgreSQL, not the big filer, either because they really\nbelieve we ought to be able to handle it, or because they know that\nfiling a trouble ticket with NetApp isn't likely to provoke any sort\nof swift response. If PostgreSQL itself is speaking NFS, we might at\nleast have a little more information about what behavior the filer\nclaims to implement, but even then it could easily be \"lying.\" And if\nwe're just seeing it as a filesystem mount, then we're just ... flying\nblind.\n\n> There might be a use-case for the split that you mention, absolutely, but it's not going to solve the people-who-want-NFS situation. You'd solve more of that by having the middle layer speak \"raw device\" underneath and be able to sit on top of things like iSCSI (yes, really).\n\nNot sure I follow this part.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 11:20:41 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "\nOn 2/19/19 5:20 PM, Robert Haas wrote:\n> On Tue, Feb 19, 2019 at 10:59 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> The only case I've run into people wanting to use postgres on NFS,\n>> the NFS server is a big filer from netapp or hitachi or whomever. And\n>> you're not going to be able to run something like that on top of it.\n> \n> Yeah. :-(\n> \n> It seems, however, we have no way of knowing to what extent that big\n> filer actually implements the latest NFS specs and does so correctly.\n> And if it doesn't, and data goes down the tubes, people are going to\n> blame PostgreSQL, not the big filer, either because they really\n> believe we ought to be able to handle it, or because they know that\n> filing a trouble ticket with NetApp isn't likely to provoke any sort\n> of swift response. If PostgreSQL itself is speaking NFS, we might at\n> least have a little more information about what behavior the filer\n> claims to implement, but even then it could easily be \"lying.\" And if\n> we're just seeing it as a filesystem mount, then we're just ... flying\n> blind.\n> \n\nPerhaps we should have something like pg_test_nfs, then?\n\n>> There might be a use-case for the split that you mention, \n>> absolutely, but it's not going to solve the people-who-want-NFS \n>> situation. You'd solve more of that by having the middle layer \n>> speak \"raw device\" underneath and be able to sit on top of things\n>> like iSCSI (yes, really).\n>\n> Not sure I follow this part.\n>\n\nI think Magnus says that people running PostgreSQL on NFS generally\ndon't do that because they somehow chose NFS, but because that's what\ntheir company uses for network storage. Even if we support the custom\nblock protocol, they probably won't be able to use it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Feb 2019 17:33:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "## Magnus Hagander (magnus@hagander.net):\n\n> You'd solve more\n> of that by having the middle layer speak \"raw device\" underneath and be\n> able to sit on top of things like iSCSI (yes, really).\n\nBack in ye olden days we called these middle layers \"kernel\" and\n\"filesystem\" and had that maintained by specialists.\n\nRegards,\nChristoph\n\n-- \nSpare Space.\n\n",
"msg_date": "Tue, 19 Feb 2019 17:38:24 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 20:03:05 +1300, Thomas Munro wrote:\n> The first is practical. Running out of diskspace (or quota) is not\n> all that rare (much more common that EIO from a dying disk, I'd\n> guess), and definitely recoverable by an administrator: just create\n> more space. It would be really nice to avoid panicking for an\n> *expected* condition.\n\nWell, that's true, but OTOH, we don't even handle that properly on local\nfilesystems for WAL. And while people complain, it's not *that* common.\n\n\n> 1. Figure out how to get the ALLOCATE command all the way through the\n> stack from PostgreSQL to the remote NFS server, and know for sure that\n> it really happened. On the Debian buster Linux 4.18 system I checked,\n> fallocate() reports EOPNOTSUPP for fallocate(), and posix_fallocate()\n> appears to succeed but it doesn't really do anything at all (though I\n> understand that some versions sometimes write zeros to simulate\n> allocation, which in this case would be equally useless as it doesn't\n> reserve anything on an NFS server). We need the server and NFS client\n> and libc to be of the right version and cooperate and tell us that\n> they have really truly reserved space, but there isn't currently a way\n> as far as I can tell. How can we achieve that, without writing our\n> own NFS client?\n> \n> 2. Deal with the resulting performance suckage. Extending 8kb at a\n> time with synchronous network round trips won't fly.\n\nI think I'd just go for fsync();pwrite();fsync(); as the extension\nmechanism, iff we're detecting a tablespace is on NFS. The first fsync()\nto make sure there's no previous errors that we could mistake for\nENOSPC, the pwrite to extend, the second fsync to make sure there's\nactually space. Then we can detect ENOSPC properly. 
That possibly does\nleave some errors where we could mistake ENOSPC as something more benign\nthan it is, but the cases seem pretty narrow, due to the previous\nfsync() (maybe the other side could be thin provisioned and get an\nENOSPC there - but in that case we didn't actually lose any data. The\nonly dangerous scenario I can come up with is that the remote side is on\nthinly provisioned CoW system, and a concurrent write to an earlier\nblock runs out of space - but seriously, good riddance to you).\n\nGiven the current code we'll already try to extend in bigger chunks when\nthere's contention, we just need to combine the writes for those, that\nought to not be that hard now that we don't initialize bulk-extended\npages anymore. That won't solve the issue of extending during single\nthreaded writes, but I feel like that's secondary to actually being\ncorrect. And using bulk-extension in more cases doesn't sound too hard\nto me.\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 08:52:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 5:33 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n>\n> On 2/19/19 5:20 PM, Robert Haas wrote:\n> > On Tue, Feb 19, 2019 at 10:59 AM Magnus Hagander <magnus@hagander.net>\n> wrote:\n>\n> >> There might be a use-case for the split that you mention,\n> >> absolutely, but it's not going to solve the people-who-want-NFS\n> >> situation. You'd solve more of that by having the middle layer\n> >> speak \"raw device\" underneath and be able to sit on top of things\n> >> like iSCSI (yes, really).\n> >\n> > Not sure I follow this part.\n> >\n>\n> I think Magnus says that people running PostgreSQL on NFS generally\n> don't do that because they somehow chose NFS, but because that's what\n> their company uses for network storage. Even if we support the custom\n> block protocol, they probably won't be able to use it.\n>\n\nYes, with the addition that they also often export iSCSI endpoints today,\nso if we wanted to sit on top of something that could also work. But not\nsit on top of a custom block protocol we invent ourselves.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 19 Feb 2019 19:10:30 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 5:38 PM Christoph Moench-Tegeder <cmt@burggraben.net>\nwrote:\n\n> ## Magnus Hagander (magnus@hagander.net):\n>\n> > You'd solve more\n> > of that by having the middle layer speak \"raw device\" underneath and be\n> > able to sit on top of things like iSCSI (yes, really).\n>\n> Back in ye olden days we called these middle layers \"kernel\" and\n> \"filesystem\" and had that maintained by specialists.\n>\n\nYeah. Unfortunately that turned out in a number of cases to be things like\nspecialists that considered fsync unimportant, or dropping data from the\nbuffer cache without errors.\n\nBut what I'm mainly saying is that if we want to run postgres on top of a\nblock device protocol, we should go all the way and do it, not somewhere\nhalf way that will be unable to help most people. I'm not saying that we\n*should*, there is a very big if in that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 19 Feb 2019 19:12:31 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 16:59:35 +0100, Magnus Hagander wrote:\n> There might be a use-case for the split that you mention, absolutely, but\n> it's not going to solve the people-who-want-NFS situation. You'd solve more\n> of that by having the middle layer speak \"raw device\" underneath and be\n> able to sit on top of things like iSCSI (yes, really).\n\nThere's decent iSCSI implementations in several kernels, without the NFS\nproblems. I'm not sure what we'd gain by reimplementing those?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 10:17:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-02-19 16:59:35 +0100, Magnus Hagander wrote:\n> > There might be a use-case for the split that you mention, absolutely, but\n> > it's not going to solve the people-who-want-NFS situation. You'd solve more\n> > of that by having the middle layer speak \"raw device\" underneath and be\n> > able to sit on top of things like iSCSI (yes, really).\n>\n> There's decent iSCSI implementations in several kernels, without the NFS\n> problems. I'm not sure what we'd gain by reimplementing those?\n\nIs that a new thing? I ran across PostgreSQL-over-iSCSI a number of\nyears ago and the evidence strongly suggested that it did not reliably\nreport disk errors back to PostgreSQL, leading to corruption.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 13:21:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 13:21:21 -0500, Robert Haas wrote:\n> On Tue, Feb 19, 2019 at 1:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-02-19 16:59:35 +0100, Magnus Hagander wrote:\n> > > There might be a use-case for the split that you mention, absolutely, but\n> > > it's not going to solve the people-who-want-NFS situation. You'd solve more\n> > > of that by having the middle layer speak \"raw device\" underneath and be\n> > > able to sit on top of things like iSCSI (yes, really).\n> >\n> > There's decent iSCSI implementations in several kernels, without the NFS\n> > problems. I'm not sure what we'd gain by reimplementing those?\n> \n> Is that a new thing? I ran across PostgreSQL-over-iSCSI a number of\n> years ago and the evidence strongly suggested that it did not reliably\n> report disk errors back to PostgreSQL, leading to corruption.\n\nHow many years ago are we talking? I think it's been mostly robust in\nthe last 6-10 years, maybe? But note that the postgres + linux fsync\nissues would have plagued that use case just as well as it did local\nstorage, at a likely higher incidence of failures (i.e. us forgetting to\nretry fsyncs in checkpoints, and linux throwing away dirty data after\nfsync failure would both have caused problems that aren't dependent on\niSCSI). And I think it's not that likely that we'd not screw up a\nnumber of times implementing iSCSI ourselves - not to speak of the fact\nthat that seems like an odd place to focus development on, given that\nit'd basically require all the infrastructure also needed for local DIO,\nwhich'd likely gain us much more.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 10:29:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > Is that a new thing? I ran across PostgreSQL-over-iSCSI a number of\n> > years ago and the evidence strongly suggested that it did not reliably\n> > report disk errors back to PostgreSQL, leading to corruption.\n>\n> How many years ago are we talking? I think it's been mostly robust in\n> the last 6-10 years, maybe?\n\nI think it was ~9 years ago.\n\n> But note that the postgres + linux fsync\n> issues would have plagued that use case just as well as it did local\n> storage, at a likely higher incidence of failures (i.e. us forgetting to\n> retry fsyncs in checkpoints, and linux throwing away dirty data after\n> fsync failure would both have caused problems that aren't dependent on\n> iSCSI).\n\nIIRC, and obviously that's difficult to do after so long, there were\nclearly disk errors in the kernel logs, but no hint of a problem in\nthe PostgreSQL logs. So it wasn't just a case of us responding to\nerrors with sufficient vigor -- either they weren't being reported at\nall, or only to system calls we weren't checking, e.g. close or\nsomething.\n\n> And I think it's not that likely that we'd not screw up a\n> number of times implementing iSCSI ourselves - not to speak of the fact\n> that that seems like an odd place to focus development on, given that\n> it'd basically require all the infrastructure also needed for local DIO,\n> which'd likely gain us much more.\n\nI don't really disagree with you here, but I also think it's important\nto be honest about what size hammer is likely to be sufficient to fix\nthe problem. Project policy for many years has been essentially\n\"let's assume the kernel guys know what they are doing,\" but, I don't\nknow, color me a little skeptical at this point. We've certainly made\nlots of mistakes all of our own, and it's certainly true that\nreimplementing large parts of what the kernel does in user space is\nnot very appealing ... 
but on the other hand it looks like filesystem\nerror reporting isn't even really reliable for local operation (unless\nwe do an incredibly complicated fd-passing thing that has deadlock\nproblems we don't know how to solve and likely performance problems\ntoo, or convert the whole backend to use threads) or for NFS operation\n(though maybe your suggestion will fix that) so the idea that iSCSI is\njust going to be all right seems a bit questionable to me. Go ahead,\ncall me a pessimist...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 13:45:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 13:45:28 -0500, Robert Haas wrote:\n> On Tue, Feb 19, 2019 at 1:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > And I think it's not that likely that we'd not screw up a\n> > number of times implementing iSCSI ourselves - not to speak of the fact\n> > that that seems like an odd place to focus development on, given that\n> > it'd basically require all the infrastructure also needed for local DIO,\n> > which'd likely gain us much more.\n> \n> I don't really disagree with you here, but I also think it's important\n> to be honest about what size hammer is likely to be sufficient to fix\n> the problem. Project policy for many years has been essentially\n> \"let's assume the kernel guys know what they are doing,\" but, I don't\n> know, color me a little skeptical at this point.\n\nYea, and I think around e.g. using the kernel page cache / not using\nDIO, several people, including kernel developers and say me, told us\nthat's stupid.\n\n\n> We've certainly made lots of mistakes all of our own, and it's\n> certainly true that reimplementing large parts of what the kernel does\n> in user space is not very appealing ... but on the other hand it looks\n> like filesystem error reporting isn't even really reliable for local\n> operation (unless we do an incredibly complicated fd-passing thing\n> that has deadlock problems we don't know how to solve and likely\n> performance problems too, or convert the whole backend to use threads)\n> or for NFS operation (though maybe your suggestion will fix that) so\n> the idea that iSCSI is just going to be all right seems a bit\n> questionable to me. Go ahead, call me a pessimist...\n\nMy point is that for iSCSI to be performant we'd need *all* the\ninfrastructure we also need for direct IO *and* a *lot* more. And that\nit seems insane to invest very substantial resources into developing our\nown iSCSI client when we don't even have DIO support. And DIO support\nwould allow us to address the error reporting issues, while also\ndrastically improving performance in a lot of situations. And we'd not\nhave to essentially develop our own filesystem etc.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 10:56:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n> My point is that for iSCSI to be performant we'd need *all* the\n> infrastructure we also need for direct IO *and* a *lot* more. And that\n> it seems insane to invest very substantial resources into developing our\n> own iSCSI client when we don't even have DIO support. And DIO support\n> would allow us to address the error reporting issues, while also\n> drastically improving performance in a lot of situations. And we'd not\n> have to essentially develop our own filesystem etc.\n\nOK, got it. So, I'll merge the patch for direct I/O support tomorrow,\nand then the iSCSI patch can go in on Thursday. OK? :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 13:58:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 7:58 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Feb 19, 2019 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > My point is that for iSCSI to be performant we'd need *all* the\n> > infrastructure we also need for direct IO *and* a *lot* more. And that\n> > it seems insane to invest very substantial resources into developing our\n> > own iSCSI client when we don't even have DIO support. And DIO support\n> > would allow us to address the error reporting issues, while also\n> > drastically improving performance in a lot of situations. And we'd not\n> > have to essentially develop our own filesystem etc.\n>\n> OK, got it. So, I'll merge the patch for direct I/O support tomorrow,\n> and then the iSCSI patch can go in on Thursday. OK? :-)\n>\n\nC'mon Robert.\n\nSurely you know that such patches should be landed on *Fridays*, not\nThursdays.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 19 Feb 2019 20:04:54 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 2:05 PM Magnus Hagander <magnus@hagander.net> wrote:\n> C'mon Robert.\n>\n> Surely you know that such patches should be landed on *Fridays*, not Thursdays.\n\nOh, right. And preferably via airplane wifi from someplace over the\nAtlantic ocean, right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Feb 2019 14:11:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 8:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> A theoretical question I thought of is whether there are any\n> interleavings of operations that allow a checkpoint to complete\n> bogusly, while a concurrent close() in a regular backend fails with\n> EIO for data that was included in the checkpoint, and panics. I\n> *suspect* the answer is that every interleaving is safe for 4.16+\n> kernels that report IO errors to every descriptor. In older kernels I\n> wonder if there could be a schedule where an arbitrary backend eats\n> the error while closing, then the checkpointer calls fsync()\n> successfully and then logs a checkpoint, and then then the arbitrary\n> backend panics (too late). I suspect EIO on close() doesn't happen in\n> practice on regular local filesystems, which is why I mention it in\n> the context of NFS, but I could be wrong about that.\n\nUgh. It looks like Linux NFS doesn't even use the new errseq_t\nmachinery in 4.16+. So even if we had the fd-passing patch, I think\nthere may be a dangerous schedule like this:\n\nA: close() -> EIO, clears AS_EIO flag\nB: fsync() -> SUCCESS, log a checkpoint\nA: panic! (but it's too late, we already logged a checkpoint but\ndidn't flush all the dirty data the belonged to it)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Wed, 20 Feb 2019 11:08:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 5:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > 1. Figure out how to get the ALLOCATE command all the way through the\n> > stack from PostgreSQL to the remote NFS server, and know for sure that\n> > it really happened. On the Debian buster Linux 4.18 system I checked,\n> > fallocate() reports EOPNOTSUPP for fallocate(), and posix_fallocate()\n> > appears to succeed but it doesn't really do anything at all (though I\n> > understand that some versions sometimes write zeros to simulate\n> > allocation, which in this case would be equally useless as it doesn't\n> > reserve anything on an NFS server). We need the server and NFS client\n> > and libc to be of the right version and cooperate and tell us that\n> > they have really truly reserved space, but there isn't currently a way\n> > as far as I can tell. How can we achieve that, without writing our\n> > own NFS client?\n> >\n> > 2. Deal with the resulting performance suckage. Extending 8kb at a\n> > time with synchronous network round trips won't fly.\n>\n> I think I'd just go for fsync();pwrite();fsync(); as the extension\n> mechanism, iff we're detecting a tablespace is on NFS. The first fsync()\n> to make sure there's no previous errors that we could mistake for\n> ENOSPC, the pwrite to extend, the second fsync to make sure there's\n> actually space. Then we can detect ENOSPC properly. That possibly does\n> leave some errors where we could mistake ENOSPC as something more benign\n> than it is, but the cases seem pretty narrow, due to the previous\n> fsync() (maybe the other side could be thin provisioned and get an\n> ENOSPC there - but in that case we didn't actually loose any data. 
The\n> only dangerous scenario I can come up with is that the remote side is on\n> thinly provisioned CoW system, and a concurrent write to an earlier\n> block runs out of space - but seriously, good riddance to you).\n\nThis seems to make sense, and has the advantage that it uses\ninterfaces that exist right now. But it seems a bit like we'll have\nto wait for them to finish building out the errseq_t support for NFS\nto avoid various races around the mapping's AS_EIO flag (A: fsync() ->\nEIO, B: fsync() -> SUCCESS, log checkpoint; A: panic), and then maybe\nwe'd have to get at least one of { fd-passing, direct IO, threads }\nworking on our side ...\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Wed, 20 Feb 2019 11:25:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some thoughts on NFS"
},
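Freund's fsync();pwrite();fsync() extension recipe can be sketched as follows (a minimal illustration assuming POSIX semantics; `extend_file` is an invented helper for this sketch, not PostgreSQL code). The first fsync() surfaces any earlier writeback error so it cannot be confused with ENOSPC; the second forces the newly written blocks out, so a genuine ENOSPC is reported before the extension is trusted:

```python
import os

def extend_file(fd: int, new_size: int, block: int = 8192) -> None:
    """Extend fd to new_size bytes using the fsync(); pwrite(); fsync() recipe."""
    os.fsync(fd)  # report any *previous* writeback error now, separately
    pos = os.fstat(fd).st_size
    while pos < new_size:
        n = min(block, new_size - pos)
        os.pwrite(fd, b"\0" * n, pos)  # may already raise OSError(ENOSPC)
        pos += n
    os.fsync(fd)  # force the new blocks out: ENOSPC (if any) surfaces here
```

On NFS, that second fsync() is what turns "posix_fallocate() appears to succeed" into an actual check that the server reserved space; a demo on a local filesystem can only show the happy path, since provoking ENOSPC requires a full volume.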
{
"msg_contents": "Hi,\n\nOn 2019-02-20 11:25:22 +1300, Thomas Munro wrote:\n> This seems to make sense, and has the advantage that it uses\n> interfaces that exist right now. But it seems a bit like we'll have\n> to wait for them to finish building out the errseq_t support for NFS\n> to avoid various races around the mapping's AS_EIO flag (A: fsync() ->\n> EIO, B: fsync() -> SUCCESS, log checkpoint; A: panic), and then maybe\n> we'd have to get at least one of { fd-passing, direct IO, threads }\n> working on our side ...\n\nI think we could \"just\" make use of DIO for relation extensions when\ndetecting NFS. Given that we just about never actually read the result\nof the file extension write, just converting that write to DIO shouldn't\nhave that bad an overall impact - of course it'll cause slowdowns, but\nonly while extending files. And that ought to handle ENOSPC correctly,\nwhile leaving the EIO handling separate?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 14:29:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts on NFS"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 7:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Feb 19, 2019 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > My point is that for iSCSC to be performant we'd need *all* the\n> > infrastructure we also need for direct IO *and* a *lot* more. And that\n> > it seems insane to invest very substantial resources into developing our\n> > own iSCSI client when we don't even have DIO support. And DIO support\n> > would allow us to address the error reporting issues, while also\n> > drastically improving performance in a lot of situations. And we'd not\n> > have to essentially develop our own filesystem etc.\n>\n> OK, got it. So, I'll merge the patch for direct I/O support tomorrow,\n> and then the iSCSI patch can go in on Thursday. OK? :-)\n\nNot something I paid a lot of attention to as an application\ndeveloper, but in a past life I have seen a lot of mission critical\nDB2 and Oracle systems running on ext4 or XFS over (kernel) iSCSI\nplugged into big monster filers, and also I think perhaps also cases\nof NFS, but those systems use DIO by default (and the latter has its\nown NFS client IIUC). So I suspect if you can just get DIO merged today\nwe can probably skip the userland iSCSI and call it done. :-P\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Wed, 20 Feb 2019 11:57:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Some thoughts on NFS"
}
] |
[
{
"msg_contents": "Hello\n\nIn order to increase our security, we have started deploying row-level \nsecurity in order to add another safety net if any issue was to happen in our \napplications.\nAfter a careful monitoring of our databases, we found out that a lot of \nqueries started to go south, going extremely slow.\nThe root of these slowdowns is that a lot of the PostgreSQL functions are not \nmarked as leakproof, especially the ones used for operators.\nIn current git master, the following query returns 258 functions that are used \nby operators returning booleans and not marked leakproof:\n\nSELECT proname FROM pg_proc \n WHERE exists(select 1 from pg_operator where oprcode::name = proname) \n AND NOT(proleakproof) \n AND prorettype = 16;\n\n\nAmong these, if you have for instance a table like:\ncreate table demo_enum (\n username text,\n flag my_enum\n);\n\nWith an index on (username, flag), the index will only be used on username \nbecause flag is an enum and the enum_eq function is not marked leakproof.\n\nFor simple functions like enum_eq/enum_ne, marking them leakproof is an \nobvious fix (patch attached to this email, including also textin/textout). And \nif we had a 'RLS-enabled' context on functions, a way to make a lot of built-\nin functions leakproof would simply be to prevent them from logging errors \ncontaining values.\n\nFor others, like arraycontains, it's much trickier : arraycontains can be \nleakproof only if the comparison function of the inner type is leakproof too. \nThis raises some interesting issues, like arraycontains being marked as strict \nparallel safe while there is nothing making it mandatory for the inner type.\nIn order to optimize queries on a table like:\ncreate table demo_array (username text, my_array int[]);\none would need to mark arraycontains leakproof, thus implying that any \ncomparison operator must be leakproof too? 
Another possibility, not that \ncomplicated, would be to create specific-types entries in pg_proc for each \ntype that has a leakproof comparison operator. Or the complicated path would \nbe to have a 'dynamic' leakproof for functions like arraycontains that depend \non the inner type, but since this is determined at planning, it seems very \ncomplicated to implement.\n\nA third issue we noticed is the usage of functional indexes. If you create an \nindex on, for instance, (username, leaky_function(my_field)), and then search \non leaky_functon(my_field) = 42, the functional index can be used only if \nleaky_function is marked leakproof, even if it is never going to be executed \non invalid rows thanks to the index. After a quick look at the source code, it \nalso seems complicated to implement since the decision to reject potential \ndangerous calls to leaky_function is done really early in the process, before \nthe optimizer starts.\n\n\nI am willing to spend some time on these issues, but I have no specific \nknowledge of the planner and optimizer, and I fear proper fixes would require \nmuch digging there. If you think it would be appropriate for functions like \narraycontains or range_contains to require the 'inner' comparison function to \nbe leakproof, or just keep looking at the other functions in pg_proc and \nleakproof the ones that can be, I would be happy to write the corresponding \npatches.\n\n\nBest regards",
"msg_date": "Tue, 19 Feb 2019 19:37:22 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Row Level Security =?UTF-8?B?4oiS?= leakproof-ness and performance\n implications"
},
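The index degradation Ducroquet describes comes from the planner refusing to evaluate quals containing non-leakproof functions ahead of the security quals. A toy model of that ordering rule (purely illustrative Python; the real logic lives in the planner's qual handling, and `order_quals`/`Func` are invented names for this sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Func:
    name: str
    leakproof: bool

def order_quals(user_quals, policy_quals):
    """Toy qual ordering: a user-supplied qual may be evaluated before the
    RLS policy quals (e.g. as an index condition) only if every function it
    calls is marked leakproof; anything else is held back behind the
    security barrier."""
    pushable = [(label, funcs) for label, funcs in user_quals
                if all(f.leakproof for f in funcs)]
    held_back = [q for q in user_quals if q not in pushable]
    return pushable + policy_quals + held_back
```

In this model, with enum_eq unmarked the flag qual lands after the policy qual, so an index on (username, flag) can only be used for the username part, matching the behavior described above.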
{
"msg_contents": "Pierre Ducroquet wrote:\n> In order to increase our security, we have started deploying row-level \n> security in order to add another safety net if any issue was to happen in our \n> applications.\n> After a careful monitoring of our databases, we found out that a lot of \n> queries started to go south, going extremely slow.\n> The root of these slowdowns is that a lot of the PostgreSQL functions are not \n> marked as leakproof, especially the ones used for operators.\n> In current git master, the following query returns 258 functions that are used \n> by operators returning booleans and not marked leakproof:\n> \n> SELECT proname FROM pg_proc \n> WHERE exists(select 1 from pg_operator where oprcode::name = proname) \n> AND NOT(proleakproof) \n> AND prorettype = 16;\n> \n> \n> Among these, if you have for instance a table like:\n> create table demo_enum (\n> username text,\n> flag my_enum\n> );\n> \n> With an index on (username, flag), the index will only be used on username \n> because flag is an enum and the enum_eq function is not marked leakproof.\n> \n> For simple functions like enum_eq/enum_ne, marking them leakproof is an \n> obvious fix (patch attached to this email, including also textin/textout). And\n\nThe enum_eq part looks totally safe, but the text functions allocate memory,\nso you could create memory pressure, wait for error messages that tell you\nthe size of the allocation that failed and this way learn about the data.\n\nIs this a paranoid idea?\n\n> if we had a 'RLS-enabled' context on functions, a way to make a lot of built-\n> in functions leakproof would simply be to prevent them from logging errors \n> containing values.\n> \n> For others, like arraycontains, it's much trickier :[...]\n\nI feel a little out of my depth here, so I won't comment.\n\n> A third issue we noticed is the usage of functional indexes. 
If you create an \n> index on, for instance, (username, leaky_function(my_field)), and then search \n> on leaky_functon(my_field) = 42, the functional index can be used only if \n> leaky_function is marked leakproof, even if it is never going to be executed \n> on invalid rows thanks to the index. After a quick look at the source code, it \n> also seems complicated to implement since the decision to reject potential \n> dangerous calls to leaky_function is done really early in the process, before \n> the optimizer starts.\n\nIf there is a bitmap index scan, the condition will be rechecked, so the\nfunction will be executed.\n\n> I am willing to spend some time on these issues, but I have no specific \n> knowledge of the planner and optimizer, and I fear proper fixes would require \n> much digging there. If you think it would be appropriate for functions like \n> arraycontains or range_contains to require the 'inner' comparison function to \n> be leakproof, or just keep looking at the other functions in pg_proc and \n> leakproof the ones that can be, I would be happy to write the corresponding \n> patches.\n\nThanks, and I think that every function that can safely be marked leakproof\nis a progress!\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 20 Feb 2019 00:43:50 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Row Level Security =?UTF-8?Q?=E2=88=92?= leakproof-ness and\n performance implications"
},
{
"msg_contents": "On Wednesday, February 20, 2019 12:43:50 AM CET Laurenz Albe wrote:\n> Pierre Ducroquet wrote:\n> > In order to increase our security, we have started deploying row-level\n> > security in order to add another safety net if any issue was to happen in\n> > our applications.\n> > After a careful monitoring of our databases, we found out that a lot of\n> > queries started to go south, going extremely slow.\n> > The root of these slowdowns is that a lot of the PostgreSQL functions are\n> > not marked as leakproof, especially the ones used for operators.\n> > In current git master, the following query returns 258 functions that are\n> > used by operators returning booleans and not marked leakproof:\n> > \n> > SELECT proname FROM pg_proc\n> > \n> > WHERE exists(select 1 from pg_operator where oprcode::name =\n> > proname)\n> > AND NOT(proleakproof)\n> > AND prorettype = 16;\n> > \n> > Among these, if you have for instance a table like:\n> > create table demo_enum (\n> > \n> > username text,\n> > flag my_enum\n> > \n> > );\n> > \n> > With an index on (username, flag), the index will only be used on username\n> > because flag is an enum and the enum_eq function is not marked leakproof.\n> > \n> > For simple functions like enum_eq/enum_ne, marking them leakproof is an\n> > obvious fix (patch attached to this email, including also textin/textout).\n> > And\n> The enum_eq part looks totally safe, but the text functions allocate memory,\n> so you could create memory pressure, wait for error messages that tell you\n> the size of the allocation that failed and this way learn about the data.\n> \n> Is this a paranoid idea?\n\nThis is not paranoid, it depends on your threat model.\nIn the model we implemented, the biggest threat we consider from an user point \nof view is IDOR (Insecure Direct Object Reference): a developer forgetting to \ncheck that its input is sane and matches the other parts of the URL or the \ncurrent session user. 
Exploiting the leak you mentioned in such a situation is \nalmost impossible, and to be honest the attacker has much easier targets if he \ngets to that level…\nMaybe leakproof is too simple? Should we be able to specify a 'paranoid-level' \nto allow some leaks depending on our threat model?\n\n> > if we had a 'RLS-enabled' context on functions, a way to make a lot of\n> > built- in functions leakproof would simply be to prevent them from\n> > logging errors containing values.\n> > \n> > For others, like arraycontains, it's much trickier :[...]\n> \n> I feel a little out of my depth here, so I won't comment.\n> \n> > A third issue we noticed is the usage of functional indexes. If you create\n> > an index on, for instance, (username, leaky_function(my_field)), and then\n> > search on leaky_functon(my_field) = 42, the functional index can be used\n> > only if leaky_function is marked leakproof, even if it is never going to\n> > be executed on invalid rows thanks to the index. After a quick look at\n> > the source code, it also seems complicated to implement since the\n> > decision to reject potential dangerous calls to leaky_function is done\n> > really early in the process, before the optimizer starts.\n> \n> If there is a bitmap index scan, the condition will be rechecked, so the\n> function will be executed.\n\nThat's exactly my point: using a bitmap index scan would be dangerous and thus \nshould not be allowed, but an index scan works fine. Or the recheck on bitmap \nscan must first check the RLS condition before doing its check.\n\n> > I am willing to spend some time on these issues, but I have no specific\n> > knowledge of the planner and optimizer, and I fear proper fixes would\n> > require much digging there. 
If you think it would be appropriate for\n> > functions like arraycontains or range_contains to require the 'inner'\n> > comparison function to be leakproof, or just keep looking at the other\n> > functions in pg_proc and leakproof the ones that can be, I would be happy\n> > to write the corresponding patches.\n> \n> Thanks, and I think that every function that can safely be marked leakproof\n> is a progress!\n\nThank you for your comments\n\n\n\n\n",
"msg_date": "Wed, 20 Feb 2019 08:14:11 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: Row Level Security =?UTF-8?B?4oiS?= leakproof-ness and\n performance implications"
},
{
"msg_contents": "On 2/19/19 6:43 PM, Laurenz Albe wrote:\n> Pierre Ducroquet wrote:\n>> if we had a 'RLS-enabled' context on functions, a way to make a lot of built-\n>> in functions leakproof would simply be to prevent them from logging errors \n>> containing values.\n>> \n>> For others, like arraycontains, it's much trickier :[...]\n> \n> I feel a little out of my depth here, so I won't comment.\n\nI had more-or-less the same idea, and even did a PoC patch (not\npreventing the log entries but rather redacting the variable data), but\nafter discussing offlist with some other hackers I got the impression\nthat it would be a really hard sell.\n\nIt does seem to me though that such a feature would be pretty useful.\nSomething like use a GUC to turn it on, and while on log messages get\nredacted. If you really needed to see the details, you could either\nduplicate your issue on another installation with the redaction turned\noff, or maybe turn it off on production in a controlled manner just long\nenough to capture the full error message.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Feb 2019 09:46:35 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> For simple functions like enum_eq/enum_ne, marking them leakproof is an \n> obvious fix (patch attached to this email, including also textin/textout).\n\nThis is not nearly as \"obvious\" as you think. See prior discussions,\nnotably\nhttps://www.postgresql.org/message-id/flat/31042.1546194242%40sss.pgh.pa.us\n\nUp to now we've taken a very strict definition of what leakproofness\nmeans; as Noah stated, if a function can throw errors for some inputs,\nit's not considered leakproof, even if those inputs should never be\nencountered in practice. Most of the things we've been willing to\nmark leakproof are straight-line code that demonstrably can't throw\nany errors at all.\n\nThe previous thread seemed to have consensus that the hazards in\ntext_cmp and friends were narrow enough that nobody had a practical\nproblem with marking them leakproof --- but we couldn't define an\nobjective policy statement that would allow making such decisions,\nso nothing's been changed as yet. I think it is important to have\nan articulable policy about this, not just a seat-of-the-pants\nconclusion about the risk level in a particular function.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 11:24:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Row Level Security =?UTF-8?B?4oiS?= leakproof-ness and\n performance implications"
},
{
"msg_contents": "On Wednesday, February 20, 2019 5:24:17 PM CET Tom Lane wrote:\n> Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> > For simple functions like enum_eq/enum_ne, marking them leakproof is an\n> > obvious fix (patch attached to this email, including also textin/textout).\n> \n> This is not nearly as \"obvious\" as you think. See prior discussions,\n> notably\n> https://www.postgresql.org/message-id/flat/31042.1546194242%40sss.pgh.pa.us\n> \n> Up to now we've taken a very strict definition of what leakproofness\n> means; as Noah stated, if a function can throw errors for some inputs,\n> it's not considered leakproof, even if those inputs should never be\n> encountered in practice. Most of the things we've been willing to\n> mark leakproof are straight-line code that demonstrably can't throw\n> any errors at all.\n> \n> The previous thread seemed to have consensus that the hazards in\n> text_cmp and friends were narrow enough that nobody had a practical\n> problem with marking them leakproof --- but we couldn't define an\n> objective policy statement that would allow making such decisions,\n> so nothing's been changed as yet. I think it is important to have\n> an articulable policy about this, not just a seat-of-the-pants\n> conclusion about the risk level in a particular function.\n> \n> \t\t\tregards, tom lane\n\n\nI undestand these decisions, but it makes RLS quite fragile, with numerous un-\ndocumented side-effects. In order to save difficulties from future users, I \nwrote this patch to the documentation, listing the biggest restrictions I hit \nwith RLS so far.\n\nRegards\n\n Pierre",
"msg_date": "Thu, 21 Feb 2019 15:56:07 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: Row Level Security =?UTF-8?B?4oiS?= leakproof-ness and\n performance implications"
},
{
"msg_contents": "On 2/20/19 11:24 AM, Tom Lane wrote:\n> Pierre Ducroquet <p.psql@pinaraf.info> writes:\n>> For simple functions like enum_eq/enum_ne, marking them leakproof is an \n>> obvious fix (patch attached to this email, including also textin/textout).\n> \n> This is not nearly as \"obvious\" as you think. See prior discussions,\n> notably\n> https://www.postgresql.org/message-id/flat/31042.1546194242%40sss.pgh.pa.us\n> \n> Up to now we've taken a very strict definition of what leakproofness\n> means; as Noah stated, if a function can throw errors for some inputs,\n> it's not considered leakproof, even if those inputs should never be\n> encountered in practice. Most of the things we've been willing to\n> mark leakproof are straight-line code that demonstrably can't throw\n> any errors at all.\n> \n> The previous thread seemed to have consensus that the hazards in\n> text_cmp and friends were narrow enough that nobody had a practical\n> problem with marking them leakproof --- but we couldn't define an\n> objective policy statement that would allow making such decisions,\n> so nothing's been changed as yet. I think it is important to have\n> an articulable policy about this, not just a seat-of-the-pants\n> conclusion about the risk level in a particular function.\n\nWhat if we provided an option to redact all client messages (leaving\nlogged messages as-is). Separately we could provide a GUC to force all\nfunctions to be resolved as leakproof. Depending on your requirements,\nhaving both options turned on could be perfectly acceptable.\n\nPatch for discussion attached.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 27 Feb 2019 18:03:19 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 6:03 PM Joe Conway <mail@joeconway.com> wrote:\n> Patch for discussion attached.\n\nSo... you're just going to replace ALL error messages of any kind with\n\"ERROR: missing error text\" when this option is enabled? That sounds\nunusable. I mean if I'm reading it right this would get not only\nmessages from SQL-callable functions but also things like \"deadlock\ndetected\" and \"could not read block %u in file %s\" and \"database is\nnot accepting commands to avoid wraparound data loss in database with\nOID %u\". You can't even shut it off conveniently, because the way\nyou've designed it it has to be PGC_POSTMASTER to avoid TOCTTOU\nvulnerabilities. Maybe I'm misreading the patch?\n\nI don't think it would be crazy to have a mode where we try to redact\nthe particular error messages that might leak information, but I think\nwe'd need to make it only those. A wild idea might be to let\nproleakproof take on three values: yes, no, and maybe. When 'maybe'\nfunctions are involved, we tell them whether or not the current query\ninvolves any security barriers, and if so they self-censor.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 09:12:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
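Haas's three-valued proleakproof idea could look roughly like this (an illustrative Python sketch of the proposal only; `MaybeLeakproofFunc` and `parse_int` are invented names, and nothing here is actual PostgreSQL API). A 'maybe' function still raises errors, but self-censors the value-bearing message when the query involves a security barrier:

```python
class MaybeLeakproofFunc:
    """Wraps a function whose proleakproof is 'yes', 'no', or 'maybe'.

    A 'maybe' function is told whether the current query involves a
    security barrier; if so it redacts the input value from its error."""

    def __init__(self, fn, proleakproof="maybe"):
        self.fn = fn
        self.proleakproof = proleakproof

    def __call__(self, *args, in_security_barrier=False):
        try:
            return self.fn(*args)
        except ValueError:
            if self.proleakproof == "maybe" and in_security_barrier:
                # Self-censor: same error class, but no argument values leak.
                raise ValueError("invalid input for function") from None
            raise


def parse_int(s):
    """Stand-in for an input function whose error message echoes the value."""
    if not s.lstrip("-").isdigit():
        raise ValueError(f'invalid input syntax for integer: "{s}"')
    return int(s)


int_in = MaybeLeakproofFunc(parse_int)
```

Unlike the blanket redaction objected to above, only value-bearing errors from 'maybe' functions are censored, and only in queries where a security barrier is in play.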
{
"msg_contents": "On 2/28/19 9:12 AM, Robert Haas wrote:\n> On Wed, Feb 27, 2019 at 6:03 PM Joe Conway <mail@joeconway.com> wrote:\n>> Patch for discussion attached.\n> \n> So... you're just going to replace ALL error messages of any kind with\n> \"ERROR: missing error text\" when this option is enabled? That sounds\n> unusable. I mean if I'm reading it right this would get not only\n> messages from SQL-callable functions but also things like \"deadlock\n> detected\" and \"could not read block %u in file %s\" and \"database is\n> not accepting commands to avoid wraparound data loss in database with\n> OID %u\". You can't even shut it off conveniently, because the way\n> you've designed it it has to be PGC_POSTMASTER to avoid TOCTTOU\n> vulnerabilities. Maybe I'm misreading the patch?\n\nYou have it correct.\n\nI completely disagree that is is unusable though. The way I envision\nthis is that you enable force_leakproof on your development machine\nwithout suppress_client_messages being turned on. Do your debugging there.\n\nOn production, both are turned on. You still get full unredacted\nmessages in your pg log. The client on a prod system does not need these\ndetails. If you *really* need to, you can restart to turn it on for a\nshort while on prod, but hopefully you have a non prod system where you\nreproduce issues for debugging anyway.\n\nI am not married to making this only changeable via restart though --\nthat's why I posted the patch for discussion. Perhaps a superuserset\nwould be better so debugging could be done on one session only on the\nprod machine.\n\n> I don't think it would be crazy to have a mode where we try to redact\n> the particular error messages that might leak information, but I think\n> we'd need to make it only those. A wild idea might be to let\n> proleakproof take on three values: yes, no, and maybe. 
When 'maybe'\n> functions are involved, we tell them whether or not the current query\n> involves any security barriers, and if so they self-censor.\n\nAgain, I disagree. See above -- you have all you need in the server logs.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 28 Feb 2019 09:45:22 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Thu, 28 Feb 2019 at 14:13, Robert Haas <robertmhaas@gmail.com> wrote:\n> A wild idea might be to let\n> proleakproof take on three values: yes, no, and maybe. When 'maybe'\n> functions are involved, we tell them whether or not the current query\n> involves any security barriers, and if so they self-censor.\n>\n\nDoes self-censoring mean that they might still throw an error for some\ninputs, but that error won't reveal any information about the input\nvalues? That's not entirely consistent with my understanding of the\ndefinition of leakproof, but maybe there are situations where that\namount of information leakage would be OK. So maybe we could have\n\"strictly leakproof\" functions that never throw errors and \"weakly\nleakproof\" functions (needs a better name) that can throw errors, as\nlong as those errors don't include data values. Then we could allow\nstrict and weak security barriers on a per-table basis, depending on\nhow sensitive the data is in each table (I'm not a fan of using GUCs\nto control this).\n\nRegards,\nDean\n\n",
"msg_date": "Thu, 28 Feb 2019 14:52:17 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On 2/28/19 9:52 AM, Dean Rasheed wrote:\n\n> Does self-censoring mean that they might still throw an error for some\n> inputs, but that error won't reveal any information about the input\n> values? That's not entirely consistent with my understanding of the\n> definition of leakproof\n\nThat's the question I was also preparing to ask ... I understood the\ndefinition to exclude even the possibility that some inputs could\nproduce errors.\n\n> amount of information leakage would be OK. So maybe we could have\n> \"strictly leakproof\" functions that never throw errors and \"weakly\n> leakproof\" functions (needs a better name) that can throw errors, as\n> long as those errors don't include data values. Then we could allow\n> strict and weak security barriers on a per-table basis\n\nInteresting idea. I wonder if the set { strictly, weakly } would be\nbetter viewed as a user-definable set (a site might define \"leakproof\nwrt HIPAA\", \"leakproof wrt FERPA\", etc.), and then configure which\ncombination of leakproof properties must apply where.\n\nOTOH, I'd have to wonder about the feasibility of auditing code for\nleakproofness at that kind of granularity.\n\nRegards,\n-Chap\n\n",
"msg_date": "Thu, 28 Feb 2019 10:04:30 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\nHi, Robert, it has been a while :)\n\n>\n> So... you're just going to replace ALL error messages of any kind with\n> \"ERROR: missing error text\" when this option is enabled? That sounds\n> unusable. I mean if I'm reading it right this would get not only\n> messages from SQL-callable functions but also things like \"deadlock\n> detected\" and \"could not read block %u in file %s\" and \"database is\n> not accepting commands to avoid wraparound data loss in database with\n> OID %u\". You can't even shut it off conveniently, because the way\n\nThis makes complete sense to me. The client side of a client/server\nprotocol doesn't have any way to fix 'could not read block %u in file\n%s', the client doesn't need that kind of detailed information about a\nserver, and in fact that information could be security sensitive.\nImagine connecting to a webserver with a web browser and getting a\nsimilar message.\n\nWhen something critical happens the servers logs can be viewed and the\nerror can be addressed there, on the server.\n\n",
"msg_date": "Thu, 28 Feb 2019 10:04:30 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "Joshua Brindle <joshua.brindle@crunchydata.com> writes:\n> On Thu, Feb 28, 2019 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> So... you're just going to replace ALL error messages of any kind with\n>> \"ERROR: missing error text\" when this option is enabled? That sounds\n>> unusable. I mean if I'm reading it right this would get not only\n>> messages from SQL-callable functions but also things like \"deadlock\n>> detected\" and \"could not read block %u in file %s\" and \"database is\n>> not accepting commands to avoid wraparound data loss in database with\n>> OID %u\". You can't even shut it off conveniently, because the way\n\n> This makes complete sense to me. The client side of a client/server\n> protocol doesn't have any way to fix 'could not read block %u in file\n> %s', the client doesn't need that kind of detailed information about a\n> server, and in fact that information could be security sensitive.\n\nI agree with Robert that this idea is a nonstarter. Not only is it\na complete disaster from a usability standpoint, but *it does not\nfix the problem*. The mere fact that an error occurred, or didn't,\nis already an information leak. Sure, you can only extract one bit\nper query that way, but a slow leak is still a leak. See the Spectre\nvulnerability for a pretty exact parallel.\n\nThe direction I think we're going to need to go in is to weaken our\nstandards for what we'll consider a leakproof function, and/or try\nharder to make common WHERE-clause operators leakproof. The thread\nover at\nhttps://www.postgresql.org/message-id/flat/7DF52167-4379-4A1E-A957-90D774EBDF21%40winand.at\nraises the question of why we don't expect that *all* indexable\noperators are leakproof, at least with respect to the index column\ncontents. 
(Failing to distinguish which of the inputs can be\nleaked seems like a pretty fundamental mistake in hindsight.\nFor instance, some opclasses can directly index regex operators,\nand one would not wish to give up the ability for a regex operator\nto complain about an invalid pattern. But it could be leakproof\nwith respect to the data side.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 10:49:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n =?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joshua Brindle <joshua.brindle@crunchydata.com> writes:\n> > On Thu, Feb 28, 2019 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> So... you're just going to replace ALL error messages of any kind with\n> >> \"ERROR: missing error text\" when this option is enabled? That sounds\n> >> unusable. I mean if I'm reading it right this would get not only\n> >> messages from SQL-callable functions but also things like \"deadlock\n> >> detected\" and \"could not read block %u in file %s\" and \"database is\n> >> not accepting commands to avoid wraparound data loss in database with\n> >> OID %u\". You can't even shut it off conveniently, because the way\n>\n> > This makes complete sense to me. The client side of a client/server\n> > protocol doesn't have any way to fix 'could not read block %u in file\n> > %s', the client doesn't need that kind of detailed information about a\n> > server, and in fact that information could be security sensitive.\n>\n> I agree with Robert that this idea is a nonstarter. Not only is it\n> a complete disaster from a usability standpoint, but *it does not\n> fix the problem*. The mere fact that an error occurred, or didn't,\n> is already an information leak. Sure, you can only extract one bit\n> per query that way, but a slow leak is still a leak. See the Spectre\n> vulnerability for a pretty exact parallel.\n\nHow is leakproof defined WRT Postgres? 
Generally speaking a 1 bit\nerror path would be considered a covert channel on most systems and\nis relatively slow even compared to e.g., timing channels.\n\nRedacting error information, outside of the global leakproof setting,\nseems useful to prevent data leakage to a client on another system,\nsuch as Robert's example above \"could not read block %u in file %s\".\n\nAlthough, and Joe may hate me for saying this, I think only the\nnon-constants should be redacted to keep some level of usability for\nregular SQL errors. Maybe system errors like the above should be\nremoved from client messages in general.\n\n> The direction I think we're going to need to go in is to weaken our\n> standards for what we'll consider a leakproof function, and/or try\n> harder to make common WHERE-clause operators leakproof. The thread\n> over at\n> https://www.postgresql.org/message-id/flat/7DF52167-4379-4A1E-A957-90D774EBDF21%40winand.at\n> raises the question of why we don't expect that *all* indexable\n> operators are leakproof, at least with respect to the index column\n> contents. (Failing to distinguish which of the inputs can be\n> leaked seems like a pretty fundamental mistake in hindsight.\n> For instance, some opclasses can directly index regex operators,\n> and one would not wish to give up the ability for a regex operator\n> to complain about an invalid pattern. But it could be leakproof\n> with respect to the data side.)\n>\n> regards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 11:03:28 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_pe?=\n\t=?UTF-8?Q?rformance_implications?="
},
{
"msg_contents": "On 2/28/19 11:03 AM, Joshua Brindle wrote:\n\n> How is leakproof defined WRT Postgres? Generally speaking a 1 bit\n\nFrom the CREATE FUNCTION reference page:\n\nLEAKPROOF indicates that the function has no side effects. It reveals no\ninformation about its arguments other than by its return value. For\nexample, a function which *throws an error message for some argument\nvalues but not others*, or which includes the argument values in any\nerror message, is not leakproof.\n\n(*emphasis* mine.)\n\nRegards,\n-Chap\n\n",
"msg_date": "Thu, 28 Feb 2019 11:13:41 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On 2/28/19 11:03 AM, Joshua Brindle wrote:\n> On Thu, Feb 28, 2019 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Joshua Brindle <joshua.brindle@crunchydata.com> writes:\n>> > On Thu, Feb 28, 2019 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> >> So... you're just going to replace ALL error messages of any kind with\n>> >> \"ERROR: missing error text\" when this option is enabled? That sounds\n>> >> unusable. I mean if I'm reading it right this would get not only\n>> >> messages from SQL-callable functions but also things like \"deadlock\n>> >> detected\" and \"could not read block %u in file %s\" and \"database is\n>> >> not accepting commands to avoid wraparound data loss in database with\n>> >> OID %u\". You can't even shut it off conveniently, because the way\n>>\n>> > This makes complete sense to me. The client side of a client/server\n>> > protocol doesn't have any way to fix 'could not read block %u in file\n>> > %s', the client doesn't need that kind of detailed information about a\n>> > server, and in fact that information could be security sensitive.\n>>\n>> I agree with Robert that this idea is a nonstarter. Not only is it\n>> a complete disaster from a usability standpoint, but *it does not\n>> fix the problem*. The mere fact that an error occurred, or didn't,\n>> is already an information leak. Sure, you can only extract one bit\n>> per query that way, but a slow leak is still a leak. See the Spectre\n>> vulnerability for a pretty exact parallel.\n> \n> How is leakproof defined WRT Postgres? 
Generally speaking a 1 bit\n> error path would be considered a covert channel on most systems and\n> are relatively slow even compared to e.g., timing channels.\n\nYes, I am pretty sure there are plenty of very security conscious\nenvironments that would be willing to make this tradeoff in order to get\nreliable RLS performance.\n\n> Redacting error information, outside of the global leakproof setting,\n> seems useful to prevent data leakage to a client on another system,\n> such as Robert's example above \"could not read block %u in file %s\".\n\nTrue\n\n> Although, and Joe may hate me for saying this, I think only the\n> non-constants should be redacted to keep some level of usability for\n> regular SQL errors. Maybe system errors like the above should be\n> removed from client messages in general.\n\nI started down this path and it looked fragile. I guess if there is\ngenerally enough support to think this might be viable I could open up\nthat door again, but I don't want to waste time if the approach is\nreally a non-starter as stated upthread :-/.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 28 Feb 2019 11:14:50 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 11:14 AM Joe Conway <mail@joeconway.com> wrote:\n>\n> > Although, and Joe may hate me for saying this, I think only the\n> > non-constants should be redacted to keep some level of usability for\n> > regular SQL errors. Maybe system errors like the above should be\n> > removed from client messages in general.\n>\n> I started down this path and it looked fragile. I guess if there is\n> generally enough support to think this might be viable I could open up\n> that door again, but I don't want to waste time if the approach is\n> really a non-starter as stated upthread :-/.\n>\n\nThe only non-starter for Tom was weakening leakproof, right? Can we\nkeep the suppression, and work on strengthening leakproof as a\nseparate activity?\n\n",
"msg_date": "Thu, 28 Feb 2019 11:24:29 -0500",
"msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 11:14 AM Joe Conway <mail@joeconway.com> wrote:\n> > Although, and Joe may hate me for saying this, I think only the\n> > non-constants should be redacted to keep some level of usability for\n> > regular SQL errors. Maybe system errors like the above should be\n> > removed from client messages in general.\n>\n> I started down this path and it looked fragile. I guess if there is\n> generally enough support to think this might be viable I could open up\n> that door again, but I don't want to waste time if the approach is\n> really a non-starter as stated upthread :-/.\n\nHmm. It seems to me that if there's a function that sometimes throws\nan error and other times does not, and if that behavior is dependent\non the input, then even redacting the error message down to 'ERROR:\nerror' does not remove the leak. So it seems to me that regardless of\nwhat one thinks about the proposal from a usability perspective, it's\nprobably not correct from a security standpoint. Information that\ncouldn't be leaked until present rules would leak with this change,\nwhen the new GUCs were turned on.\n\nAm I wrong?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 11:37:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On 2/28/19 11:37 AM, Robert Haas wrote:\n> On Thu, Feb 28, 2019 at 11:14 AM Joe Conway <mail@joeconway.com> wrote:\n>> > Although, and Joe may hate me for saying this, I think only the\n>> > non-constants should be redacted to keep some level of usability for\n>> > regular SQL errors. Maybe system errors like the above should be\n>> > removed from client messages in general.\n>>\n>> I started down this path and it looked fragile. I guess if there is\n>> generally enough support to think this might be viable I could open up\n>> that door again, but I don't want to waste time if the approach is\n>> really a non-starter as stated upthread :-/.\n> \n> Hmm. It seems to me that if there's a function that sometimes throws\n> an error and other times does not, and if that behavior is dependent\n> on the input, then even redacting the error message down to 'ERROR:\n> error' does not remove the leak. So it seems to me that regardless of\n> what one thinks about the proposal from a usability perspective, it's\n> probably not correct from a security standpoint. Information that\n> couldn't be leaked until present rules would leak with this change,\n> when the new GUCs were turned on.\n> \n> Am I wrong?\n\n\nNo, and Tom stated as much too, but life is all about tradeoffs. Some\npeople will find this an acceptable compromise. For those that don't\nthey don't have to use it. IMHO we tend toward too much nannyism too often.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 28 Feb 2019 11:44:55 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 11:44 AM Joe Conway <mail@joeconway.com> wrote:\n> No, and Tom stated as much too, but life is all about tradeoffs. Some\n> people will find this an acceptable compromise. For those that don't\n> they don't have to use it. IMHO we tend toward too much nannyism too often.\n\nWell, I agree with that, too.\n\nHmm. I don't think there's anything preventing you from implementing\nthis in \"userspace,\" is there? A logging hook could suppress all\nerror message text, and you could just mark all functions leakproof\nafter that, and you'd have this exact behavior in an existing release\nwith no core code changes, I think.\n\nIf you do that, or just stick this patch into your own distro, I would\nbe interested to hear some experiences from customers (and those who\nsupport them) after some time had gone by. I find it hard to imagine\ndelivering customer support in an environment configured this way, but\nsometimes my imagination is limited.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 11:50:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On 2/28/19 11:50 AM, Robert Haas wrote:\n> On Thu, Feb 28, 2019 at 11:44 AM Joe Conway <mail@joeconway.com> wrote:\n>> No, and Tom stated as much too, but life is all about tradeoffs. Some\n>> people will find this an acceptable compromise. For those that don't\n>> they don't have to use it. IMHO we tend toward too much nannyism too often.\n> \n> Well, I agree with that, too.\n> \n> Hmm. I don't think there's anything preventing you from implementing\n> this in \"userspace,\" is there? A logging hook could suppress all\n> error message text, and you could just mark all functions leakproof\n> after that, and you'd have this exact behavior in an existing release\n> with no core code changes, I think.\n\nI think that would affect the server logs too, no? Worth thinking about\nthough...\n\nAlso manually marking all functions leakproof is far less convenient\nthan turning off the check as this patch effectively allows. You would\nwant to keep track of the initial condition and be able to restore it if\nneeded. Doable but much uglier. Perhaps we could tolerate a hook that\nwould allow an extension to do this though?\n\n> If you do that, or just stick this patch into your own distro, I would\n> be interested to hear some experiences from customers (and those who\n> support them) after some time had gone by. I find it hard to imagine\n> delivering customer support in an environment configured this way, but\n> sometimes my imagination is limited.\n\nAgain, remember that the actual messages are available in the server\nlogs. The presumption is that the server logs are kept secure, and it is\nok to leak information into them. How the customer does or does not\ndecide to pass some of that information on to a support group becomes a\nproblem to deal with on a case by case basis.\n\nAlso, as mentioned up-thread, in many cases there is or should be a\nnon-production instance available to use for reproducing problems to\ndebug them. 
Presumably the data on such a system is faked or has already\nbeen cleaned up for a wider audience.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Thu, 28 Feb 2019 12:05:25 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 12:05 PM Joe Conway <mail@joeconway.com> wrote:\n> I think that would affect the server logs too, no? Worth thinking about\n> though...\n\nYeah, I suppose so, although there might be a way to work around that.\n\n> Also manually marking all functions leakproof is far less convenient\n> than turning off the check as this patch effectively allows. You would\n> want to keep track of the initial condition and be able to restore it if\n> needed. Doable but much uglier. Perhaps we could tolerate a hook that\n> would allow an extension to do this though?\n\nYeah, possibly. I guess we'd have to see how ugly that looks, but....\n\n> Again, remember that the actual messages are available in the server\n> logs. The presumption is that the server logs are kept secure, and it is\n> ok to leak information into them. How the customer does or does not\n> decide to pass some of that information on to a support group becomes a\n> problem to deal with on a case by case basis.\n>\n> Also, as mentioned up-thread, in many cases there is or should be a\n> non-production instance available to use for reproducing problems to\n> debug them. Presumably the data on such a system is faked or has already\n> been cleaned up for a wider audience.\n\nMmmph. If your customers always have a non-production instance where\nproblems from production can be easily reproduced, your customers are\nnot much like our customers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 28 Feb 2019 12:28:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Row_Level_Security_=E2=88=92_leakproof=2Dness_and_perfor?=\n\t=?UTF-8?Q?mance_implications?="
},
{
"msg_contents": "On 2/28/19 12:28 PM, Robert Haas wrote:\n> Mmmph. If your customers always have a non-production instance where\n> problems from production can be easily reproduced, your customers are\n> not much like our customers.\n\nWell I certainly did not mean to imply that this is always the case ;-)\n\nBut I think it is fair to tell customers that have these tradeoffs in\nfront of them that it would be even more wise in the case they decided\nto use this capability.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n",
"msg_date": "Thu, 28 Feb 2019 12:35:53 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "On 2019-02-21 15:56, Pierre Ducroquet wrote:\n> I understand these decisions, but it makes RLS quite fragile, with numerous\n> undocumented side-effects. In order to save difficulties from future users, I \n> wrote this patch to the documentation, listing the biggest restrictions I hit \n> with RLS so far.\n\nThis appears to be the patch of record for this commit fest entry.\n\nI agree that it would be useful to document and discuss better which\nbuilt-in operators are leak-proof and which are not. But I don't think\nthe CREATE POLICY reference page is the place to do it. Note that the\nleak-proofness mechanism was originally introduced for security-barrier\nviews (an early form of RLS if you will), so someone could also\nreasonably expect a discussion there.\n\nI'm not sure of the best place to put it. Perhaps adding a section to\nthe Functions and Operators chapter would work.\n\nAlso, once you start such a list, there will be an expectation that it's\ncomplete. So that would need to be ensured. You only list a few things\nyou found. Are there others? How do we keep this up to date?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 20:08:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_Row_Level_Security_=e2=88=92_leakproof-ness_and_perfo?=\n =?UTF-8?Q?rmance_implications?="
},
{
"msg_contents": "On 2019-02-28 00:03, Joe Conway wrote:\n> What if we provided an option to redact all client messages (leaving\n> logged messages as-is). Separately we could provide a GUC to force all\n> functions to be resolved as leakproof. Depending on your requirements,\n> having both options turned on could be perfectly acceptable.\n\nThere are two commit fest entries for this thread, one in Pierre's name\nand one in yours. Is your entry for the error message redacting\nfunctionality? I think that approach has been found not to actually\nsatisfy the leakproofness criteria.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 18 Mar 2019 20:52:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_Row_Level_Security_=e2=88=92_leakproof-ness_and_perfo?=\n =?UTF-8?Q?rmance_implications?="
},
{
"msg_contents": "On 3/18/19 3:52 PM, Peter Eisentraut wrote:\n> On 2019-02-28 00:03, Joe Conway wrote:\n>> What if we provided an option to redact all client messages (leaving\n>> logged messages as-is). Separately we could provide a GUC to force all\n>> functions to be resolved as leakproof. Depending on your requirements,\n>> having both options turned on could be perfectly acceptable.\n> \n> There are two commit fest entries for this thread, one in Pierre's name\n> and one in yours. Is your entry for the error message redacting\n> functionality? I think that approach has been found not to actually\n> satisfy the leakproofness criteria.\n\n\nIt is a matter of opinion with regard to what the criteria actually is,\nand when it ought to apply. But in any case the clear consensus was\nagainst me, so I guess I'll assume \"my patch was rejected by PostgreSQL\nall I got was this tee shirt\" (...I know I have one that says something\nlike that somewhere...) ;-)\n\nI have no idea what the other entry is all about as I have not had the\ntime to look.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Mon, 18 Mar 2019 16:13:56 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_and_per?=\n =?UTF-8?Q?formance_implications?="
},
{
"msg_contents": "Hi Pierre,\n\nOn 3/18/19 8:13 PM, Joe Conway wrote:\n> \n> I have no idea what the other entry is all about as I have not had the\n> time to look.\n\nThere doesn't seem to be consensus on your patch, either -- I'm planning \nto mark it rejected at the end of the CF unless you have a new patch for \nconsideration.\n\nThis thread got a bit hijacked and is hard to follow. If you do submit \na new patch I recommend creating a new thread.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:09:39 +0000",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_Re=3a_Row_Level_Security_=e2=88=92_leakproof-ness_a?=\n =?UTF-8?Q?nd_performance_implications?="
},
{
"msg_contents": "On 2019-03-18 20:08, Peter Eisentraut wrote:\n> I agree that it would be useful to document and discuss better which\n> built-in operators are leak-proof and which are not. But I don't think\n> the CREATE POLICY reference page is the place to do it. Note that the\n> leak-proofness mechanism was originally introduced for security-barrier\n> views (an early form of RLS if you will), so someone could also\n> reasonably expect a discussion there.\n\nIt sounds like this will need significant additional work, so setting as\nreturned with feedback for now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 21:17:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_Row_Level_Security_=e2=88=92_leakproof-ness_and_perfo?=\n =?UTF-8?Q?rmance_implications?="
}
]
[
{
"msg_contents": "While contemplating the wreckage of \nhttps://commitfest.postgresql.org/22/1778/\nI had the beginnings of an idea of another way to fix that problem.\n\nThe issue largely arises from the fact that for UPDATE, we expect\nthe plan tree to emit a tuple that's ready to be stored back into\nthe target rel ... well, almost, because it also has a CTID or some\nother row-identity column, so we have to do some work on it anyway.\nBut the point is this means we potentially need a different\ntargetlist for each child table in an inherited UPDATE.\n\nWhat if we dropped that idea, and instead defined the plan tree as\nreturning only the columns that are updated by SET, plus the row\nidentity? It would then be the ModifyTable node's job to fetch the\noriginal tuple using the row identity (which it must do anyway) and\nform the new tuple by combining the updated columns from the plan\noutput with the non-updated columns from the original tuple.\n\nDELETE would be even simpler, since it only needs the row identity\nand nothing else.\n\nHaving done that, we could toss inheritance_planner into the oblivion\nit so richly deserves, and just treat all types of inheritance or\npartitioning queries as expand-at-the-bottom, as SELECT has always\ndone it.\n\nArguably, this would be more efficient even for non-inheritance join\nsituations, as less data (typically) would need to propagate through the\njoin tree. I'm not sure exactly how it'd shake out for trivial updates;\nwe might be paying for two tuple deconstructions not one, though perhaps\nthere's a way to finesse that. (One easy way would be to stick to the\nold approach when there is no inheritance going on.)\n\nIn the case of a standard inheritance or partition tree, this seems to\ngo through really easily, since all the children could share the same\nreturned CTID column (I guess you'd also need a TABLEOID column so you\ncould figure out which table to direct the update back into). 
It gets\na bit harder if the tree contains some foreign tables, because they might\nhave different concepts of row identity, but I'd think in most cases you\ncould still combine those into a small number of output columns.\n\nI have no idea how this might play with the pluggable-storage work.\n\nObviously this'd be a major rewrite with no chance of making it into v12,\nbut it doesn't sound too big to get done during v13.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 16:48:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-19 16:48:55 -0500, Tom Lane wrote:\n> I have no idea how this might play with the pluggable-storage work.\n\nI don't think it'd have a meaningful impact, except for needing changes\nto an overlapping set of lines. But given the different timeframes, I'd\nnot expect a problem with that.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Feb 2019 14:44:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "On Wed, 20 Feb 2019 at 10:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What if we dropped that idea, and instead defined the plan tree as\n> returning only the columns that are updated by SET, plus the row\n> identity? It would then be the ModifyTable node's job to fetch the\n> original tuple using the row identity (which it must do anyway) and\n> form the new tuple by combining the updated columns from the plan\n> output with the non-updated columns from the original tuple.\n>\n> DELETE would be even simpler, since it only needs the row identity\n> and nothing else.\n\nWhile I didn't look at the patch in great detail, I think this is how\nPavan must have made MERGE work for partitioned targets. I recall\nseeing the tableoid being added to the target list and a lookup of the\nResultRelInfo by tableoid.\n\nMaybe Pavan can provide more useful details than I can.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 11:53:19 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Hi,\n\nOn 2019/02/20 6:48, Tom Lane wrote:\n> While contemplating the wreckage of \n> https://commitfest.postgresql.org/22/1778/\n> I had the beginnings of an idea of another way to fix that problem.\n>\n> The issue largely arises from the fact that for UPDATE, we expect\n> the plan tree to emit a tuple that's ready to be stored back into\n> the target rel ... well, almost, because it also has a CTID or some\n> other row-identity column, so we have to do some work on it anyway.\n> But the point is this means we potentially need a different\n> targetlist for each child table in an inherited UPDATE.\n> \n> What if we dropped that idea, and instead defined the plan tree as\n> returning only the columns that are updated by SET, plus the row\n> identity? It would then be the ModifyTable node's job to fetch the\n> original tuple using the row identity (which it must do anyway) and\n> form the new tuple by combining the updated columns from the plan\n> output with the non-updated columns from the original tuple.\n> \n> DELETE would be even simpler, since it only needs the row identity\n> and nothing else.\n\nI had bookmarked link to an archived email of yours from about 5 years\nago, in which you described a similar attack plan for UPDATE planning:\n\nhttps://www.postgresql.org/message-id/1598.1399826841%40sss.pgh.pa.us\n\nIt's been kind of in the back of my mind for a while, even considered\nimplementing it based on your sketch back then, but didn't have solutions\nfor some issues surrounding optimization of updates of foreign partitions\n(see below). 
Maybe I should've mentioned that on this thread at some point.\n\n> Having done that, we could toss inheritance_planner into the oblivion\n> it so richly deserves, and just treat all types of inheritance or\n> partitioning queries as expand-at-the-bottom, as SELECT has always\n> done it.\n> \n> Arguably, this would be more efficient even for non-inheritance join\n> situations, as less data (typically) would need to propagate through the\n> join tree. I'm not sure exactly how it'd shake out for trivial updates;\n> we might be paying for two tuple deconstructions not one, though perhaps\n> there's a way to finesse that. (One easy way would be to stick to the\n> old approach when there is no inheritance going on.)\n> \n> In the case of a standard inheritance or partition tree, this seems to\n> go through really easily, since all the children could share the same\n> returned CTID column (I guess you'd also need a TABLEOID column so you\n> could figure out which table to direct the update back into). 
It gets\n> a bit harder if the tree contains some foreign tables, because they might\n> have different concepts of row identity, but I'd think in most cases you\n> could still combine those into a small number of output columns.\n\nRegarding child target relations that are foreign tables, the\nexpand-target-inheritance-at-the-bottom approach perhaps leaves no way to\nallow pushing the update (possibly with joins) to remote side?\n\n-- no inheritance\nexplain (costs off, verbose) update ffoo f set a = f.a + 1 from fbar b\nwhere f.a = b.a;\n QUERY PLAN\n\n──────────────────────────────────────────────────────────────────────────────────────────────────────\n Update on public.ffoo f\n -> Foreign Update\n Remote SQL: UPDATE public.foo r1 SET a = (r1.a + 1) FROM\npublic.bar r2 WHERE ((r1.a = r2.a))\n(3 rows)\n\n-- inheritance\nexplain (costs off, verbose) update p set aa = aa + 1 from ffoo f where\np.aa = f.a;\n QUERY PLAN\n\n───────────────────────────────────────────────────────────────────────────────────────────────────────────\n Update on public.p\n Update on public.p1\n Update on public.p2\n Foreign Update on public.p3\n -> Nested Loop\n Output: (p1.aa + 1), p1.ctid, f.*\n -> Seq Scan on public.p1\n Output: p1.aa, p1.ctid\n -> Foreign Scan on public.ffoo f\n Output: f.*, f.a\n Remote SQL: SELECT a FROM public.foo WHERE (($1::integer = a))\n -> Nested Loop\n Output: (p2.aa + 1), p2.ctid, f.*\n -> Seq Scan on public.p2\n Output: p2.aa, p2.ctid\n -> Foreign Scan on public.ffoo f\n Output: f.*, f.a\n Remote SQL: SELECT a FROM public.foo WHERE (($1::integer = a))\n -> Foreign Update\n Remote SQL: UPDATE public.base3 r5 SET aa = (r5.aa + 1) FROM\npublic.foo r2 WHERE ((r5.aa = r2.a))\n(20 rows)\n\nDoes that seem salvageable?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 20 Feb 2019 10:55:45 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "On 2019/02/20 10:55, Amit Langote wrote:\n> Maybe I should've mentioned that on this thread at some point.\n\nI meant the other thread where we're discussing my patches.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 20 Feb 2019 10:57:29 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/02/20 6:48, Tom Lane wrote:\n>> What if we dropped that idea, and instead defined the plan tree as\n>> returning only the columns that are updated by SET, plus the row\n>> identity? It would then be the ModifyTable node's job to fetch the\n>> original tuple using the row identity (which it must do anyway) and\n>> form the new tuple by combining the updated columns from the plan\n>> output with the non-updated columns from the original tuple.\n\n> Regarding child target relations that are foreign tables, the\n> expand-target-inheritance-at-the-bottom approach perhaps leaves no way to\n> allow pushing the update (possibly with joins) to remote side?\n\nThat's something we'd need to think about. Obviously, anything\nalong this line breaks the existing FDW update APIs, but let's assume\nthat's acceptable. Is it impossible, or even hard, for an FDW to\nsupport this definition of UPDATE rather than the existing one?\nI don't think so --- it seems like it's just different --- but\nI might well be missing something.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Feb 2019 23:54:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 4:23 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Wed, 20 Feb 2019 at 10:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > What if we dropped that idea, and instead defined the plan tree as\n> > returning only the columns that are updated by SET, plus the row\n> > identity? It would then be the ModifyTable node's job to fetch the\n> > original tuple using the row identity (which it must do anyway) and\n> > form the new tuple by combining the updated columns from the plan\n> > output with the non-updated columns from the original tuple.\n> >\n> > DELETE would be even simpler, since it only needs the row identity\n> > and nothing else.\n>\n> While I didn't look at the patch in great detail, I think this is how\n> Pavan must have made MERGE work for partitioned targets. I recall\n> seeing the tableoid being added to the target list and a lookup of the\n> ResultRelInfo by tableoid.\n>\n> Maybe Pavan can provide more useful details than I can.\n>\n\nYes, that's the approach I took in MERGE, primarily because of the hurdles\nI faced in handling partitioned tables, which take entirely different route\nfor UPDATE/DELETE vs INSERT and in MERGE we had to do all three together.\nBut the approach also showed significant performance improvements.\nUPDATE/DELETE via MERGE is far quicker as compared to regular UPDATE/DELETE\nwhen there are non-trivial number of partitions. That's also a reason why I\nrecommended doing the same for regular UPDATE/DELETE, but that got lost in\nthe MERGE discussions. So +1 for the approach.\n\nWe will need to consider how this affects EvalPlanQual which currently\ndoesn't have to do anything special for partitioned tables. I solved that\nvia tracking the expanded-at-the-bottom child in a separate\nmergeTargetRelation, but that approach has been criticised. 
May be Tom's\nidea doesn't have the same problem or most likely he will have a far better\napproach to address that.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 20 Feb 2019 10:41:11 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Pavan Deolasee <pavan.deolasee@gmail.com> writes:\n> We will need to consider how this affects EvalPlanQual which currently\n> doesn't have to do anything special for partitioned tables. I solved that\n> via tracking the expanded-at-the-bottom child in a separate\n> mergeTargetRelation, but that approach has been criticised. May be Tom's\n> idea doesn't have the same problem or most likely he will have a far better\n> approach to address that.\n\nI did spend a few seconds thinking about that, and my gut says that\nthis wouldn't change anything interesting for EPQ. But the devil\nis in the details as always, so maybe working out the patch would\nfind problems ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 00:21:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "On 2019/02/20 13:54, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/02/20 6:48, Tom Lane wrote:\n>>> What if we dropped that idea, and instead defined the plan tree as\n>>> returning only the columns that are updated by SET, plus the row\n>>> identity? It would then be the ModifyTable node's job to fetch the\n>>> original tuple using the row identity (which it must do anyway) and\n>>> form the new tuple by combining the updated columns from the plan\n>>> output with the non-updated columns from the original tuple.\n> \n>> Regarding child target relations that are foreign tables, the\n>> expand-target-inheritance-at-the-bottom approach perhaps leaves no way to\n>> allow pushing the update (possibly with joins) to remote side?\n> \n> That's something we'd need to think about. Obviously, anything\n> along this line breaks the existing FDW update APIs, but let's assume\n> that's acceptable. Is it impossible, or even hard, for an FDW to\n> support this definition of UPDATE rather than the existing one?\n> I don't think so --- it seems like it's just different --- but\n> I might well be missing something.\n\nIIUC, in the new approach, only the root of the inheritance tree (target\ntable specified in the query) will appear in the query's join tree, not\nthe child target tables, so pushing updates with joins to the remote side\nseems a bit hard, because we're not going to consider child joins. Maybe\nI'm missing something though.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 20 Feb 2019 16:37:48 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "(2019/02/20 6:48), Tom Lane wrote:\n> In the case of a standard inheritance or partition tree, this seems to\n> go through really easily, since all the children could share the same\n> returned CTID column (I guess you'd also need a TABLEOID column so you\n> could figure out which table to direct the update back into). It gets\n> a bit harder if the tree contains some foreign tables, because they might\n> have different concepts of row identity, but I'd think in most cases you\n> could still combine those into a small number of output columns.\n\nIf this is the direction we go in, we should work on the row ID issue \n[1] before this?\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/1590.1542393315%40sss.pgh.pa.us\n\n\n",
"msg_date": "Wed, 20 Feb 2019 19:50:42 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/02/20 13:54, Tom Lane wrote:\n>> That's something we'd need to think about. Obviously, anything\n>> along this line breaks the existing FDW update APIs, but let's assume\n>> that's acceptable. Is it impossible, or even hard, for an FDW to\n>> support this definition of UPDATE rather than the existing one?\n>> I don't think so --- it seems like it's just different --- but\n>> I might well be missing something.\n\n> IIUC, in the new approach, only the root of the inheritance tree (target\n> table specified in the query) will appear in the query's join tree, not\n> the child target tables, so pushing updates with joins to the remote side\n> seems a bit hard, because we're not going to consider child joins. Maybe\n> I'm missing something though.\n\nHm. Even if that's true (I'm not convinced), I don't think it's such a\nsignificant use-case as to be considered a blocker.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 10:06:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> writes:\n> (2019/02/20 6:48), Tom Lane wrote:\n>> In the case of a standard inheritance or partition tree, this seems to\n>> go through really easily, since all the children could share the same\n>> returned CTID column (I guess you'd also need a TABLEOID column so you\n>> could figure out which table to direct the update back into). It gets\n>> a bit harder if the tree contains some foreign tables, because they might\n>> have different concepts of row identity, but I'd think in most cases you\n>> could still combine those into a small number of output columns.\n\n> If this is the direction we go in, we should work on the row ID issue \n> [1] before this?\n\nCertainly, the more foreign tables can use a standardized concept of row\nidentity, the better this would work. What I'm loosely envisioning is\nthat we have one junk row-identity column for each distinct row-identity\ndatatype needed --- so, for instance, all ordinary tables could share\na TID column. Different FDWs might need different things, though one\nwould hope for no more than one datatype per FDW-type involved in the\ninheritance tree. Where things could break down is if we have a lot\nof tables that need a whole-row-variable for row identity, and they're\nall different rowtypes --- eventually we'd run out of available columns.\nSo we'd definitely wish to encourage FDWs to have some more efficient\nrow-identity scheme than that one.\n\nI don't see that as being something that constrains those two projects\nto be done in a particular order; it'd just be a nice-to-have improvement.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 10:14:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "(2019/02/21 0:14), Tom Lane wrote:\n> Etsuro Fujita<fujita.etsuro@lab.ntt.co.jp> writes:\n>> (2019/02/20 6:48), Tom Lane wrote:\n>>> In the case of a standard inheritance or partition tree, this seems to\n>>> go through really easily, since all the children could share the same\n>>> returned CTID column (I guess you'd also need a TABLEOID column so you\n>>> could figure out which table to direct the update back into). It gets\n>>> a bit harder if the tree contains some foreign tables, because they might\n>>> have different concepts of row identity, but I'd think in most cases you\n>>> could still combine those into a small number of output columns.\n>\n>> If this is the direction we go in, we should work on the row ID issue\n>> [1] before this?\n>\n> Certainly, the more foreign tables can use a standardized concept of row\n> identity, the better this would work. What I'm loosely envisioning is\n> that we have one junk row-identity column for each distinct row-identity\n> datatype needed --- so, for instance, all ordinary tables could share\n> a TID column. Different FDWs might need different things, though one\n> would hope for no more than one datatype per FDW-type involved in the\n> inheritance tree. Where things could break down is if we have a lot\n> of tables that need a whole-row-variable for row identity, and they're\n> all different rowtypes --- eventually we'd run out of available columns.\n> So we'd definitely wish to encourage FDWs to have some more efficient\n> row-identity scheme than that one.\n\nSeems reasonable. I guess that that can address not only the issue [1] \nbut our restriction on FDW row locking that APIs for that currently only \nallow TID for row identity, in a uniform way.\n\n> I don't see that as being something that constrains those two projects\n> to be done in a particular order; it'd just be a nice-to-have improvement.\n\nOK, thanks for the explanation!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 21 Feb 2019 12:50:44 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Hi Tom,\n\nOn Wed, Feb 20, 2019 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Obviously this'd be a major rewrite with no chance of making it into v12,\n> but it doesn't sound too big to get done during v13.\n\nAre you planning to work on this?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 3 Jul 2019 14:53:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Feb 20, 2019 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Obviously this'd be a major rewrite with no chance of making it into v12,\n>> but it doesn't sound too big to get done during v13.\n\n> Are you planning to work on this?\n\nIt's on my list, but so are a lot of other things. If you'd like to\nwork on it, feel free.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jul 2019 09:50:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
},
{
"msg_contents": "On Wed, Jul 3, 2019 at 10:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Wed, Feb 20, 2019 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Obviously this'd be a major rewrite with no chance of making it into v12,\n> >> but it doesn't sound too big to get done during v13.\n>\n> > Are you planning to work on this?\n>\n> It's on my list, but so are a lot of other things. If you'd like to\n> work on it, feel free.\n\nThanks for the reply. Let me see if I can get something done for the\nSeptember CF.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:07:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Another way to fix inherited UPDATE/DELETE"
}
] |
[
{
"msg_contents": "I poked into a recent complaint[1] about PG not being terribly smart\nabout whether an IS NOT NULL index predicate is implied by a WHERE\nclause, and determined that there are a couple of places where we\nare being less bright than we could be about CoerceViaIO semantics.\nCoerceViaIO is strict independently of whether the I/O functions it\ncalls are (and they might not be --- in particular, domain_in isn't).\nHowever, not everyplace knows that:\n\n* clauses.c's contain_nonstrict_functions_walker() uses default logic\nthat will examine the referenced I/O functions to see if they're strict.\nThat's expensive, requiring several syscache lookups, and it might not\neven give the right answer --- though fortunately it'd err in the\nconservative direction.\n\n* predtest.c's clause_is_strict_for() doesn't know anything about\nCoerceViaIO, so it fails to make the proof requested in [1].\n\nI started to fix this, and was in the midst of copying-and-pasting\ncontain_nonstrict_functions_walker's handling of ArrayCoerceExpr,\nwhen I realized that that code is actually wrong:\n\n return expression_tree_walker((Node *) ((ArrayCoerceExpr *) node)->arg,\n contain_nonstrict_functions_walker,\n context);\n\nIt should be recursing to itself, not to expression_tree_walker.\nAs coded, the strictness check doesn't get applied to the immediate\nchild node of the ArrayCoerceExpr, so that if that node is non-strict\nwe may arrive at the wrong conclusion.\n\ncontain_nonstrict_functions() isn't used in very many places, fortunately,\nand ArrayCoerceExpr isn't that easy to produce either, which may explain\nthe lack of field reports. 
I was able to cons up this example though,\nwhich demonstrates an incorrect conclusion about whether it's safe to\ninline a function declared STRICT:\n\nregression=# create table t1 (f1 int);\nCREATE TABLE\nregression=# insert into t1 values(1),(null);\nINSERT 0 2\nregression=# create or replace function sfunc(int) returns int[] language sql\nas 'select array[0, $1]::bigint[]::int[]' strict;\nCREATE FUNCTION\nregression=# select sfunc(f1) from t1; \n sfunc \n----------\n {0,1}\n {0,NULL}\n(2 rows)\n\nOf course, since sfunc is strict, that last output should be NULL not\nan array containing a NULL.\n\nThe attached patch fixes both of these things. At least the second\nhunk needs to be back-patched. I'm less sure about whether the\nCoerceViaIO changes merit back-patching; they're not fixing wrong\nanswers, but they are responding to a field complaint. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAHkN8V9Rfh6uAjQLURJfnHsQfC_MYiFUSWEVcwVSiPdokmkniw%40mail.gmail.com",
"msg_date": "Tue, 19 Feb 2019 18:11:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "More smarts about CoerceViaIO,\n and less stupidity about ArrayCoerceExpr"
}
] |
[
{
"msg_contents": "From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n> 0.7% may easily be just a noise, possibly due to differences in layout\n> of the binary. How many runs? What was the variability of the results\n> between runs? What hardware was this tested on?\n\n3 runs, with the variability of about +-2%. Luckly, all those three runs (incidentally?) showed slight performance decrease with the patched version. The figures I wrote are the highest ones.\n\nThe hardware is, a very poor man's VM:\nCPU: 4 core Intel(R) Xeon(R) CPU E7-4890 v2 @ 2.80GHz\nRAM: 4GB\n\n\n\n> FWIW I doubt tests with such small small schema are proving anything -\n> the cache/lists are likely tiny. That's why I tested with much larger\n> number of relations.\n\nYeah, it requires many relations to test the repeated catcache eviction and new entry creation, which would show the need to increase catalog_cache_max_size for users. I tested with small number of relations to see the impact of additional processing while the catcache is not full -- memory accounting and catcache LRU chain maintenance.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n",
"msg_date": "Wed, 20 Feb 2019 00:22:15 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "ZRE: Protect syscache from bloating with negative cache entries"
}
] |
[
{
"msg_contents": "Hello Postgres Gurus,\n\nAfter searching (on www.postgresql.org/Google) I found that the following\nsteps can be used to perform a switchover in Postgres (version 9.3):\n*Step 1.* Do clean shutdown of Primary (-m fast or smart).\n*Step 2. *Check for sync status and recovery status of Standby before\npromoting it.\n Once Standby is in complete sync. At this stage we are safe\nto promote it as Primary.\n*Step 3. *Open the Standby as new Primary by pg_ctl promote or creating a\ntrigger file.\n*Step 4.* Restart old Primary as standby and allow to follow the new\ntimeline by passing \"recovery_target_timeline='latest'\" in \\\n $PGDATA/recovery.conf file.\n\nBut I also read in one of the google post that this procedure requires the\nWAL archive location to exist on a shared storage to which both the Master\nand Slave should have access to.\n\nSo wanted to clarify if this procedure really requires the WAL archive\nlocation on a shared storage ?\n\nThanks\nRaj",
"msg_date": "Tue, 19 Feb 2019 16:27:02 -0800",
"msg_from": "RSR999GMAILCOM <rsr999@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 04:27:02PM -0800, RSR999GMAILCOM wrote:\n> So wanted to clarify if this procedure really requires the WAL archive\n> location on a shared storage ?\n\nShared storage for WAL archives is not a requirement. It is perfectly\npossible to use streaming replication to get correct WAL changes.\nUsing an archive is recommended for some deployments and depending on\nyour requirements and data retention policy, still you could have\nthose archives on a different host and have the restore_command of the\nstandbyt in recovery or the archive_command of the primary save the\nsegments to it. Depending on the frequency new WAL segments are\ngenerated, this depends of course.\n--\nMichael",
"msg_date": "Wed, 20 Feb 2019 09:44:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "Is there any link where the required setup and the step by step procedure\nfor performing the controlled switchover are listed?\n\nThanks\nRaj\n\nOn Tue, Feb 19, 2019 at 4:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Feb 19, 2019 at 04:27:02PM -0800, RSR999GMAILCOM wrote:\n> > So wanted to clarify if this procedure really requires the WAL archive\n> > location on a shared storage ?\n>\n> Shared storage for WAL archives is not a requirement. It is perfectly\n> possible to use streaming replication to get correct WAL changes.\n> Using an archive is recommended for some deployments and depending on\n> your requirements and data retention policy, still you could have\n> those archives on a different host and have the restore_command of the\n> standbyt in recovery or the archive_command of the primary save the\n> segments to it. Depending on the frequency new WAL segments are\n> generated, this depends of course.\n> --\n> Michael\n>",
"msg_date": "Thu, 21 Feb 2019 10:26:37 -0800",
"msg_from": "RSR999GMAILCOM <rsr999@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Tue, Feb 19, 2019 at 9:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Feb 19, 2019 at 04:27:02PM -0800, RSR999GMAILCOM wrote:\n> > So wanted to clarify if this procedure really requires the WAL archive\n> > location on a shared storage ?\n>\n> Shared storage for WAL archives is not a requirement. It is perfectly\n> possible to use streaming replication to get correct WAL changes.\n> Using an archive is recommended for some deployments and depending on\n> your requirements and data retention policy, still you could have\n> those archives on a different host and have the restore_command of the\n> standbyt in recovery or the archive_command of the primary save the\n> segments to it. Depending on the frequency new WAL segments are\n> generated, this depends of course.\n\nIf I'm not mistaken, if you don't have WAL archive set up (a shared\nfilesystem isn't necessary, but the standby has to be able to restore\nWAL segments from the archive), a few transactions that haven't been\nstreamed at primary shutdown could be lost, since the secondary won't\nbe able to stream anything after the primary has shut down. WAL\narchive can always be restored even without a primary running, hence\nwhy a WAL archive is needed.\n\nOr am I missing something?\n\n",
"msg_date": "Thu, 21 Feb 2019 15:38:21 -0300",
"msg_from": "Claudio Freire <klaussfreire@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 03:38:21PM -0300, Claudio Freire wrote:\n> If I'm not mistaken, if you don't have WAL archive set up (a shared\n> filesystem isn't necessary, but the standby has to be able to restore\n> WAL segments from the archive), a few transactions that haven't been\n> streamed at primary shutdown could be lost, since the secondary won't\n> be able to stream anything after the primary has shut down. WAL\n> archive can always be restored even without a primary running, hence\n> why a WAL archive is needed.\n> \n> Or am I missing something?\n\nWell, my point is that you may not need an archive if you are able to\nstream the changes from a primary using streaming if the primary has a\nreplication slot or if a checkpoint has not recycled yet the segments\nthat a standby may need. If the primary is offline, and you need to\nrecover a standby, then an archive is mandatory. When recovering from\nan archive, the standby would be able to catch up to the end of the\nsegment archived as we don't enforce a segment switch when a node\nshuts down. If using pg_receivewal as a form of archiving with its\n--synchronous mode, it is also possible to stream up to the point\nwhere the primary has generated its shutdown checkpoint, so you would\nnot lose data included on the last segment the primary was working on\nwhen stopped.\n--\nMichael",
"msg_date": "Fri, 22 Feb 2019 13:31:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 10:26:37AM -0800, RSR999GMAILCOM wrote:\n> Is there any link where the required setup and the step by step procedure\n> for performing the controlled switchover are listed?\n\nDocs about failover are here:\nhttps://www.postgresql.org/docs/current/warm-standby-failover.html\n\nNow I don't recall that we have a section about a step-by-step\nprocedure for one case of failover or another. The docs could be\nperhaps improved regarding that, particularly for the case mentioned\nhere where it is possible to relink a previous master to a promoted\nstandby without risks of corruption:\n- Stop cleanly the primary with smart or fast mode.\n- Promote the standby.\n- Add recovery.conf to the previous primary.\n- Restart the previous primary as a new standby.\n--\nMichael",
"msg_date": "Fri, 22 Feb 2019 13:35:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Thu, 21 Feb 2019 15:38:21 -0300\nClaudio Freire <klaussfreire@gmail.com> wrote:\n\n> On Tue, Feb 19, 2019 at 9:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Feb 19, 2019 at 04:27:02PM -0800, RSR999GMAILCOM wrote: \n> > > So wanted to clarify if this procedure really requires the WAL archive\n> > > location on a shared storage ? \n> >\n> > Shared storage for WAL archives is not a requirement. It is perfectly\n> > possible to use streaming replication to get correct WAL changes.\n> > Using an archive is recommended for some deployments and depending on\n> > your requirements and data retention policy, still you could have\n> > those archives on a different host and have the restore_command of the\n> > standbyt in recovery or the archive_command of the primary save the\n> > segments to it. Depending on the frequency new WAL segments are\n> > generated, this depends of course. \n> \n> If I'm not mistaken, if you don't have WAL archive set up (a shared\n> filesystem isn't necessary, but the standby has to be able to restore\n> WAL segments from the archive), a few transactions that haven't been\n> streamed at primary shutdown could be lost, since the secondary won't\n> be able to stream anything after the primary has shut down.\n\nThis has been fixed in 9.3. The primary node wait for all WAL records to be\nstreamed to the connected standbys before shutting down. Including its shutdown\ncheckpoint. See:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=985bd7d49726c9f178558491d31a570d47340459\n\nBecause a standby could disconnect because of some failure during the shutdown\nprocess, you still need to make sure the standby-to-be-promoted received the\nshutdown checkpoint though.\n\n> WAL archive can always be restored even without a primary running, hence\n> why a WAL archive is needed.\n\nNo. Primary does not force a WAL switch/archive during shutdown.\n\n-- \nJehan-Guillaume de Rorthais\nDalibo\n\n",
"msg_date": "Fri, 22 Feb 2019 09:47:31 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 5:47 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Thu, 21 Feb 2019 15:38:21 -0300\n> Claudio Freire <klaussfreire@gmail.com> wrote:\n>\n> > On Tue, Feb 19, 2019 at 9:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Tue, Feb 19, 2019 at 04:27:02PM -0800, RSR999GMAILCOM wrote:\n> > > > So wanted to clarify if this procedure really requires the WAL archive\n> > > > location on a shared storage ?\n> > >\n> > > Shared storage for WAL archives is not a requirement. It is perfectly\n> > > possible to use streaming replication to get correct WAL changes.\n> > > Using an archive is recommended for some deployments and depending on\n> > > your requirements and data retention policy, still you could have\n> > > those archives on a different host and have the restore_command of the\n> > > standbyt in recovery or the archive_command of the primary save the\n> > > segments to it. Depending on the frequency new WAL segments are\n> > > generated, this depends of course.\n> >\n> > If I'm not mistaken, if you don't have WAL archive set up (a shared\n> > filesystem isn't necessary, but the standby has to be able to restore\n> > WAL segments from the archive), a few transactions that haven't been\n> > streamed at primary shutdown could be lost, since the secondary won't\n> > be able to stream anything after the primary has shut down.\n>\n> This has been fixed in 9.3. The primary node wait for all WAL records to be\n> streamed to the connected standbys before shutting down. Including its shutdown\n> checkpoint. 
See:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=985bd7d49726c9f178558491d31a570d47340459\n>\n> Because a standby could disconnect because of some failure during the shutdown\n> process, you still need to make sure the standby-to-be-promoted received the\n> shutdown checkpoint though.\n>\n> > WAL archive can always be restored even without a primary running, hence\n> > why a WAL archive is needed.\n>\n> No. Primary does not force a WAL switch/archive during shutdown.\n\nThat's good to know, both of the above.\n\n",
"msg_date": "Fri, 22 Feb 2019 10:34:42 -0300",
"msg_from": "Claudio Freire <klaussfreire@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using old master as new replica after clean switchover"
}
] |
[
{
"msg_contents": "On Tue, Feb 19, 2019 at 7:31 AM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n>\n> The following bug has been logged on the website:\n>\n> Bug reference: 15641\n> Logged by: Hans Buschmann\n> Email address: buschmann@nidsa.net\n> PostgreSQL version: 11.2\n> Operating system: Windows Server 2019 Standard\n> Description:\n>\n> I recently moved a production system from PG 10.7 to 11.2 on a different\n> Server.\n>\n> The configuration settings where mostly taken from the old system and\n> enhanced by new features of PG 11.\n>\n> pg_prewarm was used for a long time (with no specific configuration).\n>\n> Now I have added Huge page support for Windows in the OS and verified it\n> with vmmap tool from Sysinternals to be active.\n> (the shared buffers are locked in memory: Lock_WS is set).\n>\n> When pg_prewarm.autoprewarm is set to on (using the default after initial\n> database import via pg_restore), the autoprewarm worker process\n> terminates immediately and generates a huge number of logfile entries\n> like:\n>\n> CPS PRD 2019-02-17 16:11:53 CET 00000 11:> LOG: background worker\n> \"autoprewarm worker\" (PID 3996) exited with exit code 1\n> CPS PRD 2019-02-17 16:11:53 CET 55000 1:> ERROR: could not map dynamic\n> shared memory segment\n\nHmm. It's not clear to me how using large pages for the main\nPostgreSQL shared memory region could have any impact on autoprewarm's\nentirely separate DSM segment. I wonder if other DSM use cases are\nimpacted. Does parallel query work? For example, the following\nproduces a parallel query that uses a few DSM segments:\n\ncreate table foo as select generate_series(1, 1000000)::int i;\nanalyze foo;\nexplain analyze select count(*) from foo f1 join foo f2 using (i);\n\nLooking at the place where that error occurs, it seems like it simply\nfailed to find the handle, as if it didn't exist at all at the time\ndsm_attach() was called. 
I'm not entirely sure how that could happen\njust because you turned on huge pages. Is it possible that there is a\nrace where apw_load_buffers() manages to detach before the worker\nattached, and the timing changes? At a glance, that shouldn't happen\nbecause apw_start_database_worker() waits for the work to exit before\nreturning.\n\nI think we'll need one of our Windows-enabled hackers to take a look.\n\nPS Sorry for breaking the thread. I wish our archives app had a\n\"[re]send me this email\" button, for people who subscribed after the\nmessage was sent...\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Wed, 20 Feb 2019 15:21:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "Thank you for taking a look.\n\nI encountered this problem after switching the production system and then found it also on the new created replica. \n\nI have no knowledge of the shared memory areas involved.\n\nI did some further investigation and tried to reproduce it on the old System (WS2016, PG 11.2) but there it worked fine (without and with huge pages activated!).\n\nEven on a developer machine under WS2019, PG 11.2 the error did not occur (both cases running on different generation of intel machines, Haswell and Nehalem, under different Hypervisors, WS2012R2 and WS2019).\n\nI am really confused to not being able to reproduce the error outside of production and replica instances...\n\nThe error caused a massive flood of the logs (about 800 MB in about 1 day, on SSD)\n\nI'll try to investigate further by configuring a second replica tomorrow, using the configuration of the production system as done per pg_basebackup.\n\nI looked at the non-default configuration settings but could not identify anything special.\n\nHere is a current list of the production System having 4GB of memory allocated to the VM.\n(all values with XXX are a little obfuscated).\n\nHere, to avoid the error, pg_prewarm.autoprewarm is off!\n\n name | current_setting |\n-------------------------------+--------------------------------------------+\n application_name | psql |\n archive_command | copy \"xxxxxx\" |\n archive_mode | on |\n auto_explain.log_analyze | off |\n auto_explain.log_min_duration | -1 |\n client_encoding | WIN1252 |\n cluster_name | XXX_PROD |\n data_checksums | on |\n DateStyle | ISO, DMY |\n default_text_search_config | pg_catalog.german |\n dynamic_shared_memory_type | windows |\n effective_cache_size | 8GB |\n lc_collate | C |\n lc_ctype | German_Germany.1252 |\n lc_messages | C |\n lc_monetary | German_Germany.1252 |\n lc_numeric | German_Germany.1252 |\n lc_time | German_Germany.1252 |\n listen_addresses | * |\n log_destination | stderr |\n 
log_directory | <XXX_PATH_TO_LOG) |\n log_file_mode | 0640 |\n log_line_prefix | XXX PRD %t %i %e %2l:> |\n log_statement | mod |\n log_temp_files | 0 |\n log_timezone | CET |\n logging_collector | on |\n maintenance_work_mem | 128MB |\n max_connections | 200 |\n max_stack_depth | 2MB |\n max_wal_size | 1GB |\n min_wal_size | 80MB |\n pg_prewarm.autoprewarm | off |\n pg_stat_statements.max | 8000 |\n pg_stat_statements.track | all |\n random_page_cost | 1 |\n search_path | public, archiv, ablage, admin |\n server_encoding | UTF8 |\n server_version | 11.2 |\n shared_buffers | 768MB |\n shared_preload_libraries | auto_explain,pg_stat_statements,pg_prewarm |\n temp_buffers | 32MB |\n TimeZone | CET |\n transaction_deferrable | off |\n transaction_isolation | read committed |\n transaction_read_only | off |\n update_process_title | off |\n wal_buffers | 16MB |\n wal_compression | on |\n wal_segment_size | 16MB |\n work_mem | 64MB |\n(51 Zeilen)\n\n\nThanks\n\nHans Buschmann",
"msg_date": "Wed, 20 Feb 2019 17:17:08 +0100",
"msg_from": "\"Hans Buschmann\" <buschmann@nidsa.net>",
"msg_from_op": false,
"msg_subject": "AW: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 4:36 AM Hans Buschmann <buschmann@nidsa.net> wrote:\n> I encountered this problem after switching the production system and then found it also on the new created replica.\n>\n> I have no knowledge of the shared memory areas involved.\n>\n> I did some further investigation and tried to reproduce it on the old System (WS2016, PG 11.2) but there it worked fine (without and with huge pages activated!).\n>\n> Even on a developer machine under WS2019, PG 11.2 the error did not occur (both cases running on different generation of intel machines, Haswell and Nehalem, under different Hypervisors, WS2012R2 and WS2019).\n>\n> I am really confused to not being able to reproduce the error outside of production and replica instances...\n>\n> The error caused a massive flood of the logs (about 800 MB in about 1 day, on SSD)\n>\n> I'll try to investigate further by configuring a second replica tomorrow, using the configuration of the production system as done per pg_basebackup.\n\nJust to confirm: on the machines where it happens, does it happen on\nevery restart, and does it never happen if you set huge_pages = off?\n\nCC'ing the authors of the auto-prewarm feature to see if they have ideas.\n\nThere is a known bug (fixed in commit 6c0fb941 for the next release)\nthat would cause spurious dsm_attach() failure that would look just\nthis this (dsm_attach() returns NULL), but that should be very rare\nand couldn't cause the behaviour described here, because here the\nbackground worker is repeatedly failing to attach in a loop (hence the\n800MB of logs).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Thu, 21 Feb 2019 10:41:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "hello\n\nSince these are production systems, I did'nt set huge_pages=off.\n(huge pages give performance, autoprewarm is not so necessary)\n\nI think it occured on every start, but the systems were only started 1 to 2 times in this error mode.\n\nIn the other cases I tried yesterday the results where very confusing (error not reproducable with or without huge pages).\n\nHans Buschmann\n\n\n\n\n\nAW: BUG #15641: Autoprewarm worker fails to start on Windows with huge pages in use Old PostgreSQL community/pgsql-bugs x\n\n\n\nhello\n\nSince these are production systems, I did'nt set huge_pages=off.\n(huge pages give performance, autoprewarm is not so necessary)\n\nI think it occured on every start, but the systems were only started 1 to 2 times in this error mode.\n\nIn the other cases I tried yesterday the results where very confusing (error not reproducable with or without huge pages).\n\nHans Buschmann",
"msg_date": "Thu, 21 Feb 2019 10:21:55 +0100",
"msg_from": "\"Hans Buschmann\" <buschmann@nidsa.net>",
"msg_from_op": false,
"msg_subject": "AW: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "Hi Thomas, Hans,\nOn Thu, Feb 21, 2019 at 2:16 PM Hans Buschmann <buschmann@nidsa.net> wrote:\n>\n> hello\n>\n> Since these are production systems, I did'nt set huge_pages=off.\n> (huge pages give performance, autoprewarm is not so necessary)\n\nI did turn autoprewarm on, windows server 2019 and postgresql 11.2 it\nruns fine even with huge_pages=on (Thanks to neha sharma). As Thomas\nsaid error is coming from per database worker and main worker waits\ntill per data database worker exists so from code review I do see an\nissue of having an invalid handle in per database worker. A\nreproducible testcase will really help. I shall see to recheck the\ncode again but I am not much hopeful without a proper testcase.\n\n--\nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Thu, 21 Feb 2019 18:23:29 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 6:23 PM Mithun Cy <mithun.cy@gmail.com> wrote:\n\n> said error is coming from per database worker and main worker waits\n> till per data database worker exists so from code review I do see an\n> issue of having an invalid handle in per database worker.\n\nSorry a typo error, I meant I do not see an issue from the code.\n\n",
"msg_date": "Thu, 21 Feb 2019 18:28:28 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "On the weekend, I did some more investigations:\n\nIt seems that Huge pages are NOT the cause of this problem.\n\nThe problem is only reproducable ONCE, after a database restart it disappears.\n\nBy reinstalling the original pg_pasebackup on another test VM the problem reappeared once.\n\nHere is the start of the error log:\n\nCPS PRD 2019-02-24 12:11:57 CET 00000 1:> LOG: database system was interrupted; last known up at 2019-02-17 16:14:05 CET\nCPS PRD 2019-02-24 12:12:16 CET 00000 2:> LOG: entering standby mode\nCPS PRD 2019-02-24 12:12:16 CET 00000 3:> LOG: redo starts at 0/23000028\nCPS PRD 2019-02-24 12:12:16 CET 00000 4:> LOG: consistent recovery state reached at 0/23000168\nCPS PRD 2019-02-24 12:12:16 CET 00000 5:> LOG: invalid record length at 0/24000060: wanted 24, got 0\nCPS PRD 2019-02-24 12:12:16 CET 00000 9:> LOG: database system is ready to accept read only connections\nCPS PRD 2019-02-24 12:12:16 CET 3D000 1:> FATAL: database 16384 does not exist\nCPS PRD 2019-02-24 12:12:16 CET 00000 10:> LOG: background worker \"autoprewarm worker\" (PID 3968) exited with exit code 1\nCPS PRD 2019-02-24 12:12:16 CET 00000 1:> LOG: autoprewarm successfully prewarmed 0 of 12402 previously-loaded blocks\nCPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf entry for replication connection from host \"192.168.27.155\", user \"replicator\", SSL off\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\nCPS PRD 2019-02-24 12:12:17 CET 00000 11:> LOG: background worker \"autoprewarm worker\" (PID 3296) exited with exit code 1\nCPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf entry for replication connection from host \"192.168.27.155\", user \"replicator\", SSL off\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\nCPS PRD 2019-02-24 12:12:17 CET 00000 
12:> LOG: background worker \"autoprewarm worker\" (PID 2756) exited with exit code 1\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\n...\n(PS: the correct replication function was not set, so causing the errors concerning replication)\n\nIt seems that an outdated autoprewarm.blocks causes the problem.\n\nAfter a restart the autoprewarm.blocks file seems to be rewritten, so that the next start gives no error.\n\nFor a test, I copied the erroneus autoprewarm.blocks files over to the data section and the problem reappeared.\n\n\nThe autoprewarm.blocks file is not corrupted or moved around manually but rather a leftover from the preceding test installation.\n\nOn this instance I had installed a copy of the production database under 11.2.\nBy doing the production switch, I dropped the test database and pg_restored the current one.\n\nThis left the previous autoprewarm.blocks file in the data directory.\n\nOn the first start the autoprewarm files does not match the newly restored database (perhpas the cause of the fatal error: database 16384 does not exist)\n\nSo the problem lies in the initial detection of the autoprewarm.blocks file.\n\nThis seems easy to reproduce:\n\n- Install/create a database with autoprewarm on and pg_prewarm loaded.\n- Fill the autoprewarm cache with some data\n- pg_dump the database\n- drop the database\n- create the database and pg_restore it from the dump\n- start the instance and logs are flooded\n\nI have taken no further investigation in the sourcecode due to limited skills so far...\n\n\nThanks\n\nHans Buschmann\n\n\n\n\n\n\nAW: BUG #15641: Autoprewarm worker fails to start on Windows with huge pages in use Old PostgreSQL community/pgsql-bugs x\n\n\n\n\n\nOn the weekend, I did some more investigations:\n\nIt seems that Huge pages are NOT the cause of this problem.\n\nThe problem is only reproducable ONCE, after a database restart it disappears.\n\nBy reinstalling the original pg_pasebackup on 
another test VM the problem reappeared once.\n\nHere is the start of the error log:\n\nCPS PRD 2019-02-24 12:11:57 CET 00000 1:> LOG: database system was interrupted; last known up at 2019-02-17 16:14:05 CET\nCPS PRD 2019-02-24 12:12:16 CET 00000 2:> LOG: entering standby mode\nCPS PRD 2019-02-24 12:12:16 CET 00000 3:> LOG: redo starts at 0/23000028\nCPS PRD 2019-02-24 12:12:16 CET 00000 4:> LOG: consistent recovery state reached at 0/23000168\nCPS PRD 2019-02-24 12:12:16 CET 00000 5:> LOG: invalid record length at 0/24000060: wanted 24, got 0\nCPS PRD 2019-02-24 12:12:16 CET 00000 9:> LOG: database system is ready to accept read only connections\nCPS PRD 2019-02-24 12:12:16 CET 3D000 1:> FATAL: database 16384 does not exist\nCPS PRD 2019-02-24 12:12:16 CET 00000 10:> LOG: background worker \"autoprewarm worker\" (PID 3968) exited with exit code 1\nCPS PRD 2019-02-24 12:12:16 CET 00000 1:> LOG: autoprewarm successfully prewarmed 0 of 12402 previously-loaded blocks\nCPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf entry for replication connection from host \"192.168.27.155\", user \"replicator\", SSL off\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\nCPS PRD 2019-02-24 12:12:17 CET 00000 11:> LOG: background worker \"autoprewarm worker\" (PID 3296) exited with exit code 1\nCPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf entry for replication connection from host \"192.168.27.155\", user \"replicator\", SSL off\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\nCPS PRD 2019-02-24 12:12:17 CET 00000 12:> LOG: background worker \"autoprewarm worker\" (PID 2756) exited with exit code 1\nCPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segment\n...\n(PS: the correct replication function was not set, so causing the errors 
concerning replication)\n\nIt seems that an outdated autoprewarm.blocks causes the problem.\n\nAfter a restart the autoprewarm.blocks file seems to be rewritten, so that the next start gives no error.\n\nFor a test, I copied the erroneus autoprewarm.blocks files over to the data section and the problem reappeared.\n\n\nThe autoprewarm.blocks file is not corrupted or moved around manually but rather a leftover from the preceding test installation.\n\nOn this instance I had installed a copy of the production database under 11.2.\nBy doing the production switch, I dropped the test database and pg_restored the current one.\n\nThis left the previous autoprewarm.blocks file in the data directory.\n\nOn the first start the autoprewarm files does not match the newly restored database (perhpas the cause of the fatal error: database 16384 does not exist)\n\nSo the problem lies in the initial detection of the autoprewarm.blocks file.\n\nThis seems easy to reproduce:\n\n- Install/create a database with autoprewarm on and pg_prewarm loaded.\n- Fill the autoprewarm cache with some data\n- pg_dump the database\n- drop the database\n- create the database and pg_restore it from the dump\n- start the instance and logs are flooded\n\nI have taken no further investigation in the sourcecode due to limited skills so far...\n\n\nThanks\n\nHans Buschmann",
"msg_date": "Sun, 24 Feb 2019 15:04:09 +0100",
"msg_from": "\"Hans Buschmann\" <buschmann@nidsa.net>",
"msg_from_op": false,
"msg_subject": "AW: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
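The failure mode Hans describes above, a stale autoprewarm.blocks file whose recorded database OIDs no longer exist after a drop/restore cycle, can be illustrated with a small toy model. This is plain Python for illustration, not PostgreSQL source; the OID values and the three-field block tuple are simplifying assumptions:

```python
# Toy model of a stale autoprewarm.blocks file. Each entry records
# (database OID, relation OID, block number). After DROP DATABASE +
# pg_restore, the recreated database gets a fresh OID, so every recorded
# entry points at a database that no longer exists and nothing can be
# prewarmed -- matching "prewarmed 0 of 12402 previously-loaded blocks"
# and "FATAL: database 16384 does not exist" in the log above.

def prewarm(block_entries, existing_db_oids):
    """Count how many recorded blocks belong to a database that still exists."""
    warmed = 0
    for db_oid, rel_oid, block_num in block_entries:
        if db_oid not in existing_db_oids:
            continue  # the per-database worker exits: database does not exist
        warmed += 1
    return warmed

# 12402 blocks recorded for the old database (OID 16384, hypothetical)
stale_blocks = [(16384, 2619, n) for n in range(12402)]

print(prewarm(stale_blocks, {16385}))  # restored DB has a new OID -> 0
print(prewarm(stale_blocks, {16384}))  # same OID still present -> 12402
```

The model only captures the OID mismatch; the real worker fails at database connection time rather than per block.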
{
"msg_contents": "Thanks Hans, for a simple reproducible tests.\n\nOn Sun, Feb 24, 2019 at 6:54 PM Hans Buschmann <buschmann@nidsa.net> wrote:\n> Here is the start of the error log:\n>\n> CPS PRD 2019-02-24 12:11:57 CET 00000 1:> LOG: database system was\ninterrupted; last known up at 2019-02-17 16:14:05 CET\n> CPS PRD 2019-02-24 12:12:16 CET 00000 2:> LOG: entering standby mode\n> CPS PRD 2019-02-24 12:12:16 CET 00000 3:> LOG: redo starts at\n0/23000028\n> CPS PRD 2019-02-24 12:12:16 CET 00000 4:> LOG: consistent recovery\nstate reached at 0/23000168\n> CPS PRD 2019-02-24 12:12:16 CET 00000 5:> LOG: invalid record length\nat 0/24000060: wanted 24, got 0\n> CPS PRD 2019-02-24 12:12:16 CET 00000 9:> LOG: database system is\nready to accept read only connections\n> CPS PRD 2019-02-24 12:12:16 CET 3D000 1:> FATAL: database 16384 does\nnot exist\n> CPS PRD 2019-02-24 12:12:16 CET 00000 10:> LOG: background worker\n\"autoprewarm worker\" (PID 3968) exited with exit code 1\n> CPS PRD 2019-02-24 12:12:16 CET 00000 1:> LOG: autoprewarm\nsuccessfully prewarmed 0 of 12402 previously-loaded blocks\n> CPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to\nthe primary server: FATAL: no pg_hba.conf entry for replication connection\nfrom host \"192.168.27.155\", user \"replicator\", SSL off\n> CPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic\nshared memory segment\n\nAs per the log Auto prewarm master did exit (\"autoprewarm successfully\nprewarmed 0 of 12402 previously-loaded blocks\") first. 
Then only we started\ngetting \"could not map dynamic shared memory segment\".\nThat is, master has done dsm_detach and then workers started throwing error\nafter that.\n\n> This seems easy to reproduce:\n>\n> - Install/create a database with autoprewarm on and pg_prewarm loaded.\n> - Fill the autoprewarm cache with some data\n> - pg_dump the database\n> - drop the database\n> - create the database and pg_restore it from the dump\n> - start the instance and logs are flooded\n>\n> I have taken no further investigation in the sourcecode due to limited\nskills so far...\n\nI was able to reproduce same.\n\nThe \"worker.bgw_restart_time\" is never set for autoprewarm workers so on\nerror it get restarted after some period of time (default behavior). Since\ndatabase itself is dropped our attempt to connect to that database failed\nand then worker exited. But again got restated by postmaster then we start\nseeing above DSM segment error.\n\nI think every autoprewarm worker should be set with\n\"worker.bgw_restart_time = BGW_NEVER_RESTART;\" so that there shall not be\nrepeated prewarm attempt of a dropped database. 
I will try to think further\nand submit a patch for same.\n\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com\n\nThanks Hans, for a simple reproducible tests.On Sun, Feb 24, 2019 at 6:54 PM Hans Buschmann <buschmann@nidsa.net> wrote:> Here is the start of the error log:>> CPS PRD 2019-02-24 12:11:57 CET 00000 1:> LOG: database system was interrupted; last known up at 2019-02-17 16:14:05 CET> CPS PRD 2019-02-24 12:12:16 CET 00000 2:> LOG: entering standby mode> CPS PRD 2019-02-24 12:12:16 CET 00000 3:> LOG: redo starts at 0/23000028> CPS PRD 2019-02-24 12:12:16 CET 00000 4:> LOG: consistent recovery state reached at 0/23000168> CPS PRD 2019-02-24 12:12:16 CET 00000 5:> LOG: invalid record length at 0/24000060: wanted 24, got 0> CPS PRD 2019-02-24 12:12:16 CET 00000 9:> LOG: database system is ready to accept read only connections> CPS PRD 2019-02-24 12:12:16 CET 3D000 1:> FATAL: database 16384 does not exist> CPS PRD 2019-02-24 12:12:16 CET 00000 10:> LOG: background worker \"autoprewarm worker\" (PID 3968) exited with exit code 1> CPS PRD 2019-02-24 12:12:16 CET 00000 1:> LOG: autoprewarm successfully prewarmed 0 of 12402 previously-loaded blocks> CPS PRD 2019-02-24 12:12:17 CET XX000 1:> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf entry for replication connection from host \"192.168.27.155\", user \"replicator\", SSL off> CPS PRD 2019-02-24 12:12:17 CET 55000 1:> ERROR: could not map dynamic shared memory segmentAs per the log Auto prewarm master did exit (\"autoprewarm successfully prewarmed 0 of 12402 previously-loaded blocks\") first. 
Then only we started getting \"could not map dynamic shared memory segment\".That is, master has done dsm_detach and then workers started throwing error after that.> This seems easy to reproduce:>> - Install/create a database with autoprewarm on and pg_prewarm loaded.> - Fill the autoprewarm cache with some data> - pg_dump the database> - drop the database> - create the database and pg_restore it from the dump> - start the instance and logs are flooded>> I have taken no further investigation in the sourcecode due to limited skills so far...I was able to reproduce same.The \"worker.bgw_restart_time\" is never set for autoprewarm workers so on error it get restarted after some period of time (default behavior). Since database itself is dropped our attempt to connect to that database failed and then worker exited. But again got restated by postmaster then we start seeing above DSM segment error.I think every autoprewarm worker should be set with \"worker.bgw_restart_time = BGW_NEVER_RESTART;\" so that there shall not be repeated prewarm attempt of a dropped database. I will try to think further and submit a patch for same.-- Thanks and RegardsMithun Chicklore YogendraEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 25 Feb 2019 00:10:49 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
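The log-flood loop diagnosed in the message above (master detaches its DSM segment after the worker stops, then each restarted worker fails to attach) can be sketched as a toy simulation. Plain Python, not PostgreSQL source; the tick loop standing in for the postmaster's restart scheduling is an assumption for illustration:

```python
# Toy simulation of the restart loop: the master waits for the
# per-database worker to stop, then detaches its DSM segment. If the
# worker is allowed to restart, every restarted copy finds the segment
# gone and fails with "could not map dynamic shared memory segment".

def attach_failures(never_restart, postmaster_ticks):
    """Count failed attach attempts after the master has detached."""
    # First worker run fails (e.g. its database was dropped); the master
    # sees BGWH_STOPPED and detaches the segment.
    segment_mapped = False
    failures = 0
    for _ in range(postmaster_ticks):
        if never_restart:
            break  # postmaster never relaunches the worker
        # postmaster restarts the worker; it calls dsm_attach() again
        if not segment_mapped:
            failures += 1  # ERROR: could not map dynamic shared memory segment
    return failures

print(attach_failures(never_restart=False, postmaster_ticks=5))  # 5: loops forever
print(attach_failures(never_restart=True, postmaster_ticks=5))   # 0: loop broken
```

With restarts allowed the failure count grows with every postmaster tick (the 800 MB of logs); marking the worker never-restart removes the loop entirely.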
{
"msg_contents": "Glad to hear you could reproduce the case easily.\n\nI wanted to add that the problem as it seems now should'nt be restricted to Windows only.\n\nAnother thing is the semantic scope of pg_prewarm:\n\nPrewarming affects the whole cluster, so at instance start we can meet some active and/or some dropped databases.\n\nTo not affect the other databases the prewarming should occur on all non dropped databases and omit only the dropped ones.\n\nHope your thinking gives a good patch... ;)\n\nHans Buschmann\n\n\n\n\n\nAW: BUG #15641: Autoprewarm worker fails to start on Windows with huge pages in use Old PostgreSQL community/pgsql-bugs x\n\n\n\n\n\nGlad to hear you could reproduce the case easily.\n\nI wanted to add that the problem as it seems now should'nt be restricted to Windows only.\n\nAnother thing is the semantic scope of pg_prewarm:\n\nPrewarming affects the whole cluster, so at instance start we can meet some active and/or some dropped databases.\n\nTo not affect the other databases the prewarming should occur on all non dropped databases and omit only the dropped ones.\n\nHope your thinking gives a good patch... ;)\n\nHans Buschmann",
"msg_date": "Mon, 25 Feb 2019 11:59:48 +0100",
"msg_from": "\"Hans Buschmann\" <buschmann@nidsa.net>",
"msg_from_op": false,
"msg_subject": "AW: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "t tOn Mon, Feb 25, 2019 at 12:10 AM Mithun Cy <mithun.cy@gmail.com> wrote:\n\n> Thanks Hans, for a simple reproducible tests.\n>\n> The \"worker.bgw_restart_time\" is never set for autoprewarm workers so on\n> error it get restarted after some period of time (default behavior). Since\n> database itself is dropped our attempt to connect to that database failed\n> and then worker exited. But again got restated by postmaster then we start\n> seeing above DSM segment error.\n>\n> I think every autoprewarm worker should be set with\n> \"worker.bgw_restart_time = BGW_NEVER_RESTART;\" so that there shall not be\n> repeated prewarm attempt of a dropped database. I will try to think further\n> and submit a patch for same.\n>\n\nHere is the patch for same,\n\nautoprewarm waorker should not be restarted. As per the code\n@apw_start_database_worker@\nmaster starts a worker per database and wait until it exit by calling\nWaitForBackgroundWorkerShutdown. The call WaitForBackgroundWorkerShutdown\ncannot handle the case if the worker was restarted. The\nWaitForBackgroundWorkerShutdown() get the status BGWH_STOPPED from the call\nGetBackgroundWorkerPid() if worker got restarted. So master will next\ndetach the shared memory and next restarted worker keep failing going in a\nunending loop.\n\nI think there is no need to restart at all. Following are the normal error\nwe might encounter.\n1. Connecting database is droped -- So we need to skip to next database\nwhich master will do by starting a new wroker. So not needed.\n2. Relation is droped -- try_relation_open(reloid, AccessShareLock) is used\nso error due to dropped relation is handled also avoids concurrent\ntruncation.\n3. smgrexists is used before reading from a fork file. Again error is\nhandled.\n4. before reading the block we have check as below. So previously truncated\npages will not be read again.\n /* Check whether blocknum is valid and within fork file size. 
*/\n if (blk->blocknum >= nblocks)\n\nI think if any other unexpected error occurs it should be fatal, so\nrestarting will not correct it. Hence there is no need to restart\nthe per-database worker process.\n\nI tried to dig into why we did not set it earlier. It used to be never restart,\nbut it changed after fixing comments [1]. At that time we did not make an\nexplicit database connection per worker and did not handle as many error cases\nas now. So it appeared fair. But when the code changed to make a database\nconnection per worker, we should have set every worker with\nBGW_NEVER_RESTART. Not doing so, I think, was a mistake.\n\nNOTE : On zero exit status we will not restart the bgworker (see\n@CleanupBackgroundWorker@\nand @maybe_start_bgworkers@)\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYNF_wfdwQ3z3713zKy2j0Z9C32WJdtKjvRWzeY7JOL4g%40mail.gmail.com\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Mar 2019 12:34:18 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 3:04 AM Mithun Cy <mithun.cy@gmail.com> wrote:\n> autoprewarm waorker should not be restarted. As per the code @apw_start_database_worker@ master starts a worker per database and wait until it exit by calling WaitForBackgroundWorkerShutdown. The call WaitForBackgroundWorkerShutdown cannot handle the case if the worker was restarted. The WaitForBackgroundWorkerShutdown() get the status BGWH_STOPPED from the call GetBackgroundWorkerPid() if worker got restarted. So master will next detach the shared memory and next restarted worker keep failing going in a unending loop.\n\nUgh, that seems like a silly oversight. Does it fix the reported problem?\n\nIf I understand correctly, the commit message would be something like this:\n\n==\nDon't auto-restart per-database autoprewarm workers.\n\nWe should try to prewarm each database only once. Otherwise, if\nprewarming fails for some reason, it will just keep retrying in an\ninfnite loop. The existing code was intended to implement this\nbehavior, but because it neglected to set worker.bgw_restart_time, the\nper-database workers keep restarting, contrary to what was intended.\n\nMithun Cy, per a report from Hans Buschmann\n==\n\nDoes that sound right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 11:31:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "Thanks Robert,\nOn Mon, Mar 18, 2019 at 9:01 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Mar 18, 2019 at 3:04 AM Mithun Cy <mithun.cy@gmail.com> wrote:\n> > autoprewarm waorker should not be restarted. As per the code\n> @apw_start_database_worker@ master starts a worker per database and wait\n> until it exit by calling WaitForBackgroundWorkerShutdown. The call\n> WaitForBackgroundWorkerShutdown cannot handle the case if the worker was\n> restarted. The WaitForBackgroundWorkerShutdown() get the status\n> BGWH_STOPPED from the call GetBackgroundWorkerPid() if worker got\n> restarted. So master will next detach the shared memory and next restarted\n> worker keep failing going in a unending loop.\n>\n> Ugh, that seems like a silly oversight. Does it fix the reported problem?\n>\n\n-- Yes this fixes the reported issue, Hans Buschmann has given below steps\nto reproduce.\n\n> This seems easy to reproduce:\n>\n> - Install/create a database with autoprewarm on and pg_prewarm loaded.\n> - Fill the autoprewarm cache with some data\n> - pg_dump the database\n> - drop the database\n> - create the database and pg_restore it from the dump\n> - start the instance and logs are flooded\n\n-- It is explained earlier [1] that they used older autoprewarm.blocks\nwhich was generated before drop database. So on restrart autoprewarm worker\nfailed to connect to droped database and then lead to retry loop. This\npatch should fix same.\n\nNOTE : Also, another kind of error user might see because of same bug is,\nrestarted worker getting connected to next database in autoprewarm.blocks\nbecause autoprewarm master updated shared data \"apw_state->database =\ncurrent_db;\" to start new worker for next database. Both restarted worker\nand newly created worker will connect to same database(next one) and try to\nload same pages. 
Hence we end up with spurious log messages like \"LOG:\nautoprewarm successfully prewarmed 13 of 11 previously-loaded blocks\".\n\n> If I understand correctly, the commit message would be something like this:\n>\n> ==\n> Don't auto-restart per-database autoprewarm workers.\n>\n> We should try to prewarm each database only once. Otherwise, if\n> prewarming fails for some reason, it will just keep retrying in an\n> infinite loop. The existing code was intended to implement this\n> behavior, but because it neglected to set worker.bgw_restart_time, the\n> per-database workers keep restarting, contrary to what was intended.\n>\n> Mithun Cy, per a report from Hans Buschmann\n> ==\n>\n> Does that sound right?\n>\n\n-- Yes, I agree.\n\n[1]\nhttps://www.postgresql.org/message-id/D2B9F2A20670C84685EF7D183F2949E202569F21%40gigant.nidsa.net\n\n-- \nThanks and Regards\nMithun Chicklore Yogendra\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Mar 2019 23:12:24 +0530",
"msg_from": "Mithun Cy <mithun.cy@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 1:43 PM Mithun Cy <mithun.cy@gmail.com> wrote:\n>> Does that sound right?\n>\n> -- Yes I Agree.\n\nCommitted with a little more tweaking of the commit message, and\nback-patched to v11.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 15:35:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15641: Autoprewarm worker fails to start on Windows with\n huge pages in use Old PostgreSQL community/pgsql-bugs x"
}
] |
[
{
"msg_contents": "Hello\n\nWe increased bgwriter_lru_maxpages limit in 10 release [1]. Docs now are changed correctly but in REL_10_STABLE postgresql.conf.sample we still have comment \"0-1000 max buffers written/round\". \nMaster (and REL_11_STABLE) was updated later in 611fe7d4793ba6516e839dc50b5319b990283f4f, but not REL_10. I think we need backpatch this line.\n\n* http://postgr.es/m/f6e58a22-030b-eb8a-5457-f62fb08d701c@BlueTreble.com\n\nregards, Sergei",
"msg_date": "Wed, 20 Feb 2019 13:53:37 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 5:53 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> We increased bgwriter_lru_maxpages limit in 10 release [1]. Docs now are changed correctly but in REL_10_STABLE postgresql.conf.sample we still have comment \"0-1000 max buffers written/round\".\n> Master (and REL_11_STABLE) was updated later in 611fe7d4793ba6516e839dc50b5319b990283f4f, but not REL_10. I think we need backpatch this line.\n\nI'm a bit reluctant to whack postgresql.conf around in back-branches\nbecause sometimes that makes funny things happen when somebody\nupgrades, e.g. via RPM. I don't remember exactly what happens but I\nthink typically either the new file overwrites the existing file which\ngets moved to something like postgresql.conf.rpmsave or the new file\nis written into postgresql.conf.rpmnew instead of the original\nlocation. I don't think it's worth making stuff like that happen for\nthe sake of a comment.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 20 Feb 2019 14:35:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "Hello\n\n> I'm a bit reluctant to whack postgresql.conf around in back-branches\n> because sometimes that makes funny things happen when somebody\n> upgrades, e.g. via RPM.\n\nIf i remember correctly both deb and rpm packages will ask user about config difference.\nBut good point, comment change is too small difference. I am a bit late, good time for such change was before last minor release (we add data_sync_retry and config was changed anyway).\n\nregards, Sergei\n\n",
"msg_date": "Wed, 20 Feb 2019 23:54:25 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 11:54:25PM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> > I'm a bit reluctant to whack postgresql.conf around in back-branches\n> > because sometimes that makes funny things happen when somebody\n> > upgrades, e.g. via RPM.\n> \n> If i remember correctly both deb and rpm packages will ask user about config difference.\n> But good point, comment change is too small difference. I am a bit late, good time for such change was before last minor release (we add data_sync_retry and config was changed anyway).\n\nThe other issue is that you will change share/postgresql.conf.sample,\nbut not $PGDATA/postgresql.conf until initdb is run, meaning if someone\ndiffs the two files, they will see differences that they did not make. \nMaking defaults more accurate is not worth that confusion.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Thu, 21 Feb 2019 19:58:51 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "Hello\n\npostgresql.conf.sample was changed recently in REL_10_STABLE (commit ab1d9f066aee4f9b81abde6136771debe0191ae8)\nSo config will be changed in next minor release anyway. We have another reason to not fix bgwriter_lru_maxpages comment?\n\nregards, Sergei\n\n",
"msg_date": "Thu, 28 Feb 2019 10:28:44 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "On Thu, Feb 28, 2019 at 10:28:44AM +0300, Sergei Kornilov wrote:\n> Hello\n> \n> postgresql.conf.sample was changed recently in REL_10_STABLE (commit ab1d9f066aee4f9b81abde6136771debe0191ae8)\n> So config will be changed in next minor release anyway. We have another reason to not fix bgwriter_lru_maxpages comment?\n\nFrankly, I was surprised postgresql.conf.sample was changed in a back\nbranch since it will cause people who diff $PGDATA/postgresql.conf with\nshare/postgresql.conf.sample to see differences they didn't make.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 4 Mar 2019 18:49:18 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "On 2019-Mar-04, Bruce Momjian wrote:\n\n> On Thu, Feb 28, 2019 at 10:28:44AM +0300, Sergei Kornilov wrote:\n> > Hello\n> > \n> > postgresql.conf.sample was changed recently in REL_10_STABLE (commit ab1d9f066aee4f9b81abde6136771debe0191ae8)\n> > So config will be changed in next minor release anyway. We have another reason to not fix bgwriter_lru_maxpages comment?\n> \n> Frankly, I was surprised postgresql.conf.sample was changed in a back\n> branch since it will cause people who diff $PGDATA/postgresql.conf with\n> share/postgresql.conf.sample to see differences they didn't make.\n\nI think the set of people that execute diffs of their production conf\nfile against the sample file to be pretty small -- maybe even empty. If\nyou're really interested in knowing the changes you've done, you're more\nlikely to use a version control system on the file anyway.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 00:24:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "On Tue, Mar 5, 2019 at 12:24:14AM -0300, Alvaro Herrera wrote:\n> On 2019-Mar-04, Bruce Momjian wrote:\n> \n> > On Thu, Feb 28, 2019 at 10:28:44AM +0300, Sergei Kornilov wrote:\n> > > Hello\n> > > \n> > > postgresql.conf.sample was changed recently in REL_10_STABLE (commit ab1d9f066aee4f9b81abde6136771debe0191ae8)\n> > > So config will be changed in next minor release anyway. We have another reason to not fix bgwriter_lru_maxpages comment?\n> > \n> > Frankly, I was surprised postgresql.conf.sample was changed in a back\n> > branch since it will cause people who diff $PGDATA/postgresql.conf with\n> > share/postgresql.conf.sample to see differences they didn't make.\n> \n> I think the set of people that execute diffs of their production conf\n> file against the sample file to be pretty small -- maybe even empty. If\n> you're really interested in knowing the changes you've done, you're more\n> likely to use a version control system on the file anyway.\n\nWell, if this is true, then we should all agree to backpatch to\npostgresql.conf.sample more often.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Tue, 5 Mar 2019 16:04:17 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
},
{
"msg_contents": "Hello\n\nWell, actually we change postgresql.conf.sample in back-branches. Recently was yet another commit in REL_10_STABLE: fea2cab70de8d190762996c7c447143fb47bcfa3\nI think we need fix incosistent comment for bgwriter_lru_maxpages\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 18 Apr 2019 09:42:41 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter_lru_maxpages limits in PG 10 sample conf"
}
] |
[
{
"msg_contents": "When reviewing a related patch [1], I spent some time thinking about the\n\"detectNewRows\" argument of ri_PerformCheck(). My understanding is that it\nshould (via the es_crosscheck_snapshot field of EState) prevent the executor\nfrom accidentally deleting PK value that another transaction (whose data\nchanges the trigger's transaction does not see) might need.\n\nHowever I find it confusing that the trigger functions pass detectNewRows=true\neven if they only execute SELECT statement. I think that in these cases the\ntrigger functions 1) detect themselves that another transaction inserted\nrow(s) referencing the PK whose deletion is being checked, 2) do not try to\ndelete the PK anyway. Furthermore (AFAICS), only heap_update() and\nheap_delete() functions receive the crosscheck snapshot, so ri_PerformCheck()\ndoes not have to pass the crosscheck snapshot to SPI_execute_snapshot() for\nSELECT queries.\n\nFollowing is patch proposal. Is anything wrong about that?\n\ndiff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c\nindex e1aa3d0044..e235ad9cd0 100644\n--- a/src/backend/utils/adt/ri_triggers.c\n+++ b/src/backend/utils/adt/ri_triggers.c\n@@ -574,7 +574,7 @@ ri_Check_Pk_Match(Relation pk_rel, Relation fk_rel,\n \tresult = ri_PerformCheck(riinfo, &qkey, qplan,\n \t\t\t\t\t\t\t fk_rel, pk_rel,\n \t\t\t\t\t\t\t old_row, NULL,\n-\t\t\t\t\t\t\t true,\t/* treat like update */\n+\t\t\t\t\t\t\t false,\n \t\t\t\t\t\t\t SPI_OK_SELECT);\n \n \tif (SPI_finish() != SPI_OK_FINISH)\n@@ -802,7 +802,7 @@ ri_restrict(TriggerData *trigdata, bool is_no_action)\n \t\t\tri_PerformCheck(riinfo, &qkey, qplan,\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, NULL,\n-\t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\tfalse,\n \t\t\t\t\t\t\tSPI_OK_SELECT);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n\n\n[1] https://commitfest.postgresql.org/22/1975/\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n",
"msg_date": "Wed, 20 Feb 2019 15:29:20 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Unnecessary checks for new rows by some RI trigger functions?"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 9:27 AM Antonin Houska <ah@cybertec.at> wrote:\n> However I find it confusing that the trigger functions pass detectNewRows=true\n> even if they only execute SELECT statement.\n\nI don't quite see what those two things have to do with each other,\nbut I might be missing something. I stuck in a debugging elog() to\nsee where ri_Check_Pk_Match() gets called and it fired for the\nfollowing query in the regression tests:\n\nupdate pp set f1=f1+1;\n\nThat agrees with my intuition, which is that the logic here has\nsomething to do with allowing an update that changes a referenced row\nbut also changes some other row in the same table so that the\nreference remains valid -- it's just now a reference to some other\nrow. Unfortunately, as you probably also noticed, making the change\nyou propose here doesn't make any tests fail in either the main\nregression tests or the isolation tests.\n\nI suspect that this is a defect in the tests rather than a sign that\nthe code doesn't need to be changed, though. I'd try a statement like\nthe above in a multi-statement transaction with a higher-than-default\nisolation level, probably REPEATABLE READ, and maybe some concurrent\nactivity in another session. Sorry, I'm hand-waving here; I don't\nknow exactly what's going on.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Feb 2019 08:00:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary checks for new rows by some RI trigger functions?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 20, 2019 at 9:27 AM Antonin Houska <ah@cybertec.at> wrote:\n>> However I find it confusing that the trigger functions pass detectNewRows=true\n>> even if they only execute SELECT statement.\n\n> I don't quite see what those two things have to do with each other,\n> but I might be missing something.\n\nMe too. It's quite easy to demonstrate that the second part of Antonin's\nproposed patch (ie, don't check for new rows in the FK table during\nri_restrict) is wrong:\n\nregression=# create table p(f1 int primary key);\nCREATE TABLE\nregression=# create table f(f1 int references p on delete no action deferrable initially deferred);\nCREATE TABLE\nregression=# insert into p values (1);\nINSERT 0 1\nregression=# begin transaction isolation level serializable ;\nBEGIN\nregression=# table f;\n f1 \n----\n(0 rows)\n\nregression=# delete from p where f1=1;\nDELETE 1\n\n-- now, in another session:\n\nregression=# insert into f values (1);\nINSERT 0 1\n\n-- next, back in first session:\n\nregression=# commit;\nERROR: update or delete on table \"p\" violates foreign key constraint \"f_f1_fkey\" on table \"f\"\nDETAIL: Key (f1)=(1) is still referenced from table \"f\".\n\nWith the patch, the first session's commit succeeds, and we're left\nwith a violated FK constraint.\n\nI'm less sure about whether the change in ri_Check_Pk_Match would be\nsafe. On its face, what that is on about is the case where you have\nmatching rows in p and f, and a serializable transaction tries to\ndelete the p row, and by the time it commits some other transaction\nhas inserted a new p row so we could allow our deletion to succeed.\nIf you try that, however, you'll notice that the other transaction\ncan't commit its insertion because we haven't committed our deletion,\nso the unique constraint on the PK side would be violated. So maybe\nthere's a case for simplifying the logic there. 
Or maybe there are\nactually live cases for that with more complex MATCH rules than what\nI tried.\n\nBut TBH I think Antonin's question is backwards: the right thing to\nbe questioning here is whether detectNewRows = false is *ever* OK.\nI think he mischaracterized what that option really does; the comments\nin ri_PerformCheck explain it this way:\n\n * In READ COMMITTED mode, we just need to use an up-to-date regular\n * snapshot, and we will see all rows that could be interesting. But in\n * transaction-snapshot mode, we can't change the transaction snapshot. If\n * the caller passes detectNewRows == false then it's okay to do the query\n * with the transaction snapshot; otherwise we use a current snapshot, and\n * tell the executor to error out if it finds any rows under the current\n * snapshot that wouldn't be visible per the transaction snapshot.\n\nThe question therefore is under what scenarios is it okay for an FK\nconstraint to be enforced (in a serializable or repeatable-read\ntransaction) without attention to operations already committed by\nconcurrent transactions? If your gut answer isn't \"damn near none\",\nI don't think you're being paranoid enough ;-)\n\nThe one case where we use detectNewRows == false today is in\nRI_FKey_check, which we run after an insert or update on the FK\ntable to see if there's a matching row on the PK side. In this\ncase, failing to observe a newly-added PK row won't result in a\nconstraint violation, just an \"unnecessary\" error. I think this\nchoice is reasonable, since if you're running in serializable\nmode you're not supposed to want to depend on transactions\ncommitted after you start. 
(Note that if somebody concurrently\n*deletes* the PK row we need to match, we will notice that,\nbecause the SELECT FOR SHARE will, and then it'll throw a\nserialization error which seems like the right thing here.)\n\nPerhaps a similar argument can be constructed to justify changing\nthe behavior of ri_Check_Pk_Match, but I've not thought about it\nvery hard. In any case, changing ri_restrict is provably wrong.\n\n> Unfortunately, as you probably also noticed, making the change\n> you propose here doesn't make any tests fail in either the main\n> regression tests or the isolation tests.\n\nYeah. All the RI code predates the existence of the isolation\ntest machinery, so it's not so surprising that we don't have good\ncoverage for this type of situation. I thought I remembered that\nPeter had added some more FK tests recently, but I don't see\nanything in the commit log ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Feb 2019 12:02:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary checks for new rows by some RI trigger functions?"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Feb 20, 2019 at 9:27 AM Antonin Houska <ah@cybertec.at> wrote:\n> >> However I find it confusing that the trigger functions pass detectNewRows=true\n> >> even if they only execute SELECT statement.\n> \n> > I don't quite see what those two things have to do with each other,\n> > but I might be missing something.\n> \n> Me too.\n\nSure, that was the problem! I was thinking about the crosscheck snapshot so\nmuch that I missed the fact that snapshot handling is not the only purpose of\nthe detectNewRows argument. What I considered a problem can be fixed this way,\nif it's worth:\n\ndiff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c\nindex e1aa3d0044..993e778875 100644\n--- a/src/backend/utils/adt/ri_triggers.c\n+++ b/src/backend/utils/adt/ri_triggers.c\n@@ -233,7 +233,7 @@ static bool ri_PerformCheck(const RI_ConstraintInfo *riinfo,\n \t\t\t\tRI_QueryKey *qkey, SPIPlanPtr qplan,\n \t\t\t\tRelation fk_rel, Relation pk_rel,\n \t\t\t\tHeapTuple old_tuple, HeapTuple new_tuple,\n-\t\t\t\tbool detectNewRows, int expect_OK);\n+\t\t\t\tbool detectNewRows, bool select_only, int expect_OK);\n static void ri_ExtractValues(Relation rel, HeapTuple tup,\n \t\t\t\t const RI_ConstraintInfo *riinfo, bool rel_is_pk,\n \t\t\t\t Datum *vals, char *nulls);\n@@ -439,7 +439,7 @@ RI_FKey_check(TriggerData *trigdata)\n \tri_PerformCheck(riinfo, &qkey, qplan,\n \t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\tNULL, new_row,\n-\t\t\t\t\tfalse,\n+\t\t\t\t\tfalse, true,\n \t\t\t\t\tSPI_OK_SELECT);\n \n \tif (SPI_finish() != SPI_OK_FINISH)\n@@ -575,6 +575,7 @@ ri_Check_Pk_Match(Relation pk_rel, Relation fk_rel,\n \t\t\t\t\t\t\t fk_rel, pk_rel,\n \t\t\t\t\t\t\t old_row, NULL,\n \t\t\t\t\t\t\t true,\t/* treat like update */\n+\t\t\t\t\t\t\t true,\n \t\t\t\t\t\t\t SPI_OK_SELECT);\n \n \tif (SPI_finish() != SPI_OK_FINISH)\n@@ -803,6 +804,7 @@ 
ri_restrict(TriggerData *trigdata, bool is_no_action)\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, NULL,\n \t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\ttrue,\n \t\t\t\t\t\t\tSPI_OK_SELECT);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n@@ -943,6 +945,7 @@ RI_FKey_cascade_del(PG_FUNCTION_ARGS)\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, NULL,\n \t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\tfalse,\n \t\t\t\t\t\t\tSPI_OK_DELETE);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n@@ -1099,6 +1102,7 @@ RI_FKey_cascade_upd(PG_FUNCTION_ARGS)\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, new_row,\n \t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\tfalse,\n \t\t\t\t\t\t\tSPI_OK_UPDATE);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n@@ -1286,6 +1290,7 @@ ri_setnull(TriggerData *trigdata)\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, NULL,\n \t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\tfalse,\n \t\t\t\t\t\t\tSPI_OK_UPDATE);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n@@ -1473,6 +1478,7 @@ ri_setdefault(TriggerData *trigdata)\n \t\t\t\t\t\t\tfk_rel, pk_rel,\n \t\t\t\t\t\t\told_row, NULL,\n \t\t\t\t\t\t\ttrue,\t/* must detect new rows */\n+\t\t\t\t\t\t\tfalse,\n \t\t\t\t\t\t\tSPI_OK_UPDATE);\n \n \t\t\tif (SPI_finish() != SPI_OK_FINISH)\n@@ -2356,7 +2362,7 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo,\n \t\t\t\tRI_QueryKey *qkey, SPIPlanPtr qplan,\n \t\t\t\tRelation fk_rel, Relation pk_rel,\n \t\t\t\tHeapTuple old_tuple, HeapTuple new_tuple,\n-\t\t\t\tbool detectNewRows, int expect_OK)\n+\t\t\t\tbool detectNewRows, bool select_only, int expect_OK)\n {\n \tRelation\tquery_rel,\n \t\t\t\tsource_rel;\n@@ -2423,17 +2429,20 @@ ri_PerformCheck(const RI_ConstraintInfo *riinfo,\n \t * that SPI_execute_snapshot will register the snapshots, so we don't need\n \t * to bother here.\n \t */\n+\ttest_snapshot = InvalidSnapshot;\n+\tcrosscheck_snapshot = 
InvalidSnapshot;\n \tif (IsolationUsesXactSnapshot() && detectNewRows)\n \t{\n \t\tCommandCounterIncrement();\t/* be sure all my own work is visible */\n+\n \t\ttest_snapshot = GetLatestSnapshot();\n-\t\tcrosscheck_snapshot = GetTransactionSnapshot();\n-\t}\n-\telse\n-\t{\n-\t\t/* the default SPI behavior is okay */\n-\t\ttest_snapshot = InvalidSnapshot;\n-\t\tcrosscheck_snapshot = InvalidSnapshot;\n+\n+\t\t/*\n+\t\t * crosscheck_snapshot is actually used only for UPDATE / DELETE\n+\t\t * queries.\n+\t\t */\n+\t\tif (!select_only)\n+\t\t\tcrosscheck_snapshot = GetTransactionSnapshot();\n \t}\n \n \t/*\n\n\n> It's quite easy to demonstrate that the second part of Antonin's\n> proposed patch (ie, don't check for new rows in the FK table during\n> ri_restrict) is wrong:\n> \n> regression=# create table p(f1 int primary key);\n> CREATE TABLE\n> regression=# create table f(f1 int references p on delete no action deferrable initially deferred);\n> CREATE TABLE\n> regression=# insert into p values (1);\n> INSERT 0 1\n> regression=# begin transaction isolation level serializable ;\n> BEGIN\n> regression=# table f;\n> f1 \n> ----\n> (0 rows)\n> \n> regression=# delete from p where f1=1;\n> DELETE 1\n> \n> -- now, in another session:\n> \n> regression=# insert into f values (1);\n> INSERT 0 1\n> \n> -- next, back in first session:\n> \n> regression=# commit;\n> ERROR: update or delete on table \"p\" violates foreign key constraint \"f_f1_fkey\" on table \"f\"\n> DETAIL: Key (f1)=(1) is still referenced from table \"f\".\n> \n> With the patch, the first session's commit succeeds, and we're left\n> with a violated FK constraint.\n\nWhen I was running this example, the other session got blocked until the first\n(serializable) transaction committed. To achieve this ERROR (or FK violated\ndue to my patch proposal) I had to run the other session's INSERT before the\nfirst session's DELETE. 
But I think I understand the problem.\n\nThanks for the detailed analysis.\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n",
"msg_date": "Fri, 22 Feb 2019 10:14:33 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Unnecessary checks for new rows by some RI trigger functions?"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's quite easy to demonstrate that the second part of Antonin's\n>> proposed patch (ie, don't check for new rows in the FK table during\n>> ri_restrict) is wrong:\n\n> When I was running this example, the other session got blocked until the first\n> (serializable) transaction committed. To achieve this ERROR (or FK violated\n> due to my patch proposal) I had to run the other session's INSERT before the\n> first session's DELETE.\n\nOh, right, I copied and pasted out of my terminal windows in the wrong\norder :-(. The INSERT indeed must happen before the DELETE (and the\nTABLE or SELECT before that, so as to establish the serializable\ntransaction's snapshot before the insertion). Sorry about that.\n\nNot sure what I think about your new proposed patch. What problem\ndo you think it solves? Also, don't think I believe this:\n\n+\t\t * crosscheck_snapshot is actually used only for UPDATE / DELETE\n+\t\t * queries.\n\nThe commands we're issuing here are SELECT FOR UPDATE^H^H^HSHARE,\nand those should chase up to the newest row version and do a\ncrosscheck just as UPDATE / DELETE would do. If they don't, there's\na hazard of mis-enforcing the FK constraint in the face of\nconcurrent updates.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 09:29:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary checks for new rows by some RI trigger functions?"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Not sure what I think about your new proposed patch. What problem\n> do you think it solves? Also, don't think I believe this:\n> \n> +\t\t * crosscheck_snapshot is actually used only for UPDATE / DELETE\n> +\t\t * queries.\n\nI wanted to clarify the meaning of crosscheck_snapshot, i.e. only set it when\nit's needed. Anyway I don't feel now it's worth the amount of code changed.\n\n> The commands we're issuing here are SELECT FOR UPDATE^H^H^HSHARE,\n> and those should chase up to the newest row version and do a\n> crosscheck just as UPDATE / DELETE would do. If they don't, there's\n> a hazard of mis-enforcing the FK constraint in the face of\n> concurrent updates.\n\nMaybe I missed something. When I searched through the code I saw the\ncrosscheck_snapshot passed only to heap_update() and heap_delete().\n\n-- \nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n",
"msg_date": "Mon, 25 Feb 2019 08:20:10 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Unnecessary checks for new rows by some RI trigger functions?"
}
] |
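The crosscheck mechanism discussed in the thread above — testing the FK table under the latest snapshot while cross-checking against the serializable transaction's own snapshot — can be modeled abstractly. The sketch below is an illustrative simulation only (the real logic lives in ri_triggers.c and the heap access methods, and real MVCC visibility rules are far more involved); it shows why a referencing row inserted after the serializable snapshot was taken must raise an error rather than be silently accepted:

```python
# Toy visibility model for the RI crosscheck discussed above.
# Assumption: a "snapshot" here is just the set of committed transaction
# ids visible to it; this is a sketch, not PostgreSQL's implementation.

def referencing_rows(fk_rows, snapshot):
    """Rows of the FK table visible under a given snapshot."""
    return [row for row in fk_rows if row["xmin"] in snapshot]

def ri_restrict_check(fk_rows, pk_value, test_snapshot, crosscheck_snapshot):
    """Outcome of deleting pk_value: 'ok', 'violation', or 'serialization_failure'."""
    latest = [r for r in referencing_rows(fk_rows, test_snapshot)
              if r["fk"] == pk_value]
    if not latest:
        return "ok"                      # no referencing rows at all
    old = [r for r in referencing_rows(fk_rows, crosscheck_snapshot)
           if r["fk"] == pk_value]
    if not old:
        # Visible under the latest snapshot but not the transaction's own:
        # a concurrent insert the serializable transaction must not ignore.
        return "serialization_failure"
    return "violation"

# A concurrent session inserts a referencing row (xid 20) after the
# serializable transaction took its snapshot (which saw only xid 10).
fk_rows = [{"fk": 1, "xmin": 20}]
result = ri_restrict_check(fk_rows, 1,
                           test_snapshot={10, 20},       # GetLatestSnapshot()
                           crosscheck_snapshot={10})     # GetTransactionSnapshot()
print(result)  # -> serialization_failure
```

Dropping the crosscheck (the second snapshot) would make the last case return "ok", which is exactly the silently-violated FK constraint Tom's example demonstrates.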
[
{
"msg_contents": "I propose the attached patch to remove some unnecessary, legacy-looking\nuse of the PROCEDURAL keyword before LANGUAGE. We mostly don't use this\nanymore, so some of these look a bit old.\n\nThere is still some use in pg_dump, which is harder to remove because\nit's baked into the archive format, so I'm not touching that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 20 Feb 2019 15:39:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary use of PROCEDURAL"
},
{
"msg_contents": "On 2019-02-20 15:39, Peter Eisentraut wrote:\n> I propose the attached patch to remove some unnecessary, legacy-looking\n> use of the PROCEDURAL keyword before LANGUAGE. We mostly don't use this\n> anymore, so some of these look a bit old.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Feb 2019 09:20:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove unnecessary use of PROCEDURAL"
}
] |
[
{
"msg_contents": "Nowadays there are a number of methods for composing multiple\npostgresql.conf files for modularity. But if you have a bunch of things\nyou want to load via shared_preload_libraries, you have to put them all\nin one setting. How about some kind of syntax for appending something\nto a list, like\n\n shared_preload_libraries += 'pg_stat_statements'\n\nThoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Feb 2019 16:15:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "list append syntax for postgresql.conf"
},
{
"msg_contents": "Em qua, 20 de fev de 2019 às 12:15, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n>\nPartial setting could confuse users, no? I see the usefulness of such\nfeature but I prefer to implement it via ALTER SYSTEM. Instead of += I\nprefer to add another option to ALTER SYSTEM that appends new values\nsuch as:\n\nALTER SYSTEM APPEND shared_preload_libraries TO 'pg_stat_statements,\npg_prewarm';\n\nand it will expand to:\n\nshared_preload_libraries = 'foo, bar, pg_stat_statements, pg_prewarm'\n\n\n> shared_preload_libraries += 'pg_stat_statements'\n>\nWhat happen if you have:\n\nshared_preload_libraries = 'foo'\n\nthen\n\nshared_preload_libraries += 'bar'\n\nand then\n\nshared_preload_libraries = 'pg_stat_statements'\n\nYou have only 'pg_stat_statements' or 'foo, bar, pg_stat_statements'?\n\nSuppose you mistype 'bar' as 'baz', do you have only 'pg_stat_statements'?\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Feb 2019 12:35:33 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-02-20 <74af1f60-34af-633e-ee8a-310d40c741a7@2ndquadrant.com>\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n> \n> shared_preload_libraries += 'pg_stat_statements'\n\nI've thought about that syntax myself in the past. It would be very\nhandy to be able to things like\n\n/etc/postgresql/11/main/conf.d/pg_stat_statements.conf:\nshared_preload_libraries += 'pg_stat_statements'\npg_stat_statements.track = all\n\n(The current Debian packages already create and support conf.d/ in the\ndefault configuration.)\n\n+1.\n\nChristoph\n\n",
"msg_date": "Wed, 20 Feb 2019 16:37:19 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "On 2/20/19 10:15 AM, Peter Eisentraut wrote:\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n> \n> shared_preload_libraries += 'pg_stat_statements'\n> \n> Thoughts?\n\n\nI like the idea, but presume it would apply to any GUC list, not just\nshared_preload_libraries?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Feb 2019 10:54:50 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "On Wednesday, February 20, 2019, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n>\n> shared_preload_libraries += 'pg_stat_statements'\n>\n\nI would rather just have the behavior for that variable “append mode”,\nperiod. Maybe do it generally for all multi-value variables. It would be\nlike “add only” permissions - if you don’t want something loaded it cannot\nbe specified ever, overwrite is not allowed. Get rid of any\norder-of-operations concerns.\n\nDavid J.\n\nOn Wednesday, February 20, 2019, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:Nowadays there are a number of methods for composing multiple\npostgresql.conf files for modularity. But if you have a bunch of things\nyou want to load via shared_preload_libraries, you have to put them all\nin one setting. How about some kind of syntax for appending something\nto a list, like\n\n shared_preload_libraries += 'pg_stat_statements'\nI would rather just have the behavior for that variable “append mode”, period. Maybe do it generally for all multi-value variables. It would be like “add only” permissions - if you don’t want something loaded it cannot be specified ever, overwrite is not allowed. Get rid of any order-of-operations concerns.David J.",
"msg_date": "Wed, 20 Feb 2019 08:59:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "On 2/20/19 10:54 AM, Joe Conway wrote:\n>> shared_preload_libraries += 'pg_stat_statements'\n> \n> I like the idea, but presume it would apply to any GUC list, not just\n> shared_preload_libraries?\n\nIt would be nice if there were some way for extensions to declare\nGUC lists (the very thing that recently became explicitly unsupported).\n\nThe difficulty seems to be that a GUC may be assigned before the\nextension has been loaded to determine whether list syntax should apply.\n\nCould a change like this improve that situation too, perhaps by\ndeciding that += syntax /implies/ that an as-yet-undeclared GUC is\nto be of list form (which could then be checked when the extension\ndeclares the GUC)?\n\n-Chap\n\n",
"msg_date": "Wed, 20 Feb 2019 11:16:51 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n> shared_preload_libraries += 'pg_stat_statements'\n\nSeems potentially useful, but you'd need to figure out how to represent\nthis in the pg_settings and pg_file_settings views, which currently\nsuppose that any given GUC's value is determined by exactly one place in\nthe config file(s).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 11:42:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 10:15 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Nowadays there are a number of methods for composing multiple\n> postgresql.conf files for modularity. But if you have a bunch of things\n> you want to load via shared_preload_libraries, you have to put them all\n> in one setting. How about some kind of syntax for appending something\n> to a list, like\n>\n> shared_preload_libraries += 'pg_stat_statements'\n>\n> Thoughts?\n\nI like the idea of solving this problem but I'm not sure it's a good\nidea to add this sort of syntax to postgresql.conf. I think it would\nbe more useful to provide a way to tell ALTER SYSTEM that you want to\nappend rather than replace, as I see Euler also proposes.\n\nAnother off-ball idea is to maybe allow something like\nshared_preload_statements.pg_stat_statments = 1. The server would\nload all libraries X for which shared_preload_statements.X is set to a\nvalue that is one of the ways we spell a true value for a Boolean GUC.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 21 Feb 2019 12:18:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I like the idea of solving this problem but I'm not sure it's a good\n> idea to add this sort of syntax to postgresql.conf. I think it would\n> be more useful to provide a way to tell ALTER SYSTEM that you want to\n> append rather than replace, as I see Euler also proposes.\n\n> Another off-ball idea is to maybe allow something like\n> shared_preload_statements.pg_stat_statments = 1. The server would\n> load all libraries X for which shared_preload_statements.X is set to a\n> value that is one of the ways we spell a true value for a Boolean GUC.\n\nEither one of these would avoid the problem of having more than one\nplace-of-definition of a GUC value, so I find them attractive.\nThe second one seems a tad more modular, perhaps, and it's also\neasier to figure out how to undo a change.\n\nA possible objection to the second one is that it binds us forevermore\nto allowing setting of GUCs that weren't previously known, which is\nsomething I don't much care for. But it can probably be constrained\nenough to remove the formal indeterminacy: we can insist that all\nGUCs named \"shared_preload_modules.something\" will be booleans, and\nthen we'll throw error if any of them don't correspond to a known\nloadable library.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Feb 2019 12:30:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: list append syntax for postgresql.conf"
},
{
"msg_contents": "On 2019-02-21 18:18, Robert Haas wrote:\n> Another off-ball idea is to maybe allow something like\n> shared_preload_statements.pg_stat_statments = 1. The server would\n> load all libraries X for which shared_preload_statements.X is set to a\n> value that is one of the ways we spell a true value for a Boolean GUC.\n\nThis line of thought is attractive, but the library lists can be\norder-sensitive in case of cooperating libraries, so an approach were\nthe load order is indeterminate might not work. (It might be worth\nexploring ways of making it not order-sensitive. Many trade-offs.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Feb 2019 11:57:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: list append syntax for postgresql.conf"
}
] |
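Euler's order-of-operations question is the crux of the proposal in the thread above. A minimal sketch (not PostgreSQL's actual GUC parser — the function and the exact semantics here are illustrative assumptions) of how `=` and `+=` could compose across config lines, with a plain `=` discarding everything accumulated so far:

```python
# Illustrative sketch of '=' vs '+=' semantics for list-valued settings.
# This is NOT PostgreSQL's parser; it only models the proposal's question:
# a later plain '=' discards everything accumulated so far, including
# any earlier '+=' lines.

def apply_conf_lines(lines):
    """Apply config lines in order; return final settings as dict of lists."""
    settings = {}
    for line in lines:
        if "+=" in line:  # must test before plain '=', since '+=' contains '='
            name, value = (part.strip() for part in line.split("+=", 1))
            items = [v.strip() for v in value.strip("'").split(",") if v.strip()]
            settings.setdefault(name, []).extend(items)
        else:
            name, value = (part.strip() for part in line.split("=", 1))
            items = [v.strip() for v in value.strip("'").split(",") if v.strip()]
            settings[name] = items  # plain '=' resets the list
    return settings


conf = [
    "shared_preload_libraries = 'foo'",
    "shared_preload_libraries += 'bar'",
    "shared_preload_libraries = 'pg_stat_statements'",  # resets foo and bar
]
print(apply_conf_lines(conf)["shared_preload_libraries"])  # -> ['pg_stat_statements']
```

Under these assumed semantics, the answer to Euler's question is that only 'pg_stat_statements' survives — the final plain `=` wins — which is exactly the kind of surprise he is pointing at, and why the views Tom mentions would need to report more than one source location per GUC.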
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15646\nLogged by: Eugen Konkov\nEmail address: kes-kes@yandex.ru\nPostgreSQL version: 10.4\nOperating system: (Debian 10.4-2.pgdg90+1)\nDescription: \n\nHi.\r\n\r\n[documentation](https://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-ADMIN-SET)\ndescribes that:\r\n\r\n>set_config sets the parameter setting_name to new_value. If is_local is\ntrue, the new value will only apply to the current transaction.\r\n\r\nThus if I rollback transaction original value should be available.\r\n\r\nBut current behavior returns empty string instead of NULL (the initial\nvalue) after transaction is rolled back. When I restart session, NULL is\nreturned again as it is expected.\r\n( I expect NULL just because 'my.app_period' is not configuration variable\nis not defined yet. The documentation (link provided above) does not cover\nwhat should be returned)\r\n\r\nHow to reproduce steps:\r\n$ make dbshell\r\npsql -h databases -p 5433 -U tucha tucha ||:\r\npsql (11.1 (Ubuntu 11.1-1.pgdg16.04+1), server 10.4 (Debian\n10.4-2.pgdg90+1))\r\nType \"help\" for help.\r\n\r\nWe start a new session and check setting value before transaction. It is\nNULL:\r\ntucha=> select current_setting( 'my.app_period', true ) is null;\r\n ?column? \r\n----------\r\n t\r\n(1 row)\r\n\r\nWe start transaction and change the setting value:\r\ntucha=> begin;\r\nBEGIN\r\ntucha=> select set_config( 'my.app_period', 'some value', true );\r\n set_config \r\n------------\r\n some value\r\n(1 row)\r\n\r\nWe can see that value is changed. It is NOT NULL:\r\ntucha=> select current_setting( 'my.app_period', true ) is null;\r\n ?column? 
\r\n----------\r\n f\r\n(1 row)\r\n\r\ntucha=> select current_setting( 'my.app_period', true );\r\n current_setting \r\n-----------------\r\n some value\r\n(1 row)\r\n\r\nHere I finish transaction (it has no matter how: commit/rollback):\r\ntucha=> rollback;\r\nROLLBACK\r\n\r\nHere we can see that setting value is different from value that was before\ntransaction\r\ntucha=> select current_setting( 'my.app_period', true ) is null;\r\n ?column? \r\n----------\r\n f\r\n(1 row)\r\n\r\ntucha=> \\q\r\n\r\n\r\nWhen I restart session I get NULL again (as expected):\r\nkes@work ~/t $ make dbshell\r\npsql -h databases -p 5433 -U tucha tucha ||:\r\npsql (11.1 (Ubuntu 11.1-1.pgdg16.04+1), server 10.4 (Debian\n10.4-2.pgdg90+1))\r\nType \"help\" for help.\r\n\r\ntucha=> select current_setting( 'my.app_period', true ) is null;\r\n ?column? \r\n----------\r\n t\r\n(1 row)\r\n\r\n\r\nMy database version:\r\ntucha=> select version();\r\n version \n \r\n------------------------------------------------------------------------------------------\r\n PostgreSQL 10.4 (Debian 10.4-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled\nby gcc (Debian \r\n(1 row)",
"msg_date": "Wed, 20 Feb 2019 16:10:56 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "On 2/20/19 11:10 AM, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference: 15646\n> Logged by: Eugen Konkov\n> Email address: kes-kes@yandex.ru\n> PostgreSQL version: 10.4\n> Operating system: (Debian 10.4-2.pgdg90+1)\n> Description: \n> \n> Hi.\n> \n> [documentation](https://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-ADMIN-SET)\n> describes that:\n> \n>>set_config sets the parameter setting_name to new_value. If is_local is\n> true, the new value will only apply to the current transaction.\n> \n> Thus if I rollback transaction original value should be available.\n> \n> But current behavior returns empty string instead of NULL (the initial\n> value) after transaction is rolled back. When I restart session, NULL is\n> returned again as it is expected.\n\n\nThis has been discussed before and dismissed:\nhttps://www.postgresql.org/message-id/flat/56842412.5000005%40joeconway.com\n\nPersonally I agree it is a bug, but I am not sure you will get much\nsupport for that position.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Feb 2019 12:01:47 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 2/20/19 11:10 AM, PG Bug reporting form wrote:\n>> But current behavior returns empty string instead of NULL (the initial\n>> value) after transaction is rolled back. When I restart session, NULL is\n>> returned again as it is expected.\n\n> This has been discussed before and dismissed:\n> https://www.postgresql.org/message-id/flat/56842412.5000005%40joeconway.com\n> Personally I agree it is a bug, but I am not sure you will get much\n> support for that position.\n\nThe fact that we allow undeclared user-defined GUCs at all is a bug IMO.\nWe need to find a way to replace that behavior with something whereby\nthe name and type of a parameter are declared up-front before you can\nset it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 12:11:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joe Conway <mail@joeconway.com> writes:\n> > On 2/20/19 11:10 AM, PG Bug reporting form wrote:\n> >> But current behavior returns empty string instead of NULL (the initial\n> >> value) after transaction is rolled back. When I restart session, NULL is\n> >> returned again as it is expected.\n>\n> > This has been discussed before and dismissed:\n> > https://www.postgresql.org/message-id/flat/56842412.5000005%40joeconway.com\n> > Personally I agree it is a bug, but I am not sure you will get much\n> > support for that position.\n>\n> The fact that we allow undeclared user-defined GUCs at all is a bug IMO.\n> We need to find a way to replace that behavior with something whereby\n> the name and type of a parameter are declared up-front before you can\n> set it.\n\nWe should at least document the existing working-as-intended behavior\nthen. This, the linked thread, and Bug # 14877 are all caused by\ninsufficient documentation of the current behavior. Users should be\ninformed that as far as the GUC system is concerned NULL and the empty\nstring are equivalent and that resetting uses the empty string while\nnever being set returns NULL.\n\nIts immaterial whether its existence is due to a bug that simply\nbecame acceptable or was an otherwise retrospectively poor design\ndecision - at this point we have to live with it and should treat it\nas a proper and supported feature, if only in its current form. At\nleast until someone feels strongly enough to deprecate it and put\nsomething else more suitable in its place.\n\nDavid J.\n\n",
"msg_date": "Wed, 20 Feb 2019 10:32:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "Hi,\n\n> On Feb 20, 2019, at 9:32 AM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> Users should be\n> informed that as far as the GUC system is concerned NULL and the empty\n> string are equivalent and that resetting uses the empty string while\n> never being set returns NULL.\n> \n\nExcept that it’s not - the code path in guc.c uses what is passed as the boot value:\n\nvar->boot_val <https://doxygen.postgresql.org/structconfig__string.html#ad6895b2d95aa9e5c3ff2169d7fe2bc20> = bootValue;\n\n\nIt’s up to the extension developer to understand that this can be changed back to something other than the boot value and not actually be the boot value - you can see this assumption being made in plpgsql, pltcl, and plperl. The GUC system makes no notification that the two are equivalent, neither in code nor in documentation.\n\n> Its immaterial whether its existence is due to a bug that simply\n> became acceptable or was an otherwise retrospectively poor design\n> decision - at this point we have to live with it and should treat it\n> as a proper and supported feature, if only in its current form. At\n> least until someone feels strongly enough to deprecate it and put\n> something else more suitable in its place.\n\n\nAgreed, but it needs to be documented, the current documentation gives only the boot value, and does not note that string variables are the only variables that behave differently and do not return to their boot value.",
"msg_date": "Wed, 20 Feb 2019 10:06:07 -0800",
"msg_from": "Jerry Sievert <jerry@legitimatesounding.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "On 2/20/19 12:11 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 2/20/19 11:10 AM, PG Bug reporting form wrote:\n>>> But current behavior returns empty string instead of NULL (the initial\n>>> value) after transaction is rolled back. When I restart session, NULL is\n>>> returned again as it is expected.\n> \n>> This has been discussed before and dismissed:\n>> https://www.postgresql.org/message-id/flat/56842412.5000005%40joeconway.com\n>> Personally I agree it is a bug, but I am not sure you will get much\n>> support for that position.\n> \n> The fact that we allow undeclared user-defined GUCs at all is a bug IMO.\n> We need to find a way to replace that behavior with something whereby\n> the name and type of a parameter are declared up-front before you can\n> set it.\n\n(moving to hackers)\n\nPerhaps we could do something like:\n\n1. If the user-defined GUC is defined in postgresql.conf, et al, same\n behavior as now\n2. Backward compatibility concerns would be an issue, so create another\n new GUC declare_custom_settings which initially defaults to false.\n3. If declare_custom_settings is true, and the user-defined GUC is not\n defined in postgresql.conf, then in order to create it dynamically\n via SET or similar methods, you must do something like:\n\n CREATE SETTING name TYPE guctype [LIST];\n SET name TO value;\n\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Feb 2019 13:51:57 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "Hello,\n\nNot sure I should open new issue or continue this one.\n\nselect set_config( 'my.some_conf', 'value', true );\n\ndoes not issue warning if there is no transaction in progress.\n\nI faced into this problem when call to stored function which make use\nof configurations. and missed that this function has no effect because\nthere is no transaction in progress\n\n-- \nBest regards,\nEugen Konkov\n\n\n",
"msg_date": "Tue, 26 Feb 2019 14:35:39 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "Hello\n\nDocumentation has no description how 'false' value for 'is_local' parameter interact with transaction\n\n\n\nDo I understand correct?\n\nhttps://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-ADMIN-SET\n>set_config sets the parameter setting_name to new_value. If is_local is true, the new value will only apply to the current transaction. If you want the new value to apply for the current session, use false instead. \n\nIf I use 'false' then transaction will not have effect, because I set the value to session?\n\ntucha=> select current_setting( 'my.app_period', true );\n current_setting \n-----------------\n \n(1 row)\n\ntucha=> begin;\nBEGIN\ntucha=> select set_config( 'my.app_period', tstzrange( '-infinity', 'infinity' )::text, false );\n set_config \n----------------------\n [-infinity,infinity)\n(1 row)\n\ntucha=> rollback;\nROLLBACK\n\nNOTICE: session is rolled back and session value is rolled back despite on that I did not use 'true' as parameter for local:\n\ntucha=> select current_setting( 'my.app_period', true );\n current_setting \n-----------------\n \n(1 row)\n\ntucha=> begin;\nBEGIN\ntucha=> select set_config( 'my.app_period', tstzrange( '-infinity', 'infinity' )::text, false );\n set_config \n----------------------\n [-infinity,infinity)\n(1 row)\n\ntucha=> commit;\nCOMMIT\n\nWhen I commit then the value is applied to session:\ntucha=> select current_setting( 'my.app_period', true );\n current_setting \n----------------------\n [-infinity,infinity)\n(1 row)\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n",
"msg_date": "Tue, 26 Feb 2019 14:48:46 +0200",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
},
{
"msg_contents": "On Tuesday, February 26, 2019, Eugen Konkov <kes-kes@yandex.ru> wrote:\n\n> If I use 'false' then transaction will not have effect, because I set the\n> value to session?\n>\n\nThe current transaction is always affected. True meansonly; false causes\nthe change to persist beyond, for the life of the session.\n\nDavis J.\n\nOn Tuesday, February 26, 2019, Eugen Konkov <kes-kes@yandex.ru> wrote:If I use 'false' then transaction will not have effect, because I set the value to session?\nThe current transaction is always affected. True meansonly; false causes the change to persist beyond, for the life of the session.Davis J.",
"msg_date": "Tue, 26 Feb 2019 07:11:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15646: Inconsistent behavior for current_setting/set_config"
}
] |
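The behavior reported in the thread above can be summarized with a small model. The sketch below is an illustrative simulation of the externally visible semantics only (not the guc.c implementation, and it models only rollback, not the transaction-local/session distinction David describes): a placeholder GUC created inside a transaction is not removed on rollback — its value is reset to the empty string, so `current_setting(..., true)` stops returning NULL.

```python
# Toy model of the observed custom-GUC behavior from this thread.
# Assumption: this mirrors guc.c only in its externally visible effects;
# the is_local flag is accepted but not modeled here.

class Session:
    def __init__(self):
        self.settings = {}        # name -> current value
        self._txn_saved = None    # snapshot of settings at BEGIN

    def begin(self):
        self._txn_saved = dict(self.settings)

    def set_config(self, name, value, is_local):
        self.settings[name] = value
        return value

    def rollback(self):
        restored = {}
        for name in self.settings:
            if name in self._txn_saved:
                restored[name] = self._txn_saved[name]
            else:
                # The placeholder GUC survives rollback, but its value is
                # reset to '' rather than removed (the reported surprise).
                restored[name] = ""
        self.settings = restored
        self._txn_saved = None

    def current_setting(self, name, missing_ok=True):
        # Returns None (SQL NULL) only if the GUC was never created.
        return self.settings.get(name)


s = Session()
print(repr(s.current_setting("my.app_period")))  # -> None (SQL NULL)
s.begin()
s.set_config("my.app_period", "some value", True)
s.rollback()
print(repr(s.current_setting("my.app_period")))  # -> '' (empty, not NULL)
```

This is exactly the asymmetry the bug report describes: before the transaction the setting reads as NULL, after rollback it reads as the empty string, and only a fresh session returns NULL again.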
[
{
"msg_contents": "src/backend/postmaster/autovacuum.c declares:\n\n|static void\n|autovac_report_workitem(AutoVacuumWorkItem *workitem,\n| const char *nspname, const char *relname)\n\nBut calls it like:\n\n| cur_relname = get_rel_name(workitem->avw_relation);\n| cur_nspname = get_namespace_name(get_rel_namespace(workitem->avw_relation));\n| cur_datname = get_database_name(MyDatabaseId);\n| if (!cur_relname || !cur_nspname || !cur_datname)\n| goto deleted2;\n|\n| autovac_report_workitem(workitem, cur_nspname, cur_datname);\n\nSo I see stuff like:\n\n|check_pg - txn_time POSTGRES_TXN_TIME OK: DB main longest txn: 164s PID:10697 database:main username: query:autovacuum: BRIN summarize public.main 1028223\n\nI guess it should be database.namespace.relname ?\n\n",
"msg_date": "Wed, 20 Feb 2019 12:55:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "BRIN summarize autovac_report_workitem passes datname as relname"
},
{
"msg_contents": "Em qua, 20 de fev de 2019 às 15:56, Justin Pryzby\n<pryzby@telsasoft.com> escreveu:\n>\n> I guess it should be database.namespace.relname ?\n>\nYup. It is an oversight in 7526e10224f0792201e99631567bbe44492bbde4.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Feb 2019 18:19:34 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: BRIN summarize autovac_report_workitem passes datname as relname"
},
{
"msg_contents": "On 2019-Feb-20, Euler Taveira wrote:\n\n> Em qua, 20 de fev de 2019 �s 15:56, Justin Pryzby\n> <pryzby@telsasoft.com> escreveu:\n> >\n> > I guess it should be database.namespace.relname ?\n> >\n> Yup. It is an oversight in 7526e10224f0792201e99631567bbe44492bbde4.\n\nOops. Pushed fix. Thanks for reporting.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Feb 2019 13:01:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BRIN summarize autovac_report_workitem passes datname as relname"
}
] |
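The bug fixed in the thread above is a classic positional-argument swap: the function expects `(workitem, nspname, relname)` but the call site passes `(workitem, cur_nspname, cur_datname)`, and since both parameters are plain strings the compiler cannot object. As a hedged, language-agnostic illustration (the function names below are hypothetical reconstructions, not the autovacuum.c API), keyword-only parameters make this class of bug fail loudly instead of producing a plausible-looking but wrong report string:

```python
# Hypothetical reconstruction of the reporting call, NOT the real C API.
# With same-typed positional parameters, swapping arguments is silent:

def report_workitem_positional(workitem, nspname, relname):
    return f"autovacuum: BRIN summarize {nspname}.{relname} {workitem}"

# The bug: passing (nspname, datname) where (nspname, relname) is expected
# yields a string like "public.main" in which "main" is actually the
# *database* name, exactly as seen in the monitoring output above.

# Keyword-only parameters (everything after '*') force call sites to name
# each argument, so a swap raises a TypeError rather than going unnoticed:

def report_workitem(workitem, *, datname, nspname, relname):
    return f"autovacuum: BRIN summarize {datname}.{nspname}.{relname} ({workitem})"


msg = report_workitem(1028223, datname="main", nspname="public", relname="mytable")
print(msg)  # -> autovacuum: BRIN summarize main.public.mytable (1028223)
```

C has no keyword arguments, so there the usual defenses are distinct wrapper types or careful review — which is why this oversight survived from the original commit until this report.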
[
{
"msg_contents": "Hi all,\n\n\nWhen we want to get total size of all relation in a schema we have to\nexecute one of our favorite DBA query. It is quite simple but what\nabout displaying schema size when using \\dn+ in psql ?\n\n\ngilles=# \\dn+\n List of schemas\n Name | Owner | Access privileges | Size | Description \n--------+----------+----------------------+---------+------------------------\n public | postgres | postgres=UC/postgres+| 608 kB | standard public schema\n | | =UC/postgres | |\n test | gilles | | 57 MB |\n empty | gilles | | 0 bytes |\n(3 rows)\n\nThe attached simple patch adds this feature. Is there any cons adding\nthis information? The patch tries to be compatible to all PostgreSQL\nversion. Let me know if I have missed something.\n\n\nBest regards,\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Wed, 20 Feb 2019 23:26:00 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "[patch] Add schema total size to psql \\dn+"
},
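[Editor's note] The "favorite DBA query" behind the proposed \dn+ Size column can be sketched with a query along these lines (an illustration only, not the attached patch; it assumes pg_total_relation_size(), i.e. a server of version 8.1 or later):

```sql
-- Total on-disk size of each schema's tables, matviews and sequences,
-- roughly what the proposed \dn+ Size column would report.
SELECT n.nspname AS schema,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS size
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON c.relnamespace = n.oid
WHERE c.relkind IN ('r', 'm', 'S')
GROUP BY n.nspname
ORDER BY n.nspname;
```

Indexes and TOAST tables are folded in by pg_total_relation_size(), which is why the relkind filter lists only top-level relations.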
{
"msg_contents": "Le 20/02/2019 à 23:26, Gilles Darold a écrit :\n> Hi all,\n>\n>\n> When we want to get total size of all relation in a schema we have to\n> execute one of our favorite DBA query. It is quite simple but what\n> about displaying schema size when using \\dn+ in psql ?\n>\n>\n> gilles=# \\dn+\n> List of schemas\n> Name | Owner | Access privileges | Size | Description \n> --------+----------+----------------------+---------+------------------------\n> public | postgres | postgres=UC/postgres+| 608 kB | standard public schema\n> | | =UC/postgres | |\n> test | gilles | | 57 MB |\n> empty | gilles | | 0 bytes |\n> (3 rows)\n>\n> The attached simple patch adds this feature. Is there any cons adding\n> this information? The patch tries to be compatible to all PostgreSQL\n> version. Let me know if I have missed something.\n\n\nImprove this patch by using LATERAL JOIN when version >= 9.3.\n\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Thu, 21 Feb 2019 11:49:29 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>\n> > When we want to get total size of all relation in a schema we have to\n> > execute one of our favorite DBA query. It is quite simple but what\n> > about displaying schema size when using \\dn+ in psql ?\n> > [...]\n> > The attached simple patch adds this feature. Is there any cons adding\n> > this information? The patch tries to be compatible to all PostgreSQL\n> > version. Let me know if I have missed something.\n\nI needed that quite often, so I'm +1 to add this! Please register\nthis patch on the next commitfest.\n\n",
"msg_date": "Thu, 21 Feb 2019 12:01:26 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "Le 21/02/2019 à 12:01, Julien Rouhaud a écrit :\n> On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>> When we want to get total size of all relation in a schema we have to\n>>> execute one of our favorite DBA query. It is quite simple but what\n>>> about displaying schema size when using \\dn+ in psql ?\n>>> [...]\n>>> The attached simple patch adds this feature. Is there any cons adding\n>>> this information? The patch tries to be compatible to all PostgreSQL\n>>> version. Let me know if I have missed something.\n> I needed that quite often, so I'm +1 to add this! Please register\n> this patch on the next commitfest.\n\n\nAdded to next commitfest.\n\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org\n\n\n",
"msg_date": "Thu, 21 Feb 2019 17:42:03 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 5:42 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>\n> Le 21/02/2019 à 12:01, Julien Rouhaud a écrit :\n> > On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n> >>> When we want to get total size of all relation in a schema we have to\n> >>> execute one of our favorite DBA query. It is quite simple but what\n> >>> about displaying schema size when using \\dn+ in psql ?\n> >>> [...]\n> >>> The attached simple patch adds this feature. Is there any cons adding\n> >>> this information? The patch tries to be compatible to all PostgreSQL\n> >>> version. Let me know if I have missed something.\n\nI have a few comments about the patch.\n\nYou're using pg_class LEFT JOIN pg_namespace while we need INNER JOIN\nhere AFAICT. Also, you're using pg_relation_size(), so fsm, vm won't\nbe accounted for. You should also be bypassing the size for 8.0-\nservers where there's no pg_*_size() functions.\n\n",
"msg_date": "Thu, 21 Feb 2019 18:28:03 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "Le 21/02/2019 à 18:28, Julien Rouhaud a écrit :\n> On Thu, Feb 21, 2019 at 5:42 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>> Le 21/02/2019 à 12:01, Julien Rouhaud a écrit :\n>>> On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>>>> When we want to get total size of all relation in a schema we have to\n>>>>> execute one of our favorite DBA query. It is quite simple but what\n>>>>> about displaying schema size when using \\dn+ in psql ?\n>>>>> [...]\n>>>>> The attached simple patch adds this feature. Is there any cons adding\n>>>>> this information? The patch tries to be compatible to all PostgreSQL\n>>>>> version. Let me know if I have missed something.\n> I have a few comments about the patch.\n>\n> You're using pg_class LEFT JOIN pg_namespace while we need INNER JOIN\n> here AFAICT. Also, you're using pg_relation_size(), so fsm, vm won't\n> be accounted for. You should also be bypassing the size for 8.0-\n> servers where there's no pg_*_size() functions.\n\n\nI agree all points. Attached is a new version of the patch that use\npg_total_relation_size() and a filter on relkind IN ('r','m','S'), JOIN\nfixes and no size report before 8.1.\n\n\nThanks for the review.\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Thu, 21 Feb 2019 20:16:27 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "Gilles Darold <gilles.darold@dalibo.com> writes:\n\n> Le 21/02/2019 à 18:28, Julien Rouhaud a écrit :\n>\n>> On Thu, Feb 21, 2019 at 5:42 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>> Le 21/02/2019 à 12:01, Julien Rouhaud a écrit :\n>>>> On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>>>>> When we want to get total size of all relation in a schema we have to\n>>>>>> execute one of our favorite DBA query. It is quite simple but what\n>>>>>> about displaying schema size when using \\dn+ in psql ?\n>>>>>> [...]\n>>>>>> The attached simple patch adds this feature. Is there any cons adding\n>>>>>> this information? The patch tries to be compatible to all PostgreSQL\n>>>>>> version. Let me know if I have missed something.\n>> I have a few comments about the patch.\n>>\n>> You're using pg_class LEFT JOIN pg_namespace while we need INNER JOIN\n>> here AFAICT. Also, you're using pg_relation_size(), so fsm, vm won't\n>> be accounted for. You should also be bypassing the size for 8.0-\n>> servers where there's no pg_*_size() functions.\n>\n>\n> I agree all points. Attached is a new version of the patch that use\n> pg_total_relation_size() and a filter on relkind IN ('r','m','S'), JOIN\n> fixes and no size report before 8.1.\n\nBeware that those pg_relation_size() functions are going to block in\ncases where existing objects are (for example) in transactionss such\nas...\n\nbegin;\ntruncate foo;\nbig-nasty-reporting-jobs...;\n\nThus a bare-metal tallying of pg_class.relpages for heap/index/toast,\nalong with missing the FSM/VM size could be $preferred.\n\nAnd/or at least mentioning this caveat in the related manual section :-)\n\nFWIW\n\n\n\n>\n>\n> Thanks for the review.\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n",
"msg_date": "Thu, 21 Feb 2019 14:57:23 -0600",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
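[Editor's note] Jerry's lock-free alternative — tallying pg_class.relpages rather than calling the size functions — might look something like this (an estimate only: relpages is refreshed by VACUUM/ANALYZE and the free-space and visibility-map forks are missed, but no relation locks are taken, so it never blocks behind an open TRUNCATE):

```sql
-- Size estimate from planner statistics; does not open the relations,
-- so it cannot queue behind a transaction holding AccessExclusiveLock.
SELECT n.nspname AS schema,
       pg_size_pretty(sum(c.relpages)::bigint
                      * current_setting('block_size')::bigint) AS approx_size
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON c.relnamespace = n.oid
WHERE c.relkind IN ('r', 'm', 'i', 't')  -- heap, matviews, indexes, toast
GROUP BY n.nspname
ORDER BY n.nspname;
```

The trade-off is staleness: relpages reflects the last VACUUM/ANALYZE, not the current file size.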
{
"msg_contents": "Le 21/02/2019 à 21:57, Jerry Sievers a écrit :\n> Gilles Darold <gilles.darold@dalibo.com> writes:\n>\n>> Le 21/02/2019 à 18:28, Julien Rouhaud a écrit :\n>>\n>>> On Thu, Feb 21, 2019 at 5:42 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>>> Le 21/02/2019 à 12:01, Julien Rouhaud a écrit :\n>>>>> On Thu, Feb 21, 2019 at 11:49 AM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>>>>>> When we want to get total size of all relation in a schema we have to\n>>>>>>> execute one of our favorite DBA query. It is quite simple but what\n>>>>>>> about displaying schema size when using \\dn+ in psql ?\n>>>>>>> [...]\n>>>>>>> The attached simple patch adds this feature. Is there any cons adding\n>>>>>>> this information? The patch tries to be compatible to all PostgreSQL\n>>>>>>> version. Let me know if I have missed something.\n>>> I have a few comments about the patch.\n>>>\n>>> You're using pg_class LEFT JOIN pg_namespace while we need INNER JOIN\n>>> here AFAICT. Also, you're using pg_relation_size(), so fsm, vm won't\n>>> be accounted for. You should also be bypassing the size for 8.0-\n>>> servers where there's no pg_*_size() functions.\n>>\n>> I agree all points. Attached is a new version of the patch that use\n>> pg_total_relation_size() and a filter on relkind IN ('r','m','S'), JOIN\n>> fixes and no size report before 8.1.\n> Beware that those pg_relation_size() functions are going to block in\n> cases where existing objects are (for example) in transactionss such\n> as...\n>\n> begin;\n> truncate foo;\n> big-nasty-reporting-jobs...;\n>\n> Thus a bare-metal tallying of pg_class.relpages for heap/index/toast,\n> along with missing the FSM/VM size could be $preferred.\n>\n> And/or at least mentioning this caveat in the related manual section :-)\n\n\nIt's true but we already have this caveats with \\d+ or \\dt+. 
They are\ninteractive commands so they can be canceled if they takes too long time.\n\n\nI've attached the v4 of the patch that adds psql documentation update\nfor the \\dn command to add on-disk report in verbose mode. Thanks for\nthe reminder :-)\n\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org",
"msg_date": "Thu, 21 Feb 2019 22:58:54 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "On Wed, Feb 20, 2019 at 5:26 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n> The attached simple patch adds this feature. Is there any cons adding\n> this information?\n\nWell, it'll take time to compute, maybe a lot of time if the database\nis big and the server is busy. Not sure how serious that problem can\nget.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 11:06:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 20, 2019 at 5:26 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>> The attached simple patch adds this feature. Is there any cons adding\n>> this information?\n\n> Well, it'll take time to compute, maybe a lot of time if the database\n> is big and the server is busy. Not sure how serious that problem can\n> get.\n\nIs there any permissions issue involved here? I'd be a bit worried\nabout whether \\dn+ could fail, or deliver misleading answers, when\nrun by a user without permissions on (some) tables. Also, even if\nwe allow people to get size info on tables they can't read today,\nhaving this feature would be a roadblock to tightening that in\nthe future.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 13:21:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 7:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Is there any permissions issue involved here? I'd be a bit worried\n> about whether \\dn+ could fail, or deliver misleading answers, when\n> run by a user without permissions on (some) tables. Also, even if\n> we allow people to get size info on tables they can't read today,\n> having this feature would be a roadblock to tightening that in\n> the future.\n\nGilles' patch is using pg_total_relation_size(), so there's no\npermission check at all. Also AFAICS this function even allows any\nuser to get the size of any other user backend's temporary table.\n\n",
"msg_date": "Sat, 23 Feb 2019 20:54:22 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
{
"msg_contents": "Le 22/02/2019 à 19:21, Tom Lane a écrit :\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Wed, Feb 20, 2019 at 5:26 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>>> The attached simple patch adds this feature. Is there any cons adding\n>>> this information?\n>> Well, it'll take time to compute, maybe a lot of time if the database\n>> is big and the server is busy. Not sure how serious that problem can\n>> get.\n> Is there any permissions issue involved here? I'd be a bit worried\n> about whether \\dn+ could fail, or deliver misleading answers, when\n> run by a user without permissions on (some) tables. Also, even if\n> we allow people to get size info on tables they can't read today,\n> having this feature would be a roadblock to tightening that in\n> the future.\n\n\nThat's right, I've removed the patch. My first idea was to add a server\nside function pg_schema_size() but I was thinking that a psql\nimplementation was enough but obviously that was not my best idea ever.\nLet me know if there is any interest in having this pg_schema_size()\nserver side function that could take care of user permissions or be used\nby a super user only.\n\n\nBest regards,\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org\n\n\n",
"msg_date": "Mon, 25 Feb 2019 09:51:22 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
},
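[Editor's note] A server-side pg_schema_size() of the kind Gilles floats could, for instance, fold the permission check into the query itself. This is a hypothetical sketch, not a submitted patch; the function name and behavior are illustrative:

```sql
-- Hypothetical helper: counts only relations the caller may SELECT from,
-- so unprivileged users cannot size tables they cannot read.
CREATE FUNCTION pg_schema_size(schema_name name)
RETURNS bigint LANGUAGE sql STABLE AS $$
    SELECT coalesce(sum(pg_total_relation_size(c.oid)), 0)::bigint
    FROM pg_catalog.pg_class c
    JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = schema_name
      AND c.relkind IN ('r', 'm', 'S')
      AND has_table_privilege(c.oid, 'SELECT')
$$;
```

A superuser-only variant could instead drop the has_table_privilege() filter and gate execution with REVOKE/GRANT on the function.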
{
"msg_contents": "Le 22/02/2019 à 17:06, Robert Haas a écrit :\n> On Wed, Feb 20, 2019 at 5:26 PM Gilles Darold <gilles.darold@dalibo.com> wrote:\n>> The attached simple patch adds this feature. Is there any cons adding\n>> this information?\n> Well, it'll take time to compute, maybe a lot of time if the database\n> is big and the server is busy. Not sure how serious that problem can\n> get.\n>\nI agree, this king of report should be reserved to a super user\nvoluntary action and not as a default psql behavior. Patch removed.\n\n\nBest regards,\n\n-- \nGilles Darold\nConsultant PostgreSQL\nhttp://dalibo.com - http://dalibo.org\n\n\n",
"msg_date": "Mon, 25 Feb 2019 09:56:53 +0100",
"msg_from": "Gilles Darold <gilles.darold@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: [patch] Add schema total size to psql \\dn+"
}
] |
[
{
"msg_contents": "Hi Team.\n\nNew to contributing so hopefully this is the right place. I've searched the\nforum and it seems this is the place for feature requests/suggestions.\n\nI was reading on VACUUM and VACUUM FULL and saw that the current\nimplementation of VACUUM FULL writes to an entirely new file and then\nswitches to it, as opposed to rewriting the current file in-place. I assume\nthe reason for this is safety in the event the database shuts down in the\nmiddle of the vacuum, the most you will lose is the progress of the vacuum\nand have to start from scratch but otherwise the database will retain its\nintegrity and not become corrupted. This seems to be an effective but\nsomewhat rudimentary system that could be improved. Most notably, the\nrewriting of almost the entire database takes up basically double the\nstorage during the duration of the rewrite which can become expensive or\neven simply inconvenient in IaaS(and probably other) installations where\nthe drive sizes are scaled on demand. Even if the VACUUM FULL doesn't need\nto run often, having to reallocate drive space for an effective duplication\nis not ideal. My suggestion is a journal based in-place rewrite of the\ndatabase files.\n\nThis means that the VACUUM FULL will do a \"pre-processing\" pass over the\ndatabase and figure out at a fairly low level what operations need to be\ndone to compact the database back to it's \"correct\" size. These operations\nwill be written in their entirety to a journal which records all the\noperations about to be performed, with some mechanism for checking if they\nhave already been performed, using the same principle described here:\nhttps://en.wikipedia.org/wiki/Journaling_file_system. 
This will allow an\nin-place rewrite of the file in a safe manner such that you are able to\nrecover from an unexpected shutdown by resuming the VACUUM FULL from the\njournal, or by detecting where the copy hole is in the file and\nrecording/ignoring it.\n\nThe journal could be something as simple as a record of which byte ranges\nneed to be copied into which other byte-range locations. The journal\nshould record whenever a byte range copy completes for the sake of error\nrecovery. Obviously, each byte range will have a max size of the copy\ndistance from the source to the destination so that the destination will\nnot overwrite the source, which would make recovery impossible (how can you\nknow where in the copy you stopped?). However, this will have a snowball\neffect: the further you are in the rewrite, the further apart the source and\ndestination ranges will be, so you can copy bigger chunks at a time and\nwon't have to update the journal's completion flags as often. In the case\nof a shutdown during a copy, you merely read the journal, looking for the\nfirst copy that isn't completed yet, and continue rewriting from there.\nEven if some of the bytes have been copied already, there will be no\ncorruption as you haven't overwritten the source bytes at all. Finally, a\nsimple file truncate can take place once all the copies are complete, and\nthe journal can be deleted. This means the headroom required in the\nfilesystem would be much smaller, and would pay for itself in any copy of\nat least 17 bytes or more (assuming 2*8 byte pointers plus a bit as a\ncompletion flag). The only situation in which this system would consume\nmore space than a total copy is if the database has more than 50% garbage,\nand the garbage is perfectly spread out, i.e. isn't in large chunks that can\nbe copied at once and therefore recorded in the journal at once, and each\npiece of garbage is smaller than 17 bytes. 
Obviously, the journal itself\nwould need an error-checking mechanism to ensure the journal was correctly\nand completely written, but this can be as simple as a total file hash at\nthe end of the file.\n\nAn alternative to the completion flags is to compute a hash of the data to\nbe copied and store it in the journal, and then in recovery you can compare\nthe destination with the hash. This has the advantage of not needing to\nwrite to the journal to keep it up to date during the operations, but has\nthe disadvantages of having to compute many hashes while recovering and of\nstoring the hashes in the journal, taking up more space.\nIt's also arguably less safe as there is always the chance (albeit extremely\nunlikely) of a collision, which would mean that the data is not actually\nvalidated. I would argue the risk of this is lower than the risk of bit-rot\nflipping the completion bit, however.\n\nA journaling system like this *might* have performance benefits too,\nspecifically when running on less intelligent file systems like NTFS which\ncan become easily fragmented (causing potentially big performance issues on\nspinning disks). Rewriting the same file will never require a file-system\nde-fragment. The other advantage, as mentioned before, is in the case of\nauto-scaling drives if used as storage for the DB (as they often are in\nIaaS/PaaS services). Not having to scale up rapidly could be a performance\nboost in some cases.\n\nFinally, a journaling system like this will also lend itself to\nstopping/resuming in the middle of the VACUUM FULL. Once the journal is\ncreated and the rewrites have started, assuming the journal \"completion\"\nflag is kept up to date, you can stop the operation in the\nmiddle (presumably writing the current \"gap\" with null bytes or otherwise\nindicating to the DB that there's a gap in the middle that should be\nignored), and continue using the database as usual. 
This means you can do a\n\"soft\" recovery wherein the database is only halfway vacuumed but it's still\nperfectly operational and functional. You can also resume from this soft\nrecovery by simply continuing to write from the last copy that was\ncompleted. Obviously you will only regain disk space when you reach the end\nand truncate the file but you are at least able to pause/resume the\noperation, waiting only for the current copy block to finish instead of for\nthe entire VACUUM FULL to finish.\n\nI hope this was a coherent explanation of my suggestion. It's possible and\nmaybe even likely that there's a glaring misunderstanding or assumption on\nmy part that means this isn't practical, but I'd still love to get feedback\non it.\n\n\n*Thanks,\nRyan Sheasby*",
"msg_date": "Thu, 21 Feb 2019 01:16:46 +0200",
"msg_from": "Ryan David Sheasby <ryan27968@gmail.com>",
"msg_from_op": true,
"msg_subject": "Journal based VACUUM FULL"
},
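[Editor's note] The byte-range journal Ryan describes can be modeled in a few lines. This is a toy sketch, not PostgreSQL code: `build_journal`, `apply_journal`, and the in-memory bytearray are illustrative stand-ins for the on-disk structures the proposal implies.

```python
# Toy model of the proposed journal-based in-place compaction: each
# journal entry copies one live byte range toward the front of the
# "file", and a completion flag makes every copy idempotent so a crash
# is recovered by resuming at the first unfinished entry.

def build_journal(live_ranges):
    """Plan copies that pack the live (src, length) ranges from offset 0."""
    journal, dst = [], 0
    for src, length in live_ranges:
        journal.append({"src": src, "dst": dst, "len": length, "done": False})
        dst += length
    return journal, dst          # dst is the size to truncate to at the end

def apply_journal(buf, journal):
    """Run (or resume) the copies, skipping entries already marked done."""
    for entry in journal:
        if entry["done"]:
            continue
        s, d, n = entry["src"], entry["dst"], entry["len"]
        # The proposal caps n at (s - d) so a torn copy never destroys
        # its own source; Python's copying slices sidestep that detail.
        buf[d:d + n] = buf[s:s + n]
        entry["done"] = True     # durably flushed in the real design

# '.' bytes stand in for dead tuples to be vacuumed away.
data = bytearray(b"AA...BB....CC")
journal, new_size = build_journal([(0, 2), (5, 2), (11, 2)])
apply_journal(data, journal)     # re-running this is a no-op (recovery)
del data[new_size:]              # the final truncate reclaims the space
```

Because completed entries are skipped, re-running `apply_journal` after a simulated crash simply resumes at the first copy whose flag is unset, which is the recovery property the message argues for.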
{
"msg_contents": "On 2/21/19 12:16 AM, Ryan David Sheasby wrote:\n> I was reading on VACUUM and VACUUM FULL and saw that the current \n> implementation of VACUUM FULL writes to an entirely new file and then \n> switches to it, as opposed to rewriting the current file in-place. I \n> assume the reason for this is safety in the event the database shuts \n> down in the middle of the vacuum, the most you will lose is the progress \n> of the vacuum and have to start from scratch but otherwise the database \n> will retain its integrity and not become corrupted. This seems to be an \n> effective but somewhat rudimentary system that could be improved. Most \n> notably, the rewriting of almost the entire database takes up basically \n> double the storage during the duration of the rewrite which can become \n> expensive or even simply inconvenient in IaaS(and probably other) \n> installations where the drive sizes are scaled on demand. Even if the \n> VACUUM FULL doesn't need to run often, having to reallocate drive space \n> for an effective duplication is not ideal. My suggestion is a journal \n> based in-place rewrite of the database files.\n\nHi,\n\nVACUUM FULL used to modify the table in-place in PostgreSQL 8.4 and \nearlier but that solution was slow and did often cause plenty of index \nbloat while moving the rows around in the table. Which is why PostgreSQL \n9.0 switched it to rewiring the whole table and its indexes.\n\nI have not heard many requests for bringing back the old behavior, but \nI could easily have missed them. Either way I do not think there would \nbe much demand for an in-place VACUUM FULL unless the index bloat \nproblem is also solved.\n\nAdditionally I do not think that the project would want a whole new kind \nof infrastructure just to solve this very narrow case. 
PostgreSQL \nalready has its own journal (the write-ahead log) which is used to \nensure crash safety, and I think any proposed solution for this would \nneed to use the WAL.\n\nAndreas\n\n",
"msg_date": "Thu, 21 Feb 2019 16:27:06 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Journal based VACUUM FULL"
},
{
"msg_contents": "Thanks for getting back to me. I had a small discussion with @sfrost on the\nslack team and understand the issue better now. I must admit I didn't\nrealize that the scope of WAL extended to VACUUM operations which is why I\nsuggested a new journaling system. I realize now the issue is not safety(as\nthe WAL already sorts out that issue), but performance. I will rethink my\nsuggestion and let you know if I come up with a useful/performant way of\ndoing this.\n\n\n*ThanksRyan Sheasby*\n\n\nOn Thu, Feb 21, 2019 at 5:27 PM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> On 2/21/19 12:16 AM, Ryan David Sheasby wrote:\n> > I was reading on VACUUM and VACUUM FULL and saw that the current\n> > implementation of VACUUM FULL writes to an entirely new file and then\n> > switches to it, as opposed to rewriting the current file in-place. I\n> > assume the reason for this is safety in the event the database shuts\n> > down in the middle of the vacuum, the most you will lose is the progress\n> > of the vacuum and have to start from scratch but otherwise the database\n> > will retain its integrity and not become corrupted. This seems to be an\n> > effective but somewhat rudimentary system that could be improved. Most\n> > notably, the rewriting of almost the entire database takes up basically\n> > double the storage during the duration of the rewrite which can become\n> > expensive or even simply inconvenient in IaaS(and probably other)\n> > installations where the drive sizes are scaled on demand. Even if the\n> > VACUUM FULL doesn't need to run often, having to reallocate drive space\n> > for an effective duplication is not ideal. My suggestion is a journal\n> > based in-place rewrite of the database files.\n>\n> Hi,\n>\n> VACUUM FULL used to modify the table in-place in PostgreSQL 8.4 and\n> earlier but that solution was slow and did often cause plenty of index\n> bloat while moving the rows around in the table. 
Which is why PostgreSQL\n> 9.0 switched it to rewriting the whole table and its indexes.\n>\n> I have not heard many requests for bringing back the old behavior, but\n> I could easily have missed them. Either way I do not think there would\n> be much demand for an in-place VACUUM FULL unless the index bloat\n> problem is also solved.\n>\n> Additionally I do not think that the project would want a whole new kind\n> of infrastructure just to solve this very narrow case. PostgreSQL\n> already has its own journal (the write-ahead log) which is used to\n> ensure crash safety, and I think any proposed solution for this would\n> need to use the WAL.\n>\n> Andreas\n>",
"msg_date": "Thu, 21 Feb 2019 19:12:12 +0200",
"msg_from": "Ryan David Sheasby <ryan27968@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Journal based VACUUM FULL"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-21 16:27:06 +0100, Andreas Karlsson wrote:\n> I have not heard many requests for bringing back the old behavior, but I\n> could easily have missed them. Either way I do not think there would be much\n> demand for an in-place VACUUM FULL unless the index bloat problem is also\n> solved.\n\nYea, I don't think there's much either. What I think there's PLENTY need\nfor is something like pg_repack in core. And could argue that the\ntrigger based logging it does to catch up to changes made concurrently\nwith the rewrite, to the old table, is a form of journaling...\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Feb 2019 11:15:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Journal based VACUUM FULL"
},
{
"msg_contents": "\nOn 2/22/19 2:15 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-02-21 16:27:06 +0100, Andreas Karlsson wrote:\n>> I have not heard many requests for bringing back the old behavior, but I\n>> could easily have missed them. Either way I do not think there would be much\n>> demand for an in-place VACUUM FULL unless the index bloat problem is also\n>> solved.\n> Yea, I don't think there's much either. What I think there's PLENTY need\n> for is something like pg_repack in core. And could argue that the\n> trigger based logging it does to catch up to changes made concurrently\n> with the rewrite, to the old table, is a form of journaling...\n>\n\n+1. Maybe this is something that should be on the agenda of the next\ndevelopers' meeting.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 22 Feb 2019 14:56:09 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Journal based VACUUM FULL"
},
{
"msg_contents": "Greetings,\n\n* Andrew Dunstan (andrew.dunstan@2ndquadrant.com) wrote:\n> On 2/22/19 2:15 PM, Andres Freund wrote:\n> > On 2019-02-21 16:27:06 +0100, Andreas Karlsson wrote:\n> >> I have not heard many requests for bringing back the old behavior, but I\n> >> could easily have missed them. Either way I do not think there would be much\n> >> demand for an in-place VACUUM FULL unless the index bloat problem is also\n> >> solved.\n> > Yea, I don't think there's much either. What I think there's PLENTY need\n> > for is something like pg_repack in core. And could argue that the\n> > trigger based logging it does to catch up to changes made concurrently\n> > with the rewrite, to the old table, is a form of journaling...\n> \n> +1. Maybe this is something that should be on the agenda of the next\n> developers' meeting.\n\nSeems more appropriate to the developer unconference, though perhaps\nthat's what you meant..?\n\nThanks!\n\nStephen",
"msg_date": "Fri, 22 Feb 2019 18:21:21 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Journal based VACUUM FULL"
}
] |
[
{
"msg_contents": "The semantics of NOT IN (SELECT ...) are subtly different from the semantics\nof NOT EXISTS (SELECT ...). These differences center on how NULLs are\ntreated, and in general can result in statements that are harder to optimize\nand slower to execute than the apparently similar NOT EXISTS statement.\n\nA little over a year ago, Christian Antognini authored the blog \"/How Well a\nQuery Optimizer Handles Subqueries?/\" summarizing his findings about the\nperformance of PostgreSQL, MySQL, and Oracle on various subqueries:\n\n \nhttps://antognini.ch/2017/12/how-well-a-query-optimizer-handles-subqueries/\n\nHis position was that you can classify the optimizations as correct or\nincorrect, and based on that he provided the following comparison summary\n(see below). In short, PostgreSQL was the worst of the three systems:\n\n \"Summary\n\n The number of queries that the query optimizers handle correctly are\nthe following:\n\n Oracle Database 12.2: 72 out of 80\n MySQL 8.0.3: 67 out of 80\n PostgreSQL 10.0: 60 out of 80\n\n Since not all queries are handled correctly, for best performance it is\nsometimes necessary to rewrite them.\"\n\nThe subqueries that were found to be optimized \"incorrectly\" were almost\nentirely due to poor or absent NOT IN subquery optimization.\n\nThe PostgreSQL community has been aware of the deficiencies in NOT IN\noptimization for quite some time. Based on an analysis of\npsgsql-performance posts between 2013 and 2015, Robert Haas identified NOT\nIN optimization as one of the common root causes of performance problems.\n\nWe have been working on improved optimization of NOT IN, and we would like\nto share this optimizaton with the community. With respect to the test\ncases mentioned in the blog post mentioned above, it will elevate PostgreSQL\nfrom \"worst\" to \"first\". Generally the performance gains are large when the\noptimization applies, though we have found one test case where performance\nis worse. 
We are investigating this now to see if we can disable the\noptimization in that case.\n\nWe would like to include a patch for this change in the current commitfest. \nThis thread can be used to track comments about this optimization.\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Feb 2019 16:44:49 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": true,
"msg_subject": "NOT IN subquery optimization"
},
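The NULL-handling gap between NOT IN and NOT EXISTS described in the message above can be demonstrated with a short, self-contained sketch. SQLite is used here purely as a stand-in (an assumption on our part; the thread concerns PostgreSQL, but the three-valued NULL logic for these predicates is SQL-standard and the two engines agree on it), and the table names are made up for the illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE outer_t (x INT);
    CREATE TABLE inner_t (y INT);
    INSERT INTO outer_t VALUES (1), (2), (NULL);
    INSERT INTO inner_t VALUES (2), (NULL);
""")

not_in = con.execute(
    "SELECT x FROM outer_t WHERE x NOT IN (SELECT y FROM inner_t)"
).fetchall()
not_exists = con.execute(
    "SELECT x FROM outer_t o WHERE NOT EXISTS "
    "(SELECT 1 FROM inner_t i WHERE i.y = o.x)"
).fetchall()

# A NULL in the subquery output makes every NOT IN comparison UNKNOWN,
# so NOT IN returns no rows at all, while NOT EXISTS still returns the
# outer rows that have no matching partner.
print(not_in)      # []
print(not_exists)  # [(1,), (None,)]
```

This is exactly why the apparently equivalent rewrite is invalid in general, and why the optimization discussed in this thread has to prove non-nullability first.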
{
"msg_contents": "Jim Finnerty <jfinnert@amazon.com> writes:\n> We have been working on improved optimization of NOT IN, and we would like\n> to share this optimizaton with the community.\n\nThe idea that's been kicked around in the past is to detect whether the\nsubselect's output column(s) can be proved NOT NULL, and if so, convert\nto an antijoin just like NOT EXISTS. Is that what you're doing, or\nsomething different?\n\n> We would like to include a patch for this change in the current commitfest. \n\nGenerally, people just send patches, they don't ask for permission first\n;-)\n\nHaving said that, we have a general policy that we don't like complex\npatches that first show up for the last commitfest of a dev cycle.\nSo unless this is a pretty small patch, it's probably going to get\ndelayed to v13. Still, we'd like to have it in the queue, so submit\naway ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 19:40:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "re: The idea that's been kicked around in the past is to detect whether the\nsubselect's output column(s) can be proved NOT NULL, and if so, convert\nto an antijoin just like NOT EXISTS\n\nbasically, yes. this will handle nullability of both the outer and inner\ncorrelated expression(s), multiple expressions, presence or absence of\npredicates in the WHERE clause, and whether the correlated expressions are\non the null-padded side of an outer join. If it is judged to be more\nefficient, then it transforms the NOT IN sublink into an anti-join.\n\nsome complications enter into the decision to transform NOT IN to anti-join\nbased on whether a bitmap plan will/not be used, or whether it will/not be\neligible for PQ.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Feb 2019 18:53:25 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Jim Finnerty <jfinnert@amazon.com> writes:\n> re: The idea that's been kicked around in the past is to detect whether the\n> subselect's output column(s) can be proved NOT NULL, and if so, convert\n> to an antijoin just like NOT EXISTS\n\n> basically, yes. this will handle nullability of both the outer and inner\n> correlated expression(s), multiple expressions, presence or absence of\n> predicates in the WHERE clause, and whether the correlated expressions are\n> on the null-padded side of an outer join. If it is judged to be more\n> efficient, then it transforms the NOT IN sublink into an anti-join.\n\nHmm, that seems overcomplicated ...\n\n> some complications enter into the decision to transform NOT IN to anti-join\n> based on whether a bitmap plan will/not be used, or whether it will/not be\n> eligible for PQ.\n\n... and that even more so, considering that this decision really needs\nto be taken long before cost estimates would be available.\n\nAs far as I can see, there should be no situation where we'd not want\nto transform to antijoin if we can prove it's semantically valid to\ndo so. If there are cases where that comes out as a worse plan,\nthat indicates a costing error that would be something to address\nseparately (because it'd also be a problem for other antijoin cases).\nAlso, as long as it nearly always wins, I'm not going to cry too hard\nif there are corner cases where it makes the wrong choice. That's not\nsomething that's possible to avoid completely.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Feb 2019 21:11:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "We can always correctly transform a NOT IN to a correlated NOT EXISTS. In\nalmost all cases it is more efficient to do so. In the one case that we've\nfound that is slower it does come down to a more general costing issue, so\nthat's probably the right way to think about it.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Feb 2019 20:27:47 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Thu, 21 Feb 2019 at 16:27, Jim Finnerty <jfinnert@amazon.com> wrote:\n> We can always correctly transform a NOT IN to a correlated NOT EXISTS. In\n> almost all cases it is more efficient to do so. In the one case that we've\n> found that is slower it does come down to a more general costing issue, so\n> that's probably the right way to think about it.\n\nI worked on this over 4 years ago [1]. I think the patch there is not\ncompletely broken and seems just to need a few things fixed. I rebased\nit on top of current master and looked at it. I think the main\nremaining issue is fixing the code that ensures the outer side join\nquals can't be NULL. The code that's there looks broken still since\nit attempts to use quals from any inner joined rel for proofs that\nNULLs will be removed. That might not work so well in a case like:\nSELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a AND t2.b NOT IN(select b\nfrom t3), however, I'd need to think harder about that since if there\nwas such a qual then the planner should convert the left join into an\ninner join. But anyway, the function expressions_are_not_nullable()\nwas more intended to work with targetlists to ensure exprs there can't\nbe NULL. I just had done a poor job of trying to modify that into\nallowing it to take exprs from any random place, likely that should be\na new function and expressions_are_not_nullable() should be put back\nto what Tom ended up with.\n\nI've attached the rebased and still broken version.\n\n[1] https://www.postgresql.org/message-id/CAApHDvqRB-iFBy68%3DdCgqS46aRep7AuN2pou4KTwL8kX9YOcTQ%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 22 Feb 2019 09:44:14 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Fri, 22 Feb 2019 at 09:44, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've attached the rebased and still broken version.\n\nI set about trying to make a less broken version of this.\n\nA quick reminder of the semantics of NOT IN:\n\n1. WHERE <nullable_column> NOT IN(SELECT <not null column> FROM table);\n\nIf table is non-empty:\nwill filter out rows where <nullable_column> is NULL\nand only show values that are not in <not null column>\n\nIf table is empty:\nFilters nothing.\n\n2. WHERE <nonnullable_column> NOT IN(SELECT <null column> FROM table);\n\nIf table contains NULLs in the <null column> no records will match.\n\nThe previous patch handled #2 correctly but neglected to do anything\nabout #1. For #1 the only way we can implement this as a planner only\nchange is to insist that the outer side expressions also are not null.\nIf we could have somehow tested if \"table\" was non-empty then we could\nhave added a IS NOT NULL clause to the outer query and converted to an\nanti-join, but ... can't know that during planning and can't add the\nIS NOT NULL regardless as, if \"table\" is empty we will filter NULLs\nwhen we shouldn't.\n\nIn the attached, I set about fixing #1 by determining if the outer\nexpressions could be NULL by checking\n\n1. If expression is a Var from an inner joined relation it can't be\nNULL if there's a NOT NULL constraint on the column; or\n2. If expression is a Var from an inner joined relation and there is a\nstrict WHERE/ON clause, the expression can't be NULL; or\n3. 
If expression is a Var from an outer joined relation check for\nquals that were specified in the same syntactical level as the NOT IN\nfor proofs that NULL will be filtered.\n\nAn example of #3 is:\n\nSELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a WHERE t2.a IS NOT NULL\nAND t2.a NOT IN(SELECT a FROM t3); -- t2 becomes INNER JOINed later in\nplanning, but...\nor;\nSELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a AND t2.a NOT IN(SELECT a FROM t3);\n\nIn the latter of the two, the t1.a = t2.a join conditions ensures that\nNULLs can't exist where the NOT IN is evaluated.\n\nI implemented #3 by passing the quals down to\npull_up_sublinks_qual_recurse(). At the top level call 'node' and\n'notnull_proofs' are the same, but that changes in recursive calls\nlike the one we make inside the is_andclause() condition.\n\nComments welcome.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 26 Feb 2019 01:31:53 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
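The two NOT IN cases enumerated in the message above, including the empty-inner-table behaviour that blocks a planner-only IS NOT NULL rewrite, can be reproduced with a minimal sketch (again using SQLite as a stand-in for the SQL-standard semantics; `t_outer`/`t_inner` are hypothetical names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t_outer (a INT);           -- nullable outer column
    CREATE TABLE t_inner (a INT NOT NULL);  -- inner column cannot be NULL
    INSERT INTO t_outer VALUES (NULL), (1), (2);
""")

q = "SELECT a FROM t_outer WHERE a NOT IN (SELECT a FROM t_inner)"

# Inner table empty: NOT IN filters nothing, the NULL row survives.
empty_case = con.execute(q).fetchall()
print(empty_case)      # [(None,), (1,), (2,)]

# Inner table non-empty: the NULL row is filtered out, and only values
# absent from the inner column remain.
con.execute("INSERT INTO t_inner VALUES (2)")
nonempty_case = con.execute(q).fetchall()
print(nonempty_case)   # [(1,)]
```

Because the same query returns the NULL row for an empty inner table but drops it for a non-empty one, no IS NOT NULL qual added at plan time can be correct for both.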
{
"msg_contents": "I'm attaching a working patch following the discussion.\r\n\r\nYou can also find the following patch description is the commit message:\r\nNOT IN to ANTI JOIN transformation\r\n\r\n The semantics of ANTI JOIN were created to match the semantics of NOT\r\n EXISTS, which enables NOT EXISTS subqueries to be efficiently executed\r\n as a kind of join. NOT IN subqueries have different NULL semantics than\r\n NOT EXISTS, but since there is no special join operator for NOT IN it is\r\n generally executed as a nested sub-plan. It is possible, however, to\r\n transform NOT IN to a correlated NOT EXISTS so that it can be executed\r\n it as an ANTI JOIN with additional correlated predicates.\r\n\r\n A general transformation from NOT IN to NOT EXISTS for the\r\n single-expression case (the multi-expression case is just ANDs of the\r\n single-expressions) is:\r\n t1.x NOT IN (SELECT t2.y from t2 where p) <=> NOT EXISTS (select 1\r\n from t2 where (y=x or y is NULL or x is NULL) and p).\r\n\r\n If x or y is non-nullable, we can safely remove the predicate \"x\r\n is NULL\" or \"y is NULL\", and if there is no predicate p,\r\n then \"x is NULL\" may be factored out of the subquery.\r\n Experiments show that if we can remove one or the other ORed\r\n predicates, or if we can factor out the \"x is NULL\", then\r\n execution is typically much faster.\r\n\r\n Basically, for the single expression case (we also handle the multi\r\n expression case), we try to do the following transformation:\r\n When p does not exist:\r\n t1.x not in (t2.y) => ANTI JOIN t1(Filter: x is not null), t2 on\r\n join condition: t1.x=t2.y or t2.y is null.\r\n When x is non-nullable:\r\n t1.x not in (t2.y where p) => ANTI JOIN t1, t2 on join condition:\r\n (t1.x=t2.y or t2.y is null) and p.\r\n\r\n We implemented a nullability test routine is_node_nonnullable().\r\n Currently it handles Var, TargetEntry, CoalesceExpr and Const. 
Outer\r\n joins are taken into consideration in the nullability test.\r\n\r\n We adjust and apply reduce_outer_joins() before the transformation so \r\n that the outer joins have an opportunity to be converted to inner joins\r\n prior to the transformation.\r\n\r\n Using this transformation, we measured performance improvements of\r\n two to three orders of magnitude on most queries in a development\r\n environment. In our performance experiments, table s (small) has 11\r\n rows, table l (large) has 1 million rows. s.n and l.n have NULL\r\n values. s.nn and l.nn are NOT NULL.\r\n\r\n s.n not in (l.n) 1150 ms -> 0.49 ms\r\n s.nn not in (l.nn) 1120 ms -> 0.45 ms\r\n l.n not in (l.n) over 20 min -> 1700 ms\r\n l.nn not in (l.nn) over 20 min -> 220 ms\r\n l.n not in (s.n) 63 ms -> 750 ms\r\n l.nn not in (s.nn) 58 ms -> 46 ms\r\n\r\n For the only case that performance drops - l.n not in (s.n). It is\r\n likely to be resolved by ending the nested loop anti join early as\r\n soon as we find NULL inner tuple entry/entries that satisfies the\r\n join condition during execution. This is still under investigation.\r\n\r\nComments are welcome.\r\n\r\nWith Regards,\r\n---\r\nZheng Li, AWS, Amazon Aurora PostgreSQL\r\n\r\nOn 2/25/19, 7:32 AM, \"David Rowley\" <david.rowley@2ndquadrant.com> wrote:\r\n\r\n On Fri, 22 Feb 2019 at 09:44, David Rowley <david.rowley@2ndquadrant.com> wrote:\r\n > I've attached the rebased and still broken version.\r\n \r\n I set about trying to make a less broken version of this.\r\n \r\n A quick reminder of the semantics of NOT IN:\r\n \r\n 1. WHERE <nullable_column> NOT IN(SELECT <not null column> FROM table);\r\n \r\n If table is non-empty:\r\n will filter out rows where <nullable_column> is NULL\r\n and only show values that are not in <not null column>\r\n \r\n If table is empty:\r\n Filters nothing.\r\n \r\n 2. 
WHERE <nonnullable_column> NOT IN(SELECT <null column> FROM table);\r\n \r\n If table contains NULLs in the <null column> no records will match.\r\n \r\n The previous patch handled #2 correctly but neglected to do anything\r\n about #1. For #1 the only way we can implement this as a planner only\r\n change is to insist that the outer side expressions also are not null.\r\n If we could have somehow tested if \"table\" was non-empty then we could\r\n have added a IS NOT NULL clause to the outer query and converted to an\r\n anti-join, but ... can't know that during planning and can't add the\r\n IS NOT NULL regardless as, if \"table\" is empty we will filter NULLs\r\n when we shouldn't.\r\n \r\n In the attached, I set about fixing #1 by determining if the outer\r\n expressions could be NULL by checking\r\n \r\n 1. If expression is a Var from an inner joined relation it can't be\r\n NULL if there's a NOT NULL constraint on the column; or\r\n 2. If expression is a Var from an inner joined relation and there is a\r\n strict WHERE/ON clause, the expression can't be NULL; or\r\n 3. If expression is a Var from an outer joined relation check for\r\n quals that were specified in the same syntactical level as the NOT IN\r\n for proofs that NULL will be filtered.\r\n \r\n An example of #3 is:\r\n \r\n SELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a WHERE t2.a IS NOT NULL\r\n AND t2.a NOT IN(SELECT a FROM t3); -- t2 becomes INNER JOINed later in\r\n planning, but...\r\n or;\r\n SELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a AND t2.a NOT IN(SELECT a FROM t3);\r\n \r\n In the latter of the two, the t1.a = t2.a join conditions ensures that\r\n NULLs can't exist where the NOT IN is evaluated.\r\n \r\n I implemented #3 by passing the quals down to\r\n pull_up_sublinks_qual_recurse(). 
At the top level call 'node' and\r\n 'notnull_proofs' are the same, but that changes in recursive calls\r\n like the one we make inside the is_andclause() condition.\r\n \r\n Comments welcome.\r\n \r\n -- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 25 Feb 2019 17:38:22 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
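The general transformation quoted in the patch description above — t1.x NOT IN (SELECT t2.y FROM t2) <=> NOT EXISTS (SELECT 1 FROM t2 WHERE y = x OR y IS NULL OR x IS NULL) — can be sanity-checked by brute force over NULL/value mixes. A sketch, assuming SQLite's SQL-standard NULL semantics as a stand-in for PostgreSQL:

```python
import sqlite3
from itertools import product

con = sqlite3.connect(":memory:")
con.executescript("CREATE TABLE t1 (x INT); CREATE TABLE t2 (y INT);")

# Try every combination of outer/inner contents: empty inner table,
# matching and non-matching values, and NULLs on either side.
for outer, inner in product(
        [(1,), (1, None), (None,)],
        [(), (2,), (1, 2), (2, None), (None,)]):
    con.execute("DELETE FROM t1")
    con.execute("DELETE FROM t2")
    con.executemany("INSERT INTO t1 VALUES (?)", [(v,) for v in outer])
    con.executemany("INSERT INTO t2 VALUES (?)", [(v,) for v in inner])
    not_in = con.execute(
        "SELECT x FROM t1 WHERE x NOT IN (SELECT y FROM t2)").fetchall()
    rewritten = con.execute(
        "SELECT x FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2"
        " WHERE y = x OR y IS NULL OR x IS NULL)").fetchall()
    # The ORed IS NULL arms reproduce NOT IN's UNKNOWN-as-false filtering.
    assert not_in == rewritten, (outer, inner, not_in, rewritten)

print("rewrite matched NOT IN on all cases")
```

The patch's refinements then drop the `x IS NULL` or `y IS NULL` arms whenever non-nullability can be proven, which is where the performance gains come from.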
{
"msg_contents": "Resend the patch with a whitespace removed so that \"git apply patch\" works directly.\r\n\r\n---\r\nZheng Li, AWS, Amazon Aurora PostgreSQL\r\n\r\nOn 2/25/19, 12:39 PM, \"Li, Zheng\" <zhelli@amazon.com> wrote:\r\n\r\n I'm attaching a working patch following the discussion.\r\n \r\n You can also find the following patch description is the commit message:\r\n NOT IN to ANTI JOIN transformation\r\n \r\n The semantics of ANTI JOIN were created to match the semantics of NOT\r\n EXISTS, which enables NOT EXISTS subqueries to be efficiently executed\r\n as a kind of join. NOT IN subqueries have different NULL semantics than\r\n NOT EXISTS, but since there is no special join operator for NOT IN it is\r\n generally executed as a nested sub-plan. It is possible, however, to\r\n transform NOT IN to a correlated NOT EXISTS so that it can be executed\r\n it as an ANTI JOIN with additional correlated predicates.\r\n \r\n A general transformation from NOT IN to NOT EXISTS for the\r\n single-expression case (the multi-expression case is just ANDs of the\r\n single-expressions) is:\r\n t1.x NOT IN (SELECT t2.y from t2 where p) <=> NOT EXISTS (select 1\r\n from t2 where (y=x or y is NULL or x is NULL) and p).\r\n \r\n If x or y is non-nullable, we can safely remove the predicate \"x\r\n is NULL\" or \"y is NULL\", and if there is no predicate p,\r\n then \"x is NULL\" may be factored out of the subquery.\r\n Experiments show that if we can remove one or the other ORed\r\n predicates, or if we can factor out the \"x is NULL\", then\r\n execution is typically much faster.\r\n \r\n Basically, for the single expression case (we also handle the multi\r\n expression case), we try to do the following transformation:\r\n When p does not exist:\r\n t1.x not in (t2.y) => ANTI JOIN t1(Filter: x is not null), t2 on\r\n join condition: t1.x=t2.y or t2.y is null.\r\n When x is non-nullable:\r\n t1.x not in (t2.y where p) => ANTI JOIN t1, t2 on join condition:\r\n (t1.x=t2.y or 
t2.y is null) and p.\r\n \r\n We implemented a nullability test routine is_node_nonnullable().\r\n Currently it handles Var, TargetEntry, CoalesceExpr and Const. Outer\r\n joins are taken into consideration in the nullability test.\r\n \r\n We adjust and apply reduce_outer_joins() before the transformation so \r\n that the outer joins have an opportunity to be converted to inner joins\r\n prior to the transformation.\r\n \r\n Using this transformation, we measured performance improvements of\r\n two to three orders of magnitude on most queries in a development\r\n environment. In our performance experiments, table s (small) has 11\r\n rows, table l (large) has 1 million rows. s.n and l.n have NULL\r\n values. s.nn and l.nn are NOT NULL.\r\n \r\n s.n not in (l.n) 1150 ms -> 0.49 ms\r\n s.nn not in (l.nn) 1120 ms -> 0.45 ms\r\n l.n not in (l.n) over 20 min -> 1700 ms\r\n l.nn not in (l.nn) over 20 min -> 220 ms\r\n l.n not in (s.n) 63 ms -> 750 ms\r\n l.nn not in (s.nn) 58 ms -> 46 ms\r\n \r\n For the only case that performance drops - l.n not in (s.n). It is\r\n likely to be resolved by ending the nested loop anti join early as\r\n soon as we find NULL inner tuple entry/entries that satisfies the\r\n join condition during execution. This is still under investigation.\r\n \r\n Comments are welcome.\r\n \r\n With Regards,\r\n ---\r\n Zheng Li, AWS, Amazon Aurora PostgreSQL\r\n \r\n On 2/25/19, 7:32 AM, \"David Rowley\" <david.rowley@2ndquadrant.com> wrote:\r\n \r\n On Fri, 22 Feb 2019 at 09:44, David Rowley <david.rowley@2ndquadrant.com> wrote:\r\n > I've attached the rebased and still broken version.\r\n \r\n I set about trying to make a less broken version of this.\r\n \r\n A quick reminder of the semantics of NOT IN:\r\n \r\n 1. 
WHERE <nullable_column> NOT IN(SELECT <not null column> FROM table);\r\n \r\n If table is non-empty:\r\n will filter out rows where <nullable_column> is NULL\r\n and only show values that are not in <not null column>\r\n \r\n If table is empty:\r\n Filters nothing.\r\n \r\n 2. WHERE <nonnullable_column> NOT IN(SELECT <null column> FROM table);\r\n \r\n If table contains NULLs in the <null column> no records will match.\r\n \r\n The previous patch handled #2 correctly but neglected to do anything\r\n about #1. For #1 the only way we can implement this as a planner only\r\n change is to insist that the outer side expressions also are not null.\r\n If we could have somehow tested if \"table\" was non-empty then we could\r\n have added a IS NOT NULL clause to the outer query and converted to an\r\n anti-join, but ... can't know that during planning and can't add the\r\n IS NOT NULL regardless as, if \"table\" is empty we will filter NULLs\r\n when we shouldn't.\r\n \r\n In the attached, I set about fixing #1 by determining if the outer\r\n expressions could be NULL by checking\r\n \r\n 1. If expression is a Var from an inner joined relation it can't be\r\n NULL if there's a NOT NULL constraint on the column; or\r\n 2. If expression is a Var from an inner joined relation and there is a\r\n strict WHERE/ON clause, the expression can't be NULL; or\r\n 3. 
If expression is a Var from an outer joined relation check for\r\n quals that were specified in the same syntactical level as the NOT IN\r\n for proofs that NULL will be filtered.\r\n \r\n An example of #3 is:\r\n \r\n SELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a WHERE t2.a IS NOT NULL\r\n AND t2.a NOT IN(SELECT a FROM t3); -- t2 becomes INNER JOINed later in\r\n planning, but...\r\n or;\r\n SELECT * FROM t1 LEFT JOIN t2 on t1.a = t2.a AND t2.a NOT IN(SELECT a FROM t3);\r\n \r\n In the latter of the two, the t1.a = t2.a join conditions ensures that\r\n NULLs can't exist where the NOT IN is evaluated.\r\n \r\n I implemented #3 by passing the quals down to\r\n pull_up_sublinks_qual_recurse(). At the top level call 'node' and\r\n 'notnull_proofs' are the same, but that changes in recursive calls\r\n like the one we make inside the is_andclause() condition.\r\n \r\n Comments welcome.\r\n \r\n -- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 25 Feb 2019 22:51:30 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Tue, 26 Feb 2019 at 11:51, Li, Zheng <zhelli@amazon.com> wrote:\n> Resend the patch with a whitespace removed so that \"git apply patch\" works directly.\n\nI had a quick look at this and it seems to be broken for the empty\ntable case I mentioned up thread.\n\nQuick example:\n\nSetup:\n\ncreate table t1 (a int);\ncreate table t2 (a int not null);\ninsert into t1 values(NULL),(1),(2);\n\nselect * from t1 where a not in(select a from t2);\n\nPatched:\n a\n---\n 1\n 2\n(2 rows)\n\nMaster:\n a\n---\n\n 1\n 2\n(3 rows)\n\nThis will be due to the fact you're adding an a IS NOT NULL qual to\nthe scan of a:\n\npostgres=# explain select * from t1 where a not in(select a from t2);\n QUERY PLAN\n------------------------------------------------------------------\n Hash Anti Join (cost=67.38..152.18 rows=1268 width=4)\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t1 (cost=0.00..35.50 rows=2537 width=4)\n Filter: (a IS NOT NULL)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on t2 (cost=0.00..35.50 rows=2550 width=4)\n(6 rows)\n\nbut as I mentioned, you can't do that as t2 might be empty and there's\nno way to know that during planning.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 26 Feb 2019 12:19:57 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Greenplum Database does this optimization. The idea is to use a new join\ntype, let's call it JOIN_LASJ_NOTIN, and its semantic regarding NULL is\ndefined as below:\n\n1. If there is a NULL in the outer side, and the inner side is empty, the\n NULL should be part of the outputs.\n\n2. If there is a NULL in the outer side, and the inner side is not empty,\n the NULL should not be part of the outputs.\n\n3. If there is a NULL in the inner side, no outputs should be produced.\n\nAn example plan looks like:\n\ngpadmin=# explain (costs off) select * from t1 where a not in(select a\nfrom t2);\n QUERY PLAN\n-----------------------------------\n Hash Left Anti Semi (Not-In) Join\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n(5 rows)\n\nThanks\nRichard\n\nOn Tue, Feb 26, 2019 at 7:20 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Tue, 26 Feb 2019 at 11:51, Li, Zheng <zhelli@amazon.com> wrote:\n> > Resend the patch with a whitespace removed so that \"git apply patch\"\n> works directly.\n>\n> I had a quick look at this and it seems to be broken for the empty\n> table case I mentioned up thread.\n>\n> Quick example:\n>\n> Setup:\n>\n> create table t1 (a int);\n> create table t2 (a int not null);\n> insert into t1 values(NULL),(1),(2);\n>\n> select * from t1 where a not in(select a from t2);\n>\n> Patched:\n> a\n> ---\n> 1\n> 2\n> (2 rows)\n>\n> Master:\n> a\n> ---\n>\n> 1\n> 2\n> (3 rows)\n>\n> This will be due to the fact you're adding an a IS NOT NULL qual to\n> the scan of a:\n>\n> postgres=# explain select * from t1 where a not in(select a from t2);\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Hash Anti Join (cost=67.38..152.18 rows=1268 width=4)\n> Hash Cond: (t1.a = t2.a)\n> -> Seq Scan on t1 (cost=0.00..35.50 rows=2537 width=4)\n> Filter: (a IS NOT NULL)\n> -> Hash (cost=35.50..35.50 rows=2550 width=4)\n> -> Seq Scan on t2 (cost=0.00..35.50 rows=2550 width=4)\n> 
(6 rows)\n>\n> but as I mentioned, you can't do that as t2 might be empty and there's\n> no way to know that during planning.\n>\n> --\n> David Rowley\n> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.2ndQuadrant.com_&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=5r3cnfZPUDOHrMiXq8Mq2g&m=dE1nglE17x3nD-oH_BrF0r4SLaFnQKzwwJBJGpDoaaA&s=dshupMomMvkDAd92918cU21AJ1E1s7QwbrxIGSRxZA8&e=\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n>\n\nGreenplum Database does this optimization. The idea is to use a new jointype, let's call it JOIN_LASJ_NOTIN, and its semantic regarding NULL isdefined as below:1. If there is a NULL in the outer side, and the inner side is empty, the NULL should be part of the outputs.2. If there is a NULL in the outer side, and the inner side is not empty, the NULL should not be part of the outputs.3. If there is a NULL in the inner side, no outputs should be produced.An example plan looks like:gpadmin=# explain (costs off) select * from t1 where a not in(select a from t2); QUERY PLAN----------------------------------- Hash Left Anti Semi (Not-In) Join Hash Cond: (t1.a = t2.a) -> Seq Scan on t1 -> Hash -> Seq Scan on t2(5 rows)ThanksRichardOn Tue, Feb 26, 2019 at 7:20 AM David Rowley <david.rowley@2ndquadrant.com> wrote:On Tue, 26 Feb 2019 at 11:51, Li, Zheng <zhelli@amazon.com> wrote:\n> Resend the patch with a whitespace removed so that \"git apply patch\" works directly.\n\nI had a quick look at this and it seems to be broken for the empty\ntable case I mentioned up thread.\n\nQuick example:\n\nSetup:\n\ncreate table t1 (a int);\ncreate table t2 (a int not null);\ninsert into t1 values(NULL),(1),(2);\n\nselect * from t1 where a not in(select a from t2);\n\nPatched:\n a\n---\n 1\n 2\n(2 rows)\n\nMaster:\n a\n---\n\n 1\n 2\n(3 rows)\n\nThis will be due to the fact you're adding an a IS NOT NULL qual to\nthe scan of a:\n\npostgres=# explain select * from t1 where a not in(select a from t2);\n QUERY 
PLAN\n------------------------------------------------------------------\n Hash Anti Join (cost=67.38..152.18 rows=1268 width=4)\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t1 (cost=0.00..35.50 rows=2537 width=4)\n Filter: (a IS NOT NULL)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on t2 (cost=0.00..35.50 rows=2550 width=4)\n(6 rows)\n\nbut as I mentioned, you can't do that as t2 might be empty and there's\nno way to know that during planning.\n\n-- \n David Rowley https://urldefense.proofpoint.com/v2/url?u=http-3A__www.2ndQuadrant.com_&d=DwIBaQ&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=5r3cnfZPUDOHrMiXq8Mq2g&m=dE1nglE17x3nD-oH_BrF0r4SLaFnQKzwwJBJGpDoaaA&s=dshupMomMvkDAd92918cU21AJ1E1s7QwbrxIGSRxZA8&e=\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 26 Feb 2019 18:24:36 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "The problem is that the special optimization that was proposed for the case\nwhere the subquery has no WHERE clause isn't valid when the subquery returns\nno rows. That additional optimization needs to be removed, and preferably\nreplaced with some sort of efficient run-time test.\n\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 26 Feb 2019 07:07:42 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 27 Feb 2019 at 03:07, Jim Finnerty <jfinnert@amazon.com> wrote:\n>\n> The problem is that the special optimization that was proposed for the case\n> where the subquery has no WHERE clause isn't valid when the subquery returns\n> no rows. That additional optimization needs to be removed, and preferably\n> replaced with some sort of efficient run-time test.\n\nThat is one option, however, the join type that Richard mentions, to\nsatisfy point #3, surely only can work for Hash joins and perhaps\nMerge joins that required a sort, assuming there's some way for the\nsort to communicate about if it found NULLs or not. Either way, we\nneed to have looked at the entire inner side to ensure there are no\nnulls. Probably it would be possible to somehow team that up with a\nplanner check to see if the inner exprs could be NULL then just\nimplement points #1 and #2 for other join methods.\n\nIf you're proposing to do that for this thread then I can take my\nplanner only patch somewhere else. I only posted my patch as I pretty\nmuch already had what I thought you were originally talking about.\nHowever, please be aware there are current patents around adding\nexecution time smarts in this area, so it's probably unlikely you'll\nfind a way to do this in the executor that does not infringe on those.\nProbably no committer would want to touch it. I think my patch covers\na good number of use cases and as far as I understand, does not go\nnear any current patents.\n\nPlease let me know if I should move my patch to another thread. I\ndon't want to hi-jack this if you're going in another direction.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 09:51:41 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "I agree we will need some runtime smarts (such as a new anti join type as pointed out by Richard) to \"ultimately\" account for all the cases of NOT IN queries.\r\n\r\nHowever, given that the March CommitFest is imminent and the runtime smarts patent concerns David had pointed out (which I was not aware of before), we would not move that direction at the moment.\r\n\r\nI propose that we collaborate to build one patch from the two patches submitted in this thread for the CF. The two patches are for the same purpose and similar. However, they differ in the following ways as far as I can tell:\r\n\r\nNullability Test:\r\n-David's patch uses strict predicates for nullability test.\r\n-Our patch doesn't use strict predicates, but it accounts for COALESCE and null-padded rows from outer join. In addition, we made reduce_outer_joins() work before the transformation which makes the nullability test more accurate.\r\n\r\nAnti Join Transformation:\r\n-Dvaid's patch does the transformation when both inner and outer outputs are non-nullable.\r\n-With the latest fix (for the empty table case), our patch does the transformation as long as the outer is non-nullable regardless of the inner nullability, experiments show that the results are always faster.\r\n\r\nDavid, please let me know what you think. If you would like to collaborate, I'll start merging with your code on using strict predicates to make a better Nullability Test.\r\n\r\nThanks,\r\nZheng\r\n\r\n",
"msg_date": "Wed, 27 Feb 2019 00:05:41 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "\"Li, Zheng\" <zhelli@amazon.com> writes:\n> However, given that the March CommitFest is imminent and the runtime smarts patent concerns David had pointed out (which I was not aware of before), we would not move that direction at the moment.\n\n> I propose that we collaborate to build one patch from the two patches submitted in this thread for the CF.\n\nTBH, I think it's very unlikely that any patch for this will be seriously\nconsidered for commit in v12. It would be against project policy, and\nspending a lot of time reviewing the patch would be quite unfair to other\npatches that have been in the queue longer. Therefore, I'd suggest that\nyou not bend things out of shape just to have some patch to submit before\nMarch 1. It'd be better to work with the goal of having a coherent patch\nready for the first v13 CF, probably July-ish.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Feb 2019 19:13:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 27 Feb 2019 at 13:05, Li, Zheng <zhelli@amazon.com> wrote:\n> -With the latest fix (for the empty table case), our patch does the transformation as long as the outer is non-nullable regardless of the inner nullability, experiments show that the results are always faster.\n\nHi Zheng,\n\nI'm interested to know how this works without testing for inner\nnullability. If any of the inner side's join exprs are NULL then no\nrecords can match. What do you propose to work around that?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 13:24:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 27 Feb 2019 at 13:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Li, Zheng\" <zhelli@amazon.com> writes:\n> > However, given that the March CommitFest is imminent and the runtime smarts patent concerns David had pointed out (which I was not aware of before), we would not move that direction at the moment.\n>\n> > I propose that we collaborate to build one patch from the two patches submitted in this thread for the CF.\n>\n> TBH, I think it's very unlikely that any patch for this will be seriously\n> considered for commit in v12. It would be against project policy, and\n> spending a lot of time reviewing the patch would be quite unfair to other\n> patches that have been in the queue longer. Therefore, I'd suggest that\n> you not bend things out of shape just to have some patch to submit before\n> March 1. It'd be better to work with the goal of having a coherent patch\n> ready for the first v13 CF, probably July-ish.\n\nFWIW, I did add this to the March CF, but I set the target version to\n13. I wasn't considering this for PG12. I see Zheng was, but I agree\nwith you on PG13 being the target for this.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 13:26:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "I'm totally fine with setting the target to PG13.\r\n\r\n--\r\nI'm interested to know how this works without testing for inner\r\nnullability. If any of the inner side's join exprs are NULL then no\r\nrecords can match. What do you propose to work around that?\r\n--\r\n\r\nWe still check for inner side's nullability, when it is nullable we\r\nappend a \"var is NULL\" to the anti join condition. So every outer\r\ntuple is going to evaluate to true on the join condition when there\r\nis indeed a null entry in the inner. \r\nActually I think the nested loop anti join can end early in this case,\r\nbut I haven't find a way to do it properly, this may be one other reason\r\nwhy we need a new join type for NOT IN.\r\n\r\ne.g.\r\nexplain select count(*) from s where u not in (select n from l);\r\n QUERY PLAN\r\n------------------------------------------------------------------------------------\r\n Aggregate (cost=2892.88..2892.89 rows=1 width=8)\r\n -> Nested Loop Anti Join (cost=258.87..2892.88 rows=1 width=0)\r\n -> Seq Scan on s (cost=0.00..1.11 rows=11 width=4)\r\n -> Bitmap Heap Scan on l (cost=258.87..262.88 rows=1 width=4)\r\n Recheck Cond: ((s.u = n) OR (n IS NULL))\r\n -> BitmapOr (cost=258.87..258.87 rows=1 width=0)\r\n -> Bitmap Index Scan on l_n (cost=0.00..4.43 rows=1 width=0)\r\n Index Cond: (s.u = n)\r\n -> Bitmap Index Scan on l_n (cost=0.00..4.43 rows=1 width=0)\r\n Index Cond: (n IS NULL)\r\n\r\nZheng\r\n\r\n",
"msg_date": "Wed, 27 Feb 2019 00:41:12 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 27 Feb 2019 at 13:41, Li, Zheng <zhelli@amazon.com> wrote:\n> We still check for inner side's nullability, when it is nullable we\n> append a \"var is NULL\" to the anti join condition. So every outer\n> tuple is going to evaluate to true on the join condition when there\n> is indeed a null entry in the inner.\n\nThat's possible, at least providing the var is NULL is an OR\ncondition, but the problem there is that you force the plan into a\nnested loop join. That's unfortunately not going to perform very well\nwhen the number of rows to process is large.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 13:48:08 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 4:52 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Wed, 27 Feb 2019 at 03:07, Jim Finnerty <jfinnert@amazon.com> wrote:\n>\n> If you're proposing to do that for this thread then I can take my\n> planner only patch somewhere else. I only posted my patch as I pretty\n> much already had what I thought you were originally talking about.\n> However, please be aware there are current patents around adding\n> execution time smarts in this area, so it's probably unlikely you'll\n> find a way to do this in the executor that does not infringe on those.\n> Probably no committer would want to touch it. I think my patch covers\n> a good number of use cases and as far as I understand, does not go\n> near any current patents.\n>\n> Thanks for pointing out the patent concerns. I was not aware of that\nbefore.\nCould you please provide some clue where I can find more info about the\npatents?\n\nThanks\nRichard\n\nOn Wed, Feb 27, 2019 at 4:52 AM David Rowley <david.rowley@2ndquadrant.com> wrote:On Wed, 27 Feb 2019 at 03:07, Jim Finnerty <jfinnert@amazon.com> wrote:\nIf you're proposing to do that for this thread then I can take my\nplanner only patch somewhere else. I only posted my patch as I pretty\nmuch already had what I thought you were originally talking about.\nHowever, please be aware there are current patents around adding\nexecution time smarts in this area, so it's probably unlikely you'll\nfind a way to do this in the executor that does not infringe on those.\nProbably no committer would want to touch it. I think my patch covers\na good number of use cases and as far as I understand, does not go\nnear any current patents.Thanks for pointing out the patent concerns. I was not aware of that before.Could you please provide some clue where I can find more info about the patents?ThanksRichard",
"msg_date": "Wed, 27 Feb 2019 14:14:16 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 6:51 AM Li, Zheng <zhelli@amazon.com> wrote:\n\n> Resend the patch with a whitespace removed so that \"git apply patch\" works\n> directly.\n>\n>\n\nHi Zheng,\n\nI have reviewed your patch. Good job except two issues I can find:\n\n1. The patch would give wrong results when the inner side is empty. In this\ncase, the whole data from outer side should be in the outputs. But with the\npatch, we will lose the NULLs from outer side.\n\n2. Because of the new added predicate 'OR (var is NULL)', we cannot use hash\njoin or merge join to do the ANTI JOIN. Nested loop becomes the only\nchoice,\nwhich is low-efficency.\n\nThanks\nRichard\n\nOn Tue, Feb 26, 2019 at 6:51 AM Li, Zheng <zhelli@amazon.com> wrote:Resend the patch with a whitespace removed so that \"git apply patch\" works directly.\nHi Zheng,I have reviewed your patch. Good job except two issues I can find:1. The patch would give wrong results when the inner side is empty. In thiscase, the whole data from outer side should be in the outputs. But with thepatch, we will lose the NULLs from outer side.2. Because of the new added predicate 'OR (var is NULL)', we cannot use hashjoin or merge join to do the ANTI JOIN. Nested loop becomes the only choice,which is low-efficency.ThanksRichard",
"msg_date": "Fri, 1 Mar 2019 10:27:04 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Fri, 1 Mar 2019 at 15:27, Richard Guo <riguo@pivotal.io> wrote:\n> I have reviewed your patch. Good job except two issues I can find:\n>\n> 1. The patch would give wrong results when the inner side is empty. In this\n> case, the whole data from outer side should be in the outputs. But with the\n> patch, we will lose the NULLs from outer side.\n>\n> 2. Because of the new added predicate 'OR (var is NULL)', we cannot use hash\n> join or merge join to do the ANTI JOIN. Nested loop becomes the only choice,\n> which is low-efficency.\n\nYeah. Both of these seem pretty fundamental, so setting the patch to\nwaiting on author.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 2 Mar 2019 01:53:03 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Hi,\n\nOn March 1, 2019 4:53:03 AM PST, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>On Fri, 1 Mar 2019 at 15:27, Richard Guo <riguo@pivotal.io> wrote:\n>> I have reviewed your patch. Good job except two issues I can find:\n>>\n>> 1. The patch would give wrong results when the inner side is empty.\n>In this\n>> case, the whole data from outer side should be in the outputs. But\n>with the\n>> patch, we will lose the NULLs from outer side.\n>>\n>> 2. Because of the new added predicate 'OR (var is NULL)', we cannot\n>use hash\n>> join or merge join to do the ANTI JOIN. Nested loop becomes the only\n>choice,\n>> which is low-efficency.\n>\n>Yeah. Both of these seem pretty fundamental, so setting the patch to\n>waiting on author.\n\nI've not checked, but could we please make sure these cases are covered in the regression tests today with a single liner? Seems people had to rediscover them a number of times now, and unless this thread results in an integrated feature soonish, it seems likely other people will again.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n",
"msg_date": "Fri, 01 Mar 2019 08:35:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On March 1, 2019 4:53:03 AM PST, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>> On Fri, 1 Mar 2019 at 15:27, Richard Guo <riguo@pivotal.io> wrote:\n>>> 1. The patch would give wrong results when the inner side is empty.\n>>> 2. Because of the new added predicate 'OR (var is NULL)', we cannot\n>>> use hash join or merge join to do the ANTI JOIN.\n\n> I've not checked, but could we please make sure these cases are covered\n> in the regression tests today with a single liner?\n\nI'm not sure if the second one is actually a semantics bug or just a\nmisoptimization? But yeah, +1 for putting in some simple tests for\ncorner cases right now. Anyone want to propose a specific patch?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 11:44:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sat, 2 Mar 2019 at 05:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > I've not checked, but could we please make sure these cases are covered\n> > in the regression tests today with a single liner?\n>\n> I'm not sure if the second one is actually a semantics bug or just a\n> misoptimization? But yeah, +1 for putting in some simple tests for\n> corner cases right now. Anyone want to propose a specific patch?\n\nThe second is just reducing the planner's flexibility to produce a\ngood plan. The first is a bug. Proposed regression test attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 2 Mar 2019 11:28:00 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sat, 2 Mar 2019 at 05:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not sure if the second one is actually a semantics bug or just a\n>> misoptimization? But yeah, +1 for putting in some simple tests for\n>> corner cases right now. Anyone want to propose a specific patch?\n\n> The second is just reducing the planner's flexibility to produce a\n> good plan. The first is a bug. Proposed regression test attached.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 17:57:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Thanks all for the feedbacks! I'm working on a refined patch.\r\n\r\nAlthough adding \"or var is NULL\" to the anti join condition forces the planner to choose nested loop anti join, it is always faster compared to the original plan. In order to enable the transformation from NOT IN to anti join when the inner/outer is nullable, we have to add some NULL test to the join condition.\r\n\r\nWe could make anti join t1, t2 on (t1.x = t2.y or t2.y IS NULL) eligible for hashjoin, it would require changes in allowing this special join quals for hash join as well as changes in hash join executor to handle NULL accordingly for the case.\r\n\r\nAnother option of transformation is to add \"is not false\" on top of the join condition.\r\n\r\nRegards,\r\nZheng\r\nOn 3/1/19, 5:28 PM, \"David Rowley\" <david.rowley@2ndquadrant.com> wrote:\r\n\r\n On Sat, 2 Mar 2019 at 05:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n >\r\n > Andres Freund <andres@anarazel.de> writes:\r\n > > I've not checked, but could we please make sure these cases are covered\r\n > > in the regression tests today with a single liner?\r\n >\r\n > I'm not sure if the second one is actually a semantics bug or just a\r\n > misoptimization? But yeah, +1 for putting in some simple tests for\r\n > corner cases right now. Anyone want to propose a specific patch?\r\n \r\n The second is just reducing the planner's flexibility to produce a\r\n good plan. The first is a bug. Proposed regression test attached.\r\n \r\n -- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n \r\n\r\n",
"msg_date": "Fri, 1 Mar 2019 22:58:42 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "\"Li, Zheng\" <zhelli@amazon.com> writes:\n> Although adding \"or var is NULL\" to the anti join condition forces the planner to choose nested loop anti join, it is always faster compared to the original plan.\n\nTBH, I am *really* skeptical of sweeping claims like that. The existing\ncode will typically produce a hashed-subplan plan, which ought not be\nthat awful as long as the subquery result doesn't blow out memory.\nIt certainly is going to beat a naive nested loop.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 18:13:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sat, 2 Mar 2019 at 12:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Li, Zheng\" <zhelli@amazon.com> writes:\n> > Although adding \"or var is NULL\" to the anti join condition forces the planner to choose nested loop anti join, it is always faster compared to the original plan.\n>\n> TBH, I am *really* skeptical of sweeping claims like that. The existing\n> code will typically produce a hashed-subplan plan, which ought not be\n> that awful as long as the subquery result doesn't blow out memory.\n> It certainly is going to beat a naive nested loop.\n\nIt's pretty easy to show the claim is false using master and NOT EXISTS.\n\ncreate table small(a int not null);\ncreate table big (a int not null);\ninsert into small select generate_Series(1,1000);\ninsert into big select x%1000+1 from generate_Series(1,1000000) x;\n\nselect count(*) from big b where not exists(select 1 from small s\nwhere s.a = b.a);\nTime: 178.575 ms\n\nselect count(*) from big b where not exists(select 1 from small s\nwhere s.a = b.a or s.a is null);\nTime: 38049.969 ms (00:38.050)\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 2 Mar 2019 12:16:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "The current transformation would not add \"or s.a is NULL\" in the example provided since it is non-nullable. You will be comparing these two cases in terms of the transformation:\r\n\r\nselect count(*) from big b where not exists(select 1 from small s\r\nwhere s.a = b.a);\r\nTime: 51.416 ms\r\n\r\nselect count(*) from big b where a not in (select a from s);\r\nTime: 890.088 ms\r\n\r\n But if s.a is nullable, yes, you have proved my previous statement is false... I should have used almost always.\r\nHowever, if s.a is nullable, we would do this transformation:\r\n select count(*) from big b where not exists(select 1 from small s\r\n where s.a = b.a or s.a is null);\r\n\r\nIt's possible to stop the nested loop join early during execution once we find an inner Null entry because every outer tuple is\r\ngoing to evaluate to true on the join condition.\r\n\r\nZheng\r\n\r\nOn 3/1/19, 6:17 PM, \"David Rowley\" <david.rowley@2ndquadrant.com> wrote:\r\n\r\n On Sat, 2 Mar 2019 at 12:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n >\r\n > \"Li, Zheng\" <zhelli@amazon.com> writes:\r\n > > Although adding \"or var is NULL\" to the anti join condition forces the planner to choose nested loop anti join, it is always faster compared to the original plan.\r\n >\r\n > TBH, I am *really* skeptical of sweeping claims like that. 
The existing\r\n > code will typically produce a hashed-subplan plan, which ought not be\r\n > that awful as long as the subquery result doesn't blow out memory.\r\n > It certainly is going to beat a naive nested loop.\r\n \r\n It's pretty easy to show the claim is false using master and NOT EXISTS.\r\n \r\n create table small(a int not null);\r\n create table big (a int not null);\r\n insert into small select generate_Series(1,1000);\r\n insert into big select x%1000+1 from generate_Series(1,1000000) x;\r\n \r\n select count(*) from big b where not exists(select 1 from small s\r\n where s.a = b.a);\r\n Time: 178.575 ms\r\n \r\n select count(*) from big b where not exists(select 1 from small s\r\n where s.a = b.a or s.a is null);\r\n Time: 38049.969 ms (00:38.050)\r\n \r\n \r\n -- \r\n David Rowley http://www.2ndQuadrant.com/\r\n PostgreSQL Development, 24x7 Support, Training & Services\r\n \r\n\r\n",
"msg_date": "Fri, 1 Mar 2019 23:39:05 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sat, 2 Mar 2019 at 12:39, Li, Zheng <zhelli@amazon.com> wrote:\n> However, if s.a is nullable, we would do this transformation:\n> select count(*) from big b where not exists(select 1 from small s\n> where s.a = b.a or s.a is null);\n\nI understand you're keen to make this work, but you're assuming again\nthat forcing the planner into a nested loop plan is going to be a win\nover the current behaviour. It may well be in some cases, but it's\nvery simple to show cases where it's a significant regression.\n\nUsing the same tables from earlier, and again with master:\n\nalter table small alter column a drop not null;\nselect * from big where a not in(select a from small);\nTime: 430.283 ms\n\nHere's what you're proposing:\n\nselect * from big b where not exists(select 1 from small s where s.a =\nb.a or s.a is null);\nTime: 37419.646 ms (00:37.420)\n\nabout 80 times slower. Making \"small\" a little less small would likely\nsee that gap grow even further.\n\nI think you're fighting a losing battle here with adding OR quals to\nthe join condition. This transformation happens so early in planning\nthat you really can't cost it out either. I think the only way that\ncould be made to work satisfactorily would be with some execution\nlevel support for it. Short of that, you're left with just adding\nchecks that either side of the join cannot produce NULL values...\nThat's what I've proposed in [1].\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_OA5VeZx8A8H8mkj3uqEgOtmHBGCUA6%2BxqgmUJ6JQURw%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 2 Mar 2019 13:11:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I think you're fighting a losing battle here with adding OR quals to\n> the join condition.\n\nYeah --- that has a nontrivial risk of making things significantly worse,\nwhich makes it a hard sell. I think the most reasonable bet here is\nsimply to not perform the transformation if we can't prove the inner side\nNOT NULL. That's going to catch most of the useful cases anyway IMO.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 01 Mar 2019 19:45:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Folks - I was away on vacation for the month of February, and can give this\nmy attention again.\n\nI agree with Tom's comment above - when the cost of the NOT IN is dominated\nby the cost of the outer scan (i.e. when the cardinality of the outer\nrelation is large relative to the cardinality of the subquery), and if the\ninner cardinality is small enough to fit in memory, then the current\nimplementation does an efficient hash lookup into an in-memory structure,\nand that's a very fast way to do the NOT IN. It almost achieves the\nlower-bound cost of scanning the outer relation. It can also parallelizes\neasily, whether or not we currently can do that. In these cases, the\ncurrent plan is the preferred plan, and we should keep it.\n\npreferred in-memory hash lookup plan: https://explain.depesz.com/s/I1kN\n\nThis is a case that we would want to avoid the transform, because when both\nthe inner and outer are nullable and the outer is large and the inner is\nsmall, the transformed plan would Scan and Materialize the inner for each\nrow of the outer row, which is very slow compared to the untransformed plan:\n\nslow case for the transformation: https://explain.depesz.com/s/0CBB\n\nHowever, if the inner is too large to fit into memory, then the transformed\nplan is faster on all of our other test cases, although our test cases are\nfar from complete. If the current solution supports parallel scan of the\nouter, for example, then PQ could have lower elapsed time than the non-PQ\nnested loop solution.\n\nAlso, remember that the issue with the empty inner was just a bug that was\nthe result of trying to do an additional optimization in the case where\nthere is no WHERE clause in the subquery. That bug has been fixed. 
The\ngeneral case transformation described in the base note produces the correct\nresult in all cases, including the empty subquery case.\n\n\n\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Sat, 2 Mar 2019 05:34:10 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sat, 2 Mar 2019 at 13:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I think you're fighting a losing battle here with adding OR quals to\n> > the join condition.\n>\n> Yeah --- that has a nontrivial risk of making things significantly worse,\n> which makes it a hard sell. I think the most reasonable bet here is\n> simply to not perform the transformation if we can't prove the inner side\n> NOT NULL. That's going to catch most of the useful cases anyway IMO.\n\nDid you mean outer side NOT NULL? The OR col IS NULL was trying to\nsolve the outer side nullability problem when the inner side is empty.\n Of course, the inner side needs to not produce NULLs either, but\nthat's due to the fact that if a NULL exists in the inner side then\nthe anti-join should not produce any records.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sun, 3 Mar 2019 02:34:20 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sat, 2 Mar 2019 at 13:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah --- that has a nontrivial risk of making things significantly worse,\n>> which makes it a hard sell. I think the most reasonable bet here is\n>> simply to not perform the transformation if we can't prove the inner side\n>> NOT NULL. That's going to catch most of the useful cases anyway IMO.\n\n> Did you mean outer side NOT NULL?\n\nSorry, sloppy thinking.\n\n> Of course, the inner side needs to not produce NULLs either, but\n> that's due to the fact that if a NULL exists in the inner side then\n> the anti-join should not produce any records.\n\nRight. So the path of least resistance is to transform to antijoin\nonly if we can prove both of those things (and maybe we need to check\nthat the join operator is strict, too? -ENOCAFFEINE). The question\nbefore us is what is the cost-benefit ratio of trying to cope with\nadditional cases. I'm skeptical that it's attractive: the cost\ncertainly seems high, and I don't know that there are very many\nreal-world cases where we'd get a win.\n\nHmm ... thinking about the strictness angle some more: what we really\nneed to optimize NOT IN, IIUC, is an assumption that the join operator\nwill never return NULL. While not having NULL inputs is certainly a\n*necessary* condition for that (assuming a strict operator) it's not a\n*sufficient* condition. Any Postgres function/operator is capable\nof returning NULL whenever it feels like it. So checking strictness\ndoes not lead to a mathematically correct optimization.\n\nMy initial thought about plugging that admittedly-academic point is\nto insist that the join operator be both strict and a member of a\nbtree opclass (hash might be OK too; less sure about other index types).\nThe system already contains assumptions that btree comparators never\nreturn NULL. 
I doubt that this costs us any real-world applicability,\nbecause if the join operator can neither merge nor hash, we're screwed\nanyway for finding a join plan that's better than nested-loop.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 02 Mar 2019 11:25:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sun, 3 Mar 2019 at 05:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... thinking about the strictness angle some more: what we really\n> need to optimize NOT IN, IIUC, is an assumption that the join operator\n> will never return NULL. While not having NULL inputs is certainly a\n> *necessary* condition for that (assuming a strict operator) it's not a\n> *sufficient* condition. Any Postgres function/operator is capable\n> of returning NULL whenever it feels like it. So checking strictness\n> does not lead to a mathematically correct optimization.\n\nThat's something I didn't think of.\n\n> My initial thought about plugging that admittedly-academic point is\n> to insist that the join operator be both strict and a member of a\n> btree opclass (hash might be OK too; less sure about other index types).\n> The system already contains assumptions that btree comparators never\n> return NULL. I doubt that this costs us any real-world applicability,\n> because if the join operator can neither merge nor hash, we're screwed\n> anyway for finding a join plan that's better than nested-loop.\n\nWhy strict? If both inputs are non-NULL, then what additional\nguarantees does strict give us?\n\nI implemented a btree opfamily check in my version of the patch. Not\nso sure about hash, can you point me in the direction of a mention of\nhow this is guarantees for btree?\n\nThe attached v1.2 does this adds a regression test using the LINE\ntype. This has an operator named '=', but no btree opfamily. 
A few\nother types are in this boat too, per:\n\nselect typname from pg_type t where not exists(select 1 from pg_amop\nwhere amoplefttype = t.oid and amopmethod=403) and exists (select 1\nfrom pg_operator where oprleft = t.oid and oprname = '=');\n\nThe list of builtin types that have a hash opfamily but no btree\nopfamily that support NOT IN are not very exciting, so doing the same\nfor hash might not be worth the extra code.\n\nselect typname from pg_type t where exists(select 1 from pg_amop where\namoplefttype = t.oid and amopmethod=405) and exists (select 1 from\npg_operator where oprleft = t.oid and oprname = '=') and not\nexists(select 1 from pg_amop where amoplefttype = t.oid and\namopmethod=403);\n typname\n---------\n xid\n cid\n aclitem\n(3 rows)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 3 Mar 2019 16:53:31 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sun, 3 Mar 2019 at 05:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My initial thought about plugging that admittedly-academic point is\n>> to insist that the join operator be both strict and a member of a\n>> btree opclass (hash might be OK too; less sure about other index types).\n\n> Why strict? If both inputs are non-NULL, then what additional\n> guarantees does strict give us?\n\nYeah, if we're verifying the inputs are non-null, I think that probably\ndoesn't matter.\n\n> I implemented a btree opfamily check in my version of the patch. Not\n> so sure about hash, can you point me in the direction of a mention of\n> how this is guarantees for btree?\n\nhttps://www.postgresql.org/docs/devel/btree-support-funcs.html\nquoth\n\n The comparison function must take two non-null values A and B and\n return an int32 value that is < 0, 0, or > 0 when A < B, A = B, or A >\n B, respectively. A null result is disallowed: all values of the data\n type must be comparable.\n\n(At the code level, this is implicit in the fact that the comparison\nfunction will be called via FunctionCall2Coll or a sibling, and those\nall throw an error if the called function returns NULL.)\n\nNow, it doesn't say in so many words that the comparison operators\nhave to yield results consistent with the comparison support function,\nbut I think that's pretty obvious ...\n\nFor hash, the equivalent constraint is that the hash function has to\nwork for every possible input value. 
I suppose it's possible that\nthe associated equality operator would sometimes return null, but\nit's hard to think of a practical reason for doing so.\n\nI've not dug in the code, but I wouldn't be too surprised if\nnodeMergejoin.c or nodeHashjoin.c, or the stuff related to hash\ngrouping or hash aggregation, also contain assumptions about\nthe equality operators not returning null.\n\n> The list of builtin types that have a hash opfamily but no btree\n> opfamily that support NOT IN are not very exciting, so doing the same\n> for hash might not be worth the extra code.\n\nAgreed for builtin types, but there might be some extensions out there\nwhere this doesn't hold. It's not terribly hard to imagine a data type\nthat hasn't got a linear sort order but is amenable to hashing.\n(The in-core xid type is an example, actually.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 02 Mar 2019 23:11:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sun, 3 Mar 2019 at 17:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (At the code level, this is implicit in the fact that the comparison\n> function will be called via FunctionCall2Coll or a sibling, and those\n> all throw an error if the called function returns NULL.)\n>\n> Now, it doesn't say in so many words that the comparison operators\n> have to yield results consistent with the comparison support function,\n> but I think that's pretty obvious ...\n\nAh okay. I can get it to misbehave by setting fcinfo->isnull = true in\nthe debugger from int4eq(). I see the NULL result there is not\nverified as that's just translated into \"false\" by ExecInterpExpr()'s\nEEOP_QUAL case. If you're saying something doing that is\nfundamentally broken, then I guess we're okay.\n\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > The list of builtin types that have a hash opfamily but no btree\n> > opfamily that support NOT IN are not very exciting, so doing the same\n> > for hash might not be worth the extra code.\n>\n> Agreed for builtin types, but there might be some extensions out there\n> where this doesn't hold. It's not terribly hard to imagine a data type\n> that hasn't got a linear sort order but is amenable to hashing.\n\nOn reflection, it seems pretty easy to add this check, so I've done so\nin the attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 4 Mar 2019 02:34:42 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Sun, 3 Mar 2019 at 17:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (At the code level, this is implicit in the fact that the comparison\n>> function will be called via FunctionCall2Coll or a sibling, and those\n>> all throw an error if the called function returns NULL.)\n\n> Ah okay. I can get it to misbehave by setting fcinfo->isnull = true in\n> the debugger from int4eq(). I see the NULL result there is not\n> verified as that's just translated into \"false\" by ExecInterpExpr()'s\n> EEOP_QUAL case. If you're saying something doing that is\n> fundamentally broken, then I guess we're okay.\n\nNo, what I'm thinking of is this bit in _bt_compare:\n\n result = DatumGetInt32(FunctionCall2Coll(&scankey->sk_func,\n scankey->sk_collation,\n datum,\n scankey->sk_argument));\n\nYou absolutely will get errors during btree insertions and searches\nif a datatype's btree comparison functions ever return NULL (for\nnon-NULL inputs).\n\nFor hash indexes, that kind of restriction only directly applies to\nhash-calculation functions, which perhaps are not as tightly tied to the\nopclass's user-visible operators as is the case for btree opclasses.\nBut I think you might be able to find places in hash join or grouping\nthat are calling the actual equality operator and not allowing for it\nto return NULL.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 03 Mar 2019 10:42:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Mon, 4 Mar 2019 at 04:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Sun, 3 Mar 2019 at 17:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> (At the code level, this is implicit in the fact that the comparison\n> >> function will be called via FunctionCall2Coll or a sibling, and those\n> >> all throw an error if the called function returns NULL.)\n>\n> > Ah okay. I can get it to misbehave by setting fcinfo->isnull = true in\n> > the debugger from int4eq(). I see the NULL result there is not\n> > verified as that's just translated into \"false\" by ExecInterpExpr()'s\n> > EEOP_QUAL case. If you're saying something doing that is\n> > fundamentally broken, then I guess we're okay.\n>\n> No, what I'm thinking of is this bit in _bt_compare:\n>\n> result = DatumGetInt32(FunctionCall2Coll(&scankey->sk_func,\n> scankey->sk_collation,\n> datum,\n> scankey->sk_argument));\n>\n> You absolutely will get errors during btree insertions and searches\n> if a datatype's btree comparison functions ever return NULL (for\n> non-NULL inputs).\n\nI understand this is the case if an index happens to be used, but\nthere's no guarantee that's going to be the case. I was looking at the\ncase where an index was not used.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 4 Mar 2019 11:02:45 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 4 Mar 2019 at 04:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You absolutely will get errors during btree insertions and searches\n>> if a datatype's btree comparison functions ever return NULL (for\n>> non-NULL inputs).\n\n> I understand this is the case if an index happens to be used, but\n> there's no guarantee that's going to be the case. I was looking at the\n> case where an index was not used.\n\nNot following your point? An index opclass is surely not going to be\ndesigned on the assumption that it can never be used in an index.\nTherefore, its support functions can't return NULL unless the index AM\nallows that.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 03 Mar 2019 17:06:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Mon, 4 Mar 2019 at 11:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Mon, 4 Mar 2019 at 04:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> You absolutely will get errors during btree insertions and searches\n> >> if a datatype's btree comparison functions ever return NULL (for\n> >> non-NULL inputs).\n>\n> > I understand this is the case if an index happens to be used, but\n> > there's no guarantee that's going to be the case. I was looking at the\n> > case where an index was not used.\n>\n> Not following your point? An index opclass is surely not going to be\n> designed on the assumption that it can never be used in an index.\n> Therefore, its support functions can't return NULL unless the index AM\n> allows that.\n\nI agree that it makes sense that the behaviour of the two match. I was\ntrying to hint towards that when I said:\n\n> If you're saying something doing that is\n> fundamentally broken, then I guess we're okay.\n\nbut likely I didn't make that very clear.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Mon, 4 Mar 2019 11:14:25 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On 2/27/19 2:26 AM, David Rowley wrote:\n> On Wed, 27 Feb 2019 at 13:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> \"Li, Zheng\" <zhelli@amazon.com> writes:\n>>> However, given that the March CommitFest is imminent and the runtime smarts patent concerns David had pointed out (which I was not aware of before), we would not move that direction at the moment.\n>>\n>>> I propose that we collaborate to build one patch from the two patches submitted in this thread for the CF.\n>>\n>> TBH, I think it's very unlikely that any patch for this will be seriously\n>> considered for commit in v12. It would be against project policy, and\n>> spending a lot of time reviewing the patch would be quite unfair to other\n>> patches that have been in the queue longer. Therefore, I'd suggest that\n>> you not bend things out of shape just to have some patch to submit before\n>> March 1. It'd be better to work with the goal of having a coherent patch\n>> ready for the first v13 CF, probably July-ish.\n> \n> FWIW, I did add this to the March CF, but I set the target version to\n> 13. I wasn't considering this for PG12. I see Zheng was, but I agree\n> with you on PG13 being the target for this.\n\nLooks like the target version of 13 was removed but I have added it back.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Tue, 5 Mar 2019 10:21:32 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Tue, 5 Mar 2019 at 21:21, David Steele <david@pgmasters.net> wrote:\n>\n> On 2/27/19 2:26 AM, David Rowley wrote:\n> > FWIW, I did add this to the March CF, but I set the target version to\n> > 13. I wasn't considering this for PG12. I see Zheng was, but I agree\n> > with you on PG13 being the target for this.\n>\n> Looks like the target version of 13 was removed but I have added it back.\n\nThe problem seems to be that there are now 2 CF entries for this\nthread. I originally added [1], but later Zheng added [2]. From what\nJim mentioned when he opened this thread I had the idea that no patch\nexisted yet, so I posted the one I already had written for this 4\nyears ago thinking that might be useful to base new work on. I guess\nZheng's patch already exists when Jim opened this thread as a patch\nappeared fairly quickly afterwards. If that's true then I understand\nthat they wouldn't want to drop the work they'd already done in favour\nof picking mine up.\n\nI'm not all that sure what do to about this. It's going to cause quite\na bit of confusion having two patches in one thread. Part of me feels\nthat I've hijacked this thread and that I should just drop my patch\naltogether and help review Zheng's patch, but I'm struggling a bit to\ndo that as I've not managed to find problems with my version, but a\nfew have been pointed out with the other patch (of course, there may\nbe some yet undiscovered issues with my version too).\n\nAlternatively, I could take my patch to another thread, but there does\nnot seem to be much sense in that. It might not solve the confusion\nproblem. The best thing would be that if we could work together to\nmake this work, however, we both seem to have fairly different ideas\non how it should work. 
Tom and I both agree that Zheng and Jim's\nproposal to add OR x IS NULL clauses to the join condition is most\nlikely a no go area due to it disallowing hash and merge anti-joins.\nThe last I can understand from Jim is that they appear to disagree\nwith that and want to do the transformation based on costs. Perhaps\nthey're working on some new ideas to make that more feasible. I'm\ninterested to hear the latest on this.\n\n[1] https://commitfest.postgresql.org/22/2020/\n[2] https://commitfest.postgresql.org/22/2023/\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 21:53:27 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: NOT IN subquery optimization"
},
{
"msg_contents": "On 3/5/19 10:53 AM, David Rowley wrote:\n> On Tue, 5 Mar 2019 at 21:21, David Steele <david@pgmasters.net> wrote:\n>>\n>> On 2/27/19 2:26 AM, David Rowley wrote:\n>>> FWIW, I did add this to the March CF, but I set the target version to\n>>> 13. I wasn't considering this for PG12. I see Zheng was, but I agree\n>>> with you on PG13 being the target for this.\n>>\n>> Looks like the target version of 13 was removed but I have added it back.\n> \n> The problem seems to be that there are now 2 CF entries for this\n> thread. I originally added [1], but later Zheng added [2]. From what\n> Jim mentioned when he opened this thread I had the idea that no patch\n> existed yet, so I posted the one I already had written for this 4\n> years ago thinking that might be useful to base new work on.\n\nYeah, I just figured this out when I got to your patch which was \nproperly marked as PG13 and then saw they were pointing at the same thread.\n\nAt the very least one of the patch entries should be closed, or moved to \na new thread.\n\nI'm not sure if I have an issue with competing patches on the same \nthread. I've seen that before and it can lead to a good outcome. It \ncase, as you say, also lead to confusion.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Tue, 5 Mar 2019 11:10:55 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Sun, 3 Mar 2019 at 01:34, Jim Finnerty <jfinnert@amazon.com> wrote:\n> I agree with Tom's comment above - when the cost of the NOT IN is dominated\n> by the cost of the outer scan (i.e. when the cardinality of the outer\n> relation is large relative to the cardinality of the subquery), and if the\n> inner cardinality is small enough to fit in memory, then the current\n> implementation does an efficient hash lookup into an in-memory structure,\n> and that's a very fast way to do the NOT IN. It almost achieves the\n> lower-bound cost of scanning the outer relation. It can also parallelizes\n> easily, whether or not we currently can do that. In these cases, the\n> current plan is the preferred plan, and we should keep it.\n\nIf you do the conversion to anti-join (without hacking at the join\nquals and assuming no nulls are possible), then the planner can decide\nwhat's best. The planner may choose a hash join which is not hugely\ndifferent from a hashed subplan, however from the testing I've done\nthe Hash Join is a bit faster. I imagine there's been more motivation\nover the years to optimise that over hashed subplans. As far as I\nknow, there's no parallel query support for hashed subplans, but I\nknow there is for hash joins. In short, I don't think it makes any\nsense to not translate into an anti-join (when possible). I think the\nbest anti-join plan will always be a win over the subquery. 
The\nplanner could make a mistake of course, but that's a different issue.\nWe certainly don't consider keeping the subquery around for NOT\nEXISTS.\n\n> This is a case that we would want to avoid the transform, because when both\n> the inner and outer are nullable and the outer is large and the inner is\n> small, the transformed plan would Scan and Materialize the inner for each\n> row of the outer row, which is very slow compared to the untransformed plan:\n>\n> slow case for the transformation: https://explain.depesz.com/s/0CBB\n\nWell, that's because you're forcing the planner into a corner in\nregards to the join condition. It has no choice but to nested loop\nthat join.\n\n> However, if the inner is too large to fit into memory, then the transformed\n> plan is faster on all of our other test cases, although our test cases are\n> far from complete. If the current solution supports parallel scan of the\n> outer, for example, then PQ could have lower elapsed time than the non-PQ\n> nested loop solution.\n\nI'm having a little trouble understanding this. From what I\nunderstand the code adds an OR .. IS NULL clause to the join\ncondition. Is this still the case with what you've been testing here?\nIf so, I'm surprised to hear all your test cases are faster. If\nthere's an OR clause in the join condition then the planner has no\nchoice but to use a nested loop join, so it's very surprising that you\nwould find that faster with larger data sets.\n\nOr does the code your testing implement this a different way? Perhaps\nwith some execution level support?\n\n> Also, remember that the issue with the empty inner was just a bug that was\n> the result of trying to do an additional optimization in the case where\n> there is no WHERE clause in the subquery. That bug has been fixed. 
The\n> general case transformation described in the base note produces the correct\n> result in all cases, including the empty subquery case.\n\nI'm not sure why lack of WHERE clause in the subquery counts for\nanything here. The results set from the subquery can be empty or not\nempty with or without a WHERE clause. The only way you'll know it's\nempty during planning is if some gating qual says so, but that's yet\nto be determined by the time the transformation should be done.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 22:22:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> I'm not sure if I have an issue with competing patches on the same \n> thread. I've seen that before and it can lead to a good outcome. It \n> case, as you say, also lead to confusion.\n\nIt's a bit of a shame that the cfbot will only be testing one of them\nat a time if we leave it like this. I kind of lean towards the\ntwo-thread, two-CF-entry approach because of that. The amount of\nconfusion is a constant.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 05 Mar 2019 09:37:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 6 Mar 2019 at 03:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Steele <david@pgmasters.net> writes:\n> > I'm not sure if I have an issue with competing patches on the same\n> > thread. I've seen that before and it can lead to a good outcome. It\n> > case, as you say, also lead to confusion.\n>\n> It's a bit of a shame that the cfbot will only be testing one of them\n> at a time if we leave it like this. I kind of lean towards the\n> two-thread, two-CF-entry approach because of that. The amount of\n> confusion is a constant.\n\nThat sounds fine. I'll take mine elsewhere since I didn't start this thread.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 12:25:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "On Wed, 6 Mar 2019 at 12:25, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> That sounds fine. I'll take mine elsewhere since I didn't start this thread.\n\nMoved to https://www.postgresql.org/message-id/CAKJS1f82pqjqe3WT9_xREmXyG20aOkHc-XqkKZG_yMA7JVJ3Tw%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 12:56:40 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Hey, here is our latest patch. Major changes in this patch include:\r\n1. Use the original hashed subplan if the inner fits in memory as decided by subplan_is_hashable().\r\n2. Fixed the inner relation empty case by adding an inner relation existence check when we pull x out as a filter on the outer (see details below).\r\n3. Integrate David Rowley's routine to use strict predicates and inner join conditions when checking nullability of a Var.\r\n\r\nDetailed description of the patch:\r\n\r\n NOT IN to ANTI JOIN transformation\r\n \r\n If the NOT IN subquery is not eligible for hashed subplan as decided by\r\n subplan_is_hashable(), do the following NOT IN to ANTI JOIN\r\n transformation:\r\n \r\n Single expression:\r\n When x is nullable:\r\n t1.x not in (t2.y where p) =>\r\n ANTI JOIN\r\n t1 (Filter: t1.x IS NOT NULL or NOT EXISTS (select 1 from t2 where p)),\r\n t2 on join condition (t1.x=t2.y or t2.y IS NULL)\r\n and p.\r\n The predicate \"t2.y IS NULL\" can be removed if y is non-nullable.\r\n \r\n When x is non-nullable:\r\n t1.x not in (t2.y where p) =>\r\n ANTI JOIN t1, t2 on join condition (t1.x=t2.y or t2.y IS NULL)\r\n and p.\r\n The predicate \"t2.y IS NULL\" can be removed if y is non-nullable.\r\n \r\n Multi expression:\r\n If all xi's are nullable:\r\n (x1, x2, ... xn) not in (y1, y2, ... yn ) =>\r\n ANTI JOIN t1, t2 on join condition:\r\n ((t1.x1 = t2.y1) and ... (t1.xi = t2.yi) ... and\r\n (t1.xn = t2.yn)) IS NOT FALSE.\r\n \r\n If at least one xi is non-nuallable:\r\n (x1, x2, ... xn) not in (y1, y2, ... yn ) =>\r\n ANTI JOIN t1, t2 on join condition:\r\n (t1.x1 = t2.y1 or t2.y1 IS NULL or t1.x1 IS NULL) and ...\r\n (t1.xi = t2.yi or t2.yi IS NULL) ... and\r\n (t1.xn = t2.yn or t2.yn IS NULL or t1.xn IS NULL).\r\n \r\n Add nullability testing routine is_node_nonnullable(), currently it\r\n handles Var, TargetEntry, CoalesceExpr and Const. 
It uses strict\r\n predicates, inner join conditions and NOT NULL constraint to check\r\n the nullability of a Var.\r\n \r\n Adjust and apply reduce_outer_joins() before the transformation so\r\n that the outer joins have an opportunity to be converted to inner joins\r\n prior to the transformation.\r\n \r\n We measured performance improvements of two to five orders of magnitude\r\n on most queries in a development environment. In our performance experiments,\r\n table s (small) has 11 rows, table l (large) has 1 million rows. s.n and l.n\r\n have NULL value. s.nn and l.nn are NOT NULL. Index is created on each column.\r\n \r\n Cases using hash anti join:\r\n l.nn not in (l.nn) 21900s -> 235ms\r\n l.nn not in (l.nn where u>0) 22000s -> 240ms\r\n l.n not in (l.nn) 21900s -> 238ms\r\n l.n not in (l.nn where u>0) 22000s -> 248ms\r\n \r\n Cases using index nested loop anti join\r\n s.n not in (l.nn) 360ms -> 0.5ms\r\n s.n not in (l.nn where u>0) 390ms -> 0.6ms\r\n s.nn not in (l.nn) 365ms -> 0.5ms\r\n s.nn not in (l.nn where u>0) 390ms -> 0.5ms\r\n \r\n Cases using bitmap heap scan on the inner and nested loop anti join:\r\n s.n not in (l.n) 360ms -> 0.7ms\r\n l.n not in (l.n) 21900s -> 1.6s\r\n l.n not in (l.n where u>0) 22000s -> 1680ms\r\n s.nn not in (l.n) 360ms -> 0.5ms\r\n l.nn not in (l.n) 21900s -> 1650ms\r\n l.nn not in (l.n where u>0) 22000s -> 1660ms\r\n \r\n Cases using the original hashed subplan:\r\n l.n not in (s.n) 63ms -> 63ms\r\n l.nn not in (s.nn) 63ms -> 63ms\r\n l.n not in (s.n where u>0) 63ms -> 63ms\r\n\r\nComments are welcome.\r\n\r\nRegards,\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL",
"msg_date": "Sat, 16 Mar 2019 21:07:04 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "In our patch, we only proceed with the ANTI JOIN transformation if\nsubplan_is_hashable(subplan) is\nfalse, it requires the subquery to be planned at this point.\n\nTo avoid planning the subquery again later on, I want to keep a pointer of\nthe subplan in SubLink so that we can directly reuse the subplan when\nneeded. However, this change breaks initdb for some reason and I'm trying to\nfigure it out.\n\nI'll send the rebased patch in the following email since it's been a while.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 14 Jun 2019 14:38:24 -0700 (MST)",
"msg_from": "zhengli <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN subquery optimization"
},
{
"msg_contents": "Rebased patch is attached.\r\n\r\nComments are welcome.\r\n\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL\r\n \r\n\r\nOn 6/14/19, 5:39 PM, \"zhengli\" <zhelli@amazon.com> wrote:\r\n\r\n In our patch, we only proceed with the ANTI JOIN transformation if\r\n subplan_is_hashable(subplan) is\r\n false, it requires the subquery to be planned at this point.\r\n \r\n To avoid planning the subquery again later on, I want to keep a pointer of\r\n the subplan in SubLink so that we can directly reuse the subplan when\r\n needed. However, this change breaks initdb for some reason and I'm trying to\r\n figure it out.\r\n \r\n I'll send the rebased patch in the following email since it's been a while.\r\n \r\n \r\n \r\n --\r\n Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html",
"msg_date": "Fri, 14 Jun 2019 21:41:06 +0000",
"msg_from": "\"Li, Zheng\" <zhelli@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: NOT IN subquery optimization"
}
]
[
{
"msg_contents": "Hello hackers,\n\nDavid Rowley posted some interesting stuff about an ArrayList[1] that\nwould replace List in many places. (I probably would have replied to\nthat thread if I hadn't changed email account, sorry.) I had similar\nthoughts when I needed to maintain a sorted vector for my refactoring\nproposal for the fsync queue, as part of my undo log work, except I\ncame at the problem with some ideas from C++ in mind. However, now\nthis stuff seems to be redundant for now, given some decisions we've\narrived at in the thread about how to track the fsync queue, and I'm\nmore focused on getting the undo work done.\n\nI wanted to share what I had anyway, since it seems to be related to\nDavid's proposal but make some different trade-offs. Perhaps it will\nbe useful for ideas, code, or examples of what not to do :-) Or\nperhaps I'll come back to it later.\n\nSpecialising the comparator and element size of functions like\nqsort(), binary_search(), unique() is a well known technique for going\nfaster, and the resulting programming environment has better type\nsafety. It's easy to measure a speedup for specialised qsort of small\nobjects (compared to pg_qsort() with function pointer and object size\nas arguments). Doing that in a way that works well with automatically\nmanaged vectors (and also any array, including in shmem etc) seemed\nlike a good plan to me, hence this prototyping work.\n\nThe basic idea is to use simplehash-style generic programming, like a\nkind of poor man's C++ standard library. The vector type is\ninstantiated for a given type like Oid etc, and then you can\ninstantiate specialised qsort() etc. The vector has a 'small vector\noptimisation' where you can hold very small lists without allocating\nany extra memory until you get past (say) 3 members. 
I was planning\nto extend the qsort implementation a bit further so that it could\nreplace both pg_qsort() and the Perl-based tuple qsort generator, and\nthen we'd have only one copy of the algorithm in the tree (the dream\nof generic programmers). I wanted to do the same with unique(), since\nI'd already noticed that we have many open coded examples of that\nalgorithm scattered throughout the tree.\n\nSome of the gratuitous C++isms should be removed including some\nnonsense about const qualifiers, use of begin and end rather than data\nand size (more common in C code), and some other details, and I was\nplanning to fix some of that before I reposted the patch set as part\nof the larger undo patch set, but now that I'm not going to do that...\nhere is a snapshot of the patch set as-is, with toy examples showing\nseveral examples of List-of-Oid relationOids being replaced with a\nspecialised vector (with a missed opportunity for it to be sorted and\nuse binary_search() instead of find()), and the List-of-Node\np_joinexprs being replaced with a specialised vector. I think those\nlast things are the sort of thing that David was targeting.\n\n[1] https://www.postgresql.org/message-id/flat/CAKJS1f_2SnXhPVa6eWjzy2O9A%3Docwgd0Cj-LQeWpGtrWqbUSDA%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 21 Feb 2019 18:02:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Vectors instead of lists, specialised qsort(), binary_search(),\n unique()"
},
{
"msg_contents": "On Thu, 21 Feb 2019 at 18:02, Thomas Munro <thomas.munro@gmail.com> wrote:\n> David Rowley posted some interesting stuff about an ArrayList[1] that\n> would replace List in many places. (I probably would have replied to\n> that thread if I hadn't changed email account, sorry.) I had similar\n> thoughts when I needed to maintain a sorted vector for my refactoring\n> proposal for the fsync queue, as part of my undo log work, except I\n> came at the problem with some ideas from C++ in mind. However, now\n> this stuff seems to be redundant for now, given some decisions we've\n> arrived at in the thread about how to track the fsync queue, and I'm\n> more focused on getting the undo work done.\n\nHi Thomas,\n\nGlad to see further work being done in this area. I do agree that our\nlinked list implementation is very limited and of course, does perform\nvery poorly for Nth lookups and checking if the list contains\nsomething. Seems what you have here solves (at least) these two\nissues. I also think it's tragic in the many places where we take the\nList and turn it into an array because we know list_nth() performs\nterribly. (e.g ExecInitRangeTable(), setup_simple_rel_arrays() and\nsetup_append_rel_array(), it would be great to one day see those\nfixed)\n\nAs for the ArrayList patch that I'd been working on, I was a bit\nblocked on it due to Tom's comment in [1], after having quickly looked\nover your patch I see there's no solution for that complaint.\n\n(I'd been trying to think of a solution to this as a sort of idle\nbackground task and so far I'd only come up with a sort of List\nindexing system, where we could build additional data structures atop\nof the list and have functions like list_nth() check for such an index\nand use it if available. I don't particularly like the idea, but it\nwas the only way I could think of to work around Tom's complaint. I\nfelt there was just too much list API code that relies on the list\nbeing a linked list. 
e.g checking for empty lists with list == NIL.\nlnext() etc.)\n\nSo in short, I'm in favour of not just having braindead linked lists\nall over the place to store things. I will rejoice the day we move\naway from that, but also Tom's comment kinda stuck with me. He's\noften right too. Likely backpatching pain that Tom talks about would\nbe much less if we used C++, but... we don't.\n\nI'm happy to support the cause here, just not quite sure yet how I can\nbest do that.\n\n[1] https://www.postgresql.org/message-id/21592.1509632225%40sss.pgh.pa.us\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 21 Feb 2019 19:26:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Vectors instead of lists, specialised qsort(), binary_search(),\n unique()"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 7:27 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> As for the ArrayList patch that I'd been working on, I was a bit\n> blocked on it due to Tom's comment in [1], after having quickly looked\n> over your patch I see there's no solution for that complaint.\n>\n> (I'd been trying to think of a solution to this as a sort of idle\n> background task and so far I'd only come up with a sort of List\n> indexing system, where we could build additional data structures atop\n> of the list and have functions like list_nth() check for such an index\n> and use it if available. I don't particularly like the idea, but it\n> was the only way I could think of to work around Tom's complaint. I\n> felt there was just too much list API code that relies on the list\n> being a linked list. e.g checking for empty lists with list == NIL.\n> lnext() etc.)\n>\n> So in short, I'm in favour of not just having braindead linked lists\n> all over the place to store things. I will rejoice the day we move\n> away from that, but also Tom's comment kinda stuck with me. He's\n> often right too. Likely backpatching pain that Tom talks about would\n> be much less if we used C++, but... we don't.\n\nRight, in this patch-set I wasn't really focused on how to replace the\nexisting lists in a back-patch friendly style (a very legitimate\nconcern), I was focused on how to get the best possible machine code\nfor basic data structures and algorithms that are used all over the\nplace, by inlining as many known-at-compile-time parameters as\npossible. And using the right data-structures in the first place by\nmaking them easily available; I guess that's the bit where your ideas\nand mine overlapped. I'm also interested in skipping gratuitous\nallocation and pointer chasing, even in cases where eg linked lists\nmight not be \"wrong\" according to the access pattern, just because\nit's cache-destructive and a waste of cycles. 
As Alexander Stepanov\nsaid: \"Use vectors whenever you can. If you cannot use vectors,\nredesign your solution so that you can use vectors\", and I think there\nis also something interesting about keeping objects inside in their\ncontaining object, instead of immediately using pointers to somewhere\nelse (obviously tricky with variable sized data, but that's what the\nsmall vector optimisation idea is about; whether it's worth it after\nthe overhead of detecting the case, I don't know; std::string\nimplementors seem to think so). I was primarily interested in new\ncode, though I'm pretty sure there are places all over the tree where\nmicrooptimisations are possible through specialisation (think\npartitioning, sorting, searching, uniquifying in cases where the types\nare fixed, or vary at runtime but there are some common cases you\nmight want to specialise for).\n\nFor the cases you're interested in, maybe piecemeal replication of the\nplanner lists that are measurably very hot is the only practical way\nto do it, along with evidence that it's really worth the future\nbackpatching pain in each case. Or maybe there is some way to create\na set of macros that map to List operations in back branches, as you\nwere hinting at, to keep client code identical... I dunno.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Fri, 22 Feb 2019 11:45:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Vectors instead of lists, specialised qsort(), binary_search(),\n unique()"
}
]
[
{
"msg_contents": "Hi,\n\nJeff Janes raised an issue [1] about PD_ALL_VISIBLE not being set correctly\nwhile loading data via COPY FREEZE and had also posted a draft patch.\n\nI now have what I think is a more complete patch. I took a slightly\ndifferent approach and instead of setting the PD_ALL_VISIBLE bit initially and\nthen not clearing it during insertion, we now recheck the page for\nall-frozen, all-visible tuples just before switching to a new page. This\nallows us to then also set the visibility map bit, like we do in\nvacuumlazy.c.\n\nSome special treatment is required to handle the last page before bulk\ninsert is shut down. We could have chosen not to do anything special for the\nlast page and let it remain unfrozen, but I thought it makes sense to take\nthat extra effort so that we can completely freeze the table and set all VM\nbits at the end of COPY FREEZE.\n\nLet me know what you think.\n\nThanks,\nPavan\n\n[1]\nhttps://www.postgresql.org/message-id/CAMkU%3D1w3osJJ2FneELhhNRLxfZitDgp9FPHee08NT2FQFmz_pQ%40mail.gmail.com\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 21 Feb 2019 11:35:14 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hello Pavan,\n\nThank you for the patch. It seems to me that while performing COPY\nFREEZE, if we've copied tuples in a previously emptied page, we can\nset the PageSetAllVisible(page) in heap_multi_insert only. Something\nlike,\n\nbool init = (ItemPointerGetOffsetNumber(&(heaptuples[ndone]->t_self))\n== FirstOffsetNumber &&\n PageGetMaxOffsetNumber(page) == FirstOffsetNumber + nthispage - 1);\nif (init && (options & HEAP_INSERT_FROZEN))\n PageSetAllVisible(page);\n\nLater, you can skip the pages for\nCheckAndSetAllVisibleBulkInsertState() where the PD_ALL_VISIBLE flag is\nalready set. Do you think it's correct?\n\n\nOn Thu, Feb 21, 2019 at 11:35 AM Pavan Deolasee\n<pavan.deolasee@gmail.com> wrote:\n>\n> Hi,\n>\n> Jeff Janes raised an issue [1] about PD_ALL_VISIBLE not being set correctly while loading data via COPY FREEZE and had also posted a draft patch.\n>\n> I now have what I think is a more complete patch. I took a slightly different approach and instead of setting PD_ALL_VISIBLE bit initially and then not clearing it during insertion, we now recheck the page for all-frozen, all-visible tuples just before switching to a new page. This allows us to then also mark set the visibility map bit, like we do in vacuumlazy.c\n>\n> Some special treatment is required to handle the last page before bulk insert it shutdown. We could have chosen not to do anything special for the last page and let it remain unfrozen, but I thought it makes sense to take that extra effort so that we can completely freeze the table and set all VM bits at the end of COPY FREEZE.\n>\n> Let me know what you think.\n>\n> Thanks,\n> Pavan\n>\n> [1] https://www.postgresql.org/message-id/CAMkU%3D1w3osJJ2FneELhhNRLxfZitDgp9FPHee08NT2FQFmz_pQ%40mail.gmail.com\n>\n> --\n> Pavan Deolasee http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Thu, 21 Feb 2019 21:07:52 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, 21 Feb 2019 at 15:38, Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n\n> Thank you for the patch. It seems to me that while performing COPY\n> FREEZE, if we've copied tuples in a previously emptied page\n\n\nThere won't be any previously emptied pages because of the pre-conditions\nrequired for FREEZE.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 26 Feb 2019 13:15:55 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 6:46 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Thu, 21 Feb 2019 at 15:38, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n>>\n>> Thank you for the patch. It seems to me that while performing COPY\n>> FREEZE, if we've copied tuples in a previously emptied page\n>\n>\n> There won't be any previously emptied pages because of the pre-conditions required for FREEZE.\n>\nRight, I missed that part. Thanks for pointing that out. But this\noptimization is still possible for copying frozen tuples in a newly\ncreated page, right? If the current backend allocates a new page and copies a\nbunch of frozen tuples into that page, it can set PD_ALL_VISIBLE in\nthe same operation.\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 26 Feb 2019 20:18:42 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 1:05 AM Pavan Deolasee <pavan.deolasee@gmail.com>\nwrote:\n\n> Hi,\n>\n> Jeff Janes raised an issue [1] about PD_ALL_VISIBLE not being set\n> correctly while loading data via COPY FREEZE and had also posted a draft\n> patch.\n>\n> I now have what I think is a more complete patch. I took a slightly\n> different approach and instead of setting PD_ALL_VISIBLE bit initially and\n> then not clearing it during insertion, we now recheck the page for\n> all-frozen, all-visible tuples just before switching to a new page. This\n> allows us to then also mark set the visibility map bit, like we do in\n> vacuumlazy.c\n>\n> Some special treatment is required to handle the last page before bulk\n> insert it shutdown. We could have chosen not to do anything special for the\n> last page and let it remain unfrozen, but I thought it makes sense to take\n> that extra effort so that we can completely freeze the table and set all VM\n> bits at the end of COPY FREEZE.\n>\n> Let me know what you think.\n>\n\nHi Pavan, thanks for picking this up.\n\nAfter doing a truncation and '\\copy ... with (freeze)' of a table with long\ndata, I find that the associated toast table has a handful of unfrozen\nblocks. 
I don't know if that is an actual problem, but it does seem a bit\nodd, and thus suspicious.\n\nperl -le 'print join \"\", map rand(), 1..500 foreach 1..1000000' > foo\n\ncreate table foobar1 (x text);\nbegin;\ntruncate foobar1;\n\\copy foobar1 from foo with (freeze)\ncommit;\nselect all_visible,all_frozen,pd_all_visible, count(*) from\npg_visibility('pg_toast.pg_toast_25085') group by 1,2,3;\n all_visible | all_frozen | pd_all_visible | count\n-------------+------------+----------------+---------\n f | f | f | 18\n t | t | t | 530,361\n(2 rows)\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 26 Feb 2019 20:35:18 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 7:05 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n>\n>\n> After doing a truncation and '\\copy ... with (freeze)' of a table with\n> long data, I find that the associated toast table has a handful of unfrozen\n> blocks. I don't know if that is an actual problem, but it does seem a bit\n> odd, and thus suspicious.\n>\n>\nHi Jeff, thanks for looking at it and the test. I can reproduce the problem\nand quite curiously block number 1 and then every 32672th block is getting\nskipped.\n\npostgres=# select * from pg_visibility('pg_toast.pg_toast_16384') where\nall_visible = 'f';\n blkno | all_visible | all_frozen | pd_all_visible\n--------+-------------+------------+----------------\n 1 | f | f | f\n 32673 | f | f | f\n 65345 | f | f | f\n 98017 | f | f | f\n 130689 | f | f | f\n 163361 | f | f | f\n <snip>\n\nHaving investigated this a bit, I see that a relcache invalidation arrives\nafter 1st and then after every 32672th block is filled. That clears the\nrel->rd_smgr field and we lose the information about the saved target\nblock. 
The code then moves to extend the relation again and thus skips the\npreviously less-than-half filled block, losing the free space in that block.\n\npostgres=# SELECT * FROM\npage_header(get_raw_page('pg_toast.pg_toast_16384', 0));\n lsn | checksum | flags | lower | upper | special | pagesize |\nversion | prune_xid\n------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 1/15B37748 | 0 | 4 | 40 | 64 | 8192 | 8192 |\n 4 | 0\n(1 row)\n\npostgres=# SELECT * FROM\npage_header(get_raw_page('pg_toast.pg_toast_16384', 1));\n lsn | checksum | flags | lower | upper | special | pagesize |\nversion | prune_xid\n------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 1/15B39A28 | 0 | 4 | 28 | 7640 | 8192 | 8192 |\n 4 | 0\n(1 row)\n\npostgres=# SELECT * FROM\npage_header(get_raw_page('pg_toast.pg_toast_16384', 2));\n lsn | checksum | flags | lower | upper | special | pagesize |\nversion | prune_xid\n------------+----------+-------+-------+-------+---------+----------+---------+-----------\n 1/15B3BE08 | 0 | 4 | 40 | 64 | 8192 | 8192 |\n 4 | 0\n(1 row)\n\nSo the block 1 has a large amount of free space (upper - lower), which\nnever gets filled.\n\nI am not yet sure what causes the relcache invalidation at regular\nintervals. But if I have to guess, it could be because of a new VM (or\nFSM?) page getting allocated. I am bit puzzled because this issue seems to\nonly occur with toast tables since I tested the patch while writing it on a\nregular table and did not see any block remaining unfrozen. I tested only\nupto 450 blocks, but that shouldn't matter because with your test, we see\nthe problem with block 1 as well. So something to look into in more detail.\n\nWhile we could potentially fix this by what you'd done in the original\npatch and what Kuntal also suggested, i.e. by setting the PD_ALL_VISIBLE\nbit during page initialisation itself, I am a bit circumspect about that\napproach for two reasons:\n\n1. 
It requires us to then add extra logic to avoid clearing the bit during\ninsertions\n2. It requires us to also update the VM bit during page init or risk having\ndivergent views on the page-level bit and the VM bit.\n\nAnd even if we do that, this newly discovered problem of less-than-half\nfilled intermediate blocks remain. I wonder if we should instead track the\nlast used block in BulkInsertState and if the relcache invalidation flushes\nsmgr, start inserting again from the last saved block. In fact, we already\ntrack the last used buffer in BulkInsertState and that's enough to know the\nlast used block.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 27 Feb 2019 10:26:25 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 3:05 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n> Hi,\n>\n> Jeff Janes raised an issue [1] about PD_ALL_VISIBLE not being set correctly while loading data via COPY FREEZE and had also posted a draft patch.\n>\n> I now have what I think is a more complete patch. I took a slightly different approach and instead of setting PD_ALL_VISIBLE bit initially and then not clearing it during insertion, we now recheck the page for all-frozen, all-visible tuples just before switching to a new page. This allows us to then also mark set the visibility map bit, like we do in vacuumlazy.c\n\nI might be missing something but why do we need to recheck whether\neach page is all-frozen after insertion? I wonder if we can set\nall-frozen without checking all tuples again in this case.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Mon, 11 Mar 2019 17:03:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Mon, Mar 11, 2019 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n>\n>\n> I might be missing something but why do we need to recheck whether\n> each pages is all-frozen after insertion? I wonder if we can set\n> all-frozen without checking all tuples again in this case.\n>\n\nIt's possible that the user may have inserted unfrozen rows (via regular\nINSERT or COPY without FREEZE option) before inserting frozen rows. So we\ncan't set the all-visible/all-frozen flag unconditionally. I also find it\nsafer to do an explicit check to ensure we never accidentally mark a page\nas all-frozen. Since the check is performed immediately after the page\nbecomes full and only once per page, there shouldn't be any additional IO\ncost and the check should be quite fast.\n\nThanks,\nPavan\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 12 Mar 2019 13:24:16 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Tue, Mar 12, 2019 at 4:54 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n>\n> On Mon, Mar 11, 2019 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>\n>>\n>> I might be missing something but why do we need to recheck whether\n>> each pages is all-frozen after insertion? I wonder if we can set\n>> all-frozen without checking all tuples again in this case.\n>\n>\n> It's possible that the user may have inserted unfrozen rows (via regular INSERT or COPY without FREEZE option) before inserting frozen rows.\n\nI think that since COPY FREEZE can be executed only when the table is\ncreated or truncated within the transaction, other users cannot insert\nany rows during COPY FREEZE.\n\n> So we can't set the all-visible/all-frozen flag unconditionally. I also find it safer to do an explicit check to ensure we never accidentally mark a page as all-frozen. Since the check is performed immediately after the page becomes full and only once per page, there shouldn't be any additional IO cost and the check should be quite fast.\n\nI'd suggest measuring the performance overhead. I can imagine one use\ncase of COPY FREEZE is loading a very large table. Since, in\naddition to setting visibility map bits, this patch could scan a very large\ntable, I'm concerned about how much performance is degraded by this\npatch.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Wed, 13 Mar 2019 15:03:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Wed, Mar 13, 2019 at 11:37 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n>\n>\n> I think that since COPY FREEZE can be executed only when the table is\n> created or truncated within the transaction other users cannot insert\n> any rows during COPY FREEZE.\n>\n>\nRight. But the truncating transaction can insert unfrozen rows into the\ntable before inserting more rows via COPY FREEZE.\n\npostgres=# CREATE EXTENSION pageinspect ;\nCREATE EXTENSION\npostgres=# BEGIN;\nBEGIN\npostgres=# TRUNCATE testtab ;\nTRUNCATE TABLE\npostgres=# INSERT INTO testtab VALUES (100, 200);\nINSERT 0 1\npostgres=# COPY testtab FROM STDIN WITH (FREEZE);\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1 2\n>> 2 3\n>> \\.\nCOPY 2\npostgres=# COMMIT;\n\npostgres=# SELECT lp, to_hex(t_infomask) FROM\nheap_page_items(get_raw_page('testtab', 0));\n lp | to_hex\n----+--------\n 1 | 800\n 2 | b00\n 3 | b00\n(3 rows)\n\nThe first row in inserted by regular insert and it's not frozen. The next 2\nare frozen. We can't mark such as page all-visible, all-frozen.\n\n\n>\n> I'd suggest to measure performance overhead. I can imagine one use\n> case of COPY FREEZE is the loading a very large table. Since in\n> addition to set visibility map bits this patch could scan a very large\n> table I'm concerned that how much performance is degraded by this\n> patch.\n>\n\nOk. I will run some tests. But please note that this patch is a bug fix to\naddress the performance issue that is caused by having to rewrite the\nentire table when all-visible bit is set on the page during first vacuum.\nSo while we may do some more work during COPY FREEZE, we're saving a lot of\npage writes during next vacuum. 
Also, since the scan that we are doing in\nthis patch is done on a page that should be in the buffer cache, we will\npay a bit in terms of CPU cost, but not anything in terms of IO cost.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 14 Mar 2019 13:47:21 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Mar 14, 2019 at 5:17 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n>\n>\n> On Wed, Mar 13, 2019 at 11:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>\n>>\n>> I think that since COPY FREEZE can be executed only when the table is\n>> created or truncated within the transaction other users cannot insert\n>> any rows during COPY FREEZE.\n>>\n>\n> Right. But the truncating transaction can insert unfrozen rows into the table before inserting more rows via COPY FREEZE.\n>\n> postgres=# CREATE EXTENSION pageinspect ;\n> CREATE EXTENSION\n> postgres=# BEGIN;\n> BEGIN\n> postgres=# TRUNCATE testtab ;\n> TRUNCATE TABLE\n> postgres=# INSERT INTO testtab VALUES (100, 200);\n> INSERT 0 1\n> postgres=# COPY testtab FROM STDIN WITH (FREEZE);\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n> >> 1 2\n> >> 2 3\n> >> \\.\n> COPY 2\n> postgres=# COMMIT;\n>\n> postgres=# SELECT lp, to_hex(t_infomask) FROM heap_page_items(get_raw_page('testtab', 0));\n> lp | to_hex\n> ----+--------\n> 1 | 800\n> 2 | b00\n> 3 | b00\n> (3 rows)\n>\n> The first row in inserted by regular insert and it's not frozen. The next 2 are frozen. We can't mark such as page all-visible, all-frozen.\n\nUnderstood. Thank you for explanation!\n\n>\n>>\n>>\n>> I'd suggest to measure performance overhead. I can imagine one use\n>> case of COPY FREEZE is the loading a very large table. Since in\n>> addition to set visibility map bits this patch could scan a very large\n>> table I'm concerned that how much performance is degraded by this\n>> patch.\n>\n>\n> Ok. I will run some tests. But please note that this patch is a bug fix to address the performance issue that is caused by having to rewrite the entire table when all-visible bit is set on the page during first vacuum. So while we may do some more work during COPY FREEZE, we're saving a lot of page writes during next vacuum. 
Also, since the scan that we are doing in this patch is done on a page that should be in the buffer cache, we will pay a bit in terms of CPU cost, but not anything in terms of IO cost.\n\nAgreed. I had been misunderstanding this patch. The page scan during\nCOPY FREEZE is necessary and it's very cheaper than doing in the first\nvacuum.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Thu, 14 Mar 2019 19:20:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi Pavan,\n\nOn 3/14/19 2:20 PM, Masahiko Sawada wrote:\n> On Thu, Mar 14, 2019 at 5:17 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>>\n>> Ok. I will run some tests. But please note that this patch is a bug fix to address the performance issue that is caused by having to rewrite the entire table when all-visible bit is set on the page during first vacuum. So while we may do some more work during COPY FREEZE, we're saving a lot of page writes during next vacuum. Also, since the scan that we are doing in this patch is done on a page that should be in the buffer cache, we will pay a bit in terms of CPU cost, but not anything in terms of IO cost.\n> \n> Agreed. I had been misunderstanding this patch. The page scan during\n> COPY FREEZE is necessary and it's very cheaper than doing in the first\n> vacuum.\n\nI have removed Ibrar as a reviewer since there has been no review from \nthem in three weeks, and too encourage others to have a look.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 14:30:53 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Mar 14, 2019 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n>\n> >\n> >\n> > Ok. I will run some tests. But please note that this patch is a bug fix\n> to address the performance issue that is caused by having to rewrite the\n> entire table when all-visible bit is set on the page during first vacuum.\n> So while we may do some more work during COPY FREEZE, we're saving a lot of\n> page writes during next vacuum. Also, since the scan that we are doing in\n> this patch is done on a page that should be in the buffer cache, we will\n> pay a bit in terms of CPU cost, but not anything in terms of IO cost.\n>\n> Agreed. I had been misunderstanding this patch. The page scan during\n> COPY FREEZE is necessary and it's very cheaper than doing in the first\n> vacuum.\n>\n\nThanks for agreeing to the need of this bug fix. I ran some simple tests\nanyways and here are the results.\n\nThe test consists of a simple table with three columns, two integers and\none char(100). I then ran COPY (FREEZE), loading 7M rows, followed by a\nVACUUM. The total size of the raw data is about 800MB and the table size in\nPostgres is just under 1GB. The results for 3 runs in milliseconds are:\n\nMaster:\n COPY FREEZE: 40243.725 40309.675 40783.836\n VACUUM: 2685.871 2517.445 2508.452\n\nPatched:\n COPY FREEZE: 40942.410 40495.303 40638.075\n VACUUM: 25.067 35.793 25.390\n\nSo there is a slight increase in the time to run COPY FREEZE, but a\nsignificant reduction in time to VACUUM the table. 
The benefits will only\ngo up if the table is vacuumed much later when most of the pages are\nalready written to the disk and removed from shared buffers and/or kernel\ncache.\n\nI hope this satisfies your doubts regarding performance implications of the\npatch.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 21 Mar 2019 19:57:30 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis patch is particularly helpful in processing OpenStreetMap Data in PostGIS.\r\nOpenStreetMap is imported as a stream of 300-900 (depending on settings) gigabytes, that are needing a VACUUM after a COPY FREEZE.\r\nWith this patch, the first and usually the last transforming query is performed much faster after initial load.\r\n\r\nI have read this patch and have no outstanding comments on it.\r\nPavan Deolasee demonstrates the expected speed improvement.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 21 Mar 2019 14:45:23 +0000",
"msg_from": "Darafei Praliaskouski <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 11:27 PM Pavan Deolasee\n<pavan.deolasee@gmail.com> wrote:\n>\n>\n>\n> On Thu, Mar 14, 2019 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>>\n>> >\n>> >\n>> > Ok. I will run some tests. But please note that this patch is a bug fix to address the performance issue that is caused by having to rewrite the entire table when all-visible bit is set on the page during first vacuum. So while we may do some more work during COPY FREEZE, we're saving a lot of page writes during next vacuum. Also, since the scan that we are doing in this patch is done on a page that should be in the buffer cache, we will pay a bit in terms of CPU cost, but not anything in terms of IO cost.\n>>\n>> Agreed. I had been misunderstanding this patch. The page scan during\n>> COPY FREEZE is necessary and it's very cheaper than doing in the first\n>> vacuum.\n>\n>\n> Thanks for agreeing to the need of this bug fix. I ran some simple tests anyways and here are the results.\n>\n> The test consists of a simple table with three columns, two integers and one char(100). I then ran COPY (FREEZE), loading 7M rows, followed by a VACUUM. The total size of the raw data is about 800MB and the table size in Postgres is just under 1GB. The results for 3 runs in milliseconds are:\n>\n> Master:\n> COPY FREEZE: 40243.725 40309.675 40783.836\n> VACUUM: 2685.871 2517.445 2508.452\n>\n> Patched:\n> COPY FREEZE: 40942.410 40495.303 40638.075\n> VACUUM: 25.067 35.793 25.390\n>\n> So there is a slight increase in the time to run COPY FREEZE, but a significant reduction in time to VACUUM the table. 
The benefits will only go up if the table is vacuumed much later when most of the pages are already written to the disk and removed from shared buffers and/or kernel cache.\n>\n> I hope this satisfies your doubts regarding performance implications of the patch.\n\nThank you for the performance testing, that's a great improvement!\n\nI've looked at the patch and have comments and questions.\n\n+ /*\n+ * While we are holding the lock on the page, check if all tuples\n+ * in the page are marked frozen at insertion. We can safely mark\n+ * such page all-visible and set visibility map bits too.\n+ */\n+ if (CheckPageIsAllFrozen(relation, buffer))\n+ PageSetAllVisible(page);\n+\n+ MarkBufferDirty(buffer);\n\nMaybe we don't need to mark the buffer dirty if the page is not set all-visible.\n\n-----\n+ if (PageIsAllVisible(page))\n+ {\n+ uint8 vm_status = visibilitymap_get_status(relation,\n+ targetBlock, &myvmbuffer);\n+ uint8 flags = 0;\n+\n+ /* Set the VM all-frozen bit to flag, if needed */\n+ if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)\n+ flags |= VISIBILITYMAP_ALL_VISIBLE;\n+ if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0)\n+ flags |= VISIBILITYMAP_ALL_FROZEN;\n+\n+ Assert(BufferIsValid(myvmbuffer));\n+ if (flags != 0)\n+ visibilitymap_set(relation, targetBlock, buffer, InvalidXLogRecPtr,\n+ myvmbuffer, InvalidTransactionId, flags);\n\nSince CheckPageIsAllFrozen() is used only when inserting frozen tuples\nCheckAndSetPageAllVisible() seems to be implemented for the same\nsituation. If we have CheckAndSetPageAllVisible() for only this\nsituation we would rather need to check that the VM status of the page\nshould be 0 and then set two flags to the page? The 'flags' will\nalways be (VISIBILITYMAP_ALL_FROZEN | VISIBILITYMAP_ALL_VISIBLE) in\ncopy freeze case. 
I'm confused that this function has both code that\nassumes some special situations and code that can be used in generic\nsituations.\n\n-----\nPerhaps we can add some tests for this feature to pg_visibility module.\n\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n",
"msg_date": "Fri, 22 Mar 2019 15:45:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> I've looked at the patch and have comments and questions.\n>\n> + /*\n> + * While we are holding the lock on the page, check if all tuples\n> + * in the page are marked frozen at insertion. We can safely mark\n> + * such page all-visible and set visibility map bits too.\n> + */\n> + if (CheckPageIsAllFrozen(relation, buffer))\n> + PageSetAllVisible(page);\n> +\n> + MarkBufferDirty(buffer);\n>\n> Maybe we don't need to mark the buffer dirty if the page is not set\n> all-visible.\n>\n>\nYeah, makes sense. Fixed.\n\n\n> If we have CheckAndSetPageAllVisible() for only this\n> situation we would rather need to check that the VM status of the page\n> should be 0 and then set two flags to the page? The 'flags' will\n> always be (VISIBILITYMAP_ALL_FROZEN | VISIBILITYMAP_ALL_VISIBLE) in\n> copy freeze case. I'm confused that this function has both code that\n> assumes some special situations and code that can be used in generic\n> situations.\n>\n\nIf a second COPY FREEZE is run within the same transaction and if starts\ninserting into the page used by the previous COPY FREEZE, then the page\nwill already be marked all-visible/all-frozen. So we can skip repeating the\noperation again. While it's quite unlikely that someone will do that and I\ncan't think of a situation where only one of those flags will be set, I\ndon't see a harm in keeping the code as is. This code is borrowed from\nvacuumlazy.c and at some point we can even move it to some common location.\n\n\n> Perhaps we can add some tests for this feature to pg_visibility module.\n>\n>\nThat's a good idea. Please see if the tests included in the attached patch\nare enough.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 26 Mar 2019 14:56:31 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Thank you for sharing the updated patch!\n\nOn Tue, Mar 26, 2019 at 6:26 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n>\n> On Fri, Mar 22, 2019 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> I've looked at the patch and have comments and questions.\n>>\n>> + /*\n>> + * While we are holding the lock on the page, check if all tuples\n>> + * in the page are marked frozen at insertion. We can safely mark\n>> + * such page all-visible and set visibility map bits too.\n>> + */\n>> + if (CheckPageIsAllFrozen(relation, buffer))\n>> + PageSetAllVisible(page);\n>> +\n>> + MarkBufferDirty(buffer);\n>>\n>> Maybe we don't need to mark the buffer dirty if the page is not set all-visible.\n>>\n>\n> Yeah, makes sense. Fixed.\n>\n>>\n>> If we have CheckAndSetPageAllVisible() for only this\n>> situation we would rather need to check that the VM status of the page\n>> should be 0 and then set two flags to the page? The 'flags' will\n>> always be (VISIBILITYMAP_ALL_FROZEN | VISIBILITYMAP_ALL_VISIBLE) in\n>> copy freeze case. I'm confused that this function has both code that\n>> assumes some special situations and code that can be used in generic\n>> situations.\n>\n>\n> If a second COPY FREEZE is run within the same transaction and if starts inserting into the page used by the previous COPY FREEZE, then the page will already be marked all-visible/all-frozen. So we can skip repeating the operation again. While it's quite unlikely that someone will do that and I can't think of a situation where only one of those flags will be set, I don't see a harm in keeping the code as is. This code is borrowed from vacuumlazy.c and at some point we can even move it to some common location.\n\nThank you for explanation, agreed.\n\n>\n>>\n>> Perhaps we can add some tests for this feature to pg_visibility module.\n>>\n>\n> That's a good idea. Please see if the tests included in the attached patch are enough.\n>\n\nThe patch looks good to me. 
There is no comment from me.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Mar 2019 13:13:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 9:47 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n>\n> The patch looks good to me. There is no comment from me.\n>\n>\nThanks for your review! Updated patch attached since patch failed to apply\nafter recent changes in the master.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 3 Apr 2019 10:19:17 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nI've been looking at this patch for a while, and it seems pretty much RFC,\nso barring objections I'll take care of that once I do a bit more testing\nand review. Unless someone else wants to take care of that.\n\nFWIW I wonder if we should add the code for partitioned tables to\nCopyFrom, considering that's unsupported and so can't be tested etc. It's\nnot a huge amount of code, of course.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 4 Apr 2019 19:10:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-03 10:19:17 +0530, Pavan Deolasee wrote:\n> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> index c1fd7b78ce..09df70a3ac 100644\n> --- a/src/backend/commands/copy.c\n> +++ b/src/backend/commands/copy.c\n> @@ -2833,6 +2833,15 @@ CopyFrom(CopyState cstate)\n> \t\t\t\t\t!has_instead_insert_row_trig &&\n> \t\t\t\t\tresultRelInfo->ri_FdwRoutine == NULL;\n> \n> +\t\t\t\t/*\n> +\t\t\t\t * Note: As of PG12, COPY FREEZE is not supported on\n> +\t\t\t\t * partitioned table. Nevertheless have this check in place so\n> +\t\t\t\t * that we do the right thing if it ever gets supported.\n> +\t\t\t\t */\n> +\t\t\t\tif (ti_options & TABLE_INSERT_FROZEN)\n> +\t\t\t\t\tCheckAndSetAllVisibleBulkInsertState(resultRelInfo->ri_RelationDesc,\n> +\t\t\t\t\t\t\tbistate);\n> +\n> \t\t\t\t/*\n> \t\t\t\t * We'd better make the bulk insert mechanism gets a new\n> \t\t\t\t * buffer when the partition being inserted into changes.\n> @@ -3062,6 +3071,15 @@ CopyFrom(CopyState cstate)\n> \t\t\t\t\t\t\t\tfirstBufferedLineNo);\n> \t}\n> \n> +\t/*\n> +\t * If we are inserting frozen tuples, check if the last page used can also\n> +\t * be marked as all-visible and all-frozen. This ensures that a table can\n> +\t * be fully frozen when the data is loaded.\n> +\t */\n> +\tif (ti_options & TABLE_INSERT_FROZEN)\n> +\t\tCheckAndSetAllVisibleBulkInsertState(resultRelInfo->ri_RelationDesc,\n> +\t\t\t\tbistate);\n> +\n> \t/* Done, clean up */\n> \terror_context_stack = errcallback.previous;\n\nI'm totally not OK with this from a layering\nPOV. CheckAndSetAllVisibleBulkInsertState is entirely heap specific\n(without being named such), whereas all the heap specific bits are\ngetting moved below tableam.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 10:24:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 2019-Apr-04, Andres Freund wrote:\n\n> I'm totally not OK with this from a layering\n> POV. CheckAndSetAllVisibleBulkInsertState is entirely heap specific\n> (without being named such), whereas all the heap specific bits are\n> getting moved below tableam.\n\nThis is a fair complaint, but on the other hand the COPY changes for\ntable AM are still being developed, so there's no ground on which to\nrebase this patch. Do you have a timeline on getting the COPY one\ncommitted?\n\nI think it's fair to ask the RMT for an exceptional extension of a\ncouple of working days for this patch.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Apr 2019 16:15:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 16:15:54 -0300, Alvaro Herrera wrote:\n> On 2019-Apr-04, Andres Freund wrote:\n> \n> > I'm totally not OK with this from a layering\n> > POV. CheckAndSetAllVisibleBulkInsertState is entirely heap specific\n> > (without being named such), whereas all the heap specific bits are\n> > getting moved below tableam.\n> \n> This is a fair complaint, but on the other hand the COPY changes for\n> table AM are still being developed, so there's no ground on which to\n> rebase this patch. Do you have a timeline on getting the COPY one\n> committed?\n\n~2h. Just pondering the naming of some functions etc. Don't think\nthere's a large interdependency though.\n\nBut even if tableam weren't committed, I'd still argue that it's\nstructurally done wrong in the patch right now. FWIW, I actually think\nthis whole approach isn't quite right - this shouldn't be done as a\nsecondary action after we'd already inserted, with a separate\nlock-unlock cycle etc.\n\nAlso, how is this code even close to correct?\nCheckAndSetPageAllVisible() modifies the buffer in a crucial way, and\nthere's no WAL logging? Without even a comment arguing why that's OK (I\ndon't think it is)?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 12:23:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 12:23:08 -0700, Andres Freund wrote:\n> Also, how is this code even close to correct?\n> CheckAndSetPageAllVisible() modifies the buffer in a crucial way, and\n> there's no WAL logging? Without even a comment arguing why that's OK (I\n> don't think it is)?\n\nPeter Geoghegan just reminded me over IM that there's actually logging\ninside log_heap_visible(), called from visibilitymap_set(). Still lacks\na critical section though.\n\nI still think this is the wrong architecture.\n\nGreetings,\n\nAndres Freund\n\nPS: We're going to have to revamp visibilitymap_set() soon-ish - the\nfact that it directly calls heap routines inside is bad, means that\nadditional AMs e.g. zheap has to reimplement that routine.\n\n\n",
"msg_date": "Thu, 4 Apr 2019 14:45:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 2019-Apr-04, Andres Freund wrote:\n\n> On 2019-04-04 12:23:08 -0700, Andres Freund wrote:\n> > Also, how is this code even close to correct?\n> > CheckAndSetPageAllVisible() modifies the buffer in a crucial way, and\n> > there's no WAL logging? Without even a comment arguing why that's OK (I\n> > don't think it is)?\n> \n> Peter Geoghegan just reminded me over IM that there's actually logging\n> inside log_heap_visible(), called from visibilitymap_set(). Still lacks\n> a critical section though.\n\nHmm, isn't there already a critical section in visibilitymap_set itself?\n\n> I still think this is the wrong architecture.\n\nHmm.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 00:06:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 00:06:04 -0300, Alvaro Herrera wrote:\n> On 2019-Apr-04, Andres Freund wrote:\n> \n> > On 2019-04-04 12:23:08 -0700, Andres Freund wrote:\n> > > Also, how is this code even close to correct?\n> > > CheckAndSetPageAllVisible() modifies the buffer in a crucial way, and\n> > > there's no WAL logging? Without even a comment arguing why that's OK (I\n> > > don't think it is)?\n> > \n> > Peter Geoghegan just reminded me over IM that there's actually logging\n> > inside log_heap_visible(), called from visibilitymap_set(). Still lacks\n> > a critical section though.\n> \n> Hmm, isn't there already a critical section in visibilitymap_set itself?\n\nThere is, but the proposed code sets all visible on the page, and marks\nthe buffer dirty, before calling visibilitymap_set.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 20:07:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 00:06:04 -0300, Alvaro Herrera wrote:\n> On 2019-Apr-04, Andres Freund wrote:\n> > I still think this is the wrong architecture.\n> \n> Hmm.\n\nI think the right approach would be to do all of this in heap_insert and\nheap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\nis specified, remember whether it is either currently empty, or is\nalready marked as all-visible. If previously empty, mark it as all\nvisible at the end. If already all visible, there's no need to change\nthat. If not yet all-visible, no need to do anything, since it can't\nhave been inserted with COPY FREEZE. Do you see any problem doing it\nthat way?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 20:35:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn Fri, Apr 5, 2019 at 9:05 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n>\n> I think the right approach would be to do all of this in heap_insert and\n> heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> is specified, remember whether it is either currently empty, or is\n> already marked as all-visible. If previously empty, mark it as all\n> visible at the end. If already all visible, there's no need to change\n> that. If not yet all-visible, no need to do anything, since it can't\n> have been inserted with COPY FREEZE.\n\n\nWe're doing roughly the same. If we are running INSERT_FROZEN, whenever\nwe're about to switch to a new page, we check if the previous page should\nbe marked all-frozen and do it that way. The special code in copy.c is\nnecessary to take care of the last page which we don't get to handle in the\nregular code path.\n\nOr are you suggesting that we don't even rescan the page for all-frozen\ntuples at the end and just simply mark it all-frozen at the start, when the\nfirst tuple is inserted and then don't touch the PD_ALL_VISIBLE/visibility\nmap bit as we go on inserting more tuples in the page?\n\nAnyways, if major architectural changes are required then it's probably too\nlate to consider this for PG12, even though it's more of a bug fix and a\ncandidate for back-patching too.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 5 Apr 2019 09:20:36 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 8:37 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-04-05 00:06:04 -0300, Alvaro Herrera wrote:\n>\n> >\n> > Hmm, isn't there already a critical section in visibilitymap_set itself?\n>\n> There is, but the proposed code sets all visible on the page, and marks\n> the buffer dirty, before calling visibilitymap_set.\n>\n\nHow's it any different than what we're doing at vacuumlazy.c:1322? We set\nthe page-level bit, mark the buffer dirty and then call\nvisibilitymap_set(), all outside a critical section.\n\n1300 /* mark page all-visible, if appropriate */\n1301 if (all_visible && !all_visible_according_to_vm)\n1302 {\n1303 uint8 flags = VISIBILITYMAP_ALL_VISIBLE;\n1304\n1305 if (all_frozen)\n1306 flags |= VISIBILITYMAP_ALL_FROZEN;\n1307\n1308 /*\n1309 * It should never be the case that the visibility map\npage is set\n1310 * while the page-level bit is clear, but the reverse is\nallowed\n1311 * (if checksums are not enabled). Regardless, set the\nboth bits\n1312 * so that we get back in sync.\n1313 *\n1314 * NB: If the heap page is all-visible but the VM bit is\nnot set,\n1315 * we don't need to dirty the heap page. However, if\nchecksums\n1316 * are enabled, we do need to make sure that the heap page\nis\n1317 * dirtied before passing it to visibilitymap_set(),\nbecause it\n1318 * may be logged. Given that this situation should only\nhappen in\n1319 * rare cases after a crash, it is not worth optimizing.\n1320 */\n1321 PageSetAllVisible(page);\n1322 MarkBufferDirty(buf);\n1323 visibilitymap_set(onerel, blkno, buf, InvalidXLogRecPtr,\n1324 vmbuffer, visibility_cutoff_xid, flags);\n1325 }\n\nAs the first para in that comment says, I thought it's ok for page-level\nbit to be set but the visibility bit to be clear, but not the vice versa.\nThe proposed code does not introduce any new behaviour AFAICS. But I might\nbe missing something.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 5 Apr 2019 09:25:18 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the right approach would be to do all of this in heap_insert and\n> heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> is specified, remember whether it is either currently empty, or is\n> already marked as all-visible. If previously empty, mark it as all\n> visible at the end. If already all visible, there's no need to change\n> that. If not yet all-visible, no need to do anything, since it can't\n> have been inserted with COPY FREEZE. Do you see any problem doing it\n> that way?\n\nDo we want to add overhead to these hot-spot routines for this purpose?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2019 23:57:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 09:20:36 +0530, Pavan Deolasee wrote:\n> On Fri, Apr 5, 2019 at 9:05 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think the right approach would be to do all of this in heap_insert and\n> > heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> > is specified, remember whether it is either currently empty, or is\n> > already marked as all-visible. If previously empty, mark it as all\n> > visible at the end. If already all visible, there's no need to change\n> > that. If not yet all-visible, no need to do anything, since it can't\n> > have been inserted with COPY FREEZE.\n> \n> \n> We're doing roughly the same. If we are running INSERT_FROZEN, whenever\n> we're about to switch to a new page, we check if the previous page should\n> be marked all-frozen and do it that way. The special code in copy.c is\n> necessary to take care of the last page which we don't get to handle in the\n> regular code path.\n\nWell, it's not the same, because you need extra code from copy.c, extra\nlock cycles, and extra WAL logging.\n\n\n> Or are you suggesting that we don't even rescan the page for all-frozen\n> tuples at the end and just simply mark it all-frozen at the start, when the\n> first tuple is inserted and then don't touch the PD_ALL_VISIBLE/visibility\n> map bit as we go on inserting more tuples in the page?\n\nCorrect. If done right that should be cheaper (no extra scans, less WAL\nlogging), without requiring some new dispatch logic from copy.c.\n\n\n> Anyways, if major architectural changes are required then it's probably too\n> late to consider this for PG12, even though it's more of a bug fix and a\n> candidate for back-patching too.\n\nLet's just see how bad it looks? I don't feel like we'd need to be super\nstrict about it. If it looks simple enough I'd e.g. be ok to merge this\nsoon after freeze, and backpatch around maybe 12.1 or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 20:58:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 23:57:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think the right approach would be to do all of this in heap_insert and\n> > heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> > is specified, remember whether it is either currently empty, or is\n> > already marked as all-visible. If previously empty, mark it as all\n> > visible at the end. If already all visible, there's no need to change\n> > that. If not yet all-visible, no need to do anything, since it can't\n> > have been inserted with COPY FREEZE. Do you see any problem doing it\n> > that way?\n> \n> Do we want to add overhead to these hot-spot routines for this purpose?\n\nFor heap_multi_insert I can't see it being a problem - it's only used\nfrom copy.c, and the cost should be \"smeared\" over many tuples. I'd\nassume that compared with locking a page, WAL logging, etc, it'd not\neven meaningfully show up for heap_insert. Especially because we already\nhave codepaths for options & HEAP_INSERT_FROZEN in\nheap_prepare_insert(), and I'd assume those could be combined.\n\nI think we should measure it, but I don't think that one or two\nadditional, very well predicted, branches are going to be measurable\nin those routines.\n\nThe patch, as implemented, has modifications in\nRelationGetBufferForTuple(), that seems like they'd be more expensive.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 21:04:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > I think the right approach would be to do all of this in heap_insert and\n> > heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> > is specified, remember whether it is either currently empty, or is\n> > already marked as all-visible. If previously empty, mark it as all\n> > visible at the end. If already all visible, there's no need to change\n> > that. If not yet all-visible, no need to do anything, since it can't\n> > have been inserted with COPY FREEZE. Do you see any problem doing it\n> > that way?\n>\n> Do we want to add overhead to these hot-spot routines for this purpose?\n>\n\nSizing the overhead: workflows right now don't end with COPY FREEZE - you\nneed another VACUUM to set maps.\nAnything that lets you skip that VACUUM (and faster than that VACUUM\nitself) is helping. You specifically asked for it to be skippable with\nFREEZE anyway.\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Fri, 5 Apr 2019 08:38:34 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-05 08:38:34 +0300, Darafei \"Komяpa\" Praliaskouski wrote:\n> On Fri, Apr 5, 2019 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Andres Freund <andres@anarazel.de> writes:\n> > > I think the right approach would be to do all of this in heap_insert and\n> > > heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> > > is specified, remember whether it is either currently empty, or is\n> > > already marked as all-visible. If previously empty, mark it as all\n> > > visible at the end. If already all visible, there's no need to change\n> > > that. If not yet all-visible, no need to do anything, since it can't\n> > > have been inserted with COPY FREEZE. Do you see any problem doing it\n> > > that way?\n> >\n> > Do we want to add overhead to these hot-spot routines for this purpose?\n> >\n> \n> Sizing the overhead: workflows right now don't end with COPY FREEZE - you\n> need another VACUUM to set maps.\n> Anything that lets you skip that VACUUM (and faster than that VACUUM\n> itself) is helping. You specifically asked for it to be skippable with\n> FREEZE anyway.\n\nTom's point was that the routines I was suggesting to adapt above aren't\njust used for COPY FREEZE.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Apr 2019 22:42:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 21:04:49 -0700, Andres Freund wrote:\n> On 2019-04-04 23:57:58 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I think the right approach would be to do all of this in heap_insert and\n> > > heap_multi_insert. Whenever starting to work on a page, if INSERT_FROZEN\n> > > is specified, remember whether it is either currently empty, or is\n> > > already marked as all-visible. If previously empty, mark it as all\n> > > visible at the end. If already all visible, there's no need to change\n> > > that. If not yet all-visible, no need to do anything, since it can't\n> > > have been inserted with COPY FREEZE. Do you see any problem doing it\n> > > that way?\n> > \n> > Do we want to add overhead to these hot-spot routines for this purpose?\n> \n> For heap_multi_insert I can't see it being a problem - it's only used\n> from copy.c, and the cost should be \"smeared\" over many tuples. I'd\n> assume that compared with locking a page, WAL logging, etc, it'd not\n> even meaningfully show up for heap_insert. Especially because we already\n> have codepaths for options & HEAP_INSERT_FROZEN in\n> heap_prepare_insert(), and I'd assume those could be combined.\n> \n> I think we should measure it, but I don't think that one or two\n> additional, very well predictd, branches are going to be measurable in\n> in those routines.\n> \n> The patch, as implemented, has modifications in\n> RelationGetBufferForTuple(), that seems like they'd be more expensive.\n\nHere's a *prototype* patch for this. It only implements what I\ndescribed for heap_multi_insert, not for plain inserts. I wanted to see\nwhat others think before investing additional time.\n\nI don't think it's possible to see the overhead of the changed code in\nheap_multi_insert(), and probably - with less confidence - that it's\nalso going to be ok for heap_insert(). 
But we gotta measure that.\n\n\nThis avoids an extra WAL record for setting empty pages to all visible,\nby adding XLH_INSERT_ALL_VISIBLE_SET & XLH_INSERT_ALL_FROZEN_SET, and\nsetting those when appropriate in heap_multi_insert. Unfortunately\ncurrently visibilitymap_set() doesn't really properly allow to do this,\nas it has embedded WAL logging for heap.\n\nI think we should remove the WAL logging from visibilitymap_set(), and\nmove it to a separate, heap specific, function. Right now different\ntableams e.g. would have to reimplement visibilitymap_set(), so that's a\nsecond need to separate that functionality. Let me try to come up with\na proposal.\n\n\nThe patch currently does a vmbuffer_pin() while holding an exclusive\nlwlock on the page. That's something we normally try to avoid - but I\nthink it's probably OK here, because INSERT_FROZEN can only be used to\ninsert into a new relfilenode (which thus no other session can see). I\nthink it's preferable to have this logic specific to the\nINSERT_FROZEN path, rather than adding nontrivial complications to\nRelationGetBufferForTuple().\n\nI noticed that, before this patch, we do a\n\tif (vmbuffer != InvalidBuffer)\n\t\tReleaseBuffer(vmbuffer);\nafter every filled page - that doesn't strike me as particularly smart -\nit's pretty likely that the next heap page to be filled is going to be\non the same vm page as the previous iteration.\n\n\nI noticed one small oddity that I think is common to all the approaches\npresented in this thread so far. 
After\n\nBEGIN;\nTRUNCATE foo;\nCOPY foo(i) FROM '/tmp/foo' WITH FREEZE;\nCOPY foo(i) FROM '/tmp/foo' WITH FREEZE;\nCOPY foo(i) FROM '/tmp/foo' WITH FREEZE;\nCOMMIT;\n\nwe currently end up with pages like:\n┌───────┬───────────┬──────────┬───────┬───────┬───────┬─────────┬──────────┬─────────┬───────────┐\n│ blkno │ lsn │ checksum │ flags │ lower │ upper │ special │ pagesize │ version │ prune_xid │\n├───────┼───────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────┤\n│ 0 │ 0/50B5488 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 1 │ 0/50B6360 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 2 │ 0/50B71B8 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 3 │ 0/50B8028 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 4 │ 0/50B8660 │ 0 │ 4 │ 408 │ 5120 │ 8192 │ 8192 │ 4 │ 0 │\n│ 5 │ 0/50B94B8 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 6 │ 0/50BA328 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 7 │ 0/50BB180 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 8 │ 0/50BBFD8 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 9 │ 0/50BCF88 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 10 │ 0/50BDDE0 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 11 │ 0/50BEC50 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 12 │ 0/50BFAA8 │ 0 │ 4 │ 928 │ 960 │ 8192 │ 8192 │ 4 │ 0 │\n│ 13 │ 0/50C06F8 │ 0 │ 4 │ 792 │ 2048 │ 8192 │ 8192 │ 4 │ 0 │\n└───────┴───────────┴──────────┴───────┴───────┴───────┴─────────┴──────────┴─────────┴───────────┘\n(14 rows)\n\nNote how block 4 has more space available. 
That's because the\nvisibilitymap_pin() called in the first COPY has to vm_extend(), which\nin turn does:\n\n\t/*\n\t * Send a shared-inval message to force other backends to close any smgr\n\t * references they may have for this rel, which we are about to change.\n\t * This is a useful optimization because it means that backends don't have\n\t * to keep checking for creation or extension of the file, which happens\n\t * infrequently.\n\t */\n\tCacheInvalidateSmgr(rel->rd_smgr->smgr_rnode);\n\nwhich invalidates ->rd_smgr->smgr_targblock *after* the first COPY,\nbecause that's when the pending smgr invalidations are sent out. That's\nfar from great, but it doesn't seem to be this patch's fault.\n\nIt seems to me we need a separate invalidation that doesn't close the\nwhole smgr relation, but just invalidates the VM specific fields.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 7 Apr 2019 18:04:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n> Here's a *prototype* patch for this. It only implements what I\n> described for heap_multi_insert, not for plain inserts. I wanted to see\n> what others think before investing additional time.\n\nPavan, are you planning to work on this for v13 CF1? Or have you lost\ninterest on the topic?\n\n- Andres\n\n\n",
"msg_date": "Tue, 28 May 2019 08:36:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Tue, 28 May 2019 at 4:36 PM, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n> > Here's a *prototype* patch for this. It only implements what I\n> > described for heap_multi_insert, not for plain inserts. I wanted to see\n> > what others think before investing additional time.\n>\n> Pavan, are you planning to work on this for v13 CF1? Or have you lost\n> interest on the topic?\n>\n\nYes, I plan to work on it.\n\nThanks,\nPavan\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 29 May 2019 09:20:28 +0100",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi Andres,\n\nOn Wed, May 29, 2019 at 1:50 PM Pavan Deolasee <pavan.deolasee@gmail.com>\nwrote:\n\n>\n>\n> On Tue, 28 May 2019 at 4:36 PM, Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n>> > Here's a *prototype* patch for this. It only implements what I\n>> > described for heap_multi_insert, not for plain inserts. I wanted to see\n>> > what others think before investing additional time.\n>>\n>> Pavan, are you planning to work on this for v13 CF1? Or have you lost\n>> interest on the topic?\n>>\n>\n> Yes, I plan to work on it.\n>\n>\nI am sorry, but I am not able to find time to get back to this because of\nother high priority items. If it still remains unaddressed in the next few\nweeks, I will pick it up again. But for now, I am happy if someone wants to\npick and finish the work.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 27 Jun 2019 11:02:15 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 11:02 AM Pavan Deolasee\n<pavan.deolasee@gmail.com> wrote:\n>\n>>> On 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n>>> > Here's a *prototype* patch for this. It only implements what I\n>>> > described for heap_multi_insert, not for plain inserts. I wanted to see\n>>> > what others think before investing additional time.\n>>>\n>>> Pavan, are you planning to work on this for v13 CF1? Or have you lost\n>>> interest on the topic?\n>>\n>>\n>> Yes, I plan to work on it.\n>>\n>\n> I am sorry, but I am not able to find time to get back to this because of other high priority items. If it still remains unaddressed in the next few weeks, I will pick it up again. But for now, I am happy if someone wants to pick and finish the work.\n>\n\nFair enough, I have marked the entry [1] in the coming CF as \"Returned\nwith Feedback\". I hope that is okay with you.\n\n[1] - https://commitfest.postgresql.org/23/2009/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 29 Jun 2019 01:26:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 12:56 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Thu, Jun 27, 2019 at 11:02 AM Pavan Deolasee\n> <pavan.deolasee@gmail.com> wrote:\n> >\n> >>> On 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n> >>> > Here's a *prototype* patch for this. It only implements what I\n> >>> > described for heap_multi_insert, not for plain inserts. I wanted to\n> see\n> >>> > what others think before investing additional time.\n> >>>\n> >>> Pavan, are you planning to work on this for v13 CF1? Or have you lost\n> >>> interest on the topic?\n> >>\n> >>\n> >> Yes, I plan to work on it.\n> >>\n> >\n> > I am sorry, but I am not able to find time to get back to this because\n> of other high priority items. If it still remains unaddressed in the next\n> few weeks, I will pick it up again. But for now, I am happy if someone\n> wants to pick and finish the work.\n> >\n>\n> Fair enough, I have marked the entry [1] in the coming CF as \"Returned\n> with Feedback\". I hope that is okay with you.\n>\n> [1] - https://commitfest.postgresql.org/23/2009/\n>\n>\n\nHi,\n\nAs Pavan mentioned in the last email that he is no longer interested in\nthat, so I want to\ntake the lead and want to finish that. It is a bug and needs to be fixed.\nI have rebased and the patch with the latest master and added some test\ncases (borrowed from Pavan's patch), and did some performance testing with\na table size of\n700MB (10Millions rows)\n\nCOPY WITH FREEZE took 21406.692ms and VACUUM took 2478.666ms\nCOPY WITH FREEZE took 23095.985ms and VACUUM took 26.309ms\n\nThe performance decrease in copy with the patch is only 7%, but we get\nquite adequate performance in VACUUM. 
In any case, this is a bug fix, so we\ncan ignore\nthe performance hit.\n\nThere are two issues left to address.\n\n1 - Andres: It only implements what I described for heap_multi_insert, not\nfor plain inserts.\nI wanted to see what others think before investing additional time.\n\nIn which condition we need that for plain inserts?\n\n2 - Andres: I think we should remove the WAL logging from\nvisibilitymap_set(), and\nmove it to a separate, heap specific, function. Right now different\ntableams e.g. would have to reimplement visibilitymap_set(), so that's a\nsecond need to separate that functionality. Let me try to come up with\na proposal.\n\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 5 Mar 2020 18:13:58 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 6:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n>\n> Thanks for picking up this patch. There's a minor typo:\n>\n> + * readable outside of this sessoin. Therefore\n> doing IO here isn't\n>\n> => session\n>\n> --\n> Justin\n>\n\nThanks, please see the updated and rebased patch. (master\n17a28b03645e27d73bf69a95d7569b61e58f06eb)\n\n-- \nIbrar Ahmed",
"msg_date": "Tue, 24 Mar 2020 22:06:47 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 10:06 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n>\n>\n> On Fri, Mar 13, 2020 at 6:58 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>>\n>> Thanks for picking up this patch. There's a minor typo:\n>>\n>> + * readable outside of this sessoin. Therefore\n>> doing IO here isn't\n>>\n>> => session\n>>\n>> --\n>> Justin\n>>\n>\n> Thanks, please see the updated and rebased patch. (master\n> 17a28b03645e27d73bf69a95d7569b61e58f06eb)\n>\n> --\n> Ibrar Ahmed\n>\n\nAndres while fixing the one FIXME in the patch\n\n\" visibilitymap_pin(relation, BufferGetBlockNumber(buffer),\n&vmbuffer);\n\n\n /*\n\n * FIXME: setting recptr here is a dirty dirty hack, to prevent\n\n * visibilitymap_set() from WAL logging.\n *\n\"\nI am not able to see any scenario where recptr is not set before reaching\nto that statement. Can you clarify why you think recptr will not be set at\nthat statement?\n\n\n-- \nIbrar Ahmed",
"msg_date": "Wed, 25 Mar 2020 19:37:22 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "This patch incurs a compiler warning, which probably is quite simple to fix:\n\nheapam.c: In function ‘heap_multi_insert’:\nheapam.c:2349:4: error: ‘recptr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n visibilitymap_set(relation, BufferGetBlockNumber(buffer), buffer,\n ^\nheapam.c:2136:14: note: ‘recptr’ was declared here\n XLogRecPtr recptr;\n ^\n\nPlease fix and submit a new version, I'm marking the entry Waiting on Author in\nthe meantime.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 1 Jul 2020 11:38:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 01.07.2020 12:38, Daniel Gustafsson wrote:\n> This patch incurs a compiler warning, which probably is quite simple to fix:\n>\n> heapam.c: In function ‘heap_multi_insert’:\n> heapam.c:2349:4: error: ‘recptr’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n> visibilitymap_set(relation, BufferGetBlockNumber(buffer), buffer,\n> ^\n> heapam.c:2136:14: note: ‘recptr’ was declared here\n> XLogRecPtr recptr;\n> ^\n>\n> Please fix and submit a new version, I'm marking the entry Waiting on Author in\n> the meantime.\n>\n> cheers ./daniel\n>\nThis patch looks very useful to me, so I want to pick it up.\n\n\nThe patch that fixes the compiler warning is in the attachment. Though, \nI'm not\nentirely satisfied with this fix. Also, the patch contains some FIXME \ncomments.\nI'll test it more and send fixes this week.\n\nQuestions from the first review pass:\n\n1) Do we need XLH_INSERT_ALL_VISIBLE_SET ? IIUC, in the patch it is always\nimplied by XLH_INSERT_ALL_FROZEN_SET.\n\n2) What does this comment mean? Does XXX refers to the lsn comparison? \nSince it\nis definitely necessary to update the VM.\n\n+ * XXX: This seems entirely unnecessary?\n+ *\n+ * FIXME: Theoretically we should only do this after we've\n+ * modified the heap - but it's safe to do it here I think,\n+ * because this means that the page previously was empty.\n+ */\n+ if (lsn > PageGetLSN(vmpage))\n+ visibilitymap_set(reln, blkno, InvalidBuffer, lsn, \nvmbuffer,\n+ InvalidTransactionId, vmbits);\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 14 Jul 2020 20:51:37 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 1:51 PM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> Questions from the first review pass:\n>\n> 1) Do we need XLH_INSERT_ALL_VISIBLE_SET ? IIUC, in the patch it is always\n> implied by XLH_INSERT_ALL_FROZEN_SET.\n\nI agree that it looks unnecessary to have two separate bits.\n\n> 2) What does this comment mean? Does XXX refers to the lsn comparison?\n> Since it\n> is definitely necessary to update the VM.\n>\n> + * XXX: This seems entirely unnecessary?\n> + *\n> + * FIXME: Theoretically we should only do this after we've\n> + * modified the heap - but it's safe to do it here I think,\n> + * because this means that the page previously was empty.\n> + */\n> + if (lsn > PageGetLSN(vmpage))\n> + visibilitymap_set(reln, blkno, InvalidBuffer, lsn,\n> vmbuffer,\n> + InvalidTransactionId, vmbits);\n\nI wondered about that too. The comment which precedes it was, I\nbelieve, originally written by me, and copied here from\nheap_xlog_visible(). But it's not clear very good practice to just\ncopy the comment like this. If the same logic applies, the code should\nsay that we're doing the same thing here as in heap_xlog_visible() for\nconsistency, or some such thing; after all, that's the primary place\nwhere that happens. But it looks like the XXX might have been added by\na second person who thought that we didn't need this logic at all, and\nthe FIXME by a third person who thought it was in the wrong place, so\nthe whole thing is really confusing at this point.\n\nI'm pretty worried about this, too:\n\n+ * FIXME: setting recptr here is a dirty dirty hack, to prevent\n+ * visibilitymap_set() from WAL logging.\n\nThat is indeed a dirty hack, and something needs to be done about it.\n\nI wonder if it was really all that smart to try to make the\nHEAP2_MULTI_INSERT do this instead of just issuing separate WAL\nrecords to mark it all-visible afterwards, but I don't see any reason\nwhy this can't be made to work. 
It needs substantially more polishing\nthan it's had, though, I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 31 Jul 2020 16:28:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 31.07.2020 23:28, Robert Haas wrote:\n> On Tue, Jul 14, 2020 at 1:51 PM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n>> Questions from the first review pass:\n>>\n>> 1) Do we need XLH_INSERT_ALL_VISIBLE_SET ? IIUC, in the patch it is always\n>> implied by XLH_INSERT_ALL_FROZEN_SET.\n> I agree that it looks unnecessary to have two separate bits.\n>\n>> 2) What does this comment mean? Does XXX refers to the lsn comparison?\n>> Since it\n>> is definitely necessary to update the VM.\n>>\n>> + * XXX: This seems entirely unnecessary?\n>> + *\n>> + * FIXME: Theoretically we should only do this after we've\n>> + * modified the heap - but it's safe to do it here I think,\n>> + * because this means that the page previously was empty.\n>> + */\n>> + if (lsn > PageGetLSN(vmpage))\n>> + visibilitymap_set(reln, blkno, InvalidBuffer, lsn,\n>> vmbuffer,\n>> + InvalidTransactionId, vmbits);\n> I wondered about that too. The comment which precedes it was, I\n> believe, originally written by me, and copied here from\n> heap_xlog_visible(). But it's not clear very good practice to just\n> copy the comment like this. If the same logic applies, the code should\n> say that we're doing the same thing here as in heap_xlog_visible() for\n> consistency, or some such thing; after all, that's the primary place\n> where that happens. 
But it looks like the XXX might have been added by\n> a second person who thought that we didn't need this logic at all, and\n> the FIXME by a third person who thought it was in the wrong place, so\n> the whole thing is really confusing at this point.\n>\n> I'm pretty worried about this, too:\n>\n> + * FIXME: setting recptr here is a dirty dirty hack, to prevent\n> + * visibilitymap_set() from WAL logging.\n>\n> That is indeed a dirty hack, and something needs to be done about it.\n>\n> I wonder if it was really all that smart to try to make the\n> HEAP2_MULTI_INSERT do this instead of just issuing separate WAL\n> records to mark it all-visible afterwards, but I don't see any reason\n> why this can't be made to work. It needs substantially more polishing\n> than it's had, though, I think.\n>\nNew version of the patch is in the attachment.\n\nNew design is more conservative and simpler:\n- pin the visibility map page in advance;\n- set PageAllVisible;\n- call visibilitymap_set() with its own XlogRecord, just like in other \nplaces.\n\nIt allows to remove most of the \"hacks\" and keep code clean.\nThe patch passes tests added in previous versions.\n\nI haven't tested performance yet, though. Maybe after tests, I'll bring \nsome optimizations back.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 3 Aug 2020 12:29:36 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Mon, Aug 3, 2020 at 2:29 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n> On 31.07.2020 23:28, Robert Haas wrote:\n> > On Tue, Jul 14, 2020 at 1:51 PM Anastasia Lubennikova\n> > <a.lubennikova@postgrespro.ru> wrote:\n> >> Questions from the first review pass:\n> >>\n> >> 1) Do we need XLH_INSERT_ALL_VISIBLE_SET ? IIUC, in the patch it is\n> always\n> >> implied by XLH_INSERT_ALL_FROZEN_SET.\n> > I agree that it looks unnecessary to have two separate bits.\n> >\n> >> 2) What does this comment mean? Does XXX refers to the lsn comparison?\n> >> Since it\n> >> is definitely necessary to update the VM.\n> >>\n> >> + * XXX: This seems entirely unnecessary?\n> >> + *\n> >> + * FIXME: Theoretically we should only do this after we've\n> >> + * modified the heap - but it's safe to do it here I think,\n> >> + * because this means that the page previously was empty.\n> >> + */\n> >> + if (lsn > PageGetLSN(vmpage))\n> >> + visibilitymap_set(reln, blkno, InvalidBuffer, lsn,\n> >> vmbuffer,\n> >> + InvalidTransactionId, vmbits);\n> > I wondered about that too. The comment which precedes it was, I\n> > believe, originally written by me, and copied here from\n> > heap_xlog_visible(). But it's not clear very good practice to just\n> > copy the comment like this. If the same logic applies, the code should\n> > say that we're doing the same thing here as in heap_xlog_visible() for\n> > consistency, or some such thing; after all, that's the primary place\n> > where that happens. 
But it looks like the XXX might have been added by\n> > a second person who thought that we didn't need this logic at all, and\n> > the FIXME by a third person who thought it was in the wrong place, so\n> > the whole thing is really confusing at this point.\n> >\n> > I'm pretty worried about this, too:\n> >\n> > + * FIXME: setting recptr here is a dirty dirty hack, to\n> prevent\n> > + * visibilitymap_set() from WAL logging.\n> >\n> > That is indeed a dirty hack, and something needs to be done about it.\n> >\n> > I wonder if it was really all that smart to try to make the\n> > HEAP2_MULTI_INSERT do this instead of just issuing separate WAL\n> > records to mark it all-visible afterwards, but I don't see any reason\n> > why this can't be made to work. It needs substantially more polishing\n> > than it's had, though, I think.\n> >\n> New version of the patch is in the attachment.\n>\n> New design is more conservative and simpler:\n> - pin the visibility map page in advance;\n> - set PageAllVisible;\n> - call visibilitymap_set() with its own XlogRecord, just like in other\n> places.\n>\n> It allows to remove most of the \"hacks\" and keep code clean.\n> The patch passes tests added in previous versions.\n>\n> I haven't tested performance yet, though. 
Maybe after tests, I'll bring\n> some optimizations back.\n>\n> --\n> Anastasia Lubennikova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n> Here are some performance results with a patched and unpatched master\nbranch.\nThe table used for the test contains three columns (integer, text,\nvarchar).\nThe total number of rows is 10000000 in total.\n\nUnpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\nCOPY: 9069.432 ms vacuum; 2567.961ms\nCOPY: 9004.533 ms vacuum: 2553.075ms\nCOPY: 8832.422 ms vacuum: 2540.742ms\n\nPatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\nCOPY: 10031.723 ms vacuum: 127.524 ms\nCOPY: 9985.109 ms vacuum: 39.953 ms\nCOPY: 9283.373 ms vacuum: 37.137 ms\n\nTime to take the copy slightly increased but the vacuum time significantly\ndecrease.\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 14 Aug 2020 20:52:36 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Unfortunately the latest patch doesn't apply cleanly on the master branch. Can you please share an updated one.",
"msg_date": "Mon, 17 Aug 2020 09:18:56 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Mon, Aug 17, 2020 at 2:19 PM Hamid Akhtar <hamid.akhtar@gmail.com> wrote:\n\n> Unfortunately the latest patch doesn't apply cleanly on the master branch.\n> Can you please share an updated one.\n\n\nPlease see the attached patch rebased with master (\na28d731a1187e8d9d8c2b6319375fcbf0a8debd5)\n-- \nIbrar Ahmed",
"msg_date": "Mon, 17 Aug 2020 16:26:40 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 2020-Aug-14, Ibrar Ahmed wrote:\n\n> The table used for the test contains three columns (integer, text,\n> varchar).\n> The total number of rows is 10000000 in total.\n> \n> Unpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> COPY: 9069.432 ms vacuum; 2567.961ms\n> COPY: 9004.533 ms vacuum: 2553.075ms\n> COPY: 8832.422 ms vacuum: 2540.742ms\n> \n> Patched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> COPY: 10031.723 ms vacuum: 127.524 ms\n> COPY: 9985.109 ms vacuum: 39.953 ms\n> COPY: 9283.373 ms vacuum: 37.137 ms\n> \n> Time to take the copy slightly increased but the vacuum time significantly\n> decrease.\n\n\"Slightly\"? It seems quite a large performance drop to me -- more than\n10%. Where is that time being spent? Andres said in [1] that he\nthought the performance shouldn't be affected noticeably, but this\ndoesn't seem to hold true. As I understand, the idea was that there\nwould be little or no additional WAL records .. only flags in the\nexisting record. So what is happening?\n\n[1] https://postgr.es/m/20190408010427.4l63qr7h2fjcyp77@alap3.anarazel.de\n\nAlso, when Andres posted this patch first, he said this was only for\nheap_multi_insert because it was a prototype. But I think we expect\nthat the table_insert path (CIM_SINGLE mode in copy) should also receive\nthat treatment.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Aug 2020 19:54:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 18.08.2020 02:54, Alvaro Herrera wrote:\n> On 2020-Aug-14, Ibrar Ahmed wrote:\n>\n>> The table used for the test contains three columns (integer, text,\n>> varchar).\n>> The total number of rows is 10000000 in total.\n>>\n>> Unpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n>> COPY: 9069.432 ms vacuum; 2567.961ms\n>> COPY: 9004.533 ms vacuum: 2553.075ms\n>> COPY: 8832.422 ms vacuum: 2540.742ms\n>>\n>> Patched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n>> COPY: 10031.723 ms vacuum: 127.524 ms\n>> COPY: 9985.109 ms vacuum: 39.953 ms\n>> COPY: 9283.373 ms vacuum: 37.137 ms\n>>\n>> Time to take the copy slightly increased but the vacuum time significantly\n>> decrease.\n> \"Slightly\"? It seems quite a large performance drop to me -- more than\n> 10%. Where is that time being spent? Andres said in [1] that he\n> thought the performance shouldn't be affected noticeably, but this\n> doesn't seem to hold true. As I understand, the idea was that there\n> would be little or no additional WAL records .. only flags in the\n> existing record. So what is happening?\n>\n> [1] https://postgr.es/m/20190408010427.4l63qr7h2fjcyp77@alap3.anarazel.de\n\nI agree that 10% performance drop is not what we expect with this patch.\nIbrar, can you share more info about your tests? I'd like to reproduce \nthis slowdown and fix it, if necessary.\n\nI've run some tests on my laptop and COPY FREEZE shows the same time for \nboth versions, while VACUUM is much faster on the patched version. 
I've \nalso checked WAL generation and it shows that the patch works correctly \nas it doesn't add any records for COPY.\n\nNot patched:\n\nTime: 54883,356 ms (00:54,883)\nTime: 65373,333 ms (01:05,373)\nTime: 64684,592 ms (01:04,685)\nVACUUM Time: 60861,670 ms (01:00,862)\n\nCOPY wal_bytes 3765 MB\nVACUUM wal_bytes 6015 MB\ntable size 5971 MB\n\nPatched:\n\nTime: 53142,947 ms (00:53,143)\nTime: 65420,812 ms (01:05,421)\nTime: 66600,114 ms (01:06,600)\nVACUUM Time: 63,401 ms\n\nCOPY wal_bytes 3765 MB\nVACUUM wal_bytes 30 kB\ntable size 5971 MB\n\nThe test script is attached.\n\n> Also, when Andres posted this patch first, he said this was only for\n> heap_multi_insert because it was a prototype. But I think we expect\n> that the table_insert path (CIM_SINGLE mode in copy) should also receive\n> that treatment.\n\nI am afraid that extra checks for COPY FREEZE in heap_insert() will \nslow down normal insertions.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 19 Aug 2020 16:15:40 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 6:15 PM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n> On 18.08.2020 02:54, Alvaro Herrera wrote:\n> > On 2020-Aug-14, Ibrar Ahmed wrote:\n> >\n> >> The table used for the test contains three columns (integer, text,\n> >> varchar).\n> >> The total number of rows is 10000000 in total.\n> >>\n> >> Unpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> >> COPY: 9069.432 ms vacuum; 2567.961ms\n> >> COPY: 9004.533 ms vacuum: 2553.075ms\n> >> COPY: 8832.422 ms vacuum: 2540.742ms\n> >>\n> >> Patched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> >> COPY: 10031.723 ms vacuum: 127.524 ms\n> >> COPY: 9985.109 ms vacuum: 39.953 ms\n> >> COPY: 9283.373 ms vacuum: 37.137 ms\n> >>\n> >> Time to take the copy slightly increased but the vacuum time\n> significantly\n> >> decrease.\n> > \"Slightly\"? It seems quite a large performance drop to me -- more than\n> > 10%. Where is that time being spent? Andres said in [1] that he\n> > thought the performance shouldn't be affected noticeably, but this\n> > doesn't seem to hold true. As I understand, the idea was that there\n> > would be little or no additional WAL records .. only flags in the\n> > existing record. So what is happening?\n> >\n> > [1]\n> https://postgr.es/m/20190408010427.4l63qr7h2fjcyp77@alap3.anarazel.de\n>\n> I agree that 10% performance drop is not what we expect with this patch.\n> Ibrar, can you share more info about your tests? I'd like to reproduce\n> this slowdown and fix it, if necessary.\n>\n> I've run some tests on my laptop and COPY FREEZE shows the same time for\n> both versions, while VACUUM is much faster on the patched version. 
I've\n> also checked WAL generation and it shows that the patch works correctly\n> as it doesn't add any records for COPY.\n>\n> Not patched:\n>\n> Time: 54883,356 ms (00:54,883)\n> Time: 65373,333 ms (01:05,373)\n> Time: 64684,592 ms (01:04,685)\n> VACUUM Time: 60861,670 ms (01:00,862)\n>\n> COPY wal_bytes 3765 MB\n> VACUUM wal_bytes 6015 MB\n> table size 5971 MB\n>\n> Patched:\n>\n> Time: 53142,947 ms (00:53,143)\n> Time: 65420,812 ms (01:05,421)\n> Time: 66600,114 ms (01:06,600)\n> VACUUM Time: 63,401 ms\n>\n> COPY wal_bytes 3765 MB\n> VACUUM wal_bytes 30 kB\n> table size 5971 MB\n>\n> The test script is attached.\n>\n> > Also, when Andres posted this patch first, he said this was only for\n> > heap_multi_insert because it was a prototype. But I think we expect\n> > that the table_insert path (CIM_SINGLE mode in copy) should also receive\n> > that treatment.\n>\n> I am afraid that extra checks for COPY FREEZE in heap_insert() will\n> slow down normal insertions.\n>\n> --\n> Anastasia Lubennikova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\nHere is my test;\n\n\npostgres=# BEGIN;\n\nBEGIN\n\n\npostgres=*# TRUNCATE foo;\n\nTRUNCATE TABLE\n\n\npostgres=*# COPY foo(id, name, address) FROM '/home/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nCOPY 10000000\n\n\npostgres=*# COMMIT;\n\nCOMMIT\n\n\npostgres=# VACUUM;\n\nVACUUM\n\n\npostgres=# SELECT count(*) FROM foo;\n\n count\n\n----------\n\n 10000000\n\n(1 row)\n\n\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 21 Aug 2020 21:43:46 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 21.08.2020 19:43, Ibrar Ahmed wrote:\n>\n>\n> On Wed, Aug 19, 2020 at 6:15 PM Anastasia Lubennikova \n> <a.lubennikova@postgrespro.ru <mailto:a.lubennikova@postgrespro.ru>> \n> wrote:\n>\n> On 18.08.2020 02:54, Alvaro Herrera wrote:\n> > On 2020-Aug-14, Ibrar Ahmed wrote:\n> >\n> >> The table used for the test contains three columns (integer, text,\n> >> varchar).\n> >> The total number of rows is 10000000 in total.\n> >>\n> >> Unpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> >> COPY: 9069.432 ms vacuum; 2567.961ms\n> >> COPY: 9004.533 ms vacuum: 2553.075ms\n> >> COPY: 8832.422 ms vacuum: 2540.742ms\n> >>\n> >> Patched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n> >> COPY: 10031.723 ms vacuum: 127.524 ms\n> >> COPY: 9985.109 ms vacuum: 39.953 ms\n> >> COPY: 9283.373 ms vacuum: 37.137 ms\n> >>\n> >> Time to take the copy slightly increased but the vacuum time\n> significantly\n> >> decrease.\n> > \"Slightly\"? It seems quite a large performance drop to me --\n> more than\n> > 10%. Where is that time being spent? Andres said in [1] that he\n> > thought the performance shouldn't be affected noticeably, but this\n> > doesn't seem to hold true. As I understand, the idea was that there\n> > would be little or no additional WAL records .. only flags in the\n> > existing record. So what is happening?\n> >\n> > [1]\n> https://postgr.es/m/20190408010427.4l63qr7h2fjcyp77@alap3.anarazel.de\n>\n> I agree that 10% performance drop is not what we expect with this\n> patch.\n> Ibrar, can you share more info about your tests? 
I'd like to\n> reproduce\n> this slowdown and fix it, if necessary.\n>\n>\n> Here is my test;\n>\n> postgres=# BEGIN;\n>\n> BEGIN\n>\n>\n> postgres=*# TRUNCATE foo;\n>\n> TRUNCATE TABLE\n>\n>\n> postgres=*# COPY foo(id, name, address) FROM '/home/ibrar/bar.csv' \n> DELIMITER ',' FREEZE;\n>\n> COPY 10000000\n>\n>\n>\n> -- \n> Ibrar Ahmed\n\n\nI've repeated the test and didn't notice any slowdown for COPY FREEZE.\nTest data is here [1].\n\nThe numbers do fluctuate a bit, but there is no dramatic difference \nbetween master and patched version. So I assume that the performance \ndrop in your test has something to do with the measurement error. \nUnless, you have some non-default configuration that could affect it.\n\npatched:\n\nCOPY: 12327,090 ms vacuum: 37,555 ms\nCOPY: 12939,540 ms vacuum: 35,703 ms\nCOPY: 12245,819 ms vacuum: 36,273 ms\n\nmaster:\n\nCOPY: 13253,605 ms vacuum: 3592,849 ms\nCOPY: 12619,428 ms vacuum: 4253,836 ms\nCOPY: 12512,940 ms vacuum: 4009,847 ms\n\nI also slightly cleaned up comments, so the new version of the patch is \nattached. As this is just a performance optimization documentation is \nnot needed. It would be great, if other reviewers could run some \nindependent performance tests, as I believe that this patch is ready for \ncommitter.\n\n[1] https://drive.google.com/file/d/11r19NX6yyPjvxdDub8Ce-kmApRurp4Nx/view\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 27 Aug 2020 00:14:34 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 2:14 AM Anastasia Lubennikova <\na.lubennikova@postgrespro.ru> wrote:\n\n> On 21.08.2020 19:43, Ibrar Ahmed wrote:\n>\n>\n>\n> On Wed, Aug 19, 2020 at 6:15 PM Anastasia Lubennikova <\n> a.lubennikova@postgrespro.ru> wrote:\n>\n>> On 18.08.2020 02:54, Alvaro Herrera wrote:\n>> > On 2020-Aug-14, Ibrar Ahmed wrote:\n>> >\n>> >> The table used for the test contains three columns (integer, text,\n>> >> varchar).\n>> >> The total number of rows is 10000000 in total.\n>> >>\n>> >> Unpatched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n>> >> COPY: 9069.432 ms vacuum; 2567.961ms\n>> >> COPY: 9004.533 ms vacuum: 2553.075ms\n>> >> COPY: 8832.422 ms vacuum: 2540.742ms\n>> >>\n>> >> Patched (Master: 92c12e46d5f1e25fc85608a6d6a19b8f5ea02600)\n>> >> COPY: 10031.723 ms vacuum: 127.524 ms\n>> >> COPY: 9985.109 ms vacuum: 39.953 ms\n>> >> COPY: 9283.373 ms vacuum: 37.137 ms\n>> >>\n>> >> Time to take the copy slightly increased but the vacuum time\n>> significantly\n>> >> decrease.\n>> > \"Slightly\"? It seems quite a large performance drop to me -- more than\n>> > 10%. Where is that time being spent? Andres said in [1] that he\n>> > thought the performance shouldn't be affected noticeably, but this\n>> > doesn't seem to hold true. As I understand, the idea was that there\n>> > would be little or no additional WAL records .. only flags in the\n>> > existing record. So what is happening?\n>> >\n>> > [1]\n>> https://postgr.es/m/20190408010427.4l63qr7h2fjcyp77@alap3.anarazel.de\n>>\n>> I agree that 10% performance drop is not what we expect with this patch.\n>> Ibrar, can you share more info about your tests? 
I'd like to reproduce\n>> this slowdown and fix it, if necessary.\n>>\n>>\n> Here is my test;\n>\n>\n> postgres=# BEGIN;\n>\n> BEGIN\n>\n>\n> postgres=*# TRUNCATE foo;\n>\n> TRUNCATE TABLE\n>\n>\n> postgres=*# COPY foo(id, name, address) FROM '/home/ibrar/bar.csv'\n> DELIMITER ',' FREEZE;\n>\n> COPY 10000000\n>\n>\n>\n> --\n> Ibrar Ahmed\n>\n>\n> I've repeated the test and didn't notice any slowdown for COPY FREEZE.\n> Test data is here [1].\n>\n> The numbers do fluctuate a bit, but there is no dramatic difference\n> between master and patched version. So I assume that the performance drop\n> in your test has something to do with the measurement error. Unless, you\n> have some non-default configuration that could affect it.\n>\n> patched:\n>\n> COPY: 12327,090 ms vacuum: 37,555 ms\n> COPY: 12939,540 ms vacuum: 35,703 ms\n> COPY: 12245,819 ms vacuum: 36,273 ms\n>\n> master:\n> COPY\n> COPY: 13253,605 ms vacuum: 3592,849 ms\n> COPY: 12619,428 ms vacuum: 4253,836 ms\n> COPY: 12512,940 ms vacuum: 4009,847 ms\n>\n> I also slightly cleaned up comments, so the new version of the patch is\n> attached. As this is just a performance optimization documentation is not\n> needed. 
It would be great, if other reviewers could run some independent\n> performance tests, as I believe that this patch is ready for committer.\n>\n> [1] https://drive.google.com/file/d/11r19NX6yyPjvxdDub8Ce-kmApRurp4Nx/view\n>\n> --\n> Anastasia Lubennikova\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Companyt\n>\n> I gave another try with latest v3 patch on latest master branch\n(ff60394a8c9a7af8b32de420ccb54a20a0f019c1) with all default settings.\n11824.495 is median with master and 11884.089 is median value with patch.\n\n\nNote: There are two changes such as (1) used the v3 patch (2) now test is\ndone on latest master (ff60394a8c9a7af8b32de420ccb54a20a0f019c1).\n\n\nMaster (ff60394a8c9a7af8b32de420ccb54a20a0f019c1)\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 11824.495 ms (00:11.824)\n\npostgres=*# COMMIT;\n\n\nRestart\n\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 14096.987 ms (00:14.097)\n\npostgres=*# commit;\n\n\nRestart\n\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 11108.289 ms (00:11.108)\n\npostgres=*# commit;\n\n\n\nPatched (ff60394a8c9a7af8b32de420ccb54a20a0f019c1)\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 10749.945 ms (00:10.750)\n\npostgres=*# commit;\n\n\nRestart\n\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 14274.361 ms (00:14.274)\n\npostgres=*# 
commit;\n\n\nRestart\n\n\npostgres=# \\timing\n\npostgres=# BEGIN;\n\npostgres=*# TRUNCATE foo;\n\npostgres=*# COPY foo(id, name, address) FROM '/Users/ibrar/bar.csv'\nDELIMITER ',' FREEZE;\n\nTime: 11884.089 ms (00:11.884)\n\npostgres=*# commit;\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 27 Aug 2020 03:19:45 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Status update for a commitfest entry.\r\n\r\nThis patch is ReadyForCommitter. It applies and passes the CI. There are no unanswered questions in the discussion. \r\n\r\nThe discussion started in 2015 with a patch by Jeff Janes. Later it was revived by Pavan Deolasee. After it was picked up by Ibrar Ahmed and finally, it was rewritten by me, so I moved myself from reviewers to authors as well.\r\n\r\nThe latest version was reviewed and tested by Ibrar Ahmed. The patch doesn't affect COPY FREEZE performance and significantly decreases the time of the following VACUUM.",
"msg_date": "Tue, 27 Oct 2020 19:16:11 +0000",
"msg_from": "Anastasia Lubennikova <lubennikovaav@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "> Status update for a commitfest entry.\n> \n> This patch is ReadyForCommitter. It applies and passes the CI. There are no unanswered questions in the discussion. \n> \n> The discussion started in 2015 with a patch by Jeff Janes. Later it was revived by Pavan Deolasee. After it was picked up by Ibrar Ahmed and finally, it was rewritten by me, so I moved myself from reviewers to authors as well.\n> \n> The latest version was reviewed and tested by Ibrar Ahmed. The patch doesn't affect COPY FREEZE performance and significantly decreases the time of the following VACUUM.\n\nI have tested the patch on my laptop (mem 16GB, SSD 512GB) using the\ndata introduced upthread and saw that VACUUM after COPY FREEZE is\nnearly 60 times faster than on the current master branch. Quite impressive.\n\nBy the way, I noticed the following comment:\n+\t\t\t/*\n+\t\t\t * vmbuffer should be already pinned by RelationGetBufferForTuple,\n+\t\t\t * Though, it's fine if is not. all_frozen is just an optimization.\n+\t\t\t */\n\ncould be enhanced like below. What do you think?\n+\t\t\t/*\n+\t\t\t * vmbuffer should be already pinned by RelationGetBufferForTuple.\n+\t\t\t * Though, it's fine if it is not. all_frozen is just an optimization.\n+\t\t\t */\n\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 28 Oct 2020 14:46:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nI might be somewhat late to the party, but I've done a bit of\nbenchmarking too ;-) I used TPC-H data from a 100GB test, and tried\ndifferent combinations of COPY [FREEZE] and VACUUM [FREEZE], both on\ncurrent master and with the patch.\n\nThe results look like this (the columns say what combination of COPY and\nVACUUM was used - e.g. -/FREEZE means plain COPY and VACUUM FREEZE)\n\nmaster:\n\n - / - FREEZE / - - / FREEZE FREEZE / FREEZE\n ----------------------------------------------------------------\n COPY 2471 2477 2486 2484\n VACUUM 228 209 453 206\n\npatched:\n\n - / - FREEZE / - - / FREEZE FREEZE / FREEZE\n ----------------------------------------------------------------\n COPY 2459 2445 2458 2446\n VACUUM 227 0 467 0\n\nSo I don't really observe any measurable slowdowns in the COPY part (in\nfact there seems to be a tiny speedup, but it might be just noise). In\nthe VACUUM part, there's clear speedup when the data was already frozen\nby COPY (Yes, those are zeroes, because it took less than 1 second.)\n\nSo that looks pretty awesome, I guess.\n\nFor the record, these tests were run on a server with NVMe SSD, so\nhopefully reliable and representative numbers.\n\n\nA couple minor comments about the code:\n\n1) Maybe add a comment before the block setting xlrec->flags in\nheap_multi_insert. It's not very complex, but it used to be a bit\nsimpler, and all the other pieces around have comments, so it won't\nhurt.\n\n\n2) I find the \"if (all_frozen_set)\" block a bit strange. 
It's a matter\nof personal preference, but I'd just use a single level of nesting, i.e.\nsomething like this:\n\n /* if everything frozen, the whole page has to be visible */\n Assert(!(all_frozen_set && !PageIsAllVisible(page)));\n\n /*\n * If we've frozen everything on the page, and if we're already\n * holding pin on the vmbuffer, record that in the visibilitymap.\n * If we're not holding the pin, it's OK to skip this - it's just\n * an optimization.\n *\n * It's fine to use InvalidTransactionId here - this is only used\n * when HEAP_INSERT_FROZEN is specified, which intentionally\n * violates visibility rules.\n */\n if (all_frozen_set &&\n visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer))\n\tvisibilitymap_set(...);\n\nIMHO it's easier to read, but YMMV. I've also reworded the comment a bit\nto say what we're doing etc. And I've moved the comment from inside the\nif block into the main comment - that was making it harder to read too.\n\n\n3) I see RelationGetBufferForTuple does this:\n\n /*\n * This is for COPY FREEZE needs. If page is empty,\n * pin vmbuffer to set all_frozen bit\n */\n if ((options & HEAP_INSERT_FROZEN) &&\n (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0))\n visibilitymap_pin(relation, BufferGetBlockNumber(buffer), vmbuffer);\n\nso is it actually possible to get into the (all_frozen_set) without\nholding a pin on the visibilitymap? I haven't investigated this so\nmaybe there are other ways to get into that code block. But the new\nadditions to hio.c get the pin too.\n\n\n4) In heap_xlog_multi_insert we now have this:\n\n if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED)\n PageClearAllVisible(page);\n if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET)\n PageSetAllVisible(page);\n\nIIUC it makes no sense to have both flags at the same time, right? 
So\nwhat about adding\n\n Assert(!((xlrec->flags & XLH_INSERT_ALL_FROZEN_SET) &&\n (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED)));\n\nto check that?\n\n\n5) Not sure we need to explicitly say this is for COPY FREEZE in all the\nblocks added to hio.c. IMO it's sufficient to use HEAP_INSERT_FROZEN in\nthe condition, at this level of abstraction.\n\n\nI wonder what to do about the heap_insert - I know there are concerns it\nwould negatively impact regular insert, but is it really true? I suppose\nthis optimization would be valuable even for cases where multi-insert is\nnot possible.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 30 Oct 2020 01:42:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 30.10.2020 03:42, Tomas Vondra wrote:\n> Hi,\n>\n> I might be somewhat late to the party, but I've done a bit of\n> benchmarking too ;-) I used TPC-H data from a 100GB test, and tried\n> different combinations of COPY [FREEZE] and VACUUM [FREEZE], both on\n> current master and with the patch.\n>\n> So I don't really observe any measurable slowdowns in the COPY part (in\n> fact there seems to be a tiny speedup, but it might be just noise). In\n> the VACUUM part, there's clear speedup when the data was already frozen\n> by COPY (Yes, those are zeroes, because it took less than 1 second.)\n>\n> So that looks pretty awesome, I guess.\n>\n> For the record, these tests were run on a server with NVMe SSD, so\n> hopefully reliable and representative numbers.\n>\nThank you for the review.\n\n> A couple minor comments about the code:\n>\n> 2) I find the \"if (all_frozen_set)\" block a bit strange. It's a matter\n> of personal preference, but I'd just use a single level of nesting, i.e.\n> something like this:\n>\n> /* if everything frozen, the whole page has to be visible */\n> Assert(!(all_frozen_set && !PageIsAllVisible(page)));\n>\n> /*\n> * If we've frozen everything on the page, and if we're already\n> * holding pin on the vmbuffer, record that in the visibilitymap.\n> * If we're not holding the pin, it's OK to skip this - it's just\n> * an optimization.\n> *\n> * It's fine to use InvalidTransactionId here - this is only used\n> * when HEAP_INSERT_FROZEN is specified, which intentionally\n> * violates visibility rules.\n> */\n> if (all_frozen_set &&\n> visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer))\n> visibilitymap_set(...);\n>\n> IMHO it's easier to read, but YMMV. I've also reworded the comment a bit\n> to say what we're doing etc. And I've moved the comment from inside the\n> if block into the main comment - that was making it harder to read too.\n>\nI agree that it's a matter of taste. 
I've updated comments and left \nnesting unchanged to keep assertions simple.\n\n>\n> 3) I see RelationGetBufferForTuple does this:\n>\n> /*\n> * This is for COPY FREEZE needs. If page is empty,\n> * pin vmbuffer to set all_frozen bit\n> */\n> if ((options & HEAP_INSERT_FROZEN) &&\n> (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0))\n> visibilitymap_pin(relation, BufferGetBlockNumber(buffer), \n> vmbuffer);\n>\n> so is it actually possible to get into the (all_frozen_set) without\n> holding a pin on the visibilitymap? I haven't investigated this so\n> maybe there are other ways to get into that code block. But the new\n> additions to hio.c get the pin too.\n>\nI was thinking that GetVisibilityMapPins() can somehow unset the pin. I \ngave it a second look. And now I don't think it's possible to get into \nthis code block without a pin. So, I converted this check into an \nassertion.\n\n>\n> 4) In heap_xlog_multi_insert we now have this:\n>\n> if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED)\n> PageClearAllVisible(page);\n> if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET)\n> PageSetAllVisible(page);\n>\n> IIUC it makes no sense to have both flags at the same time, right? So\n> what about adding\n>\n> Assert(!(XLH_INSERT_ALL_FROZEN_SET && \n> XLH_INSERT_ALL_VISIBLE_CLEARED));\n>\n> to check that?\n>\nAgree.\n\nI placed this assertion to the very beginning of the function. It also \nhelped to simplify the code a bit.\nI also noticed, that we were not updating visibility map for all_frozen \nfrom heap_xlog_multi_insert. Fixed.\n\n>\n> I wonder what to do about the heap_insert - I know there are concerns it\n> would negatively impact regular insert, but is it really true? I suppose\n> this optimization would be valuable even for cases where multi-insert is\n> not possible.\n>\nDo we have something like INSERT .. FREEZE? I only see \nTABLE_INSERT_FROZEN set for COPY FREEZE and for matview operations. 
Can \nyou explain what use case we are trying to optimize by extending this \npatch to heap_insert()?\n\nThe new version is attached.\nI've also fixed a typo in the comment, per Tatsuo Ishii's suggestion.\nAlso, I tested this patch with replication and found no issues.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 2 Nov 2020 16:44:22 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Mon, Nov 02, 2020 at 04:44:22PM +0300, Anastasia Lubennikova wrote:\n>On 30.10.2020 03:42, Tomas Vondra wrote:\n>>Hi,\n>>\n>>I might be somewhat late to the party, but I've done a bit of\n>>benchmarking too ;-) I used TPC-H data from a 100GB test, and tried\n>>different combinations of COPY [FREEZE] and VACUUM [FREEZE], both on\n>>current master and with the patch.\n>>\n>>So I don't really observe any measurable slowdowns in the COPY part (in\n>>fact there seems to be a tiny speedup, but it might be just noise). In\n>>the VACUUM part, there's clear speedup when the data was already frozen\n>>by COPY (Yes, those are zeroes, because it took less than 1 second.)\n>>\n>>So that looks pretty awesome, I guess.\n>>\n>>For the record, these tests were run on a server with NVMe SSD, so\n>>hopefully reliable and representative numbers.\n>>\n>Thank you for the review.\n>\n>>A couple minor comments about the code:\n>>\n>>2) I find the \"if (all_frozen_set)\" block a bit strange. It's a matter\n>>of personal preference, but I'd just use a single level of nesting, i.e.\n>>something like this:\n>>\n>>    /* if everything frozen, the whole page has to be visible */\n>>    Assert(!(all_frozen_set && !PageIsAllVisible(page)));\n>>\n>>    /*\n>>     * If we've frozen everything on the page, and if we're already\n>>     * holding pin on the vmbuffer, record that in the visibilitymap.\n>>     * If we're not holding the pin, it's OK to skip this - it's just\n>>     * an optimization.\n>>     *\n>>     * It's fine to use InvalidTransactionId here - this is only used\n>>     * when HEAP_INSERT_FROZEN is specified, which intentionally\n>>     * violates visibility rules.\n>>     */\n>>    if (all_frozen_set &&\n>>        visibilitymap_pin_ok(BufferGetBlockNumber(buffer), vmbuffer))\n>>    visibilitymap_set(...);\n>>\n>>IMHO it's easier to read, but YMMV. I've also reworded the comment a bit\n>>to say what we're doing etc. 
And I've moved the comment from inside the\n>>if block into the main comment - that was making it harder to read too.\n>>\n>I agree that it's a matter of taste. I've updated comments and left \n>nesting unchanged to keep assertions simple.\n>\n>>\n>>3) I see RelationGetBufferForTuple does this:\n>>\n>>    /*\n>>     * This is for COPY FREEZE needs. If page is empty,\n>>     * pin vmbuffer to set all_frozen bit\n>>     */\n>>    if ((options & HEAP_INSERT_FROZEN) &&\n>>        (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0))\n>>        visibilitymap_pin(relation, BufferGetBlockNumber(buffer), \n>>vmbuffer);\n>>\n>>so is it actually possible to get into the (all_frozen_set) without\n>>holding a pin on the visibilitymap? I haven't investigated this so\n>>maybe there are other ways to get into that code block. But the new\n>>additions to hio.c get the pin too.\n>>\n>I was thinking that GetVisibilityMapPins() can somehow unset the pin. \n>I gave it a second look. And now I don't think it's possible to get \n>into this code block without a pin. So, I converted this check into \n>an assertion.\n>\n>>\n>>4) In heap_xlog_multi_insert we now have this:\n>>\n>>    if (xlrec->flags & XLH_INSERT_ALL_VISIBLE_CLEARED)\n>>        PageClearAllVisible(page);\n>>    if (xlrec->flags & XLH_INSERT_ALL_FROZEN_SET)\n>>        PageSetAllVisible(page);\n>>\n>>IIUC it makes no sense to have both flags at the same time, right? So\n>>what about adding\n>>\n>>    Assert(!(XLH_INSERT_ALL_FROZEN_SET && \n>>XLH_INSERT_ALL_VISIBLE_CLEARED));\n>>\n>>to check that?\n>>\n>Agree.\n>\n>I placed this assertion to the very beginning of the function. It also \n>helped to simplify the code a bit.\n>I also noticed, that we were not updating visibility map for \n>all_frozen from heap_xlog_multi_insert. Fixed.\n>\n>>\n>>I wonder what to do about the heap_insert - I know there are concerns it\n>>would negatively impact regular insert, but is it really true? 
I suppose\n>>this optimization would be valuable even for cases where multi-insert is\n>>not possible.\n>>\n>Do we have something like INSERT .. FREEZE? I only see \n>TABLE_INSERT_FROZEN set for COPY FREEZE and for matview operations. \n>Can you explain, what use-case are we trying to optimize by extending \n>this patch to heap_insert()?\n>\n\nI might be mistaken, but isn't copy forced to use heap_insert for a\nbunch of reasons? For example in the presence of before/after triggers,\nstatement triggers on partitioned tables, or with volatile functions.\n\n>The new version is attached.\n>I've also fixed a typo in the comment by Tatsuo Ishii suggestion.\n>Also, I tested this patch with replication and found no issues.\n>\n\nThanks. I'll take a look.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 2 Nov 2020 15:44:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nI started looking at this patch again, hoping to get it committed in \nthis CF, but I think there's a regression in handling TOAST tables \n(compared to the v3 patch submitted by Pavan in February 2019).\n\nThe test I'm running is very simple (see test.sql):\n\n1) start a transaction\n2) create a table with a text column\n3) copy freeze data into it\n4) use pg_visibility to see how many blocks are all_visible both in the\n   main table and its TOAST table\n\nFor the v3 patch (applied on top of 278584b526 and s/hi_options/ti_options) \nI get this:\n\n           pages    NOT all_visible\n   ------------------------------------------\n   main      637          0\n   toast   50001          3\n\nThere was some discussion about relcache invalidations causing a couple \nof TOAST pages not being marked as all_visible, etc.\n\nHowever, for this patch on master I get this:\n\n           pages    NOT all_visible\n   ------------------------------------------\n   main      637          0\n   toast   50001      50001\n\nSo no pages in TOAST are marked as all_visible. I haven't investigated \nwhat's causing this, but IMO that needs fixing to make this patch RFC.\n\nAttached is the test script I'm using, and a v5 of the patch - rebased \non current master, with some minor tweaks to comments etc.\n\n\nregards\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 10 Jan 2021 23:35:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 11.01.2021 01:35, Tomas Vondra wrote:\n> Hi,\n>\n> I started looking at this patch again, hoping to get it committed in \n> this CF, but I think there's a regression in handling TOAST tables \n> (compared to the v3 patch submitted by Pavan in February 2019).\n>\n> The test I'm running a very simple test (see test.sql):\n>\n> 1) start a transaction\n> 2) create a table with a text column\n> 3) copy freeze data into it\n> 4) use pg_visibility to see how many blocks are all_visible both in the\n> main table and it's TOAST table\n>\n> For v3 patch (applied on top of 278584b526 and \n> s/hi_options/ti_options) I get this:\n>\n> pages NOT all_visible\n> ------------------------------------------\n> main 637 0\n> toast 50001 3\n>\n> There was some discussion about relcache invalidations causing a \n> couple TOAST pages not be marked as all_visible, etc.\n>\n> However, for this patch on master I get this\n>\n> pages NOT all_visible\n> ------------------------------------------\n> main 637 0\n> toast 50001 50001\n>\n> So no pages in TOAST are marked as all_visible. I haven't investigated \n> what's causing this, but IMO that needs fixing to make ths patch RFC.\n>\n> Attached is the test script I'm using, and a v5 of the patch - rebased \n> on current master, with some minor tweaks to comments etc.\n>\n\nThank you for attaching the test script. I reproduced the problem. This \nregression occurs because TOAST internally uses heap_insert().\nYou have asked upthread about adding this optimization to heap_insert().\n\nI wrote a quick fix, see the attached patch 0002. The TOAST test passes \nnow, but I haven't tested performance or any other use-cases yet.\nI'm going to test it properly in a couple of days and share results.\n\nWith this change a lot of new code is repeated in heap_insert() and \nheap_multi_insert(). 
I think it's fine, because these functions already \nhave a lot in common.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 12 Jan 2021 00:00:39 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "\n\nOn 1/11/21 10:00 PM, Anastasia Lubennikova wrote:\n> On 11.01.2021 01:35, Tomas Vondra wrote:\n>> Hi,\n>>\n>> I started looking at this patch again, hoping to get it committed in \n>> this CF, but I think there's a regression in handling TOAST tables \n>> (compared to the v3 patch submitted by Pavan in February 2019).\n>>\n>> The test I'm running a very simple test (see test.sql):\n>>\n>> 1) start a transaction\n>> 2) create a table with a text column\n>> 3) copy freeze data into it\n>> 4) use pg_visibility to see how many blocks are all_visible both in the\n>> main table and it's TOAST table\n>>\n>> For v3 patch (applied on top of 278584b526 and \n>> s/hi_options/ti_options) I get this:\n>>\n>> pages NOT all_visible\n>> ------------------------------------------\n>> main 637 0\n>> toast 50001 3\n>>\n>> There was some discussion about relcache invalidations causing a \n>> couple TOAST pages not be marked as all_visible, etc.\n>>\n>> However, for this patch on master I get this\n>>\n>> pages NOT all_visible\n>> ------------------------------------------\n>> main 637 0\n>> toast 50001 50001\n>>\n>> So no pages in TOAST are marked as all_visible. I haven't investigated \n>> what's causing this, but IMO that needs fixing to make ths patch RFC.\n>>\n>> Attached is the test script I'm using, and a v5 of the patch - rebased \n>> on current master, with some minor tweaks to comments etc.\n>>\n> \n> Thank you for attaching the test script. I reproduced the problem. This \n> regression occurs because TOAST internally uses heap_insert().\n> You have asked upthread about adding this optimization to heap_insert().\n> \n> I wrote a quick fix, see the attached patch 0002. The TOAST test passes \n> now, but I haven't tested performance or any other use-cases yet.\n> I'm going to test it properly in a couple of days and share results.\n> \n\nThanks. 
I think it's important to make this work for TOAST tables - it \noften stores most of the data, and it was working in v3 of the patch. I \nhaven't looked into the details, but if it's really just due to TOAST \nusing heap_insert, I'd say it just confirms the importance of tweaking \nheap_insert too.\n\n> With this change a lot of new code is repeated in heap_insert() and \n> heap_multi_insert(). I think it's fine, because these functions already \n> have a lot in common.\n> \n\nUnderstood. IMHO a bit of redundancy is not a big issue, but I haven't \nlooked at the code yet. Let's get it working first, then we can decide \nif some refactoring is appropriate.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 11 Jan 2021 22:51:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 12.01.2021 00:51, Tomas Vondra wrote:\n>\n>\n> On 1/11/21 10:00 PM, Anastasia Lubennikova wrote:\n>> On 11.01.2021 01:35, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> I started looking at this patch again, hoping to get it committed in \n>>> this CF, but I think there's a regression in handling TOAST tables \n>>> (compared to the v3 patch submitted by Pavan in February 2019).\n>>>\n>>> The test I'm running a very simple test (see test.sql):\n>>>\n>>> 1) start a transaction\n>>> 2) create a table with a text column\n>>> 3) copy freeze data into it\n>>> 4) use pg_visibility to see how many blocks are all_visible both in the\n>>> main table and it's TOAST table\n>>>\n>>> For v3 patch (applied on top of 278584b526 and \n>>> s/hi_options/ti_options) I get this:\n>>>\n>>> pages NOT all_visible\n>>> ------------------------------------------\n>>> main 637 0\n>>> toast 50001 3\n>>>\n>>> There was some discussion about relcache invalidations causing a \n>>> couple TOAST pages not be marked as all_visible, etc.\n>>>\n>>> However, for this patch on master I get this\n>>>\n>>> pages NOT all_visible\n>>> ------------------------------------------\n>>> main 637 0\n>>> toast 50001 50001\n>>>\n>>> So no pages in TOAST are marked as all_visible. I haven't \n>>> investigated what's causing this, but IMO that needs fixing to make \n>>> ths patch RFC.\n>>>\n>>> Attached is the test script I'm using, and a v5 of the patch - \n>>> rebased on current master, with some minor tweaks to comments etc.\n>>>\n>>\n>> Thank you for attaching the test script. I reproduced the problem. \n>> This regression occurs because TOAST internally uses heap_insert().\n>> You have asked upthread about adding this optimization to heap_insert().\n>>\n>> I wrote a quick fix, see the attached patch 0002. The TOAST test \n>> passes now, but I haven't tested performance or any other use-cases yet.\n>> I'm going to test it properly in a couple of days and share results.\n>>\n>\n> Thanks. 
I think it's important to make this work for TOAST tables - it \n> often stores most of the data, and it was working in v3 of the patch. \n> I haven't looked into the details, but if it's really just due to \n> TOAST using heap_insert, I'd say it just confirms the importance of \n> tweaking heap_insert too. \n\n\nI've tested performance. All tests were run on my laptop, latest master \nwith and without patches, all default settings, except disabled \nautovacuum and installed pg_stat_statements extension.\n\nThe VACUUM is significantly faster with the patch, as it only checks \nvisibility map. COPY speed fluctuates a lot between tests, but I didn't \nnotice any trend.\nI would expect minor slowdown with the patch, as we need to handle \nvisibility map pages during COPY FREEZE. But in some runs, patched \nversion was faster than current master, so the impact of the patch is \ninsignificant.\n\nI run 3 different tests:\n\n1) Regular table (final size 5972 MB)\n\npatched | master\n\nCOPY FREEZE data 3 times:\n\n33384,544 ms 31135,037 ms\n31666,226 ms 31158,294 ms\n32802,783 ms 33599,234 ms\n\nVACUUM\n54,185 ms 48445,584 ms\n\n\n2) Table with TOAST (final size 1207 MB where 1172 MB is in toast table)\n\npatched | master\n\nCOPY FREEZE data 3 times:\n\n368166,743 ms 383231,077 ms\n398695,018 ms 454572,630 ms\n410174,682 ms 567847,288 ms\n\nVACUUM\n43,159 ms 6648,302 ms\n\n\n3) Table with a trivial BEFORE INSERT trigger (final size 5972 MB)\n\npatched | master\n\nCOPY FREEZE data 3 times:\n\n73544,225 ms 64967,802 ms\n90960,584 ms 71826,362 ms\n81356,025 ms 80023,041 ms\n\nVACUUM\n49,626 ms 40100,097 ms\n\nI took another look at the yesterday's patch and it looks fine to me. So \nnow I am waiting for your review.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 12 Jan 2021 20:17:17 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Thanks. These patches seem to resolve the TOAST table issue, freezing it \nas expected. I think the code duplication is not an issue, but I wonder \nwhy heap_insert uses this condition:\n\n /*\n * ...\n *\n * No need to update the visibilitymap if it had all_frozen bit set\n * before this insertion.\n */\n if (all_frozen_set && ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0))\n\nwhile heap_multi_insert only does this:\n\n if (all_frozen_set) { ... }\n\nI haven't looked at the details, but shouldn't both do the same thing?\n\nI've done some benchmarks, comparing master and patched version on a \nbunch of combinations (logged/unlogged, no before-insert trigger, \ntrigger filtering everything/nothing). On master, the results are:\n\n group copy vacuum\n -----------------------------------------------\n logged / no trigger 4672 162\n logged / trigger (all) 4914 162\n logged / trigger (none) 1219 11\n unlogged / no trigger 4132 156\n unlogged / trigger (all) 4292 156\n unlogged / trigger (none) 1275 11\n\nand patched looks like this:\n\n group copy vacuum\n -----------------------------------------------\n logged / no trigger 4669 12\n logged / trigger (all) 4874 12\n logged / trigger (none) 1233 11\n unlogged / no trigger 4054 11\n unlogged / trigger (all) 4185 12\n unlogged / trigger (none) 1222 10\n\nThis looks pretty nice - there are no regressions, just speedups in the \nvacuum step. 
The SQL script used is attached.\n\nHowever, I've also repeated the test counting all-frozen pages in both \nthe main table and TOAST table, and I get this:\n\n\nmaster\n======\n\nselect count(*) from pg_visibility((select reltoastrelid from pg_class \nwhere relname = 't'));\n\n count\n--------\n 100000\n(1 row)\n\n\nselect count(*) from pg_visibility((select reltoastrelid from pg_class \nwhere relname = 't')) where not all_visible;\n\n count\n--------\n 100000\n(1 row)\n\n\npatched\n=======\n\nselect count(*) from pg_visibility((select reltoastrelid from pg_class \nwhere relname = 't'));\n\n count\n--------\n 100002\n(1 row)\n\n\nselect count(*) from pg_visibility((select reltoastrelid from pg_class \nwhere relname = 't')) where not all_visible;\n\n count\n--------\n 0\n(1 row)\n\nThat is - all TOAST pages are frozen (as expected, which is good). But \nnow there are 100002 pages, not just 100000 pages. That is, we're now \ncreating 2 extra pages, for some reason. I recall Pavan reported similar \nissue with every 32768-th page not being properly filled, but I'm not \nsure if that's the same issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 12 Jan 2021 20:30:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 12.01.2021 22:30, Tomas Vondra wrote:\n> Thanks. These patches seem to resolve the TOAST table issue, freezing \n> it as expected. I think the code duplication is not an issue, but I \n> wonder why heap_insert uses this condition:\n>\n> /*\n> * ...\n> *\n> * No need to update the visibilitymap if it had all_frozen bit set\n> * before this insertion.\n> */\n> if (all_frozen_set && ((vmstatus & VISIBILITYMAP_ALL_FROZEN) == 0))\n>\n> while heap_multi_insert only does this:\n>\n> if (all_frozen_set) { ... }\n>\n> I haven't looked at the details, but shouldn't both do the same thing?\n\n\nI decided to add this check for heap_insert() to avoid unneeded calls of \nvisibilitymap_set(). If we insert tuples one by one, we can only call \nthis once per page.\nIn my understanding, heap_multi_insert() inserts tuples in batches, so \nit doesn't need this optimization.\n\n>\n>\n> However, I've also repeated the test counting all-frozen pages in both \n> the main table and TOAST table, and I get this:\n>\n> patched\n> =======\n>\n> select count(*) from pg_visibility((select reltoastrelid from pg_class \n> where relname = 't'));\n>\n> count\n> --------\n> 100002\n> (1 row)\n>\n>\n> select count(*) from pg_visibility((select reltoastrelid from pg_class \n> where relname = 't')) where not all_visible;\n>\n> count\n> --------\n> 0\n> (1 row)\n>\n> That is - all TOAST pages are frozen (as expected, which is good). But \n> now there are 100002 pages, not just 100000 pages. That is, we're now \n> creating 2 extra pages, for some reason. I recall Pavan reported \n> similar issue with every 32768-th page not being properly filled, but \n> I'm not sure if that's the same issue.\n>\n>\n> regards\n>\n\nAs Pavan correctly figured it out before the problem is that \nRelationGetBufferForTuple() moves to the next page, losing free space in \nthe block:\n\n > ... I see that a relcache invalidation arrives\n > after 1st and then after every 32672th block is filled. 
That clears the\n > rel->rd_smgr field and we lose the information about the saved target\n > block. The code then moves to extend the relation again and thus \nskips the\n > previously less-than-half filled block, losing the free space in that \nblock.\n\nThe reason for this cache invalidation is the vm_extend() call, which happens \nevery 32672 blocks.\n\nRelationGetBufferForTuple() tries to use the last page, but for some \nreason this code is under the 'use_fsm' check. And COPY FROM doesn't use fsm \n(see TABLE_INSERT_SKIP_FSM).\n\n\n /*\n * If the FSM knows nothing of the rel, try the last page before we\n * give up and extend. This avoids one-tuple-per-page syndrome \nduring\n * bootstrapping or in a recently-started system.\n */\n if (targetBlock == InvalidBlockNumber)\n {\n BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n if (nblocks > 0)\n targetBlock = nblocks - 1;\n }\n\n\nI think we can use this code without regard to 'use_fsm'. With this \nchange, the number of toast rel pages is correct. The patch is attached.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 16 Jan 2021 18:11:29 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 1/16/21 4:11 PM, Anastasia Lubennikova wrote:\n>\n > ...\n> \n> As Pavan correctly figured it out before the problem is that \n> RelationGetBufferForTuple() moves to the next page, losing free space in \n> the block:\n> \n> > ... I see that a relcache invalidation arrives\n> > after 1st and then after every 32672th block is filled. That clears the\n> > rel->rd_smgr field and we lose the information about the saved target\n> > block. The code then moves to extend the relation again and thus \n> skips the\n> > previously less-than-half filled block, losing the free space in that \n> block.\n> \n> The reason for this cache invalidation is the vm_extend() call, which happens \n> every 32672 blocks.\n> \n> RelationGetBufferForTuple() tries to use the last page, but for some \n> reason this code is under the 'use_fsm' check. And COPY FROM doesn't use fsm \n> (see TABLE_INSERT_SKIP_FSM).\n> \n> \n> /*\n> * If the FSM knows nothing of the rel, try the last page \n> before we\n> * give up and extend. This avoids one-tuple-per-page syndrome \n> during\n> * bootstrapping or in a recently-started system.\n> */\n> if (targetBlock == InvalidBlockNumber)\n> {\n> BlockNumber nblocks = RelationGetNumberOfBlocks(relation);\n> if (nblocks > 0)\n> targetBlock = nblocks - 1;\n> }\n> \n> \n> I think we can use this code without regard to 'use_fsm'. With this \n> change, the number of toast rel pages is correct. The patch is attached.\n> \n\nThanks for the updated patch, this version looks OK to me - I've marked \nit as RFC. I'll do a bit more testing, review, and then I'll get it \ncommitted.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 16 Jan 2021 23:18:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On 1/16/21 11:18 PM, Tomas Vondra wrote:\n> ...\n >\n> Thanks for the updated patch, this version looks OK to me - I've marked \n> it as RFC. I'll do a bit more testing, review, and then I'll get it \n> committed.\n> \n\nPushed. Thanks everyone for the effort put into this patch. The first \nversion was sent in 2015, so it took quite a bit of time.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 17 Jan 2021 22:32:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "> Pushed. Thanks everyone for the effort put into this patch. The first\n> version was sent in 2015, so it took quite a bit of time.\n\nGreat news. Thanks to everyone who has been working on this.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 18 Jan 2021 10:41:54 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "On Mon, Jan 18, 2021 at 3:02 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> Pushed. Thanks everyone for the effort put into this patch. The first\n> version was sent in 2015, so it took quite a bit of time.\n>\n>\nThanks Tomas, Anastasia and everyone else who worked on the patch and\nensured that it gets into the tree.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 18 Jan 2021 10:23:43 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Pavan Deolasee <pavan.deolasee@gmail.com> writes:\n> On Mon, Jan 18, 2021 at 3:02 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\n> wrote:\n>> Pushed. Thanks everyone for the effort put into this patch. The first\n>> version was sent in 2015, so it took quite a bit of time.\n\n> Thanks Tomas, Anastasia and everyone else who worked on the patch and\n> ensured that it gets into the tree.\n\nBuildfarm results suggest that the test case is unstable under\nCLOBBER_CACHE_ALWAYS:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-01-19%2020%3A27%3A46\n\nThis might mean an actual bug, or just that the test isn't robust.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jan 2021 19:10:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "\n\nOn 1/23/21 1:10 AM, Tom Lane wrote:\n> Pavan Deolasee <pavan.deolasee@gmail.com> writes:\n>> On Mon, Jan 18, 2021 at 3:02 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> wrote:\n>>> Pushed. Thanks everyone for the effort put into this patch. The first\n>>> version was sent in 2015, so it took quite a bit of time.\n> \n>> Thanks Tomas, Anastasia and everyone else who worked on the patch and\n>> ensured that it gets into the tree.\n> \n> Buildfarm results suggest that the test case is unstable under\n> CLOBBER_CACHE_ALWAYS:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-01-19%2020%3A27%3A46\n> \n> This might mean an actual bug, or just that the test isn't robust.\n> \n\nYeah :-( It seems I've committed the v5 patch, not the v7 addressing \nexactly this issue (which I've actually pointed out and asked to be \nfixed). Oh well ... I'll get this fixed tomorrow - I have the fix, and I \nhave verified that it passes with CLOBBER_CACHE_ALWAYS, but pushing it \nat 5AM does not seem like a good idea.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 23 Jan 2021 04:58:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
},
{
"msg_contents": "Hi,\n\nI've pushed the fix, after a couple of extra rounds of careful testing.\n\nI noticed that the existing pg_visibility regression tests don't check \nif we freeze the TOAST rows too (failing to do that was one of the \nsymptoms). It'd be good to add that, because that would fail even \nwithout CLOBBER_CACHE_ALWAYS, so attached is a test I propose to add.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 24 Jan 2021 01:54:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FREEZE and setting PD_ALL_VISIBLE/visibility map bits"
}
]
[
{
"msg_contents": "Hi Kirk,\r\n\r\n> Currently, the patch fails to build according to CF app.\r\n> As you know, it has something to do with the misspelling of function.\r\n> GetTimezoneInformation --> GetTimeZoneInformation\r\nThank you. I fixed it. Please see my v7 patch.\r\n\r\nRegards,\r\nAya Iwata\r\n",
"msg_date": "Thu, 21 Feb 2019 08:11:11 +0000",
"msg_from": "\"Iwata, Aya\" <iwata.aya@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: libpq debug log"
}
]
[
{
"msg_contents": "Hi,\n\nI would like to use transactions with a partitioned table using FDW, but transactions cannot be used; they fail with the following error:\n 'ERROR: could not serialize access due to concurrent update'\n\nSo, I tried to write a very simple patch.\nThis patch works for my purpose, but I do not know if it matches general usage.\nI'd like to improve this feature so that it can be used generally, so please check it.\n\nPlease find the attached file.\n\nRegards,\n\n-- \nmitani <mitani@sraw.co.jp>",
"msg_date": "Thu, 21 Feb 2019 17:44:00 +0900",
"msg_from": "mitani <mitani@sraw.co.jp>",
"msg_from_op": true,
"msg_subject": "[PATCH] Allow transaction to partition table using FDW"
},
{
"msg_contents": "Hi there,\n\nI modified my patch in response to what Ishii-san pointed out.\nI previously always set 'READ COMMITTED' in the SQL issued by 'begin_remote_xact()', but changed it so that it is set only when 'XactIsoLevel' == 'XACT_READ_COMMITTED'.\n\nI tested transactional queries against partition tables on remote servers as follows,\n(sent BEGIN - UPDATE - COMMIT queries in two sessions)\n\n target record on the same server target record on a different server\n--------------------------------------------------------------------------------------------------------\ntarget table is same (wait) (wait)\ntarget table is different (no wait) (no wait)\n\n(wait): Session 2 is kept waiting until session 1 commits\n(no wait): Session 2 can be committed before session 1 commits\n\nI do not understand FDW's design philosophy, so please let me know if there is a problem with my patch.\n\nThe target version of PostgreSQL is 11.2, and the target file of this patch is 'contrib/postgres_fdw/connection.c'.\n\nRegards,\n-- \nmitani <mitani@sraw.co.jp>",
"msg_date": "Fri, 22 Feb 2019 17:33:07 +0900",
"msg_from": "mitani <mitani@sraw.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Allow transaction to partition table using FDW"
}
]
[
{
"msg_contents": "In AfterTriggerSaveEvent(), the \"new_shared\" variable is not used outside the\n\"for\" loop, so I think it should be defined only within the loop. The\nfollowing patch makes reading the code a little bit more convenient for me.\n\ndiff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c\nindex 409bee24f8..d95c57f244 100644\n--- a/src/backend/commands/trigger.c\n+++ b/src/backend/commands/trigger.c\n@@ -5743,7 +5743,6 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,\n \tRelation\trel = relinfo->ri_RelationDesc;\n \tTriggerDesc *trigdesc = relinfo->ri_TrigDesc;\n \tAfterTriggerEventData new_event;\n-\tAfterTriggerSharedData new_shared;\n \tchar\t\trelkind = rel->rd_rel->relkind;\n \tint\t\t\ttgtype_event;\n \tint\t\t\ttgtype_level;\n@@ -5937,6 +5936,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,\n \tfor (i = 0; i < trigdesc->numtriggers; i++)\n \t{\n \t\tTrigger *trigger = &trigdesc->triggers[i];\n+\t\tAfterTriggerSharedData new_shared;\n \n \t\tif (!TRIGGER_TYPE_MATCHES(trigger->tgtype,\n \t\t\t\t\t\t\t\t tgtype_level,\n\n--\nAntonin Houska\nhttps://www.cybertec-postgresql.com\n\n",
"msg_date": "Thu, 21 Feb 2019 10:08:20 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Inappropriate scope of local variable"
}
]
[
{
"msg_contents": "I installed latest PostgreSQL-ODBC , OS=Solaris 10, Arch = Sparc. I can not\nuse PostgreSQL ODBC in unixODBC.\n\n-bash-3.2$ LD_DEBUG=libs ./isql -v PG\ndebug:\ndebug: Solaris Linkers: 5.10-1.1518\ndebug:\n19267:\n19267: platform capability (CA_SUNW_PLAT) - sun4v\n19267: machine capability (CA_SUNW_MACH) - sun4v\n19267: hardware capabilities (CA_SUNW_HW_1) - 0x3ffe8dfb [ CRC32C CBCOND\nPAUSE MONT MPMUL SHA512 SHA256 SHA1 MD5 CAMELLIA KASUMI DES AES IMA HPC\nVIS3 FMAF ASI_BLK_INIT VIS2 VIS POPC V8PLUS DIV32 MUL32 ]\n19267:\n19267:\n19267: configuration file=/var/ld/ld.config: unable to process file\n19267:\n19267:\n19267: find object=libodbc.so.2; searching\n19267: search path=/usr/local/unixODBC/lib:/usr/local/lib (RUNPATH/RPATH\nfrom file isql)\n19267: trying path=/usr/local/unixODBC/lib/libodbc.so.2\n19267:\n19267: find object=libiconv.so.2; searching\n19267: search path=/usr/local/unixODBC/lib:/usr/local/lib (RUNPATH/RPATH\nfrom file isql)\n19267: trying path=/usr/local/unixODBC/lib/libiconv.so.2\n19267: trying path=/usr/local/lib/libiconv.so.2\n19267:\n19267: find object=libthread.so.1; searching\n19267: search path=/usr/local/unixODBC/lib:/usr/local/lib (RUNPATH/RPATH\nfrom file isql)\n19267: trying path=/usr/local/unixODBC/lib/libthread.so.1\n19267: trying path=/usr/local/lib/libthread.so.1\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libthread.so.1\n19267:\n19267: find object=libc.so.1; searching\n19267: search path=/usr/local/unixODBC/lib:/usr/local/lib (RUNPATH/RPATH\nfrom file isql)\n19267: trying path=/usr/local/unixODBC/lib/libc.so.1\n19267: trying path=/usr/local/lib/libc.so.1\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libc.so.1\n19267:\n19267: find object=libiconv.so.2; searching\n19267: search path=/usr/local/lib (RUNPATH/RPATH from file\n/usr/local/unixODBC/lib/libodbc.so.2)\n19267: trying 
path=/usr/local/lib/libiconv.so.2\n19267:\n19267: find object=libthread.so.1; searching\n19267: search path=/usr/local/lib (RUNPATH/RPATH from file\n/usr/local/unixODBC/lib/libodbc.so.2)\n19267: trying path=/usr/local/lib/libthread.so.1\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libthread.so.1\n19267:\n19267: find object=libc.so.1; searching\n19267: search path=/usr/local/lib (RUNPATH/RPATH from file\n/usr/local/unixODBC/lib/libodbc.so.2)\n19267: trying path=/usr/local/lib/libc.so.1\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libc.so.1\n19267:\n19267: find object=libgcc_s.so.1; searching\n19267: search path=/usr/local/lib (RUNPATH/RPATH from file\n/usr/local/unixODBC/lib/libodbc.so.2)\n19267: trying path=/usr/local/lib/libgcc_s.so.1\n19267:\n19267: find object=libc.so.1; searching\n19267: search\npath=/usr/local/lib:/usr/lib:/usr/openwin/lib:/usr/local/ssl/lib:/usr/local/BerkeleyDB.4.2/lib\n(RUNPATH/RPATH from file /usr/local/lib/libiconv.so.2)\n19267: trying path=/usr/local/lib/libc.so.1\n19267: trying path=/usr/lib/libc.so.1\n19267:\n19267: find object=libgcc_s.so.1; searching\n19267: search\npath=/usr/local/lib:/usr/lib:/usr/openwin/lib:/usr/local/ssl/lib:/usr/local/BerkeleyDB.4.2/lib\n(RUNPATH/RPATH from file /usr/local/lib/libiconv.so.2)\n19267: trying path=/usr/local/lib/libgcc_s.so.1\n19267:\n19267: find object=libc.so.1; searching\n19267: search path=/usr/local/lib:/usr/local/ssl/lib (RUNPATH/RPATH from\nfile /usr/local/lib/libgcc_s.so.1)\n19267: trying path=/usr/local/lib/libc.so.1\n19267: trying path=/usr/local/ssl/lib/libc.so.1\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libc.so.1\n19267:\n19267: find object=libc.so.1; searching\n19267: search path=/lib (default)\n19267: search path=/usr/lib (default)\n19267: trying path=/lib/libc.so.1\n19267:\n19267: 1:\n19267: 1: transferring control: 
isql\n19267: 1:\n19267: 1:\n19267: 1: find object=/platform/sun4v/lib/libc_psr.so.1; searching\n19267: 1: trying path=/platform/sun4v/lib/libc_psr.so.1\n19267: 1:\n19267: 1: find object=/usr/local/lib/psqlodbcw.so; searching\n19267: 1: trying path=/usr/local/lib/psqlodbcw.so\n19267: 1:\n19267: 1: find object=libpq.so.5; searching\n19267: 1: search path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libpq.so.5\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libpq.so.5\n19267: 1: trying path=/usr/lib/libpq.so.5\n19267: 1:\n19267: 1: find object=libpthread.so.1; searching\n19267: 1: search path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libpthread.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libpthread.so.1\n19267: 1:\n19267: 1: find object=libodbcinst.so.2; searching\n19267: 1: search path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libodbcinst.so.2\n19267: 1:\n19267: 1: find object=libthread.so.1; searching\n19267: 1: search path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libthread.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libthread.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libc.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libgcc_s.so.1; searching\n19267: 1: search 
path=/usr/local/unixODBC/lib (RUNPATH/RPATH from file\n/usr/local/lib/psqlodbcw.so)\n19267: 1: trying path=/usr/local/unixODBC/lib/libgcc_s.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libgcc_s.so.1\n19267: 1: trying path=/usr/lib/libgcc_s.so.1\n19267: 1:\n19267: 1: find object=libnsl.so.1; searching\n19267: 1: search path=/usr/local/pgsql/lib (RUNPATH/RPATH from file\n/usr/lib/libpq.so.5)\n19267: 1: trying path=/usr/local/pgsql/lib/libnsl.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libnsl.so.1\n19267: 1:\n19267: 1: find object=libsocket.so.1; searching\n19267: 1: search path=/usr/local/pgsql/lib (RUNPATH/RPATH from file\n/usr/lib/libpq.so.5)\n19267: 1: trying path=/usr/local/pgsql/lib/libsocket.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libsocket.so.1\n19267: 1:\n19267: 1: find object=libgcc_s.so.1; searching\n19267: 1: search path=/usr/local/pgsql/lib (RUNPATH/RPATH from file\n/usr/lib/libpq.so.5)\n19267: 1: trying path=/usr/local/pgsql/lib/libgcc_s.so.1\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libgcc_s.so.1\n19267: 1: trying path=/usr/lib/libgcc_s.so.1\n19267: 1:\n19267: 1: find object=libthread.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libthread.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libgcc_s.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libgcc_s.so.1\n19267: 1: trying path=/usr/lib/libgcc_s.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: 
search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libnsl.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libnsl.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libmp.so.2; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libmp.so.2\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libmd.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libmd.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libscf.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libscf.so.1\n19267: 1:\n19267: 1: find object=libdoor.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libdoor.so.1\n19267: 1:\n19267: 1: find object=libuutil.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libuutil.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libgen.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib 
(default)\n19267: 1: trying path=/lib/libgen.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1: find object=libc.so.1; searching\n19267: 1: search path=/lib (default)\n19267: 1: search path=/usr/lib (default)\n19267: 1: trying path=/lib/libc.so.1\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverLoad: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLDriverLoad: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverUnload: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLDriverUnload: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLAllocConnect: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLAllocConnect: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLAllocEnv: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLAllocEnv: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLAllocHandle: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLAllocStmt: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLAllocStmt: can't find symbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLAllocHandleStd: can't find\nsymbol\n19267: 1:\n19267: 1:\n19267: 1:\n19267: 1: ld.so.1: isql: fatal: SQLAllocHandleStd: can't find symbol\n19267: 1:\n19267: 1:\n19267: 
I can not use PostgreSQL ODBC in unixODBC. I installed the latest
PostgreSQL ODBC driver; OS = Solaris 10, Arch = SPARC.

-bash-3.2$ LD_DEBUG=libs ./isql -v PG
debug:
debug: Solaris Linkers: 5.10-1.1518
debug:
19267:
19267: platform capability (CA_SUNW_PLAT) - sun4v
19267: machine capability (CA_SUNW_MACH) - sun4v
19267: hardware capabilities (CA_SUNW_HW_1) - 0x3ffe8dfb [ CRC32C CBCOND PAUSE MONT MPMUL SHA512 SHA256 SHA1 MD5 CAMELLIA KASUMI DES AES IMA HPC VIS3 FMAF ASI_BLK_INIT VIS2 VIS POPC V8PLUS DIV32 MUL32 ]
19267:
19267: configuration file=/var/ld/ld.config: unable to process file
19267:
19267: find object=libodbc.so.2; searching
19267:  search path=/usr/local/unixODBC/lib:/usr/local/lib  (RUNPATH/RPATH from file isql)
19267:  trying path=/usr/local/unixODBC/lib/libodbc.so.2
[... similar find/search/trying stanzas for libiconv.so.2, libthread.so.1,
libc.so.1, and libgcc_s.so.1 ...]
19267: 1: transferring control: isql
19267: 1:
19267: 1: find object=/platform/sun4v/lib/libc_psr.so.1; searching
19267: 1:  trying path=/platform/sun4v/lib/libc_psr.so.1
19267: 1:
19267: 1: find object=/usr/local/lib/psqlodbcw.so; searching
19267: 1:  trying path=/usr/local/lib/psqlodbcw.so
19267: 1:
19267: 1: find object=libpq.so.5; searching
19267: 1:  search path=/usr/local/unixODBC/lib  (RUNPATH/RPATH from file /usr/local/lib/psqlodbcw.so)
19267: 1:  trying path=/usr/local/unixODBC/lib/libpq.so.5
19267: 1:  search path=/lib  (default)
19267: 1:  search path=/usr/lib  (default)
19267: 1:  trying path=/lib/libpq.so.5
19267: 1:  trying path=/usr/lib/libpq.so.5
[... similar stanzas resolving libpthread.so.1, libodbcinst.so.2,
libnsl.so.1, libsocket.so.1, libmp.so.2, libmd.so.1, libscf.so.1,
libdoor.so.1, libuutil.so.1, and libgen.so.1; note that libpq.so.5 is
found via the default /usr/lib path even though its own RUNPATH names
/usr/local/pgsql/lib ...]
19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverLoad: can't find symbol
19267: 1: ld.so.1: isql: fatal: SQLDriverLoad: can't find symbol
19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverUnload: can't find symbol
19267: 1: ld.so.1: isql: fatal: SQLDriverUnload: can't find symbol
[... the same pair of "psqlodbcw_LTX_SQLxxx: can't find symbol" /
"SQLxxx: can't find symbol" messages repeats for every ODBC entry point,
from SQLAllocConnect and SQLAllocEnv through SQLTables, SQLTransact,
SQLGetDiagRec, and SQLCancelHandle ...]
19267: 1: ld.so.1: isql: fatal: relocation error: file /usr/local/lib/psqlodbcw.so: symbol PQExpBufferDataBroken: referenced symbol not found
19267: 1:
19267: 1:
ld.so.1: isql: fatal: relocation error: file /usr/local/lib/psqlodbcw.so: symbol PQExpBufferDataBroken: referenced symbol not found
Killed
1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverConnectA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLDriverConnectA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriverConnectW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDrivers: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLDrivers: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriversA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLDriversA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLDriversW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLDriversW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLEndTran: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLError: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLError: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLErrorA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLErrorA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLErrorW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLErrorW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLExecDirect: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLExecDirectA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLExecDirectA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLExecDirectW: can't find 
symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLExecute: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLExtendedFetch: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFetch: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFetchScroll: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLForeignKeys: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLForeignKeysA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLForeignKeysA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLForeignKeysW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFreeEnv: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLFreeEnv: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFreeHandle: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFreeStmt: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLFreeConnect: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLFreeConnect: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetConnectAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetConnectAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetConnectAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetConnectAttrW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: 
psqlodbcw_LTX_SQLGetConnectOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetConnectOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetConnectOptionA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetConnectOptionA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetConnectOptionW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetConnectOptionW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetCursorName: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetCursorNameA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetCursorNameA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetCursorNameW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetData: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescField: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetDescFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescFieldW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescRec: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescRecA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetDescRecA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDescRecW: can't find symbol19267: 1: 19267: 1: 19267: 
1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagField: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetDiagFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagFieldW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetEnvAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetFunctions: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetInfo: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetInfoA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetInfoA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetInfoW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetStmtAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetStmtAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetStmtAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetStmtAttrW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetStmtOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetStmtOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetTypeInfo: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetTypeInfoA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetTypeInfoA: can't find symbol19267: 1: 19267: 1: 
19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetTypeInfoW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLMoreResults: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLNativeSql: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLNativeSqlA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLNativeSqlA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLNativeSqlW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLNumParams: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLNumResultCols: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLParamData: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLParamOptions: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLParamOptions: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrepare: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrepareA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLPrepareA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrepareW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrimaryKeys: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrimaryKeysA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLPrimaryKeysA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPrimaryKeysW: can't find symbol19267: 1: 19267: 
1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProcedureColumns: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProcedureColumnsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLProcedureColumnsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProcedureColumnsW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProcedures: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProceduresA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLProceduresA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLProceduresW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLPutData: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLRowCount: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetConnectAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetConnectAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetConnectAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetConnectAttrW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetConnectOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetConnectOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetConnectOptionA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetConnectOptionA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: 
psqlodbcw_LTX_SQLSetConnectOptionW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetConnectOptionW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetCursorName: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetCursorNameA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetCursorNameA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetCursorNameW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetDescField: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetDescFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetDescFieldA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetDescFieldW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetDescRec: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetEnvAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetParam: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetPos: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetScrollOptions: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetScrollOptions: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetStmtAttr: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetStmtAttrA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetStmtAttrA: can't find symbol19267: 1: 19267: 1: 19267: 
1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetStmtAttrW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSetStmtOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSetStmtOption: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSpecialColumns: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSpecialColumnsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLSpecialColumnsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLSpecialColumnsW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLStatistics: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLStatisticsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLStatisticsA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLStatisticsW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTablePrivileges: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTablePrivilegesA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLTablePrivilegesA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTablePrivilegesW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTables: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTablesA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLTablesA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTablesW: can't find 
symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLTransact: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLTransact: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagRec: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagRecA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLGetDiagRecA: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLGetDiagRecW: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: psqlodbcw_LTX_SQLCancelHandle: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: SQLCancelHandle: can't find symbol19267: 1: 19267: 1: 19267: 1: 19267: 1: ld.so.1: isql: fatal: relocation error: file /usr/local/lib/psqlodbcw.so: symbol PQExpBufferDataBroken: referenced symbol not found19267: 1: 19267: 1: ld.so.1: isql: fatal: relocation error: file /usr/local/lib/psqlodbcw.so: symbol PQExpBufferDataBroken: referenced symbol not foundKilled",
"msg_date": "Thu, 21 Feb 2019 13:30:26 +0400",
"msg_from": "Nariman Ibadullaev <nariman.ibadullaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Solaris 10 (sparc) and unixODBC problem"
}
] |
[
{
"msg_contents": "As mentioned on\n\nhttps://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/\n\nhow about this:\n\n=> \\h analyze\nCommand: ANALYZE\nDescription: collect statistics about a database\nSyntax:\nANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\nANALYZE [ VERBOSE ] [ table_and_columns [, ...] ]\n\nwhere option can be one of:\n\n VERBOSE\n SKIP_LOCKED\n\nand table_and_columns is:\n\n table_name [ ( column_name [, ...] ) ]\n\nURL: https://www.postgresql.org/docs/12/sql-analyze.html\n^^^^\n\n(Won't actually work because the web site isn't serving \"12\" URLs yet,\nbut that's something that could probably be sorted out.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 21 Feb 2019 18:28:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "psql show URL with help"
},
{
"msg_contents": "čt 21. 2. 2019 v 18:28 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> As mentioned on\n>\n>\n> https://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/\n>\n> how about this:\n>\n> => \\h analyze\n> Command: ANALYZE\n> Description: collect statistics about a database\n> Syntax:\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n> ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ]\n>\n> where option can be one of:\n>\n> VERBOSE\n> SKIP_LOCKED\n>\n> and table_and_columns is:\n>\n> table_name [ ( column_name [, ...] ) ]\n>\n> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n> ^^^^\n>\n> (Won't actually work because the web site isn't serving \"12\" URLs yet,\n> but that's something that could probably be sorted out.)\n>\n\nWhy not? It can be useful\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>",
"msg_date": "Thu, 21 Feb 2019 18:32:51 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 6:33 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> čt 21. 2. 2019 v 18:28 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:\n>>\n>> As mentioned on\n>>\n>> https://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/\n>>\n>> how about this:\n>>\n>> => \\h analyze\n>> Command: ANALYZE\n>> Description: collect statistics about a database\n>> Syntax:\n>> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n>> ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ]\n>>\n>> where option can be one of:\n>>\n>> VERBOSE\n>> SKIP_LOCKED\n>>\n>> and table_and_columns is:\n>>\n>> table_name [ ( column_name [, ...] ) ]\n>>\n>> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n>> ^^^^\n>>\n>> (Won't actually work because the web site isn't serving \"12\" URLs yet,\n>> but that's something that could probably be sorted out.)\n>\n>\n> Why not? It can be useful\n\nOr just use the devel version for not-released-yet versions?\n\n",
"msg_date": "Thu, 21 Feb 2019 18:41:45 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 06:28:09PM +0100, Peter Eisentraut wrote:\n> As mentioned on\n> \n> https://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/\n> \n> how about this:\n> \n> => \\h analyze\n> Command: ANALYZE\n> Description: collect statistics about a database\n> Syntax:\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n> ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ]\n> \n> where option can be one of:\n> \n> VERBOSE\n> SKIP_LOCKED\n> \n> and table_and_columns is:\n> \n> table_name [ ( column_name [, ...] ) ]\n> \n> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n> ^^^^\n> \n> (Won't actually work because the web site isn't serving \"12\" URLs yet,\n> but that's something that could probably be sorted out.)\n\nSince there's no longer any mystery as to what the upcoming major\nversion of PostgreSQL will be, it should be pretty straightforward to\nredirect integers > max(released version) to the devel docs.\n\nThis could cause some confusion because we branch long (feature- and\nbug-wise) before we do the release, and a checkout from immediately\nafter branching will be *very* different from one just before the\nrelease. The way I've come up with to clear this up is *way* too\nexpensive: lazily create doc builds for each SHA1 requested. Pointing\npeople at the latest devel docs is less confusing than pointing them\nat nothing or at the previous version, those being, as far as I can\ntell, the viable alternatives.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 22 Feb 2019 00:07:22 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Thu, Feb 21, 2019 at 6:28 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> As mentioned on\n>\n>\n> https://www.cybertec-postgresql.com/en/looking-at-mysql-8-with-postgresql-goggles-on/\n>\n> how about this:\n>\n> => \\h analyze\n> Command: ANALYZE\n> Description: collect statistics about a database\n> Syntax:\n> ANALYZE [ ( option [, ...] ) ] [ table_and_columns [, ...] ]\n> ANALYZE [ VERBOSE ] [ table_and_columns [, ...] ]\n>\n> where option can be one of:\n>\n> VERBOSE\n> SKIP_LOCKED\n>\n> and table_and_columns is:\n>\n> table_name [ ( column_name [, ...] ) ]\n>\n> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n> ^^^^\n>\n>\nI've had doing this on my TODO for a few years, but never managed to get\naround to it. So strong +1 for the idea :)\n\n\n\n> (Won't actually work because the web site isn't serving \"12\" URLs yet,\n> but that's something that could probably be sorted out.)\n>\n\nWhy not just link to /devel/ when it's a devel version? The 12 docs will be\nup alongside the first beta version, so it should be perfectly possible to\nhave it do that based on information from configure, no?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 12:07:45 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "Em qui, 21 de fev de 2019 às 14:28, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n> ^^^^\n>\nWhat happens if I connect to an old server? Does it print the URL\naccording to the psql version or the server version? psql prints help\nfor its own version, and if the user wants details about a command and\nclicks the URL, what is shown is the 12 docs while the user is\naccessing a 9.4 server. Oops... the command failed. Check the URL\nagain but... wait, it is not the 9.4 docs. There is also the case that\nsome commands don't exist on old versions but the URL will still be\nprinted. IMHO the URL should be printed iff the psql version is the\nsame as the server version. If we want flexibility, let's add an\noption controlling URL display (always/same) that defaults to same.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Fri, 22 Feb 2019 11:37:08 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-02-22 15:37, Euler Taveira wrote:\n> Em qui, 21 de fev de 2019 às 14:28, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> escreveu:\n>> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n>> ^^^^\n>>\n> What happen if I connect to an old server? Did it print URL according\n> to psql version or server version? psql prints help about its version\n> and if user wants details about a command, clicks in the URL but what\n> is shown is 12 docs but user is accessing a 9.4 server. Ops... the\n> command failed. Check the URL again but... wait it is not 9.4 docs.\n\nWell, the help that currently displays is also hardcoded to the psql\nversion. At least this way it would indicate in the URL that it might\npertain to a different version.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Feb 2019 15:50:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> Em qui, 21 de fev de 2019 às 14:28, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> escreveu:\n>> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n\n> What happen if I connect to an old server? Did it print URL according\n> to psql version or server version? psql prints help about its version\n> and if user wants details about a command, clicks in the URL but what\n> is shown is 12 docs but user is accessing a 9.4 server.\n\nThe syntax summary that psql is showing is for its own version, and\nI'd say the URL must be too. You can't even be sure that a corresponding\nURL would exist in another version, so blindly inserting the server's\nmajor version into a URL string that psql has doesn't seem like a bright\nidea.\n\n(I'm assuming that the implementation Peter has in mind is that these\nURLs would just be part of the prefab help text that psql has for\nvarious commands. If we somehow involved the server in it, then\nmaybe things could be different; but I doubt that's possible without\na protocol change, which it's probably not worth.)\n\nIn the end, if you are using a server version that's different from\nyour psql version, there are lots of ways things could go wrong.\nI think it's up to the user to take psql's help with a grain of salt\nin such cases.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 09:55:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "Em sex, 22 de fev de 2019 às 11:55, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> The syntax summary that psql is showing is for its own version, and\n> I'd say the URL must be too. You can't even be sure that a corresponding\n> URL would exist in another version, so blindly inserting the server's\n> major version into a URL string that psql has doesn't seem like a bright\n> idea.\n>\nI'm not suggesting that we replace the version number with the latest\nversion's URL. However, we could suppress the URL when the versions\nmismatch. If psql weren't backward compatible we shouldn't care, but\nit is. Someone could be confused, as I said earlier.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Fri, 22 Feb 2019 12:10:20 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "Euler Taveira <euler@timbira.com.br> writes:\n> I'm not suggesting that we replace version number using the latest\n> version URL. However, we could prevent URL to be shown if the version\n> mismatch. If psql wasn't backward compatible we shouldn't care but it\n> is. Someone could be confused as I said earlier.\n\nI tend to agree with Peter that showing the URL is actually better\nthan not doing so, in such a case --- it might remind the user\nwhich version the help text is for.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 12:40:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-02-21 18:28, Peter Eisentraut wrote:\n> => \\h analyze\n\n> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n> ^^^^\n\nHere is a patch.\n\nIt doesn't deal with the \"devel\" paths yet. Discussion there is still\nongoing a bit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 25 Feb 2019 12:05:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-02-22 12:07, Magnus Hagander wrote:\n> (Won't actually work because the web site isn't serving \"12\" URLs yet,\n> but that's something that could probably be sorted out.)\n> \n> \n> Why not just link to /devel/ when it's a devel version? The 12 docs will\n> be up alongside the first beta version, so it should be perfectly\n> possible to have it do that based on information from configure, no?\n\nWhy not just serve /12/ from the web site earlier? Is there a reason\nnot to?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Feb 2019 12:06:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Mon, 25 Feb 2019 at 12:06, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-02-22 12:07, Magnus Hagander wrote:\n> > (Won't actually work because the web site isn't serving \"12\" URLs\n> yet,\n> > but that's something that could probably be sorted out.)\n> >\n> >\n> > Why not just link to /devel/ when it's a devel version? The 12 docs will\n> > be up alongside the first beta version, so it should be perfectly\n> > possible to have it do that based on information from configure, no?\n>\n> Why not just serve /12/ from the web site earlier? Is there a reason\n> not to?\n>\n\nI had the same idea. The fact that this version is a development version\ncould be indicated by some styling.\n\nBut a URL should be stable.\n\nRegards\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n",
"msg_date": "Mon, 25 Feb 2019 12:29:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-02-25 12:05, Peter Eisentraut wrote:\n> On 2019-02-21 18:28, Peter Eisentraut wrote:\n>> => \\h analyze\n> \n>> URL: https://www.postgresql.org/docs/12/sql-analyze.html\n>> ^^^^\n> \n> Here is a patch.\n> \n> It doesn't deal with the \"devel\" paths yet. Discussion there is still\n> ongoing a bit.\n\nA new patch that now handles the \"devel\" part. Seems easy enough.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 27 Feb 2019 09:14:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Wed, Feb 27, 2019 at 09:14:59AM +0100, Peter Eisentraut wrote:\n> +\t\t\t\t\turl = psprintf(\"https://www.postgresql.org/docs/%s/%s.html\",\n> +\t\t\t\t\t\t\t\t strstr(PG_VERSION, \"devel\") ? \"devel\" : PG_MAJORVERSION,\n> +\t\t\t\t\t\t\t\t QL_HELP[i].docbook_id);\n\nDo we need to make sure that the docs are published under the major\nversion as soon as we get to alpha, or do we need something more like\nthis?\n\n url = psprintf(\"https://www.postgresql.org/docs/%s/%s.html\",\n (strstr(PG_VERSION, \"devel\") || strstr(PG_VERSION, \"beta\") ||\n strstr(PG_VERSION, \"alpha\")) ? \"devel\" : PG_MAJORVERSION,\n QL_HELP[i].docbook_id);\n\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 3 Mar 2019 19:14:32 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "Hi,\nIs there any documentation change required for this patch?\nCheers\nRam 4.0\n",
"msg_date": "Mon, 4 Mar 2019 02:25:35 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Sun, Mar 3, 2019 at 7:14 PM David Fetter <david@fetter.org> wrote:\n\n> On Wed, Feb 27, 2019 at 09:14:59AM +0100, Peter Eisentraut wrote:\n> > + url = psprintf(\"\n> https://www.postgresql.org/docs/%s/%s.html\",\n> > +\n> strstr(PG_VERSION, \"devel\") ? \"devel\" : PG_MAJORVERSION,\n> > +\n> QL_HELP[i].docbook_id);\n>\n> Do we need to make sure that the docs are published under the major\n> version as soon as we get to alpha, or do we need something more like\n> this?\n>\n> url = psprintf(\"https://www.postgresql.org/docs/%s/%s.html\",\n> (strstr(PG_VERSION, \"devel\") || strstr(PG_VERSION, \"beta\") ||\n> strstr(PG_VERSION, \"alpha\")) : \"devel\" : PG_MAJORVERSION,\n> QL_HELP[i].docbook_id);\n>\n\nWe don't really release alphas any more. And we do load the documentation\nalongside the betas. (Last time we did an alpha was so long ago I don't\nremember if we loaded docs)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sun, 3 Mar 2019 21:57:25 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Sun, Mar 03, 2019 at 09:57:25PM +0100, Magnus Hagander wrote:\n> On Sun, Mar 3, 2019 at 7:14 PM David Fetter <david@fetter.org> wrote:\n> \n> > On Wed, Feb 27, 2019 at 09:14:59AM +0100, Peter Eisentraut wrote:\n> > > + url = psprintf(\"\n> > https://www.postgresql.org/docs/%s/%s.html\",\n> > > +\n> > strstr(PG_VERSION, \"devel\") ? \"devel\" : PG_MAJORVERSION,\n> > > +\n> > QL_HELP[i].docbook_id);\n> >\n> > Do we need to make sure that the docs are published under the major\n> > version as soon as we get to alpha, or do we need something more like\n> > this?\n> >\n> > url = psprintf(\"https://www.postgresql.org/docs/%s/%s.html\",\n> > (strstr(PG_VERSION, \"devel\") || strstr(PG_VERSION, \"beta\") ||\n> > strstr(PG_VERSION, \"alpha\")) : \"devel\" : PG_MAJORVERSION,\n> > QL_HELP[i].docbook_id);\n> >\n> \n> We don't really release alphas any more. And we do load the documentation\n> alongside the betas. (Last time we did an alpha was so long ago I don't\n> remember if we loaded docs)\n\nIf the first thing we do when we move from devel to some other state\n(beta, RC, etc.) is to publish the docs under the major version\nnumber, then maybe this test should be more along the lines of looking\nfor anything that's neither devel nor a number, extract the number,\nand use that.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 3 Mar 2019 22:48:33 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Sun, Mar 3, 2019 at 10:48 PM David Fetter <david@fetter.org> wrote:\n\n> On Sun, Mar 03, 2019 at 09:57:25PM +0100, Magnus Hagander wrote:\n> > On Sun, Mar 3, 2019 at 7:14 PM David Fetter <david@fetter.org> wrote:\n> >\n> > > On Wed, Feb 27, 2019 at 09:14:59AM +0100, Peter Eisentraut wrote:\n> > > > + url = psprintf(\"\n> > > https://www.postgresql.org/docs/%s/%s.html\",\n> > > > +\n> > > strstr(PG_VERSION, \"devel\") ? \"devel\" : PG_MAJORVERSION,\n> > > > +\n> > > QL_HELP[i].docbook_id);\n> > >\n> > > Do we need to make sure that the docs are published under the major\n> > > version as soon as we get to alpha, or do we need something more like\n> > > this?\n> > >\n> > > url = psprintf(\"https://www.postgresql.org/docs/%s/%s.html\",\n> > > (strstr(PG_VERSION, \"devel\") || strstr(PG_VERSION, \"beta\")\n> ||\n> > > strstr(PG_VERSION, \"alpha\")) : \"devel\" : PG_MAJORVERSION,\n> > > QL_HELP[i].docbook_id);\n> > >\n> >\n> > We don't really release alphas any more. And we do load the documentation\n> > alongside the betas. (Last time we did an alpha was so long ago I don't\n> > remember if we loaded docs)\n>\n> If the first thing we do when we move from devel to some other state\n> (beta, RC, etc.) is to publish the docs under the major version\n> number, then maybe this test should be more along the lines of looking\n> for anything that's neither devel nor a number, extract the number,\n> and use that.\n>\n\nWell, alpha versions do go under the numeric URL. Whether we load the docs\nat that time or not we can just choose -- but there is no reason not to. So\nyeah, that sounds like it would work better.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 4 Mar 2019 17:55:58 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-03-04 17:55, Magnus Hagander wrote:\n> If the first thing we do when we move from devel to some other state\n> (beta, RC, etc.) is to publish the docs under the major version\n> number, then maybe this test should be more along the lines of looking\n> for anything that's neither devel nor a number, extract the number,\n> and use that.\n> \n> \n> Well, alpha versions do go under the numeric URL. Whether we load the\n> docs at that time or not we can just choose -- but there is no reason\n> not to. So yeah, that sounds like it would work better. \n\nCan you put your proposal in the form of some logical pseudocode? I\ndon't understand the free-form description.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 08:41:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 11:41 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-04 17:55, Magnus Hagander wrote:\n> > If the first thing we do when we move from devel to some other state\n> > (beta, RC, etc.) is to publish the docs under the major version\n> > number, then maybe this test should be more along the lines of\n> looking\n> > for anything that's neither devel nor a number, extract the number,\n> > and use that.\n> >\n> >\n> > Well, alpha versions do go under the numeric URL. Whether we load the\n> > docs at that time or not we can just choose -- but there is no reason\n> > not to. So yeah, that sounds like it would work better.\n>\n> Can you put your proposal in the form of some logical pseudocode? I\n> don't understand the free-form description.\n>\n\nHah, sorry. It's actually David's proposal, but something like:\n\nif (psql_version_is_numeric)\n return /docs/psql_version/\nelse if (psql_version ends with 'devel')\n return /docs/devel/\nelse\n return /docs/{psql_version but with text stripped}/\n\nSo that e.g. 12beta would return \"12\", as would 12rc or 12alpha. But\n12devel would return \"devel\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 7 Mar 2019 11:44:07 -0800",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
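Magnus's pseudocode above can be sketched as a small standalone C helper. This is an editor's illustration only, not code from the thread's patches; the function name and buffer handling are invented, and it implements exactly the three branches he lists (numeric as-is, "...devel" to "devel", otherwise the leading digits, so "12beta1" maps to "12"):

```c
#include <stdio.h>
#include <string.h>

/*
 * Map a PG_VERSION-style string to the docs path component per the
 * three-branch pseudocode: purely numeric versions pass through,
 * "...devel" becomes "devel", and anything else keeps only its
 * leading digits ("12beta1" -> "12", "12rc1" -> "12").
 */
static void
docs_path_component(const char *version, char *out, size_t outlen)
{
    size_t len = strlen(version);
    size_t numeric = strspn(version, "0123456789.");

    if (numeric == len)
        snprintf(out, outlen, "%s", version);           /* e.g. "10.9" */
    else if (len >= 5 && strcmp(version + len - 5, "devel") == 0)
        snprintf(out, outlen, "devel");                 /* e.g. "12devel" */
    else
        snprintf(out, outlen, "%.*s",
                 (int) strspn(version, "0123456789"), version);
}
```

A caller would then build `https://www.postgresql.org/docs/<component>/<page>.html` from the result; the committed patch ended up needing only two of these branches, since PG_MAJORVERSION already provides the stripped number.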
{
"msg_contents": "On Thu, Mar 07, 2019 at 11:44:07AM -0800, Magnus Hagander wrote:\n> On Mon, Mar 4, 2019 at 11:41 PM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n> \n> > On 2019-03-04 17:55, Magnus Hagander wrote:\n> > > If the first thing we do when we move from devel to some other state\n> > > (beta, RC, etc.) is to publish the docs under the major version\n> > > number, then maybe this test should be more along the lines of\n> > looking\n> > > for anything that's neither devel nor a number, extract the number,\n> > > and use that.\n> > >\n> > >\n> > > Well, alpha versions do go under the numeric URL. Whether we load the\n> > > docs at that time or not we can just choose -- but there is no reason\n> > > not to. So yeah, that sounds like it would work better.\n> >\n> > Can you put your proposal in the form of some logical pseudocode? I\n> > don't understand the free-form description.\n> >\n> \n> Hah, sorry. It's actually Davids proposal, but something like:\n> \n> if (psql_version_is_numeric)\n> return /docs/psql_version/\n> else if (psql_version ends with 'devel')\n> return /docs/devel/\n> else\n> return /docs/{psql_version but with text stripped}/\n> \n> So that e.g. 12beta would return \"12\", as would 12rc or 12alpha. But\n> 12devel would return \"devel\".\n\nThat's exactly what I had in mind :)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Thu, 7 Mar 2019 23:02:34 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-03-07 23:02, David Fetter wrote:\n>> if (psql_version_is_numeric)\n>> return /docs/psql_version/\n>> else if (psql_version ends with 'devel')\n>> return /docs/devel/\n>> else\n>> return /docs/{psql_version but with text stripped}/\n>>\n>> So that e.g. 12beta would return \"12\", as would 12rc or 12alpha. But\n>> 12devel would return \"devel\".\n> \n> That's exactly what I had in mind :)\n\nThe outcome of that is exactly what my patch does, but the inputs are\ndifferent. We have PG_MAJORVERSION, which is always a single integer,\nand PG_VERSION, which could be 10.9.8 or 11beta5 or 12devel. The patch does\n\nif (PG_VERSION ends with 'devel')\n return /docs/devel/\nelse\n return /docs/$PG_MAJORVERSION/\n\nThere is no third case. Your third case of not-numeric-and-not-devel is\ncorrectly covered by the else branch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 13:45:03 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
},
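The two-branch logic Peter describes can be sketched as a self-contained C function. This is an illustrative sketch, not the committed psql code: the helper name is invented, the version strings are passed as parameters rather than taken from pg_config.h's PG_VERSION/PG_MAJORVERSION macros, and the returned buffer is heap-allocated for simplicity:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Build a help URL the way the patch does: use "devel" when the full
 * version string contains "devel", otherwise the bare major version,
 * which also covers beta/rc releases ("11beta5" -> /docs/11/).
 * Caller frees the result.
 */
static char *
help_url(const char *pg_version, const char *pg_majorversion,
         const char *docbook_id)
{
    char *url = malloc(128);

    if (url != NULL)
        snprintf(url, 128, "https://www.postgresql.org/docs/%s/%s.html",
                 strstr(pg_version, "devel") ? "devel" : pg_majorversion,
                 docbook_id);
    return url;
}
```

For example, `help_url("12devel", "12", "sql-analyze")` yields the /docs/devel/ URL, while `help_url("11beta5", "11", "sql-analyze")` yields the /docs/11/ one, matching the behavior discussed above.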
{
"msg_contents": "On Fri, Mar 08, 2019 at 01:45:03PM +0100, Peter Eisentraut wrote:\n> On 2019-03-07 23:02, David Fetter wrote:\n> >> if (psql_version_is_numeric)\n> >> return /docs/psql_version/\n> >> else if (psql_version ends with 'devel')\n> >> return /docs/devel/\n> >> else\n> >> return /docs/{psql_version but with text stripped}/\n> >>\n> >> So that e.g. 12beta would return \"12\", as would 12rc or 12alpha. But\n> >> 12devel would return \"devel\".\n> > \n> > That's exactly what I had in mind :)\n> \n> The outcome of that is exactly what my patch does, but the inputs are\n> different. We have PG_MAJORVERSION, which is always a single integer,\n> and PG_VERSION, which could be 10.9.8 or 11beta5 or 12devel. The patch does\n> \n> if (PG_VERSION ends with 'devel')\n> return /docs/devel/\n> else\n> return /docs/$PG_MAJORVERSION/\n> \n> There is no third case. Your third case of not-numeric-and-not-devel is\n> correctly covered by the else branch.\n\nThanks for helping me understand.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 8 Mar 2019 16:11:25 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql show URL with help"
},
{
"msg_contents": "On 2019-03-08 16:11, David Fetter wrote:\n>> The outcome of that is exactly what my patch does, but the inputs are\n>> different. We have PG_MAJORVERSION, which is always a single integer,\n>> and PG_VERSION, which could be 10.9.8 or 11beta5 or 12devel. The patch does\n>>\n>> if (PG_VERSION ends with 'devel')\n>> return /docs/devel/\n>> else\n>> return /docs/$PG_MAJORVERSION/\n>>\n>> There is no third case. Your third case of not-numeric-and-not-devel is\n>> correctly covered by the else branch.\n> \n> Thanks for helping me understand.\n\nCommitted, thanks.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 11 Mar 2019 09:13:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: psql show URL with help"
}
] |
[
{
"msg_contents": "Attached set of patches with some jsonb optimizations that were made during\ncomparison of performance of ordinal jsonb operators and jsonpath operators.\n\n1. Optimize JsonbExtractScalar():\n It is better to use getIthJsonbValueFromContainer(cont, 0) instead of\n JsonIterator to get 0th element of raw-scalar pseudoarray.\n JsonbExtractScalar() is used in jsonb casts, so they speed up a bit.\n\n2. Optimize operator #>>, jsonb_each_text(), jsonb_array_elements_text():\n These functions have direct conversion (JsonbValue => text) only for\n jbvString scalars, but indirect conversion of other types of scalars\n (JsonbValue => jsonb => text) is obviously too slow. Extracted common\n subroutine JsonbValueAsText() and used in all suitable places.\n\n3. Optimize JsonbContainer type recognition in get_jsonb_path_all():\n Fetching of the first token from JsonbIterator is replaced with lightweight\n JsonbContainerIsXxx() macros.\n\n4. Extract findJsonbKeyInObject():\n Extracted findJsonbKeyInObject() from findJsonbValueFromContainer(),\n which is slightly easier to use (key string and its length is passed instead\n of filled string JsonbValue).\n\n5. Optimize resulting value allocation in findJsonbValueFromContainer() and\n getIthJsonbValueFromContainer():\n Added ability to pass stack-allocated JsonbValue that will be filled with\n the result of operation instead of returning unconditionally palloc()ated\n JsonbValue.\n\nPatches #4 and #5 are mostly refactorings, but they can give small speedup\n(up to 5% for upcoming jsonpath operators) due to elimination of unnecessary\npalloc()s. The whole interface of findJsonbValueFromContainer() with JB_OBJECT\nand JB_ARRAY flags always seemed a bit strange to me, so I think it is worth to\nhave separate functions for searching keys in objects and elements in arrays.\n\n\nPerformance tests:\n - Test data for {\"x\": {\"y\": {\"z\": i}}}:\n CREATE TABLE t AS\n SELECT jsonb_build_object('x',\n jsonb_build_object('y',\n jsonb_build_object('z', i))) js\n FROM generate_series(1, 3000000) i;\n\n - Sample query:\n EXPLAIN (ANALYZE) SELECT js -> 'x' -> 'y' -> 'z' FROM t;\n\n - Results:\n | execution time, ms\n query | master | optimized\n-------------------------------------------------------------------------------\n {\"x\": {\"y\": {\"z\": i}}}\n js #> '{x,y,z}' | 1148.632 | 1005.578 -10%\n js #>> '{x,y,z}' | 1520.160 | 849.991 -40%\n (js #> '{x,y,z}')::numeric | 1310.881 | 1067.752 -20%\n (js #>> '{x,y,z}')::numeric | 1757.179 | 1109.495 -30%\n\n js -> 'x' -> 'y' -> 'z' | 1030.211 | 977.267\n js -> 'x' -> 'y' ->> 'z' | 887.101 | 838.745\n (js -> 'x' -> 'y' -> 'z')::numeric | 1184.086 | 1050.462\n (js -> 'x' -> 'y' -> 'z')::int4 | 1279.315 | 1133.032\n (js -> 'x' -> 'y' ->> 'z')::numeric | 1134.003 | 1100.047\n (js -> 'x' -> 'y' ->> 'z')::int4 | 1077.216 | 991.995\n\n js ? 'x' | 523.111 | 495.387\n js ?| '{x,y,z}' | 612.880 | 607.455\n js ?& '{x,y,z}' | 674.786 | 643.987\n js -> 'x' -> 'y' ? 'z' | 712.623 | 698.588\n js @> '{\"x\": {\"y\": {\"z\": 1}}}' | 1154.926 | 1149.069\n\njsonpath:\n js @@ '$.x.y.z == 123' | 973.444 | 912.08 -5%\n\n {\"x\": i, \"y\": i, \"z\": i}\n jsonb_each(js) | 2281.577 | 2262.660\n jsonb_each_text(js) | 2603.539 | 2112.200 -20%\n\n [i, i, i]\n jsonb_array_elements(js) | 1255.210 | 1205.939\n jsonb_array_elements(js)::numeric | 1662.550 | 1576.227 -5%\n jsonb_array_elements_text(js) | 1555.021 | 1067.031 -30%\n\n js @> '1' | 798.858 | 768.664 -4%\n js <@ '[1,2,3]' | 820.795 | 785.086 -5%\n js <@ '[0,1,2,3,4,5,6,7,8,9]' | 1214.170 | 1165.289 -5%\n\n\nAs it can be seen, #> operators are always slower than equivalent series of ->.\nI think it is caused by array deconstruction in \"jsonb #> text[]\".\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 22 Feb 2019 03:05:33 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Optimization of some jsonb functions"
},
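The allocation-saving calling convention of patch #5 — fill a caller-supplied result struct when one is given, and allocate only as a fallback — can be illustrated outside the jsonb code with a minimal sketch. All names here are hypothetical; the real functions operate on JsonbContainer/JsonbValue and use palloc rather than malloc:

```c
#include <stdlib.h>

/* Hypothetical stand-in for JsonbValue: a tagged scalar result. */
typedef struct DemoValue
{
    int  type;   /* 1 = integer */
    long ival;
} DemoValue;

/*
 * Look up element i of arr[0..n-1].  If 'res' is non-NULL, the caller's
 * (typically stack-allocated) struct is filled and returned, so no heap
 * allocation happens on the hot path; if 'res' is NULL, the function
 * falls back to allocating, as the pre-patch interface always did.
 */
static DemoValue *
get_ith_value(const long *arr, int n, int i, DemoValue *res)
{
    if (i < 0 || i >= n)
        return NULL;
    if (res == NULL)
        res = malloc(sizeof(DemoValue));
    if (res != NULL)
    {
        res->type = 1;
        res->ival = arr[i];
    }
    return res;
}
```

A caller on a hot path can then declare `DemoValue buf;` and pass `&buf`, skipping the per-lookup allocation entirely — the same pattern the patch adds to findJsonbValueFromContainer() and getIthJsonbValueFromContainer().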
{
"msg_contents": "On 2/22/19 2:05 AM, Nikita Glukhov wrote:\n> Attached set of patches with some jsonb optimizations that were made during\n> comparison of performance of ordinal jsonb operators and jsonpath operators.\n\nThis patch was submitted just before the last commitfest for PG12 and \nseems to have potential for breakage.\n\nI have updated the target to PG13.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Tue, 5 Mar 2019 12:24:23 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "\nOn 3/5/19 5:24 AM, David Steele wrote:\n> On 2/22/19 2:05 AM, Nikita Glukhov wrote:\n>> Attached set of patches with some jsonb optimizations that were made\n>> during\n>> comparison of performance of ordinal jsonb operators and jsonpath\n>> operators.\n>\n> This patch was submitted just before the last commitfest for PG12 and\n> seems to have potential for breakage.\n>\n> I have updated the target to PG13.\n>\n>\n\nI think that's overly cautious. The first one I looked at, to optimize\nJsonbExtractScalar, is very small, self-contained, and I think low risk.\nI haven't looked at the others in detail, but I think at least some part\nof this is reasonably committable.\n\n\nI'll try to look at the others fairly shortly.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Mar 2019 14:50:57 -0500",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "Hi Andrew,\n\nOn 3/6/19 9:50 PM, Andrew Dunstan wrote:\n> \n> On 3/5/19 5:24 AM, David Steele wrote:\n>> On 2/22/19 2:05 AM, Nikita Glukhov wrote:\n>>> Attached set of patches with some jsonb optimizations that were made\n>>> during\n>>> comparison of performance of ordinal jsonb operators and jsonpath\n>>> operators.\n>>\n>> This patch was submitted just before the last commitfest for PG12 and\n>> seems to have potential for breakage.\n>>\n>> I have updated the target to PG13.\n>>\n>>\n> \n> I think that's overly cautious. The first one I looked at, to optimize\n> JsonbExtractScalar, is very small, self-contained, and I think low risk.\n> I haven't looked at the others in detail, but I think at least some part\n> of this is reasonably committable.\n> \n> \n> I'll try to look at the others fairly shortly.\n\nIf you decide all or part of this can be committed then feel free to \nupdate the target version.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Thu, 7 Mar 2019 14:23:53 +0200",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "> >> On 2/22/19 2:05 AM, Nikita Glukhov wrote:\n> >>> Attached set of patches with some jsonb optimizations that were made\n> >>> during\n> >>> comparison of performance of ordinal jsonb operators and jsonpath\n> >>> operators.\n\nHi Nikita,\n\nThis doesn't apply -- to attract reviewers, could we please have a rebase?\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Jul 2019 22:50:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "Thomas Munro wrote:\n> This doesn't apply -- to attract reviewers, could we please have a rebase?\n\nTo help the review go forward, I have rebased the patch on 27cd521e6e.\nIt passes `make check` for me, but that's as far as I've verified the\ncorrectness.\n\nI squashed the changes into a single patch, sorry if that makes it\nharder to review than the original set of five patch files...\n\n--\nJoe Nelson https://begriffs.com",
"msg_date": "Fri, 26 Jul 2019 14:34:22 -0500",
"msg_from": "Joe Nelson <joe@begriffs.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "On 2019-Jul-26, Joe Nelson wrote:\n\n> Thomas Munro wrote:\n> > This doesn't apply -- to attract reviewers, could we please have a rebase?\n> \n> To help the review go forward, I have rebased the patch on 27cd521e6e.\n> It passes `make check` for me, but that's as far as I've verified the\n> correctness.\n> \n> I squashed the changes into a single patch, sorry if that makes it\n> harder to review than the original set of five patch files...\n\nWell, I think that was useless, so I rebased again -- attached.\n(Thanks, git-imerge).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 18 Sep 2019 19:18:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "On 2019-Sep-18, Alvaro Herrera wrote:\n\n> Well, I think that was useless, so I rebased again -- attached.\n\n... which is how you find out that 0001 as an independent patch is not\nreally a valid one, since it depends on an API change that does not\nhappen until 0005.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 19 Sep 2019 00:09:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "On 2019-Sep-19, Alvaro Herrera wrote:\n\n> On 2019-Sep-18, Alvaro Herrera wrote:\n> \n> > Well, I think that was useless, so I rebased again -- attached.\n> \n> ... which is how you find out that 0001 as an independent patch is not\n> really a valid one, since it depends on an API change that does not\n> happen until 0005.\n\n... and there were other compilation problems too, presumably fixed\nsilently by Joe in his rebase, but which I fixed again for this series\nwhich now seems more credible. I tested compile and regression tests\nafter each patch, it all works locally.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 19 Sep 2019 00:47:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
},
{
"msg_contents": "I pushed the first few parts. The attached is a rebased copy of the\nlast remaining piece. However, I didn't quite understand what this was\ndoing, so I refrained from pushing. I think there are two patches here:\none that adapts the API of findJsonbValueFromContainer and\ngetIthJsonbValueFromContainer to take the output result pointer as an\nargument, allowing to save palloc cycles just like the newly added\ngetKeyJsonValueFromContainer(); and the other changes JsonbDeepContains\nso that it uses a new function (which is a function with a weird API\nthat would be extracted from findJsonbValueFromContainer).\n\nAlso, the current patch just passes NULL into the routines from\njsonpath_exec.c but I think it would be useful to pass pointers into\nstack-allocated result structs instead, at least in getJsonPathVariable.\n\nSince the majority of this patchset got pushed, I'll leave this for\nNikita to handle for the next commitfest if he wants to, and mark this\nCF entry as committed.\n\nThanks!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 20 Sep 2019 21:09:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of some jsonb functions"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nDuring the development of another feature, I found that same local\nvariables are declared twice.\nIMO, there is no need of again declaring the local variables. Patch\nattached.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Fri, 22 Feb 2019 11:33:17 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Removal of duplicate variable declarations in fe-connect.c"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 11:33:17AM +1100, Haribabu Kommi wrote:\n> During the development of another feature, I found that same local\n> variables are declared twice.\n> IMO, there is no need of again declaring the local variables. Patch\n> attached.\n\nIndeed, fixed. That's not a good practice, and each variable is\nassigned in its own block before getting used, so there is no\noverlap.\n--\nMichael",
"msg_date": "Fri, 22 Feb 2019 13:22:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Removal of duplicate variable declarations in fe-connect.c"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Feb 22, 2019 at 11:33:17AM +1100, Haribabu Kommi wrote:\n> > During the development of another feature, I found that same local\n> > variables are declared twice.\n> > IMO, there is no need of again declaring the local variables. Patch\n> > attached.\n>\n> Indeed, fixed. That's not a good practice, and each variable is\n> assigned in its own block before getting used, so there is no\n> overlap.\n>\n\nThanks.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Fri, Feb 22, 2019 at 3:22 PM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Feb 22, 2019 at 11:33:17AM +1100, Haribabu Kommi wrote:\n> During the development of another feature, I found that same local\n> variables are declared twice.\n> IMO, there is no need of again declaring the local variables. Patch\n> attached.\n\nIndeed, fixed. That's not a good practice, and each variable is\nassigned in its own block before getting used, so there is no\noverlap.\nThanks.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Fri, 22 Feb 2019 18:50:18 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removal of duplicate variable declarations in fe-connect.c"
}
] |
[
{
"msg_contents": "I know there has been recent discussion about implementing transparent\ndata encryption (TDE) in Postgres:\n\n\thttps://www.postgresql.org/message-id/CAD21AoAqtytk0iH6diCJW24oyJdS4roN-VhrFD53HcNP0s8pzA%40mail.gmail.com\n\nI would like to now post a new extension I developed to handle\ncryptographic key management in Postgres. It could be used with TDE,\nwith pgcrypto, and with an auto-encrypted data type. It is called\npgcryptokey and can be downloaded from:\n\n\thttps://momjian.us/download/pgcryptokey/\n\nI am attaching its README file to this email.\n\nThe extension uses two-layer key storage, and stores the key in a\nPostgres table. It allows the encryption key to be unlocked by the\nclient, or at boot time. (This would need to be modified to be a global\ntable if it was used for block-level encryption like TDE.)\n\nI am willing to continue to develop this extension if there is interest.\nShould I put it on PGXN eventually? It is something we would want in\n/contrib?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Thu, 21 Feb 2019 22:58:16 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Unified security key managment"
}
] |
[
{
"msg_contents": "Hi,\n\nI found the bug of default partition pruning when executing a range query.\n\n-----\npostgres=# create table test1(id int, val text) partition by range (id); \npostgres=# create table test1_1 partition of test1 for values from (0) to (100); \npostgres=# create table test1_2 partition of test1 for values from (150) to (200);\npostgres=# create table test1_def partition of test1 default; \n\npostgres=# explain select * from test1 where id > 0 and id < 30;\n QUERY PLAN \n----------------------------------------------------------------\n Append (cost=0.00..11.83 rows=59 width=11)\n -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n Filter: ((id > 0) AND (id < 30))\n -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n Filter: ((id > 0) AND (id < 30))\n(5 rows)\n\nThere is no need to scan the default partition, but it's scanned.\n-----\n\nIn the current implement, whether the default partition is scanned\nor not is determined according to each condition of given WHERE\nclause at get_matching_range_bounds(). In this example, scan_default\nis set true according to id > 0 because id >= 200 matches the default\npartition. Similarly, according to id < 30, scan_default is set true.\nThen, these results are combined according to AND/OR at perform_pruning_combine_step().\nIn this case, final result's scan_default is set true.\n\nThe modifications I made are as follows:\n- get_matching_range_bounds() determines only offsets of range bounds\n according to each condition \n- These results are combined at perform_pruning_combine_step()\n- Whether the default partition is scanned or not is determined at \n get_matching_partitions()\n\nAttached the patch. Any feedback is greatly appreciated.\n\nBest regards,\n---\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Fri, 22 Feb 2019 17:14:04 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\nOn 2019/02/22 17:14, Yuzuko Hosoya wrote:\n> Hi,\n> \n> I found the bug of default partition pruning when executing a range query.\n> \n> -----\n> postgres=# create table test1(id int, val text) partition by range (id); \n> postgres=# create table test1_1 partition of test1 for values from (0) to (100); \n> postgres=# create table test1_2 partition of test1 for values from (150) to (200);\n> postgres=# create table test1_def partition of test1 default; \n> \n> postgres=# explain select * from test1 where id > 0 and id < 30;\n> QUERY PLAN \n> ----------------------------------------------------------------\n> Append (cost=0.00..11.83 rows=59 width=11)\n> -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n> Filter: ((id > 0) AND (id < 30))\n> -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n> Filter: ((id > 0) AND (id < 30))\n> (5 rows)\n> \n> There is no need to scan the default partition, but it's scanned.\n> -----\n> \n> In the current implement, whether the default partition is scanned\n> or not is determined according to each condition of given WHERE\n> clause at get_matching_range_bounds(). In this example, scan_default\n> is set true according to id > 0 because id >= 200 matches the default\n> partition. Similarly, according to id < 30, scan_default is set true.\n> Then, these results are combined according to AND/OR at perform_pruning_combine_step().\n> In this case, final result's scan_default is set true.\n> \n> The modifications I made are as follows:\n> - get_matching_range_bounds() determines only offsets of range bounds\n> according to each condition \n> - These results are combined at perform_pruning_combine_step()\n> - Whether the default partition is scanned or not is determined at \n> get_matching_partitions()\n> \n> Attached the patch. Any feedback is greatly appreciated.\n\nThank you for reporting. 
Can you please add this to March CF in Bugs\ncategory so as not to lose track of this?\n\nI will try to send review comments soon.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 27 Feb 2019 11:21:58 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Amit-san,\n\n> From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> Sent: Wednesday, February 27, 2019 11:22 AM\n> \n> Hosoya-san,\n> \n> On 2019/02/22 17:14, Yuzuko Hosoya wrote:\n> > Hi,\n> >\n> > I found the bug of default partition pruning when executing a range query.\n> >\n> > -----\n> > postgres=# create table test1(id int, val text) partition by range\n> > (id); postgres=# create table test1_1 partition of test1 for values\n> > from (0) to (100); postgres=# create table test1_2 partition of test1\n> > for values from (150) to (200); postgres=# create table test1_def\n> > partition of test1 default;\n> >\n> > postgres=# explain select * from test1 where id > 0 and id < 30;\n> > QUERY PLAN\n> > ----------------------------------------------------------------\n> > Append (cost=0.00..11.83 rows=59 width=11)\n> > -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n> > Filter: ((id > 0) AND (id < 30))\n> > -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n> > Filter: ((id > 0) AND (id < 30))\n> > (5 rows)\n> >\n> > There is no need to scan the default partition, but it's scanned.\n> > -----\n> >\n> > In the current implement, whether the default partition is scanned or\n> > not is determined according to each condition of given WHERE clause at\n> > get_matching_range_bounds(). In this example, scan_default is set\n> > true according to id > 0 because id >= 200 matches the default\n> > partition. 
Similarly, according to id < 30, scan_default is set true.\n> > Then, these results are combined according to AND/OR at perform_pruning_combine_step().\n> > In this case, final result's scan_default is set true.\n> >\n> > The modifications I made are as follows:\n> > - get_matching_range_bounds() determines only offsets of range bounds\n> > according to each condition\n> > - These results are combined at perform_pruning_combine_step()\n> > - Whether the default partition is scanned or not is determined at\n> > get_matching_partitions()\n> >\n> > Attached the patch. Any feedback is greatly appreciated.\n> \n> Thank you for reporting. Can you please add this to March CF in Bugs category so as not to lose\ntrack\n> of this?\n> \n> I will try to send review comments soon.\n> \nThank you for your reply. I added this to March CF.\n\nRegards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 27 Feb 2019 15:50:39 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san\n\nOn Wed, Feb 27, 2019 at 6:51 AM, Yuzuko Hosoya wrote:\n> > From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> > Sent: Wednesday, February 27, 2019 11:22 AM\n> >\n> > Hosoya-san,\n> >\n> > On 2019/02/22 17:14, Yuzuko Hosoya wrote:\n> > > Hi,\n> > >\n> > > I found the bug of default partition pruning when executing a range\n> query.\n> > >\n> > > -----\n> > > postgres=# create table test1(id int, val text) partition by range\n> > > (id); postgres=# create table test1_1 partition of test1 for values\n> > > from (0) to (100); postgres=# create table test1_2 partition of\n> > > test1 for values from (150) to (200); postgres=# create table\n> > > test1_def partition of test1 default;\n> > >\n> > > postgres=# explain select * from test1 where id > 0 and id < 30;\n> > > QUERY PLAN\n> > > ----------------------------------------------------------------\n> > > Append (cost=0.00..11.83 rows=59 width=11)\n> > > -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n> > > Filter: ((id > 0) AND (id < 30))\n> > > -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n> > > Filter: ((id > 0) AND (id < 30))\n> > > (5 rows)\n> > >\n> > > There is no need to scan the default partition, but it's scanned.\n> > > -----\n> > >\n> > > In the current implement, whether the default partition is scanned\n> > > or not is determined according to each condition of given WHERE\n> > > clause at get_matching_range_bounds(). In this example,\n> > > scan_default is set true according to id > 0 because id >= 200\n> > > matches the default partition. 
Similarly, according to id < 30,\n> scan_default is set true.\n> > > Then, these results are combined according to AND/OR at\n> perform_pruning_combine_step().\n> > > In this case, final result's scan_default is set true.\n> > >\n> > > The modifications I made are as follows:\n> > > - get_matching_range_bounds() determines only offsets of range bounds\n> > > according to each condition\n> > > - These results are combined at perform_pruning_combine_step()\n> > > - Whether the default partition is scanned or not is determined at\n> > > get_matching_partitions()\n> > >\n> > > Attached the patch. Any feedback is greatly appreciated.\n> >\n> > Thank you for reporting. Can you please add this to March CF in Bugs\n> > category so as not to lose\n> track\n> > of this?\n> >\n> > I will try to send review comments soon.\n> >\n> Thank you for your reply. I added this to March CF.\n\nI tested with simple use case and I confirmed it works correctly like below.\n\nIn case using between clause:\npostgres=# create table test1(id int, val text) partition by range (id); \npostgres=# create table test1_1 partition of test1 for values from (0) to (100); \npostgres=# create table test1_2 partition of test1 for values from (150) to (200);\npostgres=# create table test1_def partition of test1 default; \n\n[HEAD]\npostgres=# explain analyze select * from test1 where id between 0 and 50;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Append (cost=0.00..58.16 rows=12 width=36) (actual time=0.008..0.008 rows=0 loops=1)\n -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.005..0.005 rows=0 loops=1)\n Filter: ((id >= 0) AND (id <= 50))\n -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((id >= 0) AND (id <= 50))\n\n\n[patched]\npostgres=# explain analyze select * from test1 where id between 0 and 50;\n QUERY PLAN 
\n---------------------------------------------------------------------------------------------------------\n Append (cost=0.00..29.08 rows=6 width=36) (actual time=0.006..0.006 rows=0 loops=1)\n -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.004..0.005 rows=0 loops=1)\n Filter: ((id >= 0) AND (id <= 50))\n\n\n\nI considered about another use case. If default partition contains rows whose id = 300 and then we add another partition which have constraints like id >= 300 and id < 400, I thought we won't scan the rows anymore. But I noticed we simply can't add such a partition.\n\npostgres=# insert into test1 values (300);\nINSERT 0 1\npostgres=# create table test1_3 partition of test1 for values from (300) to (400); \nERROR: updated partition constraint for default partition \"test1_def\" would be violated by some row\n\n\nSo I haven't come up with bad cases so far :)\n\n--\nYoshikazu Imai \n\n\n\n",
"msg_date": "Thu, 28 Feb 2019 08:26:45 +0000",
"msg_from": "\"Imai, Yoshikazu\" <imai.yoshikazu@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hi \r\n\r\nPatch work fine to me, but I have one test case where default partition still scanned. \r\n\r\npostgres=# explain select * from test1 where (id < 10) and true;\r\n QUERY PLAN \r\n-------------------------------------------------------------------\r\n Append (cost=0.00..55.98 rows=846 width=36)\r\n -> Seq Scan on test1_1 (cost=0.00..25.88 rows=423 width=36)\r\n Filter: (id < 10)\r\n -> Seq Scan on test1_def (cost=0.00..25.88 rows=423 width=36)\r\n Filter: (id < 10)\r\n(5 rows)",
"msg_date": "Mon, 04 Mar 2019 17:29:00 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Yuzuko Hosoya,\n\nIgnore my last message, I think this is also a legitimate scan on default\npartition.\n\n\nOn Mon, Mar 4, 2019 at 10:29 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n\n> Hi\n>\n> Patch work fine to me, but I have one test case where default partition\n> still scanned.\n>\n> postgres=# explain select * from test1 where (id < 10) and true;\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Append (cost=0.00..55.98 rows=846 width=36)\n> -> Seq Scan on test1_1 (cost=0.00..25.88 rows=423 width=36)\n> Filter: (id < 10)\n> -> Seq Scan on test1_def (cost=0.00..25.88 rows=423 width=36)\n> Filter: (id < 10)\n> (5 rows)\n\n\n\n-- \nIbrar Ahmed\n\nHi Yuzuko Hosoya,Ignore my last message, I think this is also a legitimate scan on default partition.On Mon, Mar 4, 2019 at 10:29 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:Hi \n\nPatch work fine to me, but I have one test case where default partition still scanned. \n\npostgres=# explain select * from test1 where (id < 10) and true;\n QUERY PLAN \n-------------------------------------------------------------------\n Append (cost=0.00..55.98 rows=846 width=36)\n -> Seq Scan on test1_1 (cost=0.00..25.88 rows=423 width=36)\n Filter: (id < 10)\n -> Seq Scan on test1_def (cost=0.00..25.88 rows=423 width=36)\n Filter: (id < 10)\n(5 rows)-- Ibrar Ahmed",
"msg_date": "Mon, 4 Mar 2019 22:36:40 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Imai-san,\n\nThanks for sharing your tests!\n\nOn Thu, Feb 28, 2019 at 5:27 PM Imai, Yoshikazu\n<imai.yoshikazu@jp.fujitsu.com> wrote:\n>\n> Hosoya-san\n>\n> On Wed, Feb 27, 2019 at 6:51 AM, Yuzuko Hosoya wrote:\n> > > From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> > > Sent: Wednesday, February 27, 2019 11:22 AM\n> > >\n> > > Hosoya-san,\n> > >\n> > > On 2019/02/22 17:14, Yuzuko Hosoya wrote:\n> > > > Hi,\n> > > >\n> > > > I found the bug of default partition pruning when executing a range\n> > query.\n> > > >\n> > > > -----\n> > > > postgres=# create table test1(id int, val text) partition by range\n> > > > (id); postgres=# create table test1_1 partition of test1 for values\n> > > > from (0) to (100); postgres=# create table test1_2 partition of\n> > > > test1 for values from (150) to (200); postgres=# create table\n> > > > test1_def partition of test1 default;\n> > > >\n> > > > postgres=# explain select * from test1 where id > 0 and id < 30;\n> > > > QUERY PLAN\n> > > > ----------------------------------------------------------------\n> > > > Append (cost=0.00..11.83 rows=59 width=11)\n> > > > -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n> > > > Filter: ((id > 0) AND (id < 30))\n> > > > -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n> > > > Filter: ((id > 0) AND (id < 30))\n> > > > (5 rows)\n> > > >\n> > > > There is no need to scan the default partition, but it's scanned.\n> > > > -----\n> > > >\n> > > > In the current implement, whether the default partition is scanned\n> > > > or not is determined according to each condition of given WHERE\n> > > > clause at get_matching_range_bounds(). In this example,\n> > > > scan_default is set true according to id > 0 because id >= 200\n> > > > matches the default partition. 
Similarly, according to id < 30,\n> > scan_default is set true.\n> > > > Then, these results are combined according to AND/OR at\n> > perform_pruning_combine_step().\n> > > > In this case, final result's scan_default is set true.\n> > > >\n> > > > The modifications I made are as follows:\n> > > > - get_matching_range_bounds() determines only offsets of range bounds\n> > > > according to each condition\n> > > > - These results are combined at perform_pruning_combine_step()\n> > > > - Whether the default partition is scanned or not is determined at\n> > > > get_matching_partitions()\n> > > >\n> > > > Attached the patch. Any feedback is greatly appreciated.\n> > >\n> > > Thank you for reporting. Can you please add this to March CF in Bugs\n> > > category so as not to lose\n> > track\n> > > of this?\n> > >\n> > > I will try to send review comments soon.\n> > >\n> > Thank you for your reply. I added this to March CF.\n>\n> I tested with simple use case and I confirmed it works correctly like below.\n>\n> In case using between clause:\n> postgres=# create table test1(id int, val text) partition by range (id);\n> postgres=# create table test1_1 partition of test1 for values from (0) to (100);\n> postgres=# create table test1_2 partition of test1 for values from (150) to (200);\n> postgres=# create table test1_def partition of test1 default;\n>\n> [HEAD]\n> postgres=# explain analyze select * from test1 where id between 0 and 50;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------\n> Append (cost=0.00..58.16 rows=12 width=36) (actual time=0.008..0.008 rows=0 loops=1)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.005..0.005 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n> -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36) (actual time=0.002..0.002 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n>\n>\n> [patched]\n> postgres=# explain analyze select 
* from test1 where id between 0 and 50;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Append (cost=0.00..29.08 rows=6 width=36) (actual time=0.006..0.006 rows=0 loops=1)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.004..0.005 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n>\n>\n>\n> I considered about another use case. If default partition contains rows whose id = 300 and then we add another partition which have constraints like id >= 300 and id < 400, I thought we won't scan the rows anymore. But I noticed we simply can't add such a partition.\n>\n> postgres=# insert into test1 values (300);\n> INSERT 0 1\n> postgres=# create table test1_3 partition of test1 for values from (300) to (400);\n> ERROR: updated partition constraint for default partition \"test1_def\" would be violated by some row\n>\n>\n> So I haven't come up with bad cases so far :)\n\nI didn't test cases you mentioned.\nThanks to you, I could check correctness of the patch!\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 5 Mar 2019 17:35:59 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Ibrar,\n\nOn Tue, Mar 5, 2019 at 2:37 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>\n> Hi Yuzuko Hosoya,\n>\n> Ignore my last message, I think this is also a legitimate scan on default partition.\n>\nOh, I got it. Thanks a lot.\n\n>\n> On Mon, Mar 4, 2019 at 10:29 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>>\n>> Hi\n>>\n>> Patch work fine to me, but I have one test case where default partition still scanned.\n>>\n>> postgres=# explain select * from test1 where (id < 10) and true;\n>> QUERY PLAN\n>> -------------------------------------------------------------------\n>> Append (cost=0.00..55.98 rows=846 width=36)\n>> -> Seq Scan on test1_1 (cost=0.00..25.88 rows=423 width=36)\n>> Filter: (id < 10)\n>> -> Seq Scan on test1_def (cost=0.00..25.88 rows=423 width=36)\n>> Filter: (id < 10)\n>> (5 rows)\n>\n>\n>\n> --\n> Ibrar Ahmed\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n",
"msg_date": "Tue, 5 Mar 2019 18:13:32 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "\nLe 28/02/2019 à 09:26, Imai, Yoshikazu a écrit :\n> Hosoya-san\n>\n> On Wed, Feb 27, 2019 at 6:51 AM, Yuzuko Hosoya wrote:\n>>> From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n>>> Sent: Wednesday, February 27, 2019 11:22 AM\n>>>\n>>> Hosoya-san,\n>>>\n>>> On 2019/02/22 17:14, Yuzuko Hosoya wrote:\n>>>> Hi,\n>>>>\n>>>> I found the bug of default partition pruning when executing a range\n>> query.\n>>>> -----\n>>>> postgres=# create table test1(id int, val text) partition by range\n>>>> (id); postgres=# create table test1_1 partition of test1 for values\n>>>> from (0) to (100); postgres=# create table test1_2 partition of\n>>>> test1 for values from (150) to (200); postgres=# create table\n>>>> test1_def partition of test1 default;\n>>>>\n>>>> postgres=# explain select * from test1 where id > 0 and id < 30;\n>>>> QUERY PLAN\n>>>> ----------------------------------------------------------------\n>>>> Append (cost=0.00..11.83 rows=59 width=11)\n>>>> -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n>>>> Filter: ((id > 0) AND (id < 30))\n>>>> -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n>>>> Filter: ((id > 0) AND (id < 30))\n>>>> (5 rows)\n>>>>\n>>>> There is no need to scan the default partition, but it's scanned.\n>>>> -----\n>>>>\n>>>> In the current implement, whether the default partition is scanned\n>>>> or not is determined according to each condition of given WHERE\n>>>> clause at get_matching_range_bounds(). In this example,\n>>>> scan_default is set true according to id > 0 because id >= 200\n>>>> matches the default partition. 
Similarly, according to id < 30,\n>> scan_default is set true.\n>>>> Then, these results are combined according to AND/OR at\n>> perform_pruning_combine_step().\n>>>> In this case, final result's scan_default is set true.\n>>>>\n>>>> The modifications I made are as follows:\n>>>> - get_matching_range_bounds() determines only offsets of range bounds\n>>>> according to each condition\n>>>> - These results are combined at perform_pruning_combine_step()\n>>>> - Whether the default partition is scanned or not is determined at\n>>>> get_matching_partitions()\n>>>>\n>>>> Attached the patch. Any feedback is greatly appreciated.\n>>> Thank you for reporting. Can you please add this to March CF in Bugs\n>>> category so as not to lose\n>> track\n>>> of this?\n>>>\n>>> I will try to send review comments soon.\n>>>\n>> Thank you for your reply. I added this to March CF.\n> I tested with simple use case and I confirmed it works correctly like below.\n>\n> In case using between clause:\n> postgres=# create table test1(id int, val text) partition by range (id); \n> postgres=# create table test1_1 partition of test1 for values from (0) to (100); \n> postgres=# create table test1_2 partition of test1 for values from (150) to (200);\n> postgres=# create table test1_def partition of test1 default; \n>\n> [HEAD]\n> postgres=# explain analyze select * from test1 where id between 0 and 50;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------\n> Append (cost=0.00..58.16 rows=12 width=36) (actual time=0.008..0.008 rows=0 loops=1)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.005..0.005 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n> -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36) (actual time=0.002..0.002 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n>\n>\n> [patched]\n> postgres=# explain analyze select * from test1 where id between 0 and 50;\n> QUERY PLAN 
\n> ---------------------------------------------------------------------------------------------------------\n> Append (cost=0.00..29.08 rows=6 width=36) (actual time=0.006..0.006 rows=0 loops=1)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.004..0.005 rows=0 loops=1)\n> Filter: ((id >= 0) AND (id <= 50))\n>\n>\n>\n> I considered about another use case. If default partition contains rows whose id = 300 and then we add another partition which have constraints like id >= 300 and id < 400, I thought we won't scan the rows anymore. But I noticed we simply can't add such a partition.\n>\n> postgres=# insert into test1 values (300);\n> INSERT 0 1\n> postgres=# create table test1_3 partition of test1 for values from (300) to (400); \n> ERROR: updated partition constraint for default partition \"test1_def\" would be violated by some row\n>\n>\n> So I haven't come up with bad cases so far :)\n>\n> --\n> Yoshikazu Imai \n\nHello Yoshikazu-San,\n\nI tested your patch using some sub-partitions and found a possible problem.\n\nI create a new partitioned partition test1_3 with 2 sub-partitions :\n\n-------------------------\n\ncreate table test1_3 partition of test1 for values from (200) to (400)\npartition by range (id);\ncreate table test1_3_1 partition of test1_3 for values from (200) to (250);\ncreate table test1_3_2 partition of test1_3 for values from (250) to (350);\n\n# explain select * from test1 where (id > 0 and id < 30);\n QUERY PLAN \n---------------------------------------------------------------\n Append (cost=0.00..29.08 rows=6 width=36)\n -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 0) AND (id < 30))\n(3 rows)\n\n# explain select * from test1 where (id > 220 and id < 230);\n QUERY PLAN \n-----------------------------------------------------------------\n Append (cost=0.00..29.08 rows=6 width=36)\n -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 220) AND (id < 230))\n(3 
rows)\n\n# explain select * from test1\nwhere (id > 0 and id < 30) or (id > 220 and id < 230);\n QUERY PLAN \n---------------------------------------------------------------------------\n Append (cost=0.00..106.40 rows=39 width=36)\n -> Seq Scan on test1_1 (cost=0.00..35.40 rows=13 width=36)\n Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n -> Seq Scan on test1_3_2 (cost=0.00..35.40 rows=13 width=36)\n Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n(7 rows)\n\n-----------------\n\nPartition pruning is functioning when only the sub-partition is\nrequired. When both the partition and the sub-partition is required,\nthere is no pruning on the sub-partition.\n\nCordialement,\n\n-- \nThibaut Madelaine\nDalibo\n\n\n\n",
"msg_date": "Thu, 14 Mar 2019 15:10:53 +0100",
"msg_from": "Thibaut <thibaut.madelaine@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Thibaut,\n\nThanks a lot for your test and comments.\n\n> \n> Le 28/02/2019 à 09:26, Imai, Yoshikazu a écrit :\n> > Hosoya-san\n> >\n> > On Wed, Feb 27, 2019 at 6:51 AM, Yuzuko Hosoya wrote:\n> >>> From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> >>> Sent: Wednesday, February 27, 2019 11:22 AM\n> >>>\n> >>> Hosoya-san,\n> >>>\n> >>> On 2019/02/22 17:14, Yuzuko Hosoya wrote:\n> >>>> Hi,\n> >>>>\n> >>>> I found the bug of default partition pruning when executing a range\n> >> query.\n> >>>> -----\n> >>>> postgres=# create table test1(id int, val text) partition by range\n> >>>> (id); postgres=# create table test1_1 partition of test1 for values\n> >>>> from (0) to (100); postgres=# create table test1_2 partition of\n> >>>> test1 for values from (150) to (200); postgres=# create table\n> >>>> test1_def partition of test1 default;\n> >>>>\n> >>>> postgres=# explain select * from test1 where id > 0 and id < 30;\n> >>>> QUERY PLAN\n> >>>> ----------------------------------------------------------------\n> >>>> Append (cost=0.00..11.83 rows=59 width=11)\n> >>>> -> Seq Scan on test1_1 (cost=0.00..5.00 rows=58 width=11)\n> >>>> Filter: ((id > 0) AND (id < 30))\n> >>>> -> Seq Scan on test1_def (cost=0.00..6.53 rows=1 width=12)\n> >>>> Filter: ((id > 0) AND (id < 30))\n> >>>> (5 rows)\n> >>>>\n> >>>> There is no need to scan the default partition, but it's scanned.\n> >>>> -----\n> >>>>\n> >>>> In the current implement, whether the default partition is scanned\n> >>>> or not is determined according to each condition of given WHERE\n> >>>> clause at get_matching_range_bounds(). In this example,\n> >>>> scan_default is set true according to id > 0 because id >= 200\n> >>>> matches the default partition. 
Similarly, according to id < 30,\n> >> scan_default is set true.\n> >>>> Then, these results are combined according to AND/OR at\n> >> perform_pruning_combine_step().\n> >>>> In this case, final result's scan_default is set true.\n> >>>>\n> >>>> The modifications I made are as follows:\n> >>>> - get_matching_range_bounds() determines only offsets of range bounds\n> >>>> according to each condition\n> >>>> - These results are combined at perform_pruning_combine_step()\n> >>>> - Whether the default partition is scanned or not is determined at\n> >>>> get_matching_partitions()\n> >>>>\n> >>>> Attached the patch. Any feedback is greatly appreciated.\n> >>> Thank you for reporting. Can you please add this to March CF in\n> >>> Bugs category so as not to lose\n> >> track\n> >>> of this?\n> >>>\n> >>> I will try to send review comments soon.\n> >>>\n> >> Thank you for your reply. I added this to March CF.\n> > I tested with simple use case and I confirmed it works correctly like below.\n> >\n> > In case using between clause:\n> > postgres=# create table test1(id int, val text) partition by range\n> > (id); postgres=# create table test1_1 partition of test1 for values\n> > from (0) to (100); postgres=# create table test1_2 partition of test1\n> > for values from (150) to (200); postgres=# create table test1_def\n> > partition of test1 default;\n> >\n> > [HEAD]\n> > postgres=# explain analyze select * from test1 where id between 0 and 50;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > -------------------------------------\n> > Append (cost=0.00..58.16 rows=12 width=36) (actual time=0.008..0.008 rows=0 loops=1)\n> > -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.005..0.005\n> rows=0 loops=1)\n> > Filter: ((id >= 0) AND (id <= 50))\n> > -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36) (actual\n> time=0.002..0.002 rows=0 loops=1)\n> > Filter: ((id >= 0) AND (id <= 50))\n> >\n> >\n> > 
[patched]\n> > postgres=# explain analyze select * from test1 where id between 0 and 50;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > -----------------------------------\n> > Append (cost=0.00..29.08 rows=6 width=36) (actual time=0.006..0.006 rows=0 loops=1)\n> > -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36) (actual time=0.004..0.005\n> rows=0 loops=1)\n> > Filter: ((id >= 0) AND (id <= 50))\n> >\n> >\n> >\n> > I considered about another use case. If default partition contains rows whose id = 300\n> and then we add another partition which have constraints like id >= 300 and id < 400, I thought\n> we won't scan the rows anymore. But I noticed we simply can't add such a partition.\n> >\n> > postgres=# insert into test1 values (300); INSERT 0 1 postgres=#\n> > create table test1_3 partition of test1 for values from (300) to\n> > (400);\n> > ERROR: updated partition constraint for default partition \"test1_def\"\n> > would be violated by some row\n> >\n> >\n> > So I haven't come up with bad cases so far :)\n> >\n> > --\n> > Yoshikazu Imai\n> \n> Hello Yoshikazu-San,\n> \n> I tested your patch using some sub-partitions and found a possible problem.\n> \n> I create a new partitioned partition test1_3 with 2 sub-partitions :\n> \n> -------------------------\n> \n> create table test1_3 partition of test1 for values from (200) to (400) partition by range\n> (id); create table test1_3_1 partition of test1_3 for values from (200) to (250); create\n> table test1_3_2 partition of test1_3 for values from (250) to (350);\n> \n> # explain select * from test1 where (id > 0 and id < 30);\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Append (cost=0.00..29.08 rows=6 width=36)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 0) AND (id < 30))\n> (3 rows)\n> \n> # explain select * from test1 where (id > 220 and id < 230);\n> QUERY PLAN\n> 
-----------------------------------------------------------------\n> Append (cost=0.00..29.08 rows=6 width=36)\n> -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 220) AND (id < 230))\n> (3 rows)\n> \n> # explain select * from test1\n> where (id > 0 and id < 30) or (id > 220 and id < 230);\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> Append (cost=0.00..106.40 rows=39 width=36)\n> -> Seq Scan on test1_1 (cost=0.00..35.40 rows=13 width=36)\n> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> -> Seq Scan on test1_3_2 (cost=0.00..35.40 rows=13 width=36)\n> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> (7 rows)\n> \n> -----------------\n> \n> Partition pruning is functioning when only the sub-partition is required. When both the\n> partition and the sub-partition is required, there is no pruning on the sub-partition.\n> \nIndeed, it's problematic. I also tested and found that \nthis problem occurred when no partition matched the \nWHERE clauses. 
So the following query didn't work correctly.\n\n# explain select * from test1_3 where (id > 0 and id < 30); \n QUERY PLAN \n-----------------------------------------------------------------\n Append (cost=0.00..58.16 rows=12 width=36)\n -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 0) AND (id < 30))\n -> Seq Scan on test1_3_2 (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 0) AND (id < 30))\n(5 rows)\n\nI created a new patch to handle this problem, and confirmed\nthe query you mentioned works as expected.\n\n# explain select * from test1 where (id > 0 and id < 30) or (id > 220 and id < 230);\n QUERY PLAN \n---------------------------------------------------------------------------\n Append (cost=0.00..70.93 rows=26 width=36)\n -> Seq Scan on test1_1_1 (cost=0.00..35.40 rows=13 width=36)\n Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n(5 rows)\n\nv2 patch attached.\nCould you please check it again?\n\n--\nBest regards,\nYuzuko Hosoya",
"msg_date": "Fri, 15 Mar 2019 15:05:41 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hello.\n\nAt Fri, 15 Mar 2019 15:05:41 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <001901d4daf5$1ef4f640$5cdee2c0$@lab.ntt.co.jp>\n> v2 patch attached.\n> Could you please check it again?\n\nI have some comments on the patch itself.\n\nThe patch relies on the fact(?) that the lowest index is always\n-1 in range partition and uses it as a pseudo default\npartition. I'm not sure that is really the case, and anyway it\ndoesn't seem the right thing to do. Could you explain how it\nworks, not what you did in this patch?\n\n\nL96:\n> /* There can only be zero or one matching partition. */\n> - if (partindices[off + 1] >= 0)\n> - result->bound_offsets = bms_make_singleton(off + 1);\n> - else\n> - result->scan_default =\n> - partition_bound_has_default(boundinfo);\n> + result->bound_offsets = bms_make_singleton(off + 1);\n\nThe comment had a meaning for the old code. Seems to need rewrite?\n\nL183:\n> + /* \n> + * All bounds are greater than the key, so we could only \n> + * expect to find the lookup key in the default partition. \n> + */\n\nLong trailing spaces are attached to every line without\nsubstantial modification.\n\nL198:\n> - * inclusive, no need add the adjacent partition.\n> + * inclusive, no need add the adjacent partition. If 'off' is\n> + * -1 indicating that all bounds are greater, then we simply\n> + * end up adding the first bound's offset, that is, 0.\n\n off doesn't seem to be -1 there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Mar 2019 17:30:07 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\nOn 2019/03/15 15:05, Yuzuko Hosoya wrote:\n> Indeed, it's problematic. I also did test and I found that \n> this problem was occurred when any partition didn't match \n> WHERE clauses. So following query didn't work correctly.\n> \n> # explain select * from test1_3 where (id > 0 and id < 30); \n> QUERY PLAN \n> -----------------------------------------------------------------\n> Append (cost=0.00..58.16 rows=12 width=36)\n> -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 0) AND (id < 30))\n> -> Seq Scan on test1_3_2 (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 0) AND (id < 30))\n> (5 rows)\n> \n> I created a new patch to handle this problem, and confirmed\n> the query you mentioned works as expected\n> \n> # explain select * from test1 where (id > 0 and id < 30) or (id > 220 and id < 230);\n> QUERY PLAN \n> ---------------------------------------------------------------------------\n> Append (cost=0.00..70.93 rows=26 width=36)\n> -> Seq Scan on test1_1_1 (cost=0.00..35.40 rows=13 width=36)\n> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> (5 rows)\n> \n> v2 patch attached.\n> Could you please check it again?\n\nI think the updated patch breaks the promise that\nget_matching_range_bounds won't set scan_default based on individual\npruning value comparisons. How about the attached delta patch that\napplies on top of your earlier v1 patch, which fixes the issue reported by\nThibaut?\n\nThanks,\nAmit",
"msg_date": "Mon, 18 Mar 2019 18:44:07 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello.\n\nAt Fri, 15 Mar 2019 17:30:07 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190315.173007.147577546.horiguchi.kyotaro@lab.ntt.co.jp>\n> The patch relies on the fact(?) that the lowest index is always\n> -1 in range partition and uses it as a pseudo default\n> partition. I'm not sure that is really the case, and anyway it\n> doesn't seem the right thing to do. Could you explain how it\n> works, not what you did in this patch?\n\nI understood how it works, but I am still uneasy that only list\npartitioning requires scan_default. Anyway, please ignore this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 19 Mar 2019 11:40:41 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi.\n\nAt Mon, 18 Mar 2019 18:44:07 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <9bed6b79-f264-6976-b880-e2a5d23e9d85@lab.ntt.co.jp>\n> > v2 patch attached.\n> > Could you please check it again?\n> \n> I think the updated patch breaks the promise that\n> get_matching_range_bounds won't set scan_default based on individual\n> pruning value comparisons. How about the attached delta patch that\n> applies on top of your earlier v1 patch, which fixes the issue reported by\n> Thibaut?\n\nI read through the patch and understood how it works. And Amit's\nproposal looks fine.\n\nBut that makes me think of scan_default as a wart. \n\nThe attached patch is a refactoring that removes scan_default\nfrom PruneStepResult; the default partition is represented in\nthe same way as non-default partitions, without any change in\nbehavior. This improves the modularity of the partprune code a bit.\n\nThe fix can easily be put on top of this.\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 19 Mar 2019 15:27:56 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Amit-san,\n\nFrom: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\nSent: Monday, March 18, 2019 6:44 PM\n \n> Hosoya-san,\n> \n> On 2019/03/15 15:05, Yuzuko Hosoya wrote:\n> > Indeed, it's problematic. I also did test and I found that this\n> > problem was occurred when any partition didn't match WHERE clauses.\n> > So following query didn't work correctly.\n> >\n> > # explain select * from test1_3 where (id > 0 and id < 30);\n> > QUERY PLAN\n> > -----------------------------------------------------------------\n> > Append (cost=0.00..58.16 rows=12 width=36)\n> > -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n> > Filter: ((id > 0) AND (id < 30))\n> > -> Seq Scan on test1_3_2 (cost=0.00..29.05 rows=6 width=36)\n> > Filter: ((id > 0) AND (id < 30))\n> > (5 rows)\n> >\n> > I created a new patch to handle this problem, and confirmed the query\n> > you mentioned works as expected\n> >\n> > # explain select * from test1 where (id > 0 and id < 30) or (id > 220 and id < 230);\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----- Append (cost=0.00..70.93 rows=26 width=36)\n> > -> Seq Scan on test1_1_1 (cost=0.00..35.40 rows=13 width=36)\n> > Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n> > -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n> > Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id <\n> > 230)))\n> > (5 rows)\n> >\n> > v2 patch attached.\n> > Could you please check it again?\n> \n> I think the updated patch breaks the promise that get_matching_range_bounds won't set scan_default\n> based on individual pruning value comparisons. How about the attached delta patch that applies on\n> top of your earlier v1 patch, which fixes the issue reported by Thibaut?\n> \nIndeed. I agreed with your proposal.\nAlso, I confirmed your patch works correctly.\n\nBest regards,\nYuzuko Hosoya\n\n\n\n",
"msg_date": "Tue, 19 Mar 2019 16:01:09 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "\nLe 19/03/2019 à 08:01, Yuzuko Hosoya a écrit :\n> Hi Amit-san,\n>\n> From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> Sent: Monday, March 18, 2019 6:44 PM\n> \n>> Hosoya-san,\n>>\n>> On 2019/03/15 15:05, Yuzuko Hosoya wrote:\n>>> Indeed, it's problematic. I also did test and I found that this\n>>> problem was occurred when any partition didn't match WHERE clauses.\n>>> So following query didn't work correctly.\n>>>\n>>> # explain select * from test1_3 where (id > 0 and id < 30);\n>>> QUERY PLAN\n>>> -----------------------------------------------------------------\n>>> Append (cost=0.00..58.16 rows=12 width=36)\n>>> -> Seq Scan on test1_3_1 (cost=0.00..29.05 rows=6 width=36)\n>>> Filter: ((id > 0) AND (id < 30))\n>>> -> Seq Scan on test1_3_2 (cost=0.00..29.05 rows=6 width=36)\n>>> Filter: ((id > 0) AND (id < 30))\n>>> (5 rows)\n>>>\n>>> I created a new patch to handle this problem, and confirmed the query\n>>> you mentioned works as expected\n>>>\n>>> # explain select * from test1 where (id > 0 and id < 30) or (id > 220 and id < 230);\n>>> QUERY PLAN\n>>> ----------------------------------------------------------------------\n>>> ----- Append (cost=0.00..70.93 rows=26 width=36)\n>>> -> Seq Scan on test1_1_1 (cost=0.00..35.40 rows=13 width=36)\n>>> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id < 230)))\n>>> -> Seq Scan on test1_3_1 (cost=0.00..35.40 rows=13 width=36)\n>>> Filter: (((id > 0) AND (id < 30)) OR ((id > 220) AND (id <\n>>> 230)))\n>>> (5 rows)\n>>>\n>>> v2 patch attached.\n>>> Could you please check it again?\n>> I think the updated patch breaks the promise that get_matching_range_bounds won't set scan_default\n>> based on individual pruning value comparisons. How about the attached delta patch that applies on\n>> top of your earlier v1 patch, which fixes the issue reported by Thibaut?\n>>\n> Indeed. 
I agreed with your proposal.\n> Also, I confirmed your patch works correctly.\n>\n> Best regards,\n> Yuzuko Hosoya\n\nI kept on testing with sub-partitioning.\nI found a case, using 2 default partitions, where a default partition is\nnot pruned:\n\n--------------\n\ncreate table test2(id int, val text) partition by range (id);\ncreate table test2_20_plus_def partition of test2 default;\ncreate table test2_0_20 partition of test2 for values from (0) to (20)\n partition by range (id);\ncreate table test2_0_10 partition of test2_0_20 for values from (0) to (10);\ncreate table test2_10_20_def partition of test2_0_20 default;\n\n# explain (costs off) select * from test2 where id=5 or id=25;\n QUERY PLAN \n-----------------------------------------\n Append\n -> Seq Scan on test2_0_10\n Filter: ((id = 5) OR (id = 25))\n -> Seq Scan on test2_10_20_def\n Filter: ((id = 5) OR (id = 25))\n -> Seq Scan on test2_20_plus_def\n Filter: ((id = 5) OR (id = 25))\n(7 rows)\n\n--------------\n\nI have the same output using Amit's v1-delta.patch or Hosoya's\nv2_default_partition_pruning.patch.\n\n\n\n",
"msg_date": "Tue, 19 Mar 2019 15:58:57 +0100",
"msg_from": "Thibaut Madelaine <thibaut.madelaine@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Thibaut,\n\nOn 2019/03/19 23:58, Thibaut Madelaine wrote:\n> I kept on testing with sub-partitioning.\nThanks.\n\n> I found a case, using 2 default partitions, where a default partition is\n> not pruned:\n> \n> --------------\n> \n> create table test2(id int, val text) partition by range (id);\n> create table test2_20_plus_def partition of test2 default;\n> create table test2_0_20 partition of test2 for values from (0) to (20)\n> partition by range (id);\n> create table test2_0_10 partition of test2_0_20 for values from (0) to (10);\n> create table test2_10_20_def partition of test2_0_20 default;\n> \n> # explain (costs off) select * from test2 where id=5 or id=25;\n> QUERY PLAN \n> -----------------------------------------\n> Append\n> -> Seq Scan on test2_0_10\n> Filter: ((id = 5) OR (id = 25))\n> -> Seq Scan on test2_10_20_def\n> Filter: ((id = 5) OR (id = 25))\n> -> Seq Scan on test2_20_plus_def\n> Filter: ((id = 5) OR (id = 25))\n> (7 rows)\n> \n> --------------\n> \n> I have the same output using Amit's v1-delta.patch or Hosoya's\n> v2_default_partition_pruning.patch.\n\nI think I've figured what may be wrong.\n\nPartition pruning step generation code should ignore any arguments of an\nOR clause that won't be true for a sub-partitioned partition, given its\npartition constraint.\n\nIn this case, id = 25 contradicts test2_0_20's partition constraint (which\nis, a IS NOT NULL AND a >= 0 AND a < 20), so the OR clause should really\nbe simplified to id = 5, ignoring the id = 25 argument. Note that we\nremove id = 25 only for the considerations of pruning and not from the\nactual clause that's passed to the final plan, although it wouldn't be a\nbad idea to try to do that.\n\nAttached revised delta patch, which includes the fix described above.\n\nThanks,\nAmit",
"msg_date": "Wed, 20 Mar 2019 18:06:13 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "\nLe 20/03/2019 à 10:06, Amit Langote a écrit :\n> Hi Thibaut,\n>\n> On 2019/03/19 23:58, Thibaut Madelaine wrote:\n>> I kept on testing with sub-partitioning.\n> Thanks.\n>\n>> I found a case, using 2 default partitions, where a default partition is\n>> not pruned:\n>>\n>> --------------\n>>\n>> create table test2(id int, val text) partition by range (id);\n>> create table test2_20_plus_def partition of test2 default;\n>> create table test2_0_20 partition of test2 for values from (0) to (20)\n>> partition by range (id);\n>> create table test2_0_10 partition of test2_0_20 for values from (0) to (10);\n>> create table test2_10_20_def partition of test2_0_20 default;\n>>\n>> # explain (costs off) select * from test2 where id=5 or id=25;\n>> QUERY PLAN \n>> -----------------------------------------\n>> Append\n>> -> Seq Scan on test2_0_10\n>> Filter: ((id = 5) OR (id = 25))\n>> -> Seq Scan on test2_10_20_def\n>> Filter: ((id = 5) OR (id = 25))\n>> -> Seq Scan on test2_20_plus_def\n>> Filter: ((id = 5) OR (id = 25))\n>> (7 rows)\n>>\n>> --------------\n>>\n>> I have the same output using Amit's v1-delta.patch or Hosoya's\n>> v2_default_partition_pruning.patch.\n> I think I've figured what may be wrong.\n>\n> Partition pruning step generation code should ignore any arguments of an\n> OR clause that won't be true for a sub-partitioned partition, given its\n> partition constraint.\n>\n> In this case, id = 25 contradicts test2_0_20's partition constraint (which\n> is, a IS NOT NULL AND a >= 0 AND a < 20), so the OR clause should really\n> be simplified to id = 5, ignoring the id = 25 argument. Note that we\n> remove id = 25 only for the considerations of pruning and not from the\n> actual clause that's passed to the final plan, although it wouldn't be a\n> bad idea to try to do that.\n>\n> Attached revised delta patch, which includes the fix described above.\n>\n> Thanks,\n> Amit\nAmit, I tested many cases with nested range sub-partitions... 
and I did\nnot find any problem with your last patch :-)\n\nI tried mixing with hash partitions with no problems.\n\nFrom the patch, there seem to be fewer checks than before. I cannot\nthink of a case that could have performance impacts.\n\nHosoya-san, if you agree with Amit's proposal, do you think you can send\na patch unifying your default_partition_pruning.patch and Amit's second\nv1-delta.patch?\n\nCordialement,\n\nThibaut\n\n\n\n\n\n",
"msg_date": "Thu, 21 Mar 2019 14:28:16 +0100",
"msg_from": "Thibaut <thibaut.madelaine@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi,\n\nThanks a lot for additional tests and the new patch.\n\n\n> Le 20/03/2019 à 10:06, Amit Langote a écrit :\n> > Hi Thibaut,\n> >\n> > On 2019/03/19 23:58, Thibaut Madelaine wrote:\n> >> I kept on testing with sub-partitioning.\n> > Thanks.\n> >\n> >> I found a case, using 2 default partitions, where a default partition\n> >> is not pruned:\n> >>\n> >> --------------\n> >>\n> >> create table test2(id int, val text) partition by range (id); create\n> >> table test2_20_plus_def partition of test2 default; create table\n> >> test2_0_20 partition of test2 for values from (0) to (20)\n> >> partition by range (id);\n> >> create table test2_0_10 partition of test2_0_20 for values from (0)\n> >> to (10); create table test2_10_20_def partition of test2_0_20\n> >> default;\n> >>\n> >> # explain (costs off) select * from test2 where id=5 or id=25;\n> >> QUERY PLAN\n> >> -----------------------------------------\n> >> Append\n> >> -> Seq Scan on test2_0_10\n> >> Filter: ((id = 5) OR (id = 25))\n> >> -> Seq Scan on test2_10_20_def\n> >> Filter: ((id = 5) OR (id = 25))\n> >> -> Seq Scan on test2_20_plus_def\n> >> Filter: ((id = 5) OR (id = 25))\n> >> (7 rows)\n> >>\n> >> --------------\n> >>\n> >> I have the same output using Amit's v1-delta.patch or Hosoya's\n> >> v2_default_partition_pruning.patch.\n> > I think I've figured what may be wrong.\n> >\n> > Partition pruning step generation code should ignore any arguments of\n> > an OR clause that won't be true for a sub-partitioned partition, given\n> > its partition constraint.\n> >\n> > In this case, id = 25 contradicts test2_0_20's partition constraint\n> > (which is, a IS NOT NULL AND a >= 0 AND a < 20), so the OR clause\n> > should really be simplified to id = 5, ignoring the id = 25 argument.\n> > Note that we remove id = 25 only for the considerations of pruning and\n> > not from the actual clause that's passed to the final plan, although\n> > it wouldn't be a bad idea to try to do that.\n> >\n> > 
Attached revised delta patch, which includes the fix described above.\n> >\n> > Thanks,\n> > Amit\n> Amit, I tested many cases with nested range sub-partitions... and I did not find any problem with your\n> last patch :-)\n> \n> I tried mixing with hash partitions with no problems.\n> \n> From the patch, there seem to be fewer checks than before. I cannot think of a case that could have\n> performance impacts.\n> \n> Hosoya-san, if you agree with Amit's proposal, do you think you can send a patch unifying your\n> default_partition_pruning.patch and Amit's second v1-delta.patch?\n>\n\nI understood Amit's proposal. But I think the issue Thibaut reported would \noccur regardless of whether the WHERE clause contains OR clauses, as follows.\nI tested a query which should output \"One-Time Filter: false\".\n\n# explain select * from test2_0_20 where id = 25;\n QUERY PLAN \n-----------------------------------------------------------------------\n Append (cost=0.00..25.91 rows=6 width=36)\n -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n Filter: (id = 25)\n\n\nAs Amit described in the previous email, id = 25 contradicts test2_0_20's\npartition constraint, so I think this clause should be ignored and we can\nalso handle this case in a similar way to Amit's proposal.\n\nI attached v1-delta-2.patch, which fixes the above issue. \n\nWhat do you think about it?\n\n\nBest regards,\nYuzuko Hosoya",
"msg_date": "Fri, 22 Mar 2019 15:02:51 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\nOn 2019/03/22 15:02, Yuzuko Hosoya wrote:\n> I understood Amit's proposal. But I think the issue Thibaut reported would \n> occur regardless of whether the WHERE clause contains OR clauses, as follows.\n> I tested a query which should output \"One-Time Filter: false\".\n> \n> # explain select * from test2_0_20 where id = 25;\n> QUERY PLAN \n> -----------------------------------------------------------------------\n> Append (cost=0.00..25.91 rows=6 width=36)\n> -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n> Filter: (id = 25)\n> \n\nGood catch, thanks.\n\n> As Amit described in the previous email, id = 25 contradicts test2_0_20's\n> partition constraint, so I think this clause should be ignored and we can\n> also handle this case in a similar way to Amit's proposal.\n> \n> I attached v1-delta-2.patch, which fixes the above issue. \n> \n> What do you think about it?\n\nIt looks fine to me. You put the code block to check whether a given\nclause contradicts the partition constraint in its perfect place. :)\n\nMaybe we should have two patches as we seem to be improving two things:\n\n1. Patch to fix problems with default partition pruning originally\nreported by Hosoya-san\n\n2. Patch to determine if a given clause contradicts a sub-partitioned\ntable's partition constraint, fixing problems unearthed by Thibaut's tests\n\nAbout the patch that Horiguchi-san proposed upthread, I think it has merit\nthat it will make partprune.c code easier to reason about, but I think we\nshould pursue it separately.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 22 Mar 2019 15:38:27 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "\nLe 22/03/2019 à 07:38, Amit Langote a écrit :\n> Hosoya-san,\n>\n> On 2019/03/22 15:02, Yuzuko Hosoya wrote:\n>> I understood Amit's proposal. But I think the issue Thibaut reported would \n>> occur regardless of whether the WHERE clause contains OR clauses, as follows.\n>> I tested a query which should output \"One-Time Filter: false\".\n>>\n>> # explain select * from test2_0_20 where id = 25;\n>> QUERY PLAN \n>> -----------------------------------------------------------------------\n>> Append (cost=0.00..25.91 rows=6 width=36)\n>> -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n>> Filter: (id = 25)\n>>\n> Good catch, thanks.\n>\n>> As Amit described in the previous email, id = 25 contradicts test2_0_20's\n>> partition constraint, so I think this clause should be ignored and we can\n>> also handle this case in a similar way to Amit's proposal.\n>>\n>> I attached v1-delta-2.patch, which fixes the above issue. \n>>\n>> What do you think about it?\n> It looks fine to me. You put the code block to check whether a given\n> clause contradicts the partition constraint in its perfect place. :)\n>\n> Maybe we should have two patches as we seem to be improving two things:\n>\n> 1. Patch to fix problems with default partition pruning originally\n> reported by Hosoya-san\n>\n> 2. 
Patch to determine if a given clause contradicts a sub-partitioned\n> table's partition constraint, fixing problems unearthed by Thibaut's tests\n>\n> About the patch that Horiguchi-san proposed upthread, I think it has merit\n> that it will make partprune.c code easier to reason about, but I think we\n> should pursue it separately.\n>\n> Thanks,\n> Amit\n\nHosoya-san, very good idea to run queries directly on table partitions!\n\nI tested your last patch, and if I didn't mix up patches at the end of a\ntoo-long week, I get a problem when querying the sub-sub partition:\n\ntest=# explain select * from test2_0_10 where id = 25;\n QUERY PLAN \n------------------------------------------------------------\n Seq Scan on test2_0_10 (cost=0.00..25.88 rows=6 width=36)\n Filter: (id = 25)\n(2 rows)\n\n\nCordialement,\n\nThibaut",
"msg_date": "Fri, 22 Mar 2019 18:36:19 +0100",
"msg_from": "Thibaut Madelaine <thibaut.madelaine@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi,\n\nOn 2019/03/23 2:36, Thibaut Madelaine wrote:\n> I tested your last patch and if I didn't mix up patches on the end of a\n> too long week, I get a problem when querying the sub-sub partition:\n> \n> test=# explain select * from test2_0_10 where id = 25;\n> QUERY PLAN \n> ------------------------------------------------------------\n> Seq Scan on test2_0_10 (cost=0.00..25.88 rows=6 width=36)\n> Filter: (id = 25)\n> (2 rows)\n\nThe problem here is not really related to partition pruning, but another\nproblem I recently sent an email about:\n\nhttps://www.postgresql.org/message-id/9813f079-f16b-61c8-9ab7-4363cab28d80%40lab.ntt.co.jp\n\nThe problem in this case is that *constraint exclusion* is not working,\nbecause partition constraint is not loaded by the planner. Note that\npruning is only used if a query specifies the parent table, not a partition.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 25 Mar 2019 09:21:06 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi,\n\n> \n> Hi,\n> \n> On 2019/03/23 2:36, Thibaut Madelaine wrote:\n> > I tested your last patch and if I didn't mix up patches on the end of\n> > a too long week, I get a problem when querying the sub-sub partition:\n> >\n> > test=# explain select * from test2_0_10 where id = 25;\n> > QUERY PLAN\n> > ------------------------------------------------------------\n> > Seq Scan on test2_0_10 (cost=0.00..25.88 rows=6 width=36)\n> > Filter: (id = 25)\n> > (2 rows)\n> \n> The problem here is not really related to partition pruning, but another problem I recently sent an\n> email about:\n> \n> https://www.postgresql.org/message-id/9813f079-f16b-61c8-9ab7-4363cab28d80%40lab.ntt.co.jp\n> \n> The problem in this case is that *constraint exclusion* is not working, because partition constraint\n> is not loaded by the planner. Note that pruning is only used if a query specifies the parent table,\n> not a partition.\n\nThanks for the comments.\n\nI saw that email. Also, I checked that query Thibaut mentioned worked\ncorrectly with Amit's patch discussed in that thread.\n\n\nBest regards,\nYuzuko Hosoya\n\n\n\n",
"msg_date": "Mon, 25 Mar 2019 11:03:05 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hi,\n\n> Maybe we should have two patches as we seem to be improving two things:\n> \n> 1. Patch to fix problems with default partition pruning originally reported by Hosoya-san\n> \n> 2. Patch to determine if a given clause contradicts a sub-partitioned table's partition constraint,\n> fixing problems unearthed by Thibaut's tests\n\nI attached the latest patches according to Amit comment.\nv3_default_partition_pruning.patch fixes default partition pruning problems\nand ignore_contradictory_where_clauses_at_partprune_step.patch fixes\nsub-partition problems Thibaut tested.\n\nBest regards,\nYuzuko Hosoya",
"msg_date": "Tue, 2 Apr 2019 14:02:08 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\nOn 2019/04/02 14:02, Yuzuko Hosoya wrote:\n> Hi,\n> \n>> Maybe we should have two patches as we seem to be improving two things:\n>>\n>> 1. Patch to fix problems with default partition pruning originally reported by Hosoya-san\n>>\n>> 2. Patch to determine if a given clause contradicts a sub-partitioned table's partition constraint,\n>> fixing problems unearthed by Thibaut's tests\n> \n> I attached the latest patches according to Amit comment.\n> v3_default_partition_pruning.patch fixes default partition pruning problems\n> and ignore_contradictory_where_clauses_at_partprune_step.patch fixes\n> sub-partition problems Thibaut tested.\n\nThanks for dividing patches that way.\n\nWould it be a good idea to add some new test cases to these patches, just\nso it's easily apparent what we're changing?\n\nSo, we could add the test case presented by Thibaut at the following link\nto the default_partition_pruning.patch:\n\nhttps://www.postgresql.org/message-id/a4968068-6401-7a9c-8bd4-6a3bc9164a86%40dalibo.com\n\nAnd, another reported at the following link to\nignore_contradictory_where_clauses_at_partprune_step.patch:\n\nhttps://www.postgresql.org/message-id/bd03f475-30d4-c4d0-3d7f-d2fbde755971%40dalibo.com\n\nActually, it might be possible/better to construct the test queries in\npartition_prune.sql using the existing tables in that script, that is,\nwithout defining new tables just for adding the new test cases. If not,\nmaybe it's OK to create the new tables too.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 3 Apr 2019 10:54:49 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Amit-san,\n\nThanks for the comments.\n\n> \n> Thanks for dividing patches that way.\n> \n> Would it be a good idea to add some new test cases to these patches, just so it's easily apparent what\n> we're changing?\nYes, I agree with you.\n\n> \n> So, we could add the test case presented by Thibaut at the following link to the\n> default_partition_pruning.patch:\n> \n> https://www.postgresql.org/message-id/a4968068-6401-7a9c-8bd4-6a3bc9164a86%40dalibo.com\n>\n> And, another reported at the following link to\n> ignore_contradictory_where_clauses_at_partprune_step.patch:\n> \n> https://www.postgresql.org/message-id/bd03f475-30d4-c4d0-3d7f-d2fbde755971%40dalibo.com\n> \n> Actually, it might be possible/better to construct the test queries in partition_prune.sql using the\n> existing tables in that script, that is, without defining new tables just for adding the new test cases.\n> If not, maybe it's OK to create the new tables too.\n>\nI see. I added some test cases to each patch according to tests \ndiscussed in this thread.\n\nHowever, I found another problem as follows. 
This query should\noutput \"One-Time Filter: false\" because rlp3's constraints\ncontradict the WHERE clause.\n\n-----\npostgres=# \\d+ rlp3\n Partitioned table \"public.rlp3\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+-------------------+-----------+----------+---------+----------+--------------+-------------\n b | character varying | | | | extended | | \n a | integer | | | | plain | | \nPartition of: rlp FOR VALUES FROM (15) TO (20)\nPartition constraint: ((a IS NOT NULL) AND (a >= 15) AND (a < 20))\nPartition key: LIST (b varchar_ops)\nPartitions: rlp3abcd FOR VALUES IN ('ab', 'cd'),\n rlp3efgh FOR VALUES IN ('ef', 'gh'),\n rlp3nullxy FOR VALUES IN (NULL, 'xy'),\n rlp3_default DEFAULT\n\npostgres=# explain select * from rlp3 where a = 2;\n QUERY PLAN \n--------------------------------------------------------------------\n Append (cost=0.00..103.62 rows=24 width=36)\n -> Seq Scan on rlp3abcd (cost=0.00..25.88 rows=6 width=36)\n Filter: (a = 2)\n -> Seq Scan on rlp3efgh (cost=0.00..25.88 rows=6 width=36)\n Filter: (a = 2)\n -> Seq Scan on rlp3nullxy (cost=0.00..25.88 rows=6 width=36)\n Filter: (a = 2)\n -> Seq Scan on rlp3_default (cost=0.00..25.88 rows=6 width=36)\n Filter: (a = 2)\n(9 rows)\n-----\n\nI think the contradiction check was placed in the wrong spot in\nignore_contradictory_where_clauses_at_partprune_step.patch,\nso I fixed it.\n\nAttached are the latest patches. Please check them again.\n\nBest regards,\nYuzuko Hosoya",
"msg_date": "Thu, 4 Apr 2019 13:00:55 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\n\nOn 2019/04/04 13:00, Yuzuko Hosoya wrote:\n> I added some test cases to each patch according to tests \n> discussed in this thread.\n\nThanks a lot.\n\n> However, I found another problem as follows. This query should \n> output \"One-Time Filter: false\" because rlp3's constraints \n> contradict WHERE clause.\n> \n> -----\n> postgres=# \\d+ rlp3\n> Partitioned table \"public.rlp3\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n> --------+-------------------+-----------+----------+---------+----------+--------------+-------------\n> b | character varying | | | | extended | | \n> a | integer | | | | plain | | \n> Partition of: rlp FOR VALUES FROM (15) TO (20)\n> Partition constraint: ((a IS NOT NULL) AND (a >= 15) AND (a < 20))\n> Partition key: LIST (b varchar_ops)\n> Partitions: rlp3abcd FOR VALUES IN ('ab', 'cd'),\n> rlp3efgh FOR VALUES IN ('ef', 'gh'),\n> rlp3nullxy FOR VALUES IN (NULL, 'xy'),\n> rlp3_default DEFAULT\n> \n> postgres=# explain select * from rlp3 where a = 2;\n> QUERY PLAN \n> --------------------------------------------------------------------\n> Append (cost=0.00..103.62 rows=24 width=36)\n> -> Seq Scan on rlp3abcd (cost=0.00..25.88 rows=6 width=36)\n> Filter: (a = 2)\n> -> Seq Scan on rlp3efgh (cost=0.00..25.88 rows=6 width=36)\n> Filter: (a = 2)\n> -> Seq Scan on rlp3nullxy (cost=0.00..25.88 rows=6 width=36)\n> Filter: (a = 2)\n> -> Seq Scan on rlp3_default (cost=0.00..25.88 rows=6 width=36)\n> Filter: (a = 2)\n> (9 rows)\n> -----\n\nThis one too would be solved with the other patch I mentioned to fix\nget_relation_info() to load the partition constraint so that constraint\nexclusion can use it. 
Partition in the earlier example given by Thibaut\nis a leaf partition, whereas rlp3 above is a sub-partitioned partition,\nbut both are partitions nonetheless.\n\nFixing partprune.c like we're doing with the\nv2_ignore_contradictory_where_clauses_at_partprune_step.patch only works\nfor the latter, because only partitioned tables visit partprune.c.\n\nOTOH, the other patch only applies to situations where\nconstraint_exclusion = on.\n\n> I think that the place of check contradiction process was wrong \n> At ignore_contradictory_where_clauses_at_partprune_step.patch.\n> So I fixed it.\n\nThanks. Patch contains some whitespace noise:\n\n$ git diff --check\nsrc/backend/partitioning/partprune.c:790: trailing whitespace.\n+ * given its partition constraint, we can ignore it,\nsrc/backend/partitioning/partprune.c:791: trailing whitespace.\n+ * that is not try to pass it to the pruning code.\nsrc/backend/partitioning/partprune.c:792: trailing whitespace.\n+ * We should do that especially to avoid pruning code\nsrc/backend/partitioning/partprune.c:810: trailing whitespace.\n+\nsrc/test/regress/sql/partition_prune.sql:87: trailing whitespace.\n+-- where clause contradicts sub-partition's constraint\n\nCan you please fix it?\n\n\nBTW, now I'm a bit puzzled between whether this case should be fixed by\nhacking on partprune.c like this patch does or whether to work on getting\nthe other patch committed and expect users to set constraint_exclusion =\non for this to behave as expected. The original intention of setting\npartition_qual in set_relation_partition_info() was for partprune.c to use\nit to remove useless arguments of OR clauses which otherwise would cause\nthe failure to correctly prune the default partitions of sub-partitioned\ntables. As shown by the examples in this thread, the original effort was\ninsufficient, which this patch aims to improve. 
But, it also expands the\nscope of partprune.c's usage of partition_qual, which is to effectively\nperform full-blown constraint exclusion without being controllable by\nconstraint_exclusion GUC, which may be seen as being good or bad. The\nfact that it helps in getting partition pruning working correctly in more\nobscure cases like those discussed in this thread means it's good maybe.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:46:55 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Amit-san,\n\n> -----Original Message-----\n> From: Amit Langote [mailto:Langote_Amit_f8@lab.ntt.co.jp]\n> Sent: Friday, April 05, 2019 6:47 PM\n> To: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>; 'Thibaut' <thibaut.madelaine@dalibo.com>; 'Imai,\n> Yoshikazu' <imai.yoshikazu@jp.fujitsu.com>\n> Cc: 'PostgreSQL Hackers' <pgsql-hackers@lists.postgresql.org>\n> Subject: Re: Problem with default partition pruning\n> \n> Hosoya-san,\n> \n> \n> On 2019/04/04 13:00, Yuzuko Hosoya wrote:\n> > I added some test cases to each patch according to tests discussed in\n> > this thread.\n> \n> Thanks a lot.\n> \n> > However, I found another problem as follows. This query should output\n> > \"One-Time Filter: false\" because rlp3's constraints contradict WHERE\n> > clause.\n> >\n> > -----\n> > postgres=# \\d+ rlp3\n> > Partitioned table \"public.rlp3\"\n> > Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n> >\n> --------+-------------------+-----------+----------+---------+----------+--------------+---------\n> ----\n> > b | character varying | | | | extended | |\n> > a | integer | | | | plain | |\n> > Partition of: rlp FOR VALUES FROM (15) TO (20) Partition constraint:\n> > ((a IS NOT NULL) AND (a >= 15) AND (a < 20)) Partition key: LIST (b\n> > varchar_ops)\n> > Partitions: rlp3abcd FOR VALUES IN ('ab', 'cd'),\n> > rlp3efgh FOR VALUES IN ('ef', 'gh'),\n> > rlp3nullxy FOR VALUES IN (NULL, 'xy'),\n> > rlp3_default DEFAULT\n> >\n> > postgres=# explain select * from rlp3 where a = 2;\n> > QUERY PLAN\n> > --------------------------------------------------------------------\n> > Append (cost=0.00..103.62 rows=24 width=36)\n> > -> Seq Scan on rlp3abcd (cost=0.00..25.88 rows=6 width=36)\n> > Filter: (a = 2)\n> > -> Seq Scan on rlp3efgh (cost=0.00..25.88 rows=6 width=36)\n> > Filter: (a = 2)\n> > -> Seq Scan on rlp3nullxy (cost=0.00..25.88 rows=6 width=36)\n> > Filter: (a = 2)\n> > -> Seq Scan on rlp3_default (cost=0.00..25.88 rows=6 
width=36)\n> > Filter: (a = 2)\n> > (9 rows)\n> > -----\n> \n> This one too would be solved with the other patch I mentioned to fix\n> get_relation_info() to load the partition constraint so that constraint exclusion can use it.\n> Partition in the earlier example given by Thibaut is a leaf partition, whereas rlp3 above is a\n> sub-partitioned partition, but both are partitions nonetheless.\n> \n> Fixing partprune.c like we're doing with the\n> v2_ignore_contradictory_where_clauses_at_partprune_step.patch only works for the latter, because only\n> partitioned tables visit partprune.c.\n> \n> OTOH, the other patch only applies to situations where constraint_exclusion = on.\n> \nI see. I think the following example, discussed earlier in this thread, would\nalso be solved by your patch, not v2_ignore_contradictory_where_clauses_at_partprune_step.patch.\n\npostgres=# set constraint_exclusion to on;\n\npostgres=# explain select * from test2_0_20 where id = 25;\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.00 rows=0 width=0)\n One-Time Filter: false\n(2 rows)\n\n\n> > I think that the place of check contradiction process was wrong At\n> > ignore_contradictory_where_clauses_at_partprune_step.patch.\n> > So I fixed it.\n> \n> Thanks. 
Patch contains some whitespace noise:\n> \n> $ git diff --check\n> src/backend/partitioning/partprune.c:790: trailing whitespace.\n> + * given its partition constraint, we can ignore it,\n> src/backend/partitioning/partprune.c:791: trailing whitespace.\n> + * that is not try to pass it to the pruning code.\n> src/backend/partitioning/partprune.c:792: trailing whitespace.\n> + * We should do that especially to avoid pruning code\n> src/backend/partitioning/partprune.c:810: trailing whitespace.\n> +\n> src/test/regress/sql/partition_prune.sql:87: trailing whitespace.\n> +-- where clause contradicts sub-partition's constraint\n> \n> Can you please fix it?\n> \nThanks for checking.\nI'm attaching the latest patch.\n\n> \n> BTW, now I'm a bit puzzled between whether this case should be fixed by hacking on partprune.c like\n> this patch does or whether to work on getting the other patch committed and expect users to set\n> constraint_exclusion = on for this to behave as expected. The original intention of setting\n> partition_qual in set_relation_partition_info() was for partprune.c to use it to remove useless\n> arguments of OR clauses which otherwise would cause the failure to correctly prune the default partitions\n> of sub-partitioned tables. As shown by the examples in this thread, the original effort was\n> insufficient, which this patch aims to improve. But, it also expands the scope of partprune.c's usage\n> of partition_qual, which is to effectively perform full-blown constraint exclusion without being\n> controllable by constraint_exclusion GUC, which may be seen as being good or bad. The fact that it\n> helps in getting partition pruning working correctly in more obscure cases like those discussed in\n> this thread means it's good maybe.\n> \nUmm, even though this modification might be overhead, I think this problem should be solved\nwithout setting constraint_exclusion GUC. But I'm not sure.\n\nBest regards,\nYuzuko Hosoya",
"msg_date": "Mon, 8 Apr 2019 16:57:35 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "At Mon, 8 Apr 2019 16:57:35 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <00c101d4ede0$babd4390$3037cab0$@lab.ntt.co.jp>\n> > BTW, now I'm a bit puzzled between whether this case should be fixed by hacking on partprune.c like\n> > this patch does or whether to work on getting the other patch committed and expect users to set\n> > constraint_exclusion = on for this to behave as expected. The original intention of setting\n> > partition_qual in set_relation_partition_info() was for partprune.c to use it to remove useless\n> > arguments of OR clauses which otherwise would cause the failure to correctly prune the default partitions\n> > of sub-partitioned tables. As shown by the examples in this thread, the original effort was\n> > insufficient, which this patch aims to improve. But, it also expands the scope of partprune.c's usage\n> > of partition_qual, which is to effectively perform full-blown constraint exclusion without being\n> > controllable by constraint_exclusion GUC, which may be seen as being good or bad. The fact that it\n> > helps in getting partition pruning working correctly in more obscure cases like those discussed in\n> > this thread means it's good maybe.\n> > \n> Umm, even though this modification might be overhead, I think this problem should be solved\n> without setting constraint_exclusion GUC. But I'm not sure.\n\nPartition pruning and constraint exclusion are orthogonal\nfunctions. Note that the default value of the variable is\nCONSTRAINT_EXCLUSION_PARTITION and the behavior is not a perfect\nbug. 
So I think we can reasonably ignore constraints when\nconstraint_exclusion is intentionally turned off.\n\nAs the result I propose to move the \"if(partconstr)\" block in the\nlatest patches after the constant-false block, changing the\ncondition as \"if (partconstr && constraint_exclusion !=\nCONSTRAINT_EXCLUSION_OFF)\".\n\nThis make partprune reacts to constraint_exclusion the consistent\nway with the old-fashioned partitioning.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 08 Apr 2019 20:42:51 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Mon, 08 Apr 2019 20:42:51 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190408.204251.143128146.horiguchi.kyotaro@lab.ntt.co.jp>\n> At Mon, 8 Apr 2019 16:57:35 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <00c101d4ede0$babd4390$3037cab0$@lab.ntt.co.jp>\n> > > BTW, now I'm a bit puzzled between whether this case should be fixed by hacking on partprune.c like\n> > > this patch does or whether to work on getting the other patch committed and expect users to set\n> > > constraint_exclusion = on for this to behave as expected. The original intention of setting\n> > > partition_qual in set_relation_partition_info() was for partprune.c to use it to remove useless\n> > > arguments of OR clauses which otherwise would cause the failure to correctly prune the default partitions\n> > > of sub-partitioned tables. As shown by the examples in this thread, the original effort was\n> > > insufficient, which this patch aims to improve. But, it also expands the scope of partprune.c's usage\n> > > of partition_qual, which is to effectively perform full-blown constraint exclusion without being\n> > > controllable by constraint_exclusion GUC, which may be seen as being good or bad. The fact that it\n> > > helps in getting partition pruning working correctly in more obscure cases like those discussed in\n> > > this thread means it's good maybe.\n> > > \n> > Umm, even though this modification might be overhead, I think this problem should be solved\n> > without setting constraint_exclusion GUC. But I'm not sure.\n> \n> Partition pruning and constraint exclusion are orthogonal\n> functions. Note that the default value of the variable is\n> CONSTRAINT_EXCLUSION_PARTITION and the behavior is not a perfect\n> bug. 
So I think we can reasonably ignore constraints when\n> constraint_exclusion is intentionally turned off.\n\n> As the result I propose to move the \"if(partconstr)\" block in the\n> latest patches after the constant-false block, changing the\n> condition as \"if (partconstr && constraint_exclusion !=\n> CONSTRAINT_EXCLUSION_OFF)\".\n\nMmm. Something is wrong. I should have been sleeping at the\ntime. In my opinion, what we should do there is:\n\n- Try partition pruning first.\n\n- If the partition was not pruned and a constraint is set, check\n for constant false.\n\n- If constraint_exclusion is turned on and a constraint is set,\n examine the constraint.\n\nSorry for the stupidity.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 09 Apr 2019 09:04:04 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Mon, 8 Apr 2019 16:57:35 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <00c101d4ede0$babd4390$3037cab0$@lab.ntt.co.jp>\n> > BTW, now I'm a bit puzzled between whether this case should be fixed by hacking on partprune.c like\n> > this patch does or whether to work on getting the other patch committed and expect users to set\n> > constraint_exclusion = on for this to behave as expected. The original intention of setting\n> > partition_qual in set_relation_partition_info() was for partprune.c to use it to remove useless\n> > arguments of OR clauses which otherwise would cause the failure to correctly prune the default partitions\n> > of sub-partitioned tables. As shown by the examples in this thread, the original effort was\n> > insufficient, which this patch aims to improve. But, it also expands the scope of partprune.c's usage\n> > of partition_qual, which is to effectively perform full-blown constraint exclusion without being\n> > controllable by constraint_exclusion GUC, which may be seen as being good or bad. The fact that it\n> > helps in getting partition pruning working correctly in more obscure cases like those discussed in\n> > this thread means it's good maybe.\n> > \n> Umm, even though this modification might be overhead, I think this problem should be solved\n> without setting constraint_exclusion GUC. But I'm not sure.\n\nAs the second thought. Partition constraint is not constraint\nexpression so that's fair to apply partqual ignoring\nconstraint_exclusion. The variable is set false to skip useless\nexpression evaluation on all partitions, but partqual should be\nevaluated just once. Sorry for my confusion.\n\nSo still it is wrong that the new code is added in\ngen_partprune_steps_internal. If partqual results true and the\nclause is long, the partqual is evaluated uselessly at every\nrecursion.\n\nMaybe we should do that when we find that the current clause\ndoesn't match part attributes. 
Specifically just after the for\nloop \"for (i = 0 ; i < part_scheme->partnattrs; i++)\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 09 Apr 2019 10:28:48 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Sigh..\n\nAt Tue, 09 Apr 2019 10:28:48 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190409.102848.252476604.horiguchi.kyotaro@lab.ntt.co.jp>\n> As the second thought. Partition constraint is not constraint\n> expression so that's fair to apply partqual ignoring\n> constraint_exclusion. The variable is set false to skip useless\n> expression evaluation on all partitions, but partqual should be\n> evaluated just once. Sorry for my confusion.\n> \n> So still it is wrong that the new code is added in\n> gen_partprune_steps_internal.\n\nSo still it is wrong that the new code is added at the beginning\nof the loop on clauses in gen_partprune_steps_internal.\n\n> If partqual results true and the\n> clause is long, the partqual is evaluated uselessly at every\n> recursion.\n> \n> Maybe we should do that when we find that the current clause\n> doesn't match part attributes. Specifically just after the for\n> loop \"for (i = 0 ; i < part_scheme->partnattrs; i++)\".\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 09 Apr 2019 10:33:17 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Horiguchi-san,\n\nThanks for your comments.\n\n> -----Original Message-----\n> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp]\n> Sent: Tuesday, April 09, 2019 10:33 AM\n> To: hosoya.yuzuko@lab.ntt.co.jp\n> Cc: Langote_Amit_f8@lab.ntt.co.jp; thibaut.madelaine@dalibo.com; imai.yoshikazu@jp.fujitsu.com;\n> pgsql-hackers@lists.postgresql.org\n> Subject: Re: Problem with default partition pruning\n> \n> Sigh..\n> \n> At Tue, 09 Apr 2019 10:28:48 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote in\n> <20190409.102848.252476604.horiguchi.kyotaro@lab.ntt.co.jp>\n> > As the second thought. Partition constraint is not constraint\n> > expression so that's fair to apply partqual ignoring\n> > constraint_exclusion. The variable is set false to skip useless\n> > expression evaluation on all partitions, but partqual should be\n> > evaluated just once. Sorry for my confusion.\n> >\n> > So still it is wrong that the new code is added in\n> > gen_partprune_steps_internal.\n> \n> So still it is wrong that the new code is added at the beginning of the loop on clauses in\n> gen_partprune_steps_internal.\n> \n> > If partqual results true and the clause\n> > is long, the partqual is evaluated uselessly at every recursion.\n> >\n> > Maybe we should do that when we find that the current clause doesn't\n> > match part attributes. Specifically just after the for loop \"for (i =\n> > 0 ; i < part_scheme->partnattrs; i++)\".\n>\nI think we should check whether WHERE clause contradicts partition\nconstraint even when the clause matches part attributes. So I moved\n\"if (partqual)\" block to the beginning of the loop you mentioned. \n\nI'm attaching the latest version. Could you please check it again?\n\nBest regards,\nYuzuko Hosoya",
"msg_date": "Tue, 9 Apr 2019 16:41:47 +0900",
"msg_from": "\"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "RE: Problem with default partition pruning"
},
{
"msg_contents": "Hi.\n\nAt Tue, 9 Apr 2019 16:41:47 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote in <00cf01d4eea7$afa43370$0eec9a50$@lab.ntt.co.jp>\n> > So still it is wrong that the new code is added at the beginning of the loop on clauses in\n> > gen_partprune_steps_internal.\n> > \n> > > If partqual results true and the clause\n> > > is long, the partqual is evaluated uselessly at every recursion.\n> > >\n> > > Maybe we should do that when we find that the current clause doesn't\n> > > match part attributes. Specifically just after the for loop \"for (i =\n> > > 0 ; i < part_scheme->partnattrs; i++)\".\n> >\n> I think we should check whether WHERE clause contradicts partition\n> constraint even when the clause matches part attributes. So I moved\n\nWhy? If clauses contains a clause on a partition key, the clause\nis involved in determination of whether a partition survives or\nnot in ordinary way. Could you show how or on what configuration\n(tables and query) it happens that such a matching clause needs\nto be checked against partqual?\n\nThe \"if (partconstr)\" block uselessly runs for every clause in\nthe clause tree other than BoolExpr. If we want do that, isn't\njust doing predicate_refuted_by(partconstr, clauses, false)\nsufficient before looping over clauses?\n\n\n> \"if (partqual)\" block to the beginning of the loop you mentioned. \n>\n> I'm attaching the latest version. Could you please check it again?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 09 Apr 2019 17:37:25 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Tue, 09 Apr 2019 17:37:25 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190409.173725.31175835.horiguchi.kyotaro@lab.ntt.co.jp>\n> > I'm attaching the latest version. Could you please check it again?\n\nBy the way, I noticed that partition constraint in a multi-level\npartition contains duplicate clauses.\n\ncreate table p (a int) partition by range (a);\ncreate table c1 partition of p for values from (0) to (10) partition by range (a);\ncreate table c11 partition of c1 for values from (0) to (2) partition by range (a);\ncreate table c12 partition of c1 for values from (2) to (4) partition by range (a);\n\n=# \\d+ c12\n| Partitioned table \"public.c12\"\n| Column | Type | Collation | Nullable | Default | Storage | Stats target | De\n| scription \n| --------+---------+-----------+----------+---------+---------+--------------+---\n| ----------\n| a | integer | | | | plain | | \n| Partition of: c1 FOR VALUES FROM (2) TO (4)\n| Partition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 10) AND (a IS NOT N\n| ULL) AND (a >= 2) AND (a < 4))\n| Partition key: RANGE (a)\n| Number of partitions: 0\n\n\nThe partition constraint is equivalent to \"(a IS NOT NULL) AND (a\n>= 2) AND (a < 4)\". Is it intentional (for, for example,\nperformance reasons)? Or is it reasonable to deduplicate the\nquals?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 09 Apr 2019 17:51:24 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Horiguchi-san,\n\nOn 2019/04/09 17:51, Kyotaro HORIGUCHI wrote:\n> At Tue, 09 Apr 2019 17:37:25 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190409.173725.31175835.horiguchi.kyotaro@lab.ntt.co.jp>\n>>> I'm attaching the latest version. Could you please check it again?\n> \n> By the way, I noticed that partition constraint in a multi-level\n> partition contains duplicate clauses.\n> \n> create table p (a int) partition by range (a);\n> create table c1 partition of p for values from (0) to (10) partition by range (a);\n> create table c11 partition of c1 for values from (0) to (2) partition by range (a);\n> create table c12 partition of c1 for values from (2) to (4) partition by range (a);\n> \n> =# \\d+ c12\n> | Partitioned table \"public.c12\"\n> | Column | Type | Collation | Nullable | Default | Storage | Stats target | De\n> | scription \n> | --------+---------+-----------+----------+---------+---------+--------------+---\n> | ----------\n> | a | integer | | | | plain | | \n> | Partition of: c1 FOR VALUES FROM (2) TO (4)\n> | Partition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 10) AND (a IS NOT N\n> | ULL) AND (a >= 2) AND (a < 4))\n> | Partition key: RANGE (a)\n> | Number of partitions: 0\n> \n> \n> The partition constraint is equivalent to \"(a IS NOT NULL) AND (a\n>> = 2) AND (a < 4)\". Is it intentional (for, for example,\n> performance reasons)? 
Or is it reasonable to deduplicate the\n> quals?\n\nYeah, we don't try to simplify that due to lack of infrastructure, maybe.\nIf said infrastructure was present, maybe CHECK constraints would already\nbe using that, which doesn't seem to be the case.\n\ncreate table foo (a int check ((a IS NOT NULL) AND (a >= 0) AND (a < 10)\nAND (a IS NOT NULL) AND (a >= 2) AND (a < 4)));\n\n\\d foo\n Table \"public.foo\"\n Column │ Type │ Collation │ Nullable │ Default\n────────┼─────────┼───────────┼──────────┼─────────\n a │ integer │ │ │\nCheck constraints:\n \"foo_a_check\" CHECK (a IS NOT NULL AND a >= 0 AND a < 10 AND a IS NOT\nNULL AND a >= 2 AND a < 4)\n\nNow it's true that users wouldn't manually write expressions like that,\nbut the expressions might be an automatically generated, which is also the\ncase with partition constraint of a deeply nested partition.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 9 Apr 2019 18:09:20 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi, Amit. Thank you for the explanation.\n\nAt Tue, 9 Apr 2019 18:09:20 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <4c1074cc-bf60-1610-c728-9a5b12f5b234@lab.ntt.co.jp>\n> > The partition constraint is equivalent to \"(a IS NOT NULL) AND (a\n> >> = 2) AND (a < 4)\". Is it intentional (for, for example,\n> > performance reasons)? Or is it reasonable to deduplicate the\n> > quals?\n> \n> Yeah, we don't try to simplify that due to lack of infrastructure, maybe.\n> If said infrastructure was present, maybe CHECK constraints would already\n> be using that, which doesn't seem to be the case.\n\nDoesn't predicate_implied_by do that?\nWith the attached small patch, the partqual in my example becomes.\n\nPartition constraint: ((a IS NOT NULL) AND (a >= 2) AND (a < 4))\n\nAnd for in a more complex case:\n\ncreate table p2 (a int, b int) partition by range (a, b);\ncreate table c21 partition of p2 for values from (0, 0) to (1, 50) partition by range (a, b);\ncreate table c22 partition of p2 for values from (1, 50) to (2, 100) partition by range (a, b);\ncreate table c211 partition of c21 for values from (0, 0) to (0, 1000);\ncreate table c212 partition of c21 for values from (0, 1000) to (0, 2000);\n\n\\d+ c212\n..\nPartition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 0) OR ((a =\n 0) AND (b >= 0))) AND ((a < 1) OR ((a = 1) AND (b < 50))) AND (a IS NOT NULL) A\nND (b IS NOT NULL) AND (a = 0) AND (b >= 1000) AND (b < 2000))\n\nis reduced to:\n\nPartition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 0) AND (b >=\n 1000) AND (b < 2000))\n\nOf course this cannot be reducible:\n\ncreate table p3 (a int, b int) partition by range (a);\ncreate table c31 partition of p3 for values from (0) to (1) partition by range(b);\ncreate table c311 partition of c31 for values from (0) to (1);\n\\d+ c311\n\nPartition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 1) AND (b IS NOT NU\nLL) AND (b >= 0) AND (b < 1))\n\nI think this 
is useful even counting possible degradation, and I\nbelieve generate_partition_qual is not called so often.\n\n\n> create table foo (a int check ((a IS NOT NULL) AND (a >= 0) AND (a < 10)\n> AND (a IS NOT NULL) AND (a >= 2) AND (a < 4)));\n> \n> \\d foo\n> Table \"public.foo\"\n> Column │ Type │ Collation │ Nullable │ Default\n> ────────┼─────────┼───────────┼──────────┼─────────\n> a │ integer │ │ │\n> Check constraints:\n> \"foo_a_check\" CHECK (a IS NOT NULL AND a >= 0 AND a < 10 AND a IS NOT\n> NULL AND a >= 2 AND a < 4)\n> \n> Now it's true that users wouldn't manually write expressions like that,\n> but the expressions might be an automatically generated, which is also the\n> case with partition constraint of a deeply nested partition.\n\nDifferently from manually written constraint, partition\nconstraint is highly reducible.\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 09 Apr 2019 18:49:42 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Horiguchi-san,\n\nOn 2019/04/09 18:49, Kyotaro HORIGUCHI wrote:\n> Hi, Amit. Thank you for the explanation.\n> \n> At Tue, 9 Apr 2019 18:09:20 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <4c1074cc-bf60-1610-c728-9a5b12f5b234@lab.ntt.co.jp>\n>>> The partition constraint is equivalent to \"(a IS NOT NULL) AND (a\n>>>> = 2) AND (a < 4)\". Is it intentional (for, for example,\n>>> performance reasons)? Or is it reasonable to deduplicate the\n>>> quals?\n>>\n>> Yeah, we don't try to simplify that due to lack of infrastructure, maybe.\n>> If said infrastructure was present, maybe CHECK constraints would already\n>> be using that, which doesn't seem to be the case.\n> \n> Doesn't predicate_implied_by do that?\n>\n> With the attached small patch, the partqual in my example becomes.\n\nAh, I was wrong in saying we lack the infrastructure.\n\n> Partition constraint: ((a IS NOT NULL) AND (a >= 2) AND (a < 4))\n> \n> And for in a more complex case:\n> \n> create table p2 (a int, b int) partition by range (a, b);\n> create table c21 partition of p2 for values from (0, 0) to (1, 50) partition by range (a, b);\n> create table c22 partition of p2 for values from (1, 50) to (2, 100) partition by range (a, b);\n> create table c211 partition of c21 for values from (0, 0) to (0, 1000);\n> create table c212 partition of c21 for values from (0, 1000) to (0, 2000);\n> \n> \\d+ c212\n> ..\n> Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND ((a > 0) OR ((a =\n> 0) AND (b >= 0))) AND ((a < 1) OR ((a = 1) AND (b < 50))) AND (a IS NOT NULL) A\n> ND (b IS NOT NULL) AND (a = 0) AND (b >= 1000) AND (b < 2000))\n> \n> is reduced to:\n> \n> Partition constraint: ((a IS NOT NULL) AND (b IS NOT NULL) AND (a = 0) AND (b >=\n> 1000) AND (b < 2000))\n> \n> Of course this cannot be reducible:\n> \n> create table p3 (a int, b int) partition by range (a);\n> create table c31 partition of p3 for values from (0) to (1) partition by range(b);\n> create table 
c311 partition of c31 for values from (0) to (1);\n> \\d+ c311\n> \n> Partition constraint: ((a IS NOT NULL) AND (a >= 0) AND (a < 1) AND (b IS NOT NU\n> LL) AND (b >= 0) AND (b < 1))\n> \n> I think this is useful even counting possible degradation, and I\n> believe generate_partition_qual is not called so often.\n\nI think more commonly used forms of sub-partitioning will use different\ncolumns at different levels as in the 2nd example. So, although we don't\ncall generate_partition_qual() as much as we used to before, even at the\ntimes we do, we'd encounter this type of sub-partitioning more often and\nthe proposed optimization step will end up being futile in more cases than\nthe cases in which it would help. Maybe, that was the reason not to try\ntoo hard in the first place, not the lack of infrastructure as I was saying.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 10:48:38 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019/04/09 17:37, Kyotaro HORIGUCHI wrote:\n> At Tue, 9 Apr 2019 16:41:47 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote\n>>> So still it is wrong that the new code is added at the beginning of the loop on clauses in\n>>> gen_partprune_steps_internal.\n>>>\n>>>> If partqual results true and the clause\n>>>> is long, the partqual is evaluated uselessly at every recursion.\n>>>>\n>>>> Maybe we should do that when we find that the current clause doesn't\n>>>> match part attributes. Specifically just after the for loop \"for (i =\n>>>> 0 ; i < part_scheme->partnattrs; i++)\".\n>>>\n>> I think we should check whether WHERE clause contradicts partition\n>> constraint even when the clause matches part attributes. So I moved\n> \n> Why? If clauses contains a clause on a partition key, the clause\n> is involved in determination of whether a partition survives or\n> not in ordinary way. Could you show how or on what configuration\n> (tables and query) it happens that such a matching clause needs\n> to be checked against partqual?\n> \n> The \"if (partconstr)\" block uselessly runs for every clause in\n> the clause tree other than BoolExpr. If we want do that, isn't\n> just doing predicate_refuted_by(partconstr, clauses, false)\n> sufficient before looping over clauses?\n\nYeah, I think we should move the \"if (partconstr)\" block to the \"if\n(is_orclause(clause))\" block as I originally proposed in:\n\nhttps://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n\nIt's kind of a hack to get over the limitation that\nget_matching_partitions() can't prune default partitions for certain OR\nclauses and I think we shouldn't let that hack grow into what seems like\nalmost duplicating relation_excluded_by_constraints() but without the\nconstraint_exclusion GUC guard.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:17:53 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi, Amit.\n\nAt Wed, 10 Apr 2019 10:48:38 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <4ef8d47d-b0c7-3093-5aaa-226162c5b59b@lab.ntt.co.jp>\n> > I think this is useful even counting possible degradation, and I\n> > believe generate_partition_qual is not called so often.\n> \n> I think more commonly used forms of sub-partitioning will use different\n> columns at different levels as in the 2nd example. So, although we don't\n> call generate_partition_qual() as much as we used to before, even at the\n> times we do, we'd encounter this type of sub-partitioning more often and\n> the proposed optimization step will end up being futile in more cases than\n> the cases in which it would help. Maybe, that was the reason not to try\n> too hard in the first place, not the lack of infrastructure as I was saying.\n\nRange partitioning on date could be a common example of\nmultilevel partitioning, but I agree with you given a premise\nthat partition qual is not scanned so frequently.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 12:06:45 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Wed, 10 Apr 2019 11:17:53 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <494124a7-d305-1bc9-ef64-d5c790e13e86@lab.ntt.co.jp>\n> On 2019/04/09 17:37, Kyotaro HORIGUCHI wrote:\n> > At Tue, 9 Apr 2019 16:41:47 +0900, \"Yuzuko Hosoya\" <hosoya.yuzuko@lab.ntt.co.jp> wrote\n> >>> So still it is wrong that the new code is added at the beginning of the loop on clauses in\n> >>> gen_partprune_steps_internal.\n> >>>\n> >>>> If partqual results true and the clause\n> >>>> is long, the partqual is evaluated uselessly at every recursion.\n> >>>>\n> >>>> Maybe we should do that when we find that the current clause doesn't\n> >>>> match part attributes. Specifically just after the for loop \"for (i =\n> >>>> 0 ; i < part_scheme->partnattrs; i++)\".\n> >>>\n> >> I think we should check whether WHERE clause contradicts partition\n> >> constraint even when the clause matches part attributes. So I moved\n> > \n> > Why? If clauses contains a clause on a partition key, the clause\n> > is involved in determination of whether a partition survives or\n> > not in ordinary way. Could you show how or on what configuration\n> > (tables and query) it happens that such a matching clause needs\n> > to be checked against partqual?\n> > \n> > The \"if (partconstr)\" block uselessly runs for every clause in\n> > the clause tree other than BoolExpr. 
If we want do that, isn't\n> > just doing predicate_refuted_by(partconstr, clauses, false)\n> > sufficient before looping over clauses?\n> \n> Yeah, I think we should move the \"if (partconstr)\" block to the \"if\n> (is_orclause(clause))\" block as I originally proposed in:\n> \n> https://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n> \n> It's kind of a hack to get over the limitation that\n> get_matching_partitions() can't prune default partitions for certain OR\n> clauses and I think we shouldn't let that hack grow into what seems like\n> almost duplicating relation_excluded_by_constraints() but without the\n> constraint_exclusion GUC guard.\n\nThat leaves an issue of contradicting clauses that is not an arm\nof OR-expr. Isn't that what Hosoya-san is trying to fix?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 12:53:17 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019/04/10 12:53, Kyotaro HORIGUCHI wrote:\n> At Wed, 10 Apr 2019 11:17:53 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> Yeah, I think we should move the \"if (partconstr)\" block to the \"if\n>> (is_orclause(clause))\" block as I originally proposed in:\n>>\n>> https://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n>>\n>> It's kind of a hack to get over the limitation that\n>> get_matching_partitions() can't prune default partitions for certain OR\n>> clauses and I think we shouldn't let that hack grow into what seems like\n>> almost duplicating relation_excluded_by_constraints() but without the\n>> constraint_exclusion GUC guard.\n> \n> That leaves an issue of contradicting clauses that is not an arm\n> of OR-expr. Isn't that what Hosoya-san is trying to fix?\n\nYes, that's right. But as I said, maybe we should try not to duplicate\nthe functionality of relation_excluded_by_constraints() in partprune.c.\n\nTo summarize, aside from the problem described by the subject of this\nthread (patch for that is v4_default_partition_pruning.patch posted by\nHosoya-san on 2019/04/04), we have identified couple of other issues:\n\n1. 
One that Thibaut reported on 2019/03/04\n\n> I kept on testing with sub-partitioning.Thanks.\n> I found a case, using 2 default partitions, where a default partition is\n> not pruned:\n>\n> --------------\n>\n> create table test2(id int, val text) partition by range (id);\n> create table test2_20_plus_def partition of test2 default;\n> create table test2_0_20 partition of test2 for values from (0) to (20)\n> partition by range (id);\n> create table test2_0_10 partition of test2_0_20 for values from (0) to (10);\n> create table test2_10_20_def partition of test2_0_20 default;\n>\n> # explain (costs off) select * from test2 where id=5 or id=25;\n> QUERY PLAN\n> -----------------------------------------\n> Append\n> -> Seq Scan on test2_0_10\n> Filter: ((id = 5) OR (id = 25))\n> -> Seq Scan on test2_10_20_def\n> Filter: ((id = 5) OR (id = 25))\n> -> Seq Scan on test2_20_plus_def\n> Filter: ((id = 5) OR (id = 25))\n> (7 rows)\n\nFor this, we can move the \"if (partconstr)\" block in the same if\n(is_orclause(clause)) block, as proposed in the v1-delta.patch that I\nproposed on 2019/03/20. Note that that patch restricts the scope of\napplying predicate_refuted_by() only to the problem that's currently\ntricky to solve by partition pruning alone -- pruning default partitions\nfor OR clauses like in the above example.\n\n2. Hosoya-san reported on 2019/03/22 that a contradictory WHERE clause\napplied to a *partition* doesn't return an empty plan:\n\n> I understood Amit's proposal. 
But I think the issue Thibaut reported\n> would occur regardless of whether clauses have OR clauses or not as\n> follows.\n>\n> I tested a query which should output \"One-Time Filter: false\".\n>\n> # explain select * from test2_0_20 where id = 25;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Append (cost=0.00..25.91 rows=6 width=36)\n> -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n> Filter: (id = 25)\n\nSo, she proposed to apply predicate_refuted_by to the whole\nbaserestrictinfo (at least in the latest patch), which is same as always\nperforming constraint exclusion to sub-partitioned partitions. I\ninitially thought it might be a good idea, but only later realized that\nnow there will be two places doing the same constraint exclusion proof --\ngen_partprune_steps_internal(), and set_rel_size() calling\nrelation_excluded_by_constraints(). The latter depends on\nconstraint_exclusion GUC whose default being 'partition' would mean we'd\nnot get an empty plan with it. Even if you turn it to 'on', a bug of\nget_relation_constraints() will prevent the partition constraint from\nbeing loaded and performing constraint exclusion with it; I reported it in\n[1].\n\nI think that we may be better off solving the latter problem as follows:\n\n1. Modify relation_excluded_by_constraints() to *always* try to exclude\n\"baserel\" partitions using their partition constraint (disregarding\nconstraint_exclusion = off/partition).\n\n2. Modify prune_append_rel_partitions(), which runs much earlier these\ndays compared to set_rel_size(), to call relation_excluded_by_constraint()\nmodified as described in step 1. If it returns true, don't perform\npartition pruning, set the appendrel parent as dummy right away. It's not\ndone today, but appendrel parent can also be set to dummy based on the\nresult of pruning, that is, when get_matching_partitions() returns no\nmatching partitions.\n\n3. 
Modify set_base_rel_sizes() to ignore already-dummy rels, so that we\ndon't perform constraint exclusion again via set_rel_size().\n\nI have to say this other problem involving partition constraints is quite\ncomplicated (aforementioned past bug messing up the situation further), so\nit would be nice if a committer can review and commit the solutions for\nthe originally reported pruning issues.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/9813f079-f16b-61c8-9ab7-4363cab28d80@lab.ntt.co.jp\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 14:55:48 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019/04/10 14:55, Amit Langote wrote:\n> 2. Hosoya-san reported on 2019/03/22 that a contradictory WHERE clause\n> applied to a *partition* doesn't return an empty plan:\n> \n>> I understood Amit's proposal. But I think the issue Thibaut reported\n>> would occur regardless of whether clauses have OR clauses or not as\n>> follows.\n>>\n>> I tested a query which should output \"One-Time Filter: false\".\n>>\n>> # explain select * from test2_0_20 where id = 25;\n>> QUERY PLAN\n>> -----------------------------------------------------------------------\n>> Append (cost=0.00..25.91 rows=6 width=36)\n>> -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n>> Filter: (id = 25)\n> \n> So, she proposed to apply predicate_refuted_by to the whole\n> baserestrictinfo (at least in the latest patch), which is same as always\n> performing constraint exclusion to sub-partitioned partitions. I\n> initially thought it might be a good idea, but only later realized that\n> now there will be two places doing the same constraint exclusion proof --\n> gen_partprune_steps_internal(), and set_rel_size() calling\n> relation_excluded_by_constraints(). The latter depends on\n> constraint_exclusion GUC whose default being 'partition' would mean we'd\n> not get an empty plan with it. Even if you turn it to 'on', a bug of\n> get_relation_constraints() will prevent the partition constraint from\n> being loaded and performing constraint exclusion with it; I reported it in\n> [1].\n> \n> I think that we may be better off solving the latter problem as follows:\n> \n> 1. Modify relation_excluded_by_constraints() to *always* try to exclude\n> \"baserel\" partitions using their partition constraint (disregarding\n> constraint_exclusion = off/partition).\n> \n> 2. Modify prune_append_rel_partitions(), which runs much earlier these\n> days compared to set_rel_size(), to call relation_excluded_by_constraint()\n> modified as described in step 1. 
If it returns true, don't perform\n> partition pruning, set the appendrel parent as dummy right away. It's not\n> done today, but appendrel parent can also be set to dummy based on the\n> result of pruning, that is, when get_matching_partitions() returns no\n> matching partitions.\n> \n> 3. Modify set_base_rel_sizes() to ignore already-dummy rels, so that we\n> don't perform constraint exclusion again via set_rel_size().\n> \n> I have to say this other problem involving partition constraints is quite\n> complicated (aforementioned past bug messing up the situation further), so\n> it would be nice if a committer can review and commit the solutions for\n> the originally reported pruning issues.\n\nJust to be clear, I wrote this for HEAD. In PG 11, set_rel_size() and\nrelation_excluded_by_constraints() run before\nprune_append_rel_partitions(), so we won't need to change the latter when\nback-patching.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:08:27 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Wed, 10 Apr 2019 14:55:48 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote in <d2c38e4e-ade4-74de-f686-af37e4a5f1c1@lab.ntt.co.jp>\n> On 2019/04/10 12:53, Kyotaro HORIGUCHI wrote:\n> > At Wed, 10 Apr 2019 11:17:53 +0900, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> >> Yeah, I think we should move the \"if (partconstr)\" block to the \"if\n> >> (is_orclause(clause))\" block as I originally proposed in:\n> >>\n> >> https://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n> >>\n> >> It's kind of a hack to get over the limitation that\n> >> get_matching_partitions() can't prune default partitions for certain OR\n> >> clauses and I think we shouldn't let that hack grow into what seems like\n> >> almost duplicating relation_excluded_by_constraints() but without the\n> >> constraint_exclusion GUC guard.\n> > \n> > That leaves an issue of contradicting clauses that is not an arm\n> > of OR-expr. Isn't that what Hosoya-san is trying to fix?\n> \n> Yes, that's right. But as I said, maybe we should try not to duplicate\n> the functionality of relation_excluded_by_constraints() in partprune.c.\n\nCurrently we classify partition constraint as a constraint. So it\nshould be handled not in partition pruning, but in constraint\nexclusion code. That sounds reasonable.\n\n> To summarize, aside from the problem described by the subject of this\n> thread (patch for that is v4_default_partition_pruning.patch posted by\n> Hosoya-san on 2019/04/04), we have identified couple of other issues:\n> \n> 1. 
One that Thibaut reported on 2019/03/04\n> \n> > I kept on testing with sub-partitioning.Thanks.\n> > I found a case, using 2 default partitions, where a default partition is\n> > not pruned:\n> >\n> > --------------\n> >\n> > create table test2(id int, val text) partition by range (id);\n> > create table test2_20_plus_def partition of test2 default;\n> > create table test2_0_20 partition of test2 for values from (0) to (20)\n> > partition by range (id);\n> > create table test2_0_10 partition of test2_0_20 for values from (0) to (10);\n> > create table test2_10_20_def partition of test2_0_20 default;\n> >\n> > # explain (costs off) select * from test2 where id=5 or id=25;\n> > QUERY PLAN\n> > -----------------------------------------\n> > Append\n> > -> Seq Scan on test2_0_10\n> > Filter: ((id = 5) OR (id = 25))\n> > -> Seq Scan on test2_10_20_def\n> > Filter: ((id = 5) OR (id = 25))\n> > -> Seq Scan on test2_20_plus_def\n> > Filter: ((id = 5) OR (id = 25))\n> > (7 rows)\n> \n> For this, we can move the \"if (partconstr)\" block in the same if\n> (is_orclause(clause)) block, as proposed in the v1-delta.patch that I\n> proposed on 2019/03/20. Note that that patch restricts the scope of\n> applying predicate_refuted_by() only to the problem that's currently\n> tricky to solve by partition pruning alone -- pruning default partitions\n> for OR clauses like in the above example.\n\nThis is a failure of partition pruning, which should be resolved\nin the partprune code.\n\n> 2. Hosoya-san reported on 2019/03/22 that a contradictory WHERE clause\n> applied to a *partition* doesn't return an empty plan:\n> \n> > I understood Amit's proposal. 
But I think the issue Thibaut reported\n> > would occur regardless of whether clauses have OR clauses or not as\n> > follows.\n> >\n> > I tested a query which should output \"One-Time Filter: false\".\n> >\n> > # explain select * from test2_0_20 where id = 25;\n> > QUERY PLAN\n> > -----------------------------------------------------------------------\n> > Append (cost=0.00..25.91 rows=6 width=36)\n> > -> Seq Scan on test2_10_20_def (cost=0.00..25.88 rows=6 width=36)\n> > Filter: (id = 25)\n> \n> So, she proposed to apply predicate_refuted_by to the whole\n> baserestrictinfo (at least in the latest patch), which is same as always\n> performing constraint exclusion to sub-partitioned partitions. I\n> initially thought it might be a good idea, but only later realized that\n> now there will be two places doing the same constraint exclusion proof --\n> gen_partprune_steps_internal(), and set_rel_size() calling\n> relation_excluded_by_constraints(). The latter depends on\n> constraint_exclusion GUC whose default being 'partition' would mean we'd\n> not get an empty plan with it. Even if you turn it to 'on', a bug of\n> get_relation_constraints() will prevent the partition constraint from\n> being loaded and performing constraint exclusion with it; I reported it in\n> [1].\n\nHmm. One perplexing thing here is the fact that partition\nconstraint is not a table constraint but a partitioning\nclassification in users' view.\n\n> I think that we may be better off solving the latter problem as follows:\n> \n> 1. Modify relation_excluded_by_constraints() to *always* try to exclude\n> \"baserel\" partitions using their partition constraint (disregarding\n> constraint_exclusion = off/partition).\n> \n> 2. Modify prune_append_rel_partitions(), which runs much earlier these\n> days compared to set_rel_size(), to call relation_excluded_by_constraint()\n> modified as described in step 1. 
If it returns true, don't perform\n> partition pruning, set the appendrel parent as dummy right away. It's not\n> done today, but appendrel parent can also be set to dummy based on the\n> result of pruning, that is, when get_matching_partitions() returns no\n> matching partitions.\n> \n> 3. Modify set_base_rel_sizes() to ignore already-dummy rels, so that we\n> don't perform constraint exclusion again via set_rel_size().\n> \n> I have to say this other problem involving partition constraints is quite\n> complicated (aforementioned past bug messing up the situation further), so\n> it would be nice if a committer can review and commit the solutions for\n> the originally reported pruning issues.\n\nTend to agree. Anyway, the other problem needs to involve the parent\nof the specified relation, which doesn't seem to be something that can\nbe done in the ordinary way.\n\n> Thanks,\n> Amit\n> \n> [1]\n> https://www.postgresql.org/message-id/9813f079-f16b-61c8-9ab7-4363cab28d80@lab.ntt.co.jp\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 10 Apr 2019 17:30:51 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi Hosoya-san,\nI tested different types of key values and multi-level partitioned tables, and found no problems.\nOnly the SQL in src/test/regress/results/partition_prune.out has a stray space, which caused the regression test to fail.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 17 Jun 2019 03:28:41 +0000",
"msg_from": "Shawn Wang <shawn.wang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Jun-17, Shawn Wang wrote:\n\n> I tested different types of key values, and multi-level partitioned tables, and found no problems.\n> Only the SQL in the file of src/test/regress/results/partition_prune.out has a space that caused the regression test to fail.\n\nIt's not clear to me what patch were you reviewing. The latest patch I\nsee in this thread, in [1], does not apply in any branches. As another\ntest, I tried to apply the patch on commit 489e431ba56b (before Tom's\npartprune changes in mid May); if you use \"patch -p1\n--ignore-whitespace\" it is accepted, but the failure case proposed at\nthe start of the thread shows the same behavior (namely, that test1_def\nis scanned when it is not needed):\n\n55432 12devel 23506=# create table test1(id int, val text) partition by range (id);\ncreate table test1_1 partition of test1 for values from (0) to (100);\ncreate table test1_2 partition of test1 for values from (150) to (200);\ncreate table test1_def partition of test1 default;\nexplain select * from test1 where id > 0 and id < 30;\nCREATE TABLE\nDuración: 5,736 ms\nCREATE TABLE\nDuración: 5,622 ms\nCREATE TABLE\nDuración: 3,585 ms\nCREATE TABLE\nDuración: 3,828 ms\n QUERY PLAN \n─────────────────────────────────────────────────────────────────\n Append (cost=0.00..58.16 rows=12 width=36)\n -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 0) AND (id < 30))\n -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36)\n Filter: ((id > 0) AND (id < 30))\n(5 filas)\n\nDuración: 2,465 ms\n\n\n[1] https://postgr.es/m/00cf01d4eea7$afa43370$0eec9a50$@lab.ntt.co.jp\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 21 Jun 2019 16:03:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\nThank you for your reply.\nYou can see that the mail start time is February 22. So I looked at the\nlatest version at that time. I found that v11.2 was the newest branch at\nthe time. So I tried to merge this patch into the code, and I found that\neverything worked. So I tested on this branch and got the results.\nYou need to add the v4_default_partition_pruning.patch\n<https://www.postgresql.org/message-id/attachment/100463/v4_default_partition_pruning.patch>\nfirst,\nand then add the\nv3_ignore_contradictory_where_clauses_at_partprune_step.patch\n<https://www.postgresql.org/message-id/attachment/100591/v3_ignore_contradictory_where_clauses_at_partprune_step.patch>\n.\nOtherwise, you will find some errors.\nI hope this helps you.\n\nRegards.\n\n-- \nShawn Wang\n\nAlvaro Herrera <alvherre@2ndquadrant.com> 于2019年6月22日周六 上午4:03写道:\n\n> On 2019-Jun-17, Shawn Wang wrote:\n>\n> > I tested different types of key values, and multi-level partitioned\n> tables, and found no problems.\n> > Only the SQL in the file of src/test/regress/results/partition_prune.out\n> has a space that caused the regression test to fail.\n>\n> It's not clear to me what patch were you reviewing. The latest patch I\n> see in this thread, in [1], does not apply in any branches. 
As another\n> test, I tried to apply the patch on commit 489e431ba56b (before Tom's\n> partprune changes in mid May); if you use \"patch -p1\n> --ignore-whitespace\" it is accepted, but the failure case proposed at\n> the start of the thread shows the same behavior (namely, that test1_def\n> is scanned when it is not needed):\n>\n> 55432 12devel 23506=# create table test1(id int, val text) partition by\n> range (id);\n> create table test1_1 partition of test1 for values from (0) to (100);\n> create table test1_2 partition of test1 for values from (150) to (200);\n> create table test1_def partition of test1 default;\n> explain select * from test1 where id > 0 and id < 30;\n> CREATE TABLE\n> Duración: 5,736 ms\n> CREATE TABLE\n> Duración: 5,622 ms\n> CREATE TABLE\n> Duración: 3,585 ms\n> CREATE TABLE\n> Duración: 3,828 ms\n> QUERY PLAN\n> ─────────────────────────────────────────────────────────────────\n> Append (cost=0.00..58.16 rows=12 width=36)\n> -> Seq Scan on test1_1 (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 0) AND (id < 30))\n> -> Seq Scan on test1_def (cost=0.00..29.05 rows=6 width=36)\n> Filter: ((id > 0) AND (id < 30))\n> (5 filas)\n>\n> Duración: 2,465 ms\n>\n>\n> [1] https://postgr.es/m/00cf01d4eea7$afa43370$0eec9a50$@lab.ntt.co.jp\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>",
"msg_date": "Mon, 24 Jun 2019 09:24:33 +0800",
"msg_from": "shawn wang <shawn.wang.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Jun-24, shawn wang wrote:\n\nHello,\n\n> Thank you for your reply.\n> You can see that the mail start time is February 22. So I looked at the\n> latest version at that time. I found that v11.2 was the newest branch at\n> the time. So I tried to merge this patch into the code, and I found that\n> everything worked.\n\nI see -- I only tried master, didn't occur to me to try it against\nREL_11_STABLE.\n\n> So I tested on this branch and got the results.\n> You need to add the v4_default_partition_pruning.patch\n> <https://www.postgresql.org/message-id/attachment/100463/v4_default_partition_pruning.patch>\n> first,\n> and then add the\n> v3_ignore_contradictory_where_clauses_at_partprune_step.patch\n> <https://www.postgresql.org/message-id/attachment/100591/v3_ignore_contradictory_where_clauses_at_partprune_step.patch>\n\nOh, so there are two patches? It's easier to keep track if they're\nalways posted together. Anyway, I may have some time to have a look\ntomorrow (Monday).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 23 Jun 2019 23:45:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello Shawn, Alvaro,\n\nThank you for testing patches and comments.\nYes, there are two patches:\n(1) v4_default_partition_pruning.patch fixes problems with default\npartition pruning\nand (2) v3_ignore_contradictory_where_clauses_at_partprune_step.patch determines\nif a given clause contradicts a sub-partitioned table's partition constraint.\nI'll post two patches together next time.\n\nAnyway, I'll rebase two patches to apply on master and fix space.\n\nRegards,\nYuzuko Hosoya\n\nOn Mon, Jun 24, 2019 at 12:45 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jun-24, shawn wang wrote:\n>\n> Hello,\n>\n> > Thank you for your reply.\n> > You can see that the mail start time is February 22. So I looked at the\n> > latest version at that time. I found that v11.2 was the newest branch at\n> > the time. So I tried to merge this patch into the code, and I found that\n> > everything worked.\n>\n> I see -- I only tried master, didn't occur to me to try it against\n> REL_11_STABLE.\n>\n> > So I tested on this branch and got the results.\n> > You need to add the v4_default_partition_pruning.patch\n> > <https://www.postgresql.org/message-id/attachment/100463/v4_default_partition_pruning.patch>\n> > first,\n> > and then add the\n> > v3_ignore_contradictory_where_clauses_at_partprune_step.patch\n> > <https://www.postgresql.org/message-id/attachment/100591/v3_ignore_contradictory_where_clauses_at_partprune_step.patch>\n>\n> Oh, so there are two patches? It's easier to keep track if they're\n> always posted together. Anyway, I may have some time to have a look\n> tomorrow (Monday).\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 25 Jun 2019 13:45:19 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello,\n\nOn Tue, Jun 25, 2019 at 1:45 PM yuzuko <yuzukohosoya@gmail.com> wrote:\n>\n> Hello Shawn, Alvaro,\n>\n> Thank you for testing patches and comments.\n> Yes, there are two patches:\n> (1) v4_default_partition_pruning.patch fixes problems with default\n> partition pruning\n> and (2) v3_ignore_contradictory_where_clauses_at_partprune_step.patch determines\n> if a given clause contradicts a sub-partitioned table's partition constraint.\n> I'll post two patches together next time.\n>\n> Anyway, I'll rebase two patches to apply on master and fix space.\n>\n\nAttach the latest patches discussed in this thread. I rebased the second\npatch (v5_ignore_contradictory_where_clauses_at_partprune_step.patch)\non the current master. Could you please check them again?\n\n--\nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center",
"msg_date": "Thu, 27 Jun 2019 11:34:13 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hosoya-san,\n\nThanks for updating the patches.\n\nI have no comment in particular about\nv4_default_partition_pruning.patch, but let me reiterate my position\nabout v5_ignore_contradictory_where_clauses_at_partprune_step.patch,\nwhich I first stated in the following email a few months ago:\n\nhttps://www.postgresql.org/message-id/d2c38e4e-ade4-74de-f686-af37e4a5f1c1%40lab.ntt.co.jp\n\nThis patch proposes to apply constraint exclusion to check whether it\nwill be wasteful to generate pruning steps from a given clause against\na given sub-partitioned table, because the clause contradicts its\npartition clause. Actually, the patch started out to generalize the\nexisting usage of constraint exclusion in partprune.c that's used to\nskip processing useless arguments of an OR clause. The problem with\nsteps generated from such contradictory clauses is that they fail to\nprune the default partition of a sub-partitioned table, because the\nvalue extracted from such a clause appears to the pruning logic to\nfall in the default partition, given that the pruning logic proper is\nunaware of the partition constraint of the partitioned table that\npruning is being applied to. 
Here is an example similar to one that\nHosoya-san shared earlier on this thread that shows the problem.\n\ncreate table p (a int) partition by range (a);\ncreate table p1 partition of p for values from (0) to (20) partition\nby range (a);\ncreate table p11 partition of p1 for values from (0) to (10);\ncreate table p1_def partition of p1 default;\n-- p11 correctly pruned, but p1_def not\nexplain select * from p1 where a = 25;\n QUERY PLAN\n──────────────────────────────────────────────────────────────\n Append (cost=0.00..41.94 rows=13 width=4)\n -> Seq Scan on p1_def (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 25)\n(3 rows)\n\nHere without the knowledge that p1's range is restricted to 0 <= a <\n20 by way of its partition constraint, the pruning logic, when handed\nthe value 25, concludes that p1_def must be scanned. With the patch,\npartprune.c concludes without performing pruning that scanning any of\np1's partitions is unnecessary.\n\nexplain select * from p1 where a = 25;\n QUERY PLAN\n──────────────────────────────────────────\n Result (cost=0.00..0.00 rows=0 width=0)\n One-Time Filter: false\n(2 rows)\n\nActually, as of 11.4, setting constraint_exclusion = on, by way of\nrelation_excluded_by_constraints(), will give you the same result even\nwithout the patch. My argument earlier was that we shouldn't have two\nplaces that will do essentially the same processing -- partprune.c\nwith the patch applied and relation_excluded_by_constraints(). 
That\nis, we should only keep the latter, with the trade-off that users have\nto live with the default partition of sub-partitioned tables not being\npruned in some corner cases like this one.\n\nNote that there's still a problem with the existing usage of\nconstraint exclusion in partprune.c, which Thibaut first reported on\nthis thread [1].\n\nexplain select * from p1 where a = 25 or a = 5;\n QUERY PLAN\n──────────────────────────────────────────────────────────────\n Append (cost=0.00..96.75 rows=50 width=4)\n -> Seq Scan on p11 (cost=0.00..48.25 rows=25 width=4)\n Filter: ((a = 25) OR (a = 5))\n -> Seq Scan on p1_def (cost=0.00..48.25 rows=25 width=4)\n Filter: ((a = 25) OR (a = 5))\n(5 rows)\n\nHere only one of the OR's arguments can be true for p1's partitions,\nbut partprune.c's current usage of constraint exclusion fails to\nnotice that. I had posted a patch [2] to solve this specific problem.\nHosoya-san's patch is a generalization of my patch.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/bd03f475-30d4-c4d0-3d7f-d2fbde755971%40dalibo.com\n\n[2] https://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n\n\n",
"msg_date": "Wed, 3 Jul 2019 17:13:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello,\n\nI noticed that the patch is still marked as \"Waiting on Author\" ever\nsince Shawn set it that way on June 17. Since Hosoya-san posted\nupdated patches on June 27, the status should've been changed to\n\"Needs Review\". Or maybe \"Ready for Committer\", because the last time\nI looked, at least the default partition pruning issue seems to be\nsufficiently taken care of by the latest patch. Whether or not we\nshould apply the other patch (more aggressive use of constraint\nexclusion by partprune.c on partitioned partitions), I'm not sure, but\nmaybe a committer can decide in an instant. :)\n\nI've marked it RfC for now.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 31 Jul 2019 17:17:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Jul-31, Amit Langote wrote:\n\n> Hello,\n> \n> I noticed that the patch is still marked as \"Waiting on Author\" ever\n> since Shawn set it that way on June 17. Since Hosoya-san posted\n> updated patches on June 27, the status should've been changed to\n> \"Needs Review\". Or maybe \"Ready for Committer\", because the last time\n> I looked, at least the default partition pruning issue seems to be\n> sufficiently taken care of by the latest patch. Whether or not we\n> should apply the other patch (more aggressive use of constraint\n> exclusion by partprune.c on partitioned partitions), I'm not sure, but\n> maybe a committer can decide in an instant. :)\n\nThanks for the status update. I intend to get this patch pushed before\nthe next set of minors.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 31 Jul 2019 08:48:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 9:49 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jul-31, Amit Langote wrote:\n> > I noticed that the patch is still marked as \"Waiting on Author\" ever\n> > since Shawn set it that way on June 17. Since Hosoya-san posted\n> > updated patches on June 27, the status should've been changed to\n> > \"Needs Review\". Or maybe \"Ready for Committer\", because the last time\n> > I looked, at least the default partition pruning issue seems to be\n> > sufficiently taken care of by the latest patch. Whether or not we\n> > should apply the other patch (more aggressive use of constraint\n> > exclusion by partprune.c on partitioned partitions), I'm not sure, but\n> > maybe a committer can decide in an instant. :)\n>\n> Thanks for the status update. I intend to get this patch pushed before\n> the next set of minors.\n\nThank you Alvaro.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:30:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Jul-03, Amit Langote wrote:\n\n> Hosoya-san,\n> \n> Thanks for updating the patches.\n> \n> I have no comment in particular about\n> v4_default_partition_pruning.patch,\n\nCool, thanks. I spent some time reviewing this patch (the first one)\nand I propose the attached cosmetic changes. Mostly they consist of a\nfew comment rewordings.\n\nThere is one Assert() that changed in a pretty significant way ...\ninnocent though the change looks. The original (not Hosoya-san's\npatch's fault) had an assertion which is being changed thus:\n\n minoff = 0;\n maxoff = boundinfo->ndatums;\n\t...\n if (partindices[minoff] < 0)\n minoff++;\n if (partindices[maxoff] < 0)\n maxoff--;\n \n result->scan_default = partition_bound_has_default(boundinfo);\n- Assert(minoff >= 0 && maxoff >= 0);\n+ Assert(partindices[minoff] >= 0 &&\n+ partindices[maxoff] >= 0);\n\nNote that the original Assert() here was verifying whether minoff and\nmaxoff are both >= 0. But that seems pretty useless since it seems\nalmost impossible to have them become that given what we do to them.\nWhat I think this code *really* wants to check is whether *the partition\nindexes* that they point to are not negative: the transformation that\nthe two \"if\" lines do means to ignore bounds that correspond to value\nranges uncovered by any partition. And after the incr/decr operators,\nwe expect that the bounds will be those of existing partitions ... so\nthey shouldn't be -1.\n\n\nOther changes are addition of braces to some one-line blocks that had\nsignificant comments, and minor code rearrangements to make things look\nmore easily understandable.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 4 Aug 2019 02:29:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "So this is the best commit messages I could come up with at this stupid\nhour. I think the wording is pretty poor but at least it seems correct.\nI'm not sure I'll be able to get this pushed tomorrow, but I'll try.\n\n Improve pruning of a default partition\n\n When querying a partitioned table containing a default partition, we\n were wrongly deciding to include it in the scan too early in the\n process, failing to exclude it in some cases. If we reinterpret the\n PruneStepResult.scan_default flag slightly, we can do a better job at\n detecting that it can be excluded. The change is that we avoid setting\n the flag for that pruning step unless the step absolutely requires the\n default partition to be scanned (in contrast with the previous\n arrangement, which was to set it unless the step was able to prune it).\n So get_matching_partitions() must explicitly check the partition that\n each returned bound value corresponds to in order to determine whether\n the default one needs to be included, rather than relying on the flag\n from the final step result.\n\n Author: Yuzuko Hosoya <hosoya.yuzuko@lab.ntt.co.jp>\n Reviewed-by: Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>\n Discussion: https://postgr.es/m/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 4 Aug 2019 03:12:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-04, Alvaro Herrera wrote:\n\n> So this is the best commit messages I could come up with at this stupid\n> hour. I think the wording is pretty poor but at least it seems correct.\n> I'm not sure I'll be able to get this pushed tomorrow, but I'll try.\n\nPushed. Since this is Sunday before minors, I'll be checking buildfarm\nand will summarily revert if anything goes wrong.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 4 Aug 2019 11:24:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "I propose the comment rewordings as attached. Mostly, this updates the\ncomment atop the function to cover the case being modified, and then the\nnew comment just refers to the new explicitly stated policy, so it\nbecomes simpler.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 4 Aug 2019 18:43:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Mon, Aug 5, 2019 at 12:24 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Aug-04, Alvaro Herrera wrote:\n>\n> > So this is the best commit messages I could come up with at this stupid\n> > hour. I think the wording is pretty poor but at least it seems correct.\n> > I'm not sure I'll be able to get this pushed tomorrow, but I'll try.\n>\n> Pushed. Since this is Sunday before minors, I'll be checking buildfarm\n> and will summarily revert if anything goes wrong.\n\nThanks for the revisions and committing. I can imagine the stress\nwhen writing that commit message, but it seems correct to me. Thanks\nto Hosoya-san for spotting the problem and working on the fix.\n\nIt had occurred to me when reviewing this patch, prompted by\nHoriguchi-san's comment [1], that our design of PruneStepResult might\nnot be so good. Especially, we don't really need to track whether the\ndefault partition needs to be scanned on a per-step basis. Maybe, the\nresult of each step should be simply a Bitmapset storing the set of\nbound offsets. We then check in the end if any of the bound offsets\nin the final set (that is, after having executed all the steps)\nrequire scanning the default partition, very much like what we did in\nthe committed patch. ISTM, scan_null wouldn't cause the same problems\nas scan_default did, so we can add the null_index to a given step's\nresult set when executing the step. IOW, we can get rid of the\nunnecessary abstraction that is the PruneStepResult struct.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/20190319.152756.202071463.horiguchi.kyotaro%40lab.ntt.co.jp\n\n\n",
"msg_date": "Mon, 5 Aug 2019 11:02:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro and Amit,\n\nThanks for reviewing and fixing the patch.\nAlso, I confirmed the commit message explained\nthe modification clearly. Thanks a lot.\n\nYuzuko Hosoya\n\nOn Mon, Aug 5, 2019 at 12:24 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Aug-04, Alvaro Herrera wrote:\n>\n> > So this is the best commit messages I could come up with at this stupid\n> > hour. I think the wording is pretty poor but at least it seems correct.\n> > I'm not sure I'll be able to get this pushed tomorrow, but I'll try.\n>\n> Pushed. Since this is Sunday before minors, I'll be checking buildfarm\n> and will summarily revert if anything goes wrong.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 5 Aug 2019 11:09:17 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\n\nThanks for reviewing.\nThe modification you made seems correct to me.\n\nHowever, I'm still concerned that the block\n-----\n if (partconstr)\n {\n partconstr = (List *)\n expression_planner((Expr *) partconstr);\n if (context->rel->relid != 1)\n ChangeVarNodes((Node *) partconstr, 1,\n context->rel->relid, 0);\n if (predicate_refuted_by(partconstr,\n list_make1(clause),\n false))\n {\n context->contradictory = true;\n return NIL;\n }\n }\n-----\nis written in the right place as Amit explained [1].\n\nAt first, we tried to fix the following problematic query\nwhich was reported by Thibaut before:\n\ncreate table p (a int) partition by range (a);\ncreate table p1 partition of p for values from (0) to (20) partition\nby range (a);\ncreate table p11 partition of p1 for values from (0) to (10);\ncreate table p1_def partition of p1 default;\nexplain select * from p1 where a = 25 or a = 5;\n QUERY PLAN\n──────────────────────────────────────\n Append (cost=0.00..96.75 rows=50 width=4)\n -> Seq Scan on p11 (cost=0.00..48.25 rows=25 width=4)\n Filter: ((a = 25) OR (a = 5))\n -> Seq Scan on p1_def (cost=0.00..48.25 rows=25 width=4)\n Filter: ((a = 25) OR (a = 5))\n(5 rows)\n\nAnd Amit proposed the patch to fix this problem[2].\nIn this patch, the above if() block was written in another place.\nAfter that, I found the following query also doesn't work correctly:\n\nexplain select * from p1 where a = 25;\n QUERY PLAN\n───────────────────────────────────────\n Append (cost=0.00..41.94 rows=13 width=4)\n -> Seq Scan on p1_def (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 25)\n(3 rows)\n\nSo I proposed moving the if() block to the current place.\nThe latest patch can solve both queries but I found the latter\nproblem can be solved by setting constraint_exclusion = on.\n\nWhich approach will be suitable?\n\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqG%2BnSD0vcJacArYgYcFVtpTJQ0fx1gBgoZkA_isKd6Z2w%40mail.gmail.com\n[2] 
https://www.postgresql.org/message-id/9bb31dfe-b0d0-53f3-3ea6-e64b811424cf%40lab.ntt.co.jp\n\nBest regards,\n\nYuzuko Hosoya\nNTT Open Source Software Center\n\nOn Mon, Aug 5, 2019 at 11:03 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> I propose the comment rewordings as attached. Mostly, this updates the\n> comment atop the function to cover the case being modified, and then the\n> new comment just refers to the new explicitly stated policy, so it\n> bcomes simpler.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:18:52 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-05, yuzuko wrote:\n\n> However, I'm still concerned that the block\n> -----\n> ...\n> -----\n> is written in the right place as Amit explained [1].\n\nYeah, I have that patch installed locally and I'm looking about it.\nThanks for the reminder. I also have an eye on Horiguchi's patch\nelsewhere in the thread.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 5 Aug 2019 10:07:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-05, yuzuko wrote:\n\n> So I proposed moving the if() block to the current place.\n> The latest patch can solve both queries but I found the latter\n> problem can be solved by setting constraint_exclusion = on.\n\nSo we have three locations for that test; one is where it currently is,\nwhich handles a small subset of the cases. The other is where Amit\nfirst proposed putting it, which handles some additional cases; and the\nthird one is where your latest patch puts it, which seems to handle all\ncases. Isn't that what Amit is saying? If that's correct (and that's\nwhat I want to imply with the comment changes I proposed), then we\nshould just accept that version of the patch.\n\nI don't think that we care about what happens when constraint_exclusion\nis on. That's not the recommended value for that setting anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 5 Aug 2019 10:39:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Mon, Aug 5, 2019 at 11:39 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Aug-05, yuzuko wrote:\n>\n> > So I proposed moving the if() block to the current place.\n> > The latest patch can solve both queries but I found the latter\n> > problem can be solved by setting constraint_exclusion = on.\n>\n> So we have three locations for that test; one is where it currently is,\n> which handles a small subset of the cases. The other is where Amit\n> first proposed putting it, which handles some additional cases; and the\n> third one is where your latest patch puts it, which seems to handle all\n> cases. Isn't that what Amit is saying? If that's correct (and that's\n> what I want to imply with the comment changes I proposed), then we\n> should just accept that version of the patch.\n\nThat's a correct summary, thanks.\n\n> I don't think that we care about what happens when constraint_exclusion\n> is on. That's not the recommended value for that setting anyway.\n\nOne issue I expressed with unconditionally applying constraint\nexclusion in partprune.c the way the third patch proposes to do it is\nthat it means we end up performing the same processing twice for a\ngiven relation in some cases. Specifically, when constraint_exclusion\nis set to on, relation_excluded_by_constraints() that occurs when\nset_rel_sizes() is applied to that relation would perform the same\nproof. But maybe we shouldn't worry about the repetition too much,\nbecause it's not likely to be very problematic in practice,\nconsidering that setting constraint_exclusion to on is not\nrecommended.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 6 Aug 2019 11:49:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-05, Alvaro Herrera wrote:\n\n> So we have three locations for that test; one is where it currently is,\n> which handles a small subset of the cases. The other is where Amit\n> first proposed putting it, which handles some additional cases; and the\n> third one is where your latest patch puts it, which seems to handle all\n> cases. Isn't that what Amit is saying? If that's correct (and that's\n> what I want to imply with the comment changes I proposed), then we\n> should just accept that version of the patch.\n\n... actually, there's a fourth possible location, which is outside the\nper-partitioning-attribute loop. Nothing in the moved block is to be\ndone per attribute, so it'd be wasted work AFAICS. I propose the\nattached.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 6 Aug 2019 09:30:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello,\n\nOn 2019-Aug-06, Amit Langote wrote:\n\n> On Mon, Aug 5, 2019 at 11:39 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I don't think that we care about what happens when constraint_exclusion\n> > is on. That's not the recommended value for that setting anyway.\n> \n> One issue I expressed with unconditionally applying constraint\n> exclusion in partprune.c the way the third patch proposes to do it is\n> that it means we end up performing the same processing twice for a\n> given relation in some cases. Specifically, when constraint_exclusion\n> is set to on, relation_excluded_by_constraints() that occurs when\n> set_rel_sizes() is applied to that relation would perform the same\n> proof. But maybe we shouldn't worry about the repetition too much,\n> because it's not likely to be very problematic in practice,\n> considering that setting constraint_exclusion to on is not\n> recommended.\n\nWell, if this is really all that duplicative, one thing we could do is\nrun this check in get_partprune_steps_internal only if\nconstraint_exclusion is a value other than on; we should achieve the\nsame effect with no repetition. Patch for that is attached. However,\nif I run the server with constraint_exclusion=on, the regression tests\nfail with the attached diff. I didn't look at what test is failing, but\nit seems to me that it's not really duplicative in all cases, only some.\nTherefore we can't do it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 6 Aug 2019 13:17:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-06, Alvaro Herrera wrote:\n\n> Well, if this is really all that duplicative, one thing we could do is\n> run this check in get_partprune_steps_internal only if\n> constraint_exclusion is a value other than on; we should achieve the\n> same effect with no repetition. Patch for that is attached. However,\n> if I run the server with constraint_exclusion=on, the regression test\n> fail with the attached diff. I didn't look at what test is failing, but\n> it seems to me that it's not really duplicative in all cases, only some.\n> Therefore we can't do it.\n\nRight ... One of the failing cases is one that was benefitting from\nconstraint_exclusion in the location where we were doing it previously.\n\nI think trying to fix this problem by selectively moving where to apply\nconstraint exclusion would be very bug-prone, and hard to detect whether\nwe're missing one spot or doing it multiple times in some other cases.\nSo I think we shouldn't try. If this is a real problem, then we should\nadd a flag to the reloptinfo and set it when we've done pruning, then\ndo nothing if we already did it. I'm not sure that this is correct, and\nI'm even less sure that it is worth the trouble.\n\nIn short, I propose to get this done as the patch I posted in\nhttps://postgr.es/m/20190806133053.GA23706@alvherre.pgsql\n\nCheers\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 6 Aug 2019 18:17:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello,\n\n> > Well, if this is really all that duplicative, one thing we could do is\n> > run this check in get_partprune_steps_internal only if\n> > constraint_exclusion is a value other than on; we should achieve the\n> > same effect with no repetition. Patch for that is attached. However,\n> > if I run the server with constraint_exclusion=on, the regression test\n> > fail with the attached diff. I didn't look at what test is failing, but\n> > it seems to me that it's not really duplicative in all cases, only some.\n> > Therefore we can't do it.\n>\n> Right ... One of the failing cases is one that was benefitting from\n> constraint_exclusion in the location where we were doing it previously.\n>\nThanks for testing.\n\n> I think trying to fix this problem by selectively moving where to apply\n> constraint exclusion would be very bug-prone, and hard to detect whether\n> we're missing one spot or doing it multiple times in some other cases.\n> So I think we shouldn't try. If this is a real problem, then we should\n> add a flag to the reloptinfo and set it when we've done pruning, then\n> do nothing if we already did it. I'm not sure that this is correct, and\n> I'm even less sure that it is worth the trouble.\n>\nIndeed, we should not do that from the viewpoint of cost-effectiveness.\nI think we can ignore the duplicate processing considering it doesn't\nhappen in all cases.\n\n> In short, I propose to get this done as the patch I posted in\n> https://postgr.es/m/20190806133053.GA23706@alvherre.pgsql\n>\nI agree with your proposal. Also, I confirmed a default partition was pruned\nas expected with your patch.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 7 Aug 2019 15:29:59 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 3:30 PM yuzuko <yuzukohosoya@gmail.com> wrote:\n> > In short, I propose to get this done as the patch I posted in\n> > https://postgr.es/m/20190806133053.GA23706@alvherre.pgsql\n> >\n> I agree with your proposal. Also, I confirmed a default partition was pruned\n> as expected with your patch.\n\n+1.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 15:38:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Tue, 6 Aug 2019 at 23:18, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Aug-06, Alvaro Herrera wrote:\n>\n> > Well, if this is really all that duplicative, one thing we could do is\n> > run this check in get_partprune_steps_internal only if\n> > constraint_exclusion is a value other than on; we should achieve the\n> > same effect with no repetition. Patch for that is attached. However,\n> > if I run the server with constraint_exclusion=on, the regression test\n> > fail with the attached diff. I didn't look at what test is failing, but\n> > it seems to me that it's not really duplicative in all cases, only some.\n> > Therefore we can't do it.\n>\n> Right ... One of the failing cases is one that was benefitting from\n> constraint_exclusion in the location where we were doing it previously.\n>\n> I think trying to fix this problem by selectively moving where to apply\n> constraint exclusion would be very bug-prone, and hard to detect whether\n> we're missing one spot or doing it multiple times in some other cases.\n> So I think we shouldn't try. If this is a real problem, then we should\n> add a flag to the reloptinfo and set it when we've done pruning, then\n> do nothing if we already did it. I'm not sure that this is correct, and\n> I'm even less sure that it is worth the trouble.\n>\n> In short, I propose to get this done as the patch I posted in\n> https://postgr.es/m/20190806133053.GA23706@alvherre.pgsql\n\n\nI saw your recent commit and it scares me in various places, noted below.\n\n\"Commit: Apply constraint exclusion more generally in partitioning\"\n\n\"This applies particularly to the default partition...\"\n\nMy understanding of the thread was the complaint was about removing the\ndefault partition. 
I would prefer to see code executed just for that case,\nso that people who do not define a default partition are unaffected.\n\n\"So in certain cases\nwe're scanning partitions that we don't need to.\"\n\nAvoiding that has been the subject of months of work.\n\n\"This has the unwanted side-effect of testing some (most? all?)\nconstraints more than once if constraint_exclusion=on. That seems\nunavoidable as far as I can tell without some additional work, but\nthat's not the recommended setting for that parameter anyway.\nHowever, because this imposes additional processing cost for all\nqueries using partitioned tables...\"\n\nOne of the major features of PG12 is the additional performance and\nscalability of partitioning, but we don't seem to have benchmarked the\neffect of this patch on those issues.\n\nPlease could we do perf checks, with tests up to 1000s of partitions? And\nif there is a regression, I would vote to revoke this patch or address the\nrequest in a less general way.\n\nHopefully I have misunderstood and/or there is no regression.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 7 Aug 2019 19:28:17 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-07, Simon Riggs wrote:\n\n> I saw your recent commit and it scares me in various places, noted below.\n> \n> \"Commit: Apply constraint exclusion more generally in partitioning\"\n> \n> \"This applies particularly to the default partition...\"\n> \n> My understanding of the thread was the complaint was about removing the\n> default partition. I would prefer to see code executed just for that case,\n> so that people who do not define a default partition are unaffected.\n\nWell, as the commit message noted, it applies to other cases also, not\njust the default partition. The default partition just happens to be\nthe most visible case.\n\n> \"So in certain cases\n> we're scanning partitions that we don't need to.\"\n> \n> Avoiding that has been the subject of months of work.\n\nWell, yes, avoiding that is the point of this commit also: we were\nscanning some partitions for some queries, after this patch we're\nsupposed not to.\n\n> \"This has the unwanted side-effect of testing some (most? all?)\n> constraints more than once if constraint_exclusion=on. That seems\n> unavoidable as far as I can tell without some additional work, but\n> that's not the recommended setting for that parameter anyway.\n> However, because this imposes additional processing cost for all\n> queries using partitioned tables...\"\n> \n> One of the major features of PG12 is the additional performance and\n> scalability of partitoning, but we don't seem to have benchmarked the\n> effect of this patch on those issues.\n> \n> Please could we do perf checks, with tests up to 1000s of partitions? And\n> if there is a regression, I would vote to revoke this patch or address the\n> request in a less general way.\n\nI'll have a look.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:27:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Thu, Aug 8, 2019 at 5:27 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Aug-07, Simon Riggs wrote:\n> > I saw your recent commit and it scares me in various places, noted below.\n> >\n> > \"Commit: Apply constraint exclusion more generally in partitioning\"\n> >\n> > \"This applies particularly to the default partition...\"\n> >\n> > My understanding of the thread was the complaint was about removing the\n> > default partition. I would prefer to see code executed just for that case,\n> > so that people who do not define a default partition are unaffected.\n>\n> Well, as the commit message noted, it applies to other cases also, not\n> just the default partition. The default partition just happens to be\n> the most visible case.\n\nJust to be clear, I don't think there was any patch posted on this\nthread that was to address *non-default* partitions failing to be\npruned by \"partition pruning\". If that had been the problem, we'd be\nfixing the bugs of the partition pruning code, not apply constraint\nexclusion more generally to paper over such bugs. I may be misreading\nwhat you wrote here though.\n\nThe way I interpret the \"generally\" in the \"apply constraint exclusion\nmore generally\" is thus: we can't prune the default partition without\nthe constraint exclusion clutches for evidently a broader sets of\nclauses than the previous design assumed. The previous design assumed\nthat only OR clauses whose arguments contradicted the parent's\npartition constraint are problematic, but evidently any clause set\nthat contradicts the partition constraint is problematic. 
Again, the\nproblem is that it's impossible to prune the \"default\" partition with\nsuch clauses, not the *non-default* ones -- values extracted from\ncontradictory clauses would not match any of the bounds so all\nnon-default partitions would be pruned that way.\n\nBy the way, looking closer at the patch committed today, I realized I\nhad misunderstood what you proposed as the *4th* possible place to\nmove the constraint exclusion check to. I had misread the proposal\nand thought you meant to move it outside the outermost loop of\ngen_partprune_steps_internal(), but that's not where the check is now.\nI think it's better to call predicate_refuted_by() only once by\npassing the whole list of clauses instead of for each clause\nseparately. The result would be the same but the former would be more\nefficient, because it avoids repeatedly paying the cost of setting up\npredtest.c data structures when predicate_refuted_by() is called.\nSorry that I'm only saying this now.\n\nAlso it wouldn't be incorrect to do the check only if the parent has a\ndefault partition. That will also address Simon's concern that this\nmight slow down the cases where this effort is useless.\n\nI've attached a patch that does that. When working on it, I realized\nthat the way RelOptInfo.partition_qual is processed is a bit\nduplicative, so I created a separate patch to make that a bit more\nconsistent.\n\nThanks,\nAmit",
"msg_date": "Thu, 8 Aug 2019 14:50:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Hosoya-san, \n\n\n\n\nI am sorry for so late to reply to you.\n\n\n\nI merged the patches into master(commit: 44460d7017cde005d7a2e246db0b32375bfec15d).\n\nI tested the case I used in the previous patches and didn't find any issues. \n\n\n\nNow I find that you are rethinking some of the details. \n\nI will continue to pay attention to this and will follow up and feedback in time.\n\n\n\nRegards, \n\n\n \n\n-- \n\nShawn Wang\n\n\n\n\n\n\n---- On Thu, 27 Jun 2019 10:34:13 +0800 yuzuko <yuzukohosoya@gmail.com> wrote ----\n\n\n\nHello, \n \nOn Tue, Jun 25, 2019 at 1:45 PM yuzuko <mailto:yuzukohosoya@gmail.com> wrote: \n> \n> Hello Shawn, Alvaro, \n> \n> Thank you for testing patches and comments. \n> Yes, there are two patches: \n> (1) v4_default_partition_pruning.patch fixes problems with default \n> partition pruning \n> and (2) v3_ignore_contradictory_where_clauses_at_partprune_step.patch determines \n> if a given clause contradicts a sub-partitioned table's partition constraint. \n> I'll post two patches together next time. \n> \n> Anyway, I'll rebase two patches to apply on master and fix space. \n> \n \nAttach the latest patches discussed in this thread. I rebased the second \npatch (v5_ignore_contradictory_where_clauses_at_partprune_step.patch) \non the current master. Could you please check them again? \n \n-- \nBest regards, \nYuzuko Hosoya \nNTT Open Source Software Center\nHi Hosoya-san, I am sorry for so late to reply to you.I merged the patches into master(commit: 44460d7017cde005d7a2e246db0b32375bfec15d).I tested the case I used in the previous patches and didn't find any issues. Now I find that you are rethinking some of the details. 
I will continue to pay attention to this and will follow up and feedback in time.Regards, -- Shawn Wang---- On Thu, 27 Jun 2019 10:34:13 +0800 yuzuko <yuzukohosoya@gmail.com> wrote ----Hello, On Tue, Jun 25, 2019 at 1:45 PM yuzuko <yuzukohosoya@gmail.com> wrote: > > Hello Shawn, Alvaro, > > Thank you for testing patches and comments. > Yes, there are two patches: > (1) v4_default_partition_pruning.patch fixes problems with default > partition pruning > and (2) v3_ignore_contradictory_where_clauses_at_partprune_step.patch determines > if a given clause contradicts a sub-partitioned table's partition constraint. > I'll post two patches together next time. > > Anyway, I'll rebase two patches to apply on master and fix space. > Attach the latest patches discussed in this thread. I rebased the second patch (v5_ignore_contradictory_where_clauses_at_partprune_step.patch) on the current master. Could you please check them again? -- Best regards, Yuzuko Hosoya NTT Open Source Software Center",
"msg_date": "Thu, 08 Aug 2019 15:34:05 +0800",
"msg_from": "Shawn Wang <shawn.wang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Wed, 7 Aug 2019 at 21:27, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Aug-07, Simon Riggs wrote:\n>\n> > I saw your recent commit and it scares me in various places, noted below.\n> >\n> > \"Commit: Apply constraint exclusion more generally in partitioning\"\n> >\n> > \"This applies particularly to the default partition...\"\n> >\n> > My understanding of the thread was the complaint was about removing the\n> > default partition. I would prefer to see code executed just for that\n> case,\n> > so that people who do not define a default partition are unaffected.\n>\n> Well, as the commit message noted, it applies to other cases also, not\n> just the default partition. The default partition just happens to be\n> the most visible case.\n>\n> > \"So in certain cases\n> > we're scanning partitions that we don't need to.\"\n> >\n> > Avoiding that has been the subject of months of work.\n>\n> Well, yes, avoiding that is the point of this commit also: we were\n> scanning some partitions for some queries, after this patch we're\n> supposed not to.\n>\n\nUnderstood\n\nMy concern was about the additional execution time caused when there would\nbe no benefit, especially if the algoithmic cost is O(N) or similar (i.e.\nworse than O(k))\n\nIf people have a default partition, I have no problem in there being\nadditional execution time in that case only since there is only ever one\ndefault partition.\n\n\n> > Please could we do perf checks, with tests up to 1000s of partitions? 
And\n> > if there is a regression, I would vote to revoke this patch or address\n> the\n> > request in a less general way.\n>\n> I'll have a look.\n>\n\nThanks\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Wed, 7 Aug 2019 at 21:27, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Aug-07, Simon Riggs wrote:\n\n> I saw your recent commit and it scares me in various places, noted below.\n> \n> \"Commit: Apply constraint exclusion more generally in partitioning\"\n> \n> \"This applies particularly to the default partition...\"\n> \n> My understanding of the thread was the complaint was about removing the\n> default partition. I would prefer to see code executed just for that case,\n> so that people who do not define a default partition are unaffected.\n\nWell, as the commit message noted, it applies to other cases also, not\njust the default partition. The default partition just happens to be\nthe most visible case.\n\n> \"So in certain cases\n> we're scanning partitions that we don't need to.\"\n> \n> Avoiding that has been the subject of months of work.\n\nWell, yes, avoiding that is the point of this commit also: we were\nscanning some partitions for some queries, after this patch we're\nsupposed not to.UnderstoodMy concern was about the additional execution time caused when there would be no benefit, especially if the algoithmic cost is O(N) or similar (i.e. worse than O(k))If people have a default partition, I have no problem in there being additional execution time in that case only since there is only ever one default partition. \n> Please could we do perf checks, with tests up to 1000s of partitions? And\n> if there is a regression, I would vote to revoke this patch or address the\n> request in a less general way.\n\nI'll have a look.Thanks -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 8 Aug 2019 08:54:12 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Simon,\n\nOn Thu, Aug 8, 2019 at 4:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> On Wed, 7 Aug 2019 at 21:27, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Well, yes, avoiding that is the point of this commit also: we were\n>> scanning some partitions for some queries, after this patch we're\n>> supposed not to.\n>\n>\n> Understood\n>\n> My concern was about the additional execution time caused when there would be no benefit, especially if the algoithmic cost is O(N) or similar (i.e. worse than O(k))\n\nNote that we apply constraint exclusion only to the (sub-partitioned)\nparent, not to all partitions, so the cost is not O(N) in the number\nof partitions. The idea is that if the parent is excluded, all of its\npartitions are. We normally wouldn't need to use constrain exclusion,\nbecause partition pruning can handle most cases. What partition\npruning can't handle sufficiently well though is the case where a\nclause set that contradicts the partition constraint is specified --\nwhile all non-default partitions are correctly pruned, the default\npartition is not. Using constraint exclusion is a workaround for that\ndeficiency of the partition pruning logic.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 8 Aug 2019 17:08:51 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hello,\n\nOn Thu, Aug 8, 2019 at 5:09 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Simon,\n>\n> On Thu, Aug 8, 2019 at 4:54 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > On Wed, 7 Aug 2019 at 21:27, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >> Well, yes, avoiding that is the point of this commit also: we were\n> >> scanning some partitions for some queries, after this patch we're\n> >> supposed not to.\n> >\n> >\n> > Understood\n> >\n> > My concern was about the additional execution time caused when there would be no benefit, especially if the algoithmic cost is O(N) or similar (i.e. worse than O(k))\n>\n> Note that we apply constraint exclusion only to the (sub-partitioned)\n> parent, not to all partitions, so the cost is not O(N) in the number\n> of partitions. The idea is that if the parent is excluded, all of its\n> partitions are. We normally wouldn't need to use constrain exclusion,\n> because partition pruning can handle most cases. What partition\n> pruning can't handle sufficiently well though is the case where a\n> clause set that contradicts the partition constraint is specified --\n> while all non-default partitions are correctly pruned, the default\n> partition is not. Using constraint exclusion is a workaround for that\n> deficiency of the partition pruning logic.\n>\nBesides that, the additional code will not be executed if people don't\ndefine any default partition in the latest patch Amit proposed. So I think\nthis patch has no effect such as Simon's concern.\n\nI looked at Amit's patches and found it worked correctly.\n\n-- \nBest regards,\nYuzuko Hosoya\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 8 Aug 2019 18:58:59 +0900",
"msg_from": "yuzuko <yuzukohosoya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Sorry for being late. I found it is committed before I caught up\nthis thread again..\n\nAt Thu, 8 Aug 2019 14:50:54 +0900, Amit Langote <amitlangote09@gmail.com> wrote in <CA+HiwqEmJizJ4DmuPWCL-WrHGO-hFVd08TS7HnCkSF4CyZr8tg@mail.gmail.com>\n> Hi Alvaro,\n> \n> On Thu, Aug 8, 2019 at 5:27 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > On 2019-Aug-07, Simon Riggs wrote:\n> > > I saw your recent commit and it scares me in various places, noted below.\n> > >\n> > > \"Commit: Apply constraint exclusion more generally in partitioning\"\n> > >\n> > > \"This applies particularly to the default partition...\"\n> > >\n> > > My understanding of the thread was the complaint was about removing the\n> > > default partition. I would prefer to see code executed just for that case,\n> > > so that people who do not define a default partition are unaffected.\n> >\n> > Well, as the commit message noted, it applies to other cases also, not\n> > just the default partition. The default partition just happens to be\n> > the most visible case.\n> \n> Just to be clear, I don't think there was any patch posted on this\n> thread that was to address *non-default* partitions failing to be\n> pruned by \"partition pruning\". If that had been the problem, we'd be\n> fixing the bugs of the partition pruning code, not apply constraint\n> exclusion more generally to paper over such bugs. I may be misreading\n> what you wrote here though.\n> \n> The way I interpret the \"generally\" in the \"apply constraint exclusion\n> more generally\" is thus: we can't prune the default partition without\n> the constraint exclusion clutches for evidently a broader sets of\n> clauses than the previous design assumed. The previous design assumed\n> that only OR clauses whose arguments contradicted the parent's\n> partition constraint are problematic, but evidently any clause set\n> that contradicts the partition constraint is problematic. 
Again, the\n> problem is that it's impossible to prune the \"default\" partition with\n> such clauses, not the *non-default* ones -- values extracted from\n> contradictory clauses would not match any of the bounds so all\n> non-default partitions would be pruned that way.\n> \n> By the way, looking closer at the patch committed today, I realized I\n> had misunderstood what you proposed as the *4th* possible place to\n> move the constraint exclusion check to. I had misread the proposal\n> and thought you meant to move it outside the outermost loop of\n> gen_partprune_steps_internal(), but that's not where the check is now.\n> I think it's better to call predicate_refuted_by() only once by\n> passing the whole list of clauses instead of for each clause\n> separately. The result would be the same but the former would be more\n> efficient, because it avoids repeatedly paying the cost of setting up\n> predtest.c data structures when predicate_refuted_by() is called.\n> Sorry that I'm only saying this now.\n\n+1 as I mentioned in [1].\n\n> Also it wouldn't be incorrect to do the check only if the parent has a\n> default partition. That will also address the Simon's concern this\n> might slow down the cases where this effort is useless.\n> \n> I've attached a patch that does that. When working on it, I realized\n> that the way RelOptInfo.partition_qual is processed is a bit\n> duplicative, so I created a separate patch to make that a bit more\n> consistent.\n\n0001 seems reasonable. By the way, the patch doesn't touch\nget_relation_constraints(), but I suppose it can use the modified\npartition constraint qual already stored in rel->partition_qual\nin set_relation_partition_info. 
And we could move constifying to\nset_relation_partition_info()?\n\nAlso, I'd like to see a comment noting that partition_qual\nalready has its varnos fixed.\n\nAnd 0002, yeah, just +1 from me.\n\n\n[1] https://www.postgresql.org/message-id/20190409.173725.31175835.horiguchi.kyotaro@lab.ntt.co.jp\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Aug 2019 12:09:20 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Horiguchi-san,\n\nThanks for the review.\n\nOn Fri, Aug 9, 2019 at 12:09 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 8 Aug 2019 14:50:54 +0900, Amit Langote wrote:\n> > When working on it, I realized\n> > that the way RelOptInfo.partition_qual is processed is a bit\n> > duplicative, so I created a separate patch to make that a bit more\n> > consistent.\n>\n> 0001 seems reasonable. By the way, the patch doesn't touch\n> get_relation_constraints(), but I suppose it can use the modified\n> partition constraint qual already stored in rel->partition_qual\n> in set_relation_partition_info. And we could move constifying to\n> set_rlation_partition_info?\n\nAh, good advice. This make partition constraint usage within the\nplanner quite a bit more consistent.\n\n> Also, I'd like to see comments that the partition_quals is\n> already varnode-fixed.\n\nAdded a one-line comment.\n\n> And 0002, yeah, just +1 from me.\n\nThanks.\n\nAttached updated patches; only 0001 changed per above comments.\n\nRegards,\nAmit",
"msg_date": "Fri, 9 Aug 2019 13:17:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 1:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Aug 9, 2019 at 12:09 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 8 Aug 2019 14:50:54 +0900, Amit Langote wrote:\n> > > When working on it, I realized\n> > > that the way RelOptInfo.partition_qual is processed is a bit\n> > > duplicative, so I created a separate patch to make that a bit more\n> > > consistent.\n> >\n> > 0001 seems reasonable. By the way, the patch doesn't touch\n> > get_relation_constraints(), but I suppose it can use the modified\n> > partition constraint qual already stored in rel->partition_qual\n> > in set_relation_partition_info. And we could move constifying to\n> > set_rlation_partition_info?\n>\n> Ah, good advice. This make partition constraint usage within the\n> planner quite a bit more consistent.\n\nHmm, oops. I think that judgement was a bit too rushed on my part. I\nunintentionally ended up making the partition constraint to *always*\nbe fetched, whereas we don't need it in most cases. I've reverted\nthat change. RelOptInfo.partition_qual is poorly named in retrospect.\n:( It's not set for all partitions, only those that are partitioned\nthemselves.\n\nAttached updated patches.\n\nThanks,\nAmit",
"msg_date": "Fri, 9 Aug 2019 14:02:36 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "At Fri, 9 Aug 2019 14:02:36 +0900, Amit Langote <amitlangote09@gmail.com> wrote in <CA+HiwqGm18B8UQ5Sip_nsNYmDiHtoaVORvCPumo_bbXTXHPRBw@mail.gmail.com>\n> On Fri, Aug 9, 2019 at 1:17 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Aug 9, 2019 at 12:09 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > At Thu, 8 Aug 2019 14:50:54 +0900, Amit Langote wrote:\n> > > > When working on it, I realized\n> > > > that the way RelOptInfo.partition_qual is processed is a bit\n> > > > duplicative, so I created a separate patch to make that a bit more\n> > > > consistent.\n> > >\n> > > 0001 seems reasonable. By the way, the patch doesn't touch\n> > > get_relation_constraints(), but I suppose it can use the modified\n> > > partition constraint qual already stored in rel->partition_qual\n> > > in set_relation_partition_info. And we could move constifying to\n> > > set_relation_partition_info?\n> >\n> > Ah, good advice. This makes partition constraint usage within the\n> > planner quite a bit more consistent.\n> \n> Hmm, oops. I think that judgement was a bit too rushed on my part. I\n> unintentionally ended up making the partition constraint to *always*\n> be fetched, whereas we don't need it in most cases. I've reverted\n\n(v2 has been withdrawn before I see it:p)\n\nYeah, I agreed. It is needed only by (sub)partition parents.\n\n> that change. RelOptInfo.partition_qual is poorly named in retrospect.\n> :( It's not set for all partitions, only those that are partitioned\n> themselves.\n> \n> Attached updated patches.\n\n+++ b/src/backend/optimizer/util/plancat.c\n@@ -1267,10 +1267,14 @@ get_relation_constraints(PlannerInfo *root,\n */\n if (include_partition && relation->rd_rel->relispartition)\n {\n...\n+ else\n {\n+ /* Nope, fetch from the relcache. */\n\nIt seems to me that include_partition is true both and only for\nmodern and old-fashioned partition parents.\nset_relation_partition_info() is currently called only for modern\npartition parents. If we need that at the place above,\nset_relation_partition_info can be called also for old-fashioned\npartition parent, and get_relation_constraints may forget the\nelse case in a broad way.\n\n\n+ /* Nope, fetch from the relcache. */\n+ List *pcqual = RelationGetPartitionQual(relation);\n\nIf the comment above is right, this would be duplicative. What we\nshould do instead is only eval_const_expression. And we could\nmove it to set_relation_partition_info completely. Constify must\nbe useful in both cases.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 09 Aug 2019 14:44:38 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Thanks for the comments.\n\nOn Fri, Aug 9, 2019 at 2:44 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> +++ b/src/backend/optimizer/util/plancat.c\n> @@ -1267,10 +1267,14 @@ get_relation_constraints(PlannerInfo *root,\n> */\n> if (include_partition && relation->rd_rel->relispartition)\n> {\n> ...\n> + else\n> {\n> + /* Nope, fetch from the relcache. */\n>\n> It seems to me that include_partition is true both and only for\n> modern and old-fashioned partition parents.\n> set_relation_partition_info() is currently called only for modern\n> partition parents. If we need that at the place above,\n> set_relation_partition_info can be called also for old-fashioned\n> partition parent, and get_relation_constraints may forget the\n> else case in a broad way.\n\n\"include_partition\" doesn't have anything to do with what kind of\nparent the partition has. It is true when the input relation that is a\npartition is directly mentioned in the query (RELOPT_BASEREL) and\nconstraint_exclusion is on (inheritance_planner considerations make\nthe actual code a bit hard to follow but we'll hopefully simplify that\nin the near future). That is also the only case where we need to\nconsider the partition constraint when doing constraint exclusion.\nRegarding how this relates to partition_qual:\n\n* get_relation_constraints() can use it if it's set, which would be\nthe case if the partition in question is partitioned itself\n\n* It wouldn't be set if the partition in question is a leaf partition,\nso it will have to get it directly from the relcache\n\n> + /* Nope, fetch from the relcache. */\n> + List *pcqual = RelationGetPartitionQual(relation);\n>\n> If the comment above is right, this would be duplicative. What we\n> should do instead is only eval_const_expression. And we could\n> move it to set_relation_partition_info completely. Constify must\n> be useful in both cases.\n\nAs described above, this block of code is not really duplicative in\nthe sense that when it runs, that would be the first time in a query\nto fetch the partition constraint of the relation in question.\n\nAlso, note that expression_planner() calls eval_const_expressions(),\nso constification happens in both cases. I guess different places\nhave grown different styles of processing constraint expressions as\nthe APIs have evolved over time.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 9 Aug 2019 16:29:48 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-09, Amit Langote wrote:\n\n> Hmm, oops. I think that judgement was a bit too rushed on my part. I\n> unintentionally ended up making the partition constraint to *always*\n> be fetched, whereas we don't need it in most cases. I've reverted\n> that change.\n\nYeah, I was quite confused about this point yesterday while I was trying\nto make sense of your patches.\n\n> RelOptInfo.partition_qual is poorly named in retrospect.\n> :( It's not set for all partitions, only those that are partitioned\n> themselves.\n\nOh. Hmm, I think this realization further clarifies things.\n\nSince we're only changing this in the master branch anyway, maybe we can\nfind a better name for it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 9 Aug 2019 10:41:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "v3-0001 still seems to leave things a bit duplicative. I think we can\nmake it better if we move the logic to set RelOptInfo->partition_qual to\na separate routine (set_baserel_partition_constraint mirroring the\nexisting set_baserel_partition_key_exprs), and then call that from both\nplaces that need access to partition_qual.\n\nSo I propose that the attached v4 patch should be the final form of this\n(also rebased across today's list_concat API change). I verified that\nconstraint exclusion is not being called by partprune unless a default\npartition exists (thanks errbacktrace()); I think that should appease\nSimon's performance concern for the most common case of default\npartition not existing.\n\nI think I was not really understanding the comments being added by\nAmit's v3, so I reworded them. I hope I understood the intent of the\ncode correctly.\n\nI'm not comfortable with RelOptInfo->partition_qual. But I'd rather\nleave that for another time.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 12 Aug 2019 13:45:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Mon, 12 Aug 2019 at 18:45, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> I think that should appease\n> Simon's performance concern for the most common case of default\n> partition not existing.\n>\n\nMuch appreciated, thank you.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Mon, 12 Aug 2019 18:49:31 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Tue, Aug 13, 2019 at 2:45 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> v3-0001 still seems to leave things a bit duplicative. I think we can\n> make it better if we move the logic to set RelOptInfo->partition_qual to\n> a separate routine (set_baserel_partition_constraint mirroring the\n> existing set_baserel_partition_key_exprs), and then call that from both\n> places that need access to partition_qual.\n>\n> So I propose that the attached v4 patch should be the final form of this\n> (also rebased across today's list_concat API change). I verified that\n> constraint exclusion is not being called by partprune unless a default\n> partition exists (thanks errbacktrace()); I think that should appease\n> Simon's performance concern for the most common case of default\n> partition not existing.\n>\n> I think I was not really understanding the comments being added by\n> Amit's v3, so I reworded them. I hope I understood the intent of the\n> code correctly.\n\nThanks a lot for revising. Looks neat, except:\n\n+ * This is a measure of last resort only to be used because the default\n+ * partition cannot be pruned using the steps; regular pruning, which is\n+ * cheaper, is sufficient when no default partition exists.\n\nThis text appears to imply that the default can *never* be pruned with\nsteps. Maybe, the first sentence should read something like: \"...the\ndefault cannot be pruned using the steps generated from clauses that\ncontradict the parent's partition constraint\".\n\n> I'm not comfortable with RelOptInfo->partition_qual. But I'd rather\n> leave that for another time.\n\nSure.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Tue, 13 Aug 2019 14:01:25 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On 2019-Aug-13, Amit Langote wrote:\n\n> Thanks a lot for revising. Looks neat, except:\n> \n> + * This is a measure of last resort only to be used because the default\n> + * partition cannot be pruned using the steps; regular pruning, which is\n> + * cheaper, is sufficient when no default partition exists.\n> \n> This text appears to imply that the default can *never* be pruned with\n> steps. Maybe, the first sentence should read something like: \"...the\n> default cannot be pruned using the steps generated from clauses that\n> contradict the parent's partition constraint\".\n\nThanks! I have pushed it with this change.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 13 Aug 2019 11:25:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
},
{
"msg_contents": "On Wed, Aug 14, 2019 at 12:25 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Aug-13, Amit Langote wrote:\n>\n> > Thanks a lot for revising. Looks neat, except:\n> >\n> > + * This is a measure of last resort only to be used because the default\n> > + * partition cannot be pruned using the steps; regular pruning, which is\n> > + * cheaper, is sufficient when no default partition exists.\n> >\n> > This text appears to imply that the default can *never* be pruned with\n> > steps. Maybe, the first sentence should read something like: \"...the\n> > default cannot be pruned using the steps generated from clauses that\n> > contradict the parent's partition constraint\".\n>\n> Thanks! I have pushed it with this change.\n\nThank you Alvaro. This takes care of all the issues around default\npartition pruning reported on this thread. Thanks everyone.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 14 Aug 2019 10:12:22 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with default partition pruning"
}
]
[
{
"msg_contents": "The following sequence causes autovacuum to crash on master:\n\nCREATE TEMP TABLE foo(a int);\nDROP TABLE foo;\nDROP SCHEMA pg_temp_3;\nCREATE TEMP TABLE bar(a int);\nDROP TABLE bar;\n\nit coredumps with:\n\n#1 0x00007f96b5daa42a in __GI_abort () at abort.c:89\n#2 0x000055d997915ba3 in ExceptionalCondition\n(conditionName=conditionName@entry=0x55d997b2a05f \"!(strvalue != ((void\n*)0))\",\n errorType=errorType@entry=0x55d997965ebd \"FailedAssertion\",\nfileName=fileName@entry=0x55d997b2a054 \"snprintf.c\",\n lineNumber=lineNumber@entry=442) at assert.c:54\n#3 0x000055d99795e4ec in dopr (target=target@entry=0x7ffc04f479e0,\nformat=0x55d997abfa2d \".%s\\\"\",\n format@entry=0x55d997abfa00 \"autovacuum: dropping orphan temp table\n\\\"%s.%s.%s\\\"\", args=<optimized out>) at snprintf.c:442\n#4 0x000055d99795e6ce in pg_vsnprintf (str=<optimized out>,\ncount=<optimized out>, count@entry=1024,\n fmt=fmt@entry=0x55d997abfa00 \"autovacuum: dropping orphan temp table\n\\\"%s.%s.%s\\\"\", args=args@entry=0x7ffc04f47a88)\n at snprintf.c:195\n#5 0x000055d997963c21 in pvsnprintf (buf=<optimized out>, len=len@entry\n=1024,\n fmt=fmt@entry=0x55d997abfa00 \"autovacuum: dropping orphan temp table\n\\\"%s.%s.%s\\\"\", args=args@entry=0x7ffc04f47a88)\n at psprintf.c:110\n#6 0x000055d9976deef8 in appendStringInfoVA (str=str@entry=0x7ffc04f47a70,\n fmt=fmt@entry=0x55d997abfa00 \"autovacuum: dropping orphan temp table\n\\\"%s.%s.%s\\\"\", args=args@entry=0x7ffc04f47a88)\n at stringinfo.c:136\n#7 0x000055d997919b60 in errmsg (fmt=fmt@entry=0x55d997abfa00 \"autovacuum:\ndropping orphan temp table \\\"%s.%s.%s\\\"\") at elog.c:794\n#8 0x000055d9974dd551 in do_autovacuum () at autovacuum.c:2255\n\n\nWe are certainly not supposed to go DROP SCHEMA on the temp namespaces, but\nwe are also not supposed to coredump on it (if we were, we should prevent\npeople from DROP SCHEMA it and we don't).\n\nIn fact, we probably *should* prevent the dropping of the temp schema? 
But\nthat's independent from fixing this one.\n\nThe reason for the crash is 6d842be6c11, where Tom added an assert for\npassing null into %s. But I don't think we can blame that patch for the\nproblem -- it's passing the NULL there in the first place that's the\nproblem.\n\nAFAICT the actual drop works fine, it's just the logging that crashes. So\nmaybe we should just add a check and make it log something like \"<dropped>\"\nif pg_namespace_name() returns null?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 22 Feb 2019 09:43:13 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> The reason for the crash is 6d842be6c11, where Tom added an assert for\n> passing null into %s. But I don't think we can blame that patch for the\n> problem -- it's passing the NULL there in the first place that's the\n> problem.\n\nIndeed; this crash existed on some platforms all along (which means\nwe'd better back-patch the fix).\n\n> AFAICT the actual drop works fine, it's just the logging that crashes. So\n> maybe we should just add a check and make it log something like \"<dropped>\"\n> if pg_namespace_name() returns null?\n\n+1 ... maybe \"(dropped)\", because we tend to use parens for this sort\nof thing, I think.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 09:45:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:43 AM Magnus Hagander <magnus@hagander.net> wrote:\n> We are certainly not supposed to go DROP SCHEMA on the temp namespaces, ...\n\nActually, I think that's supposed to work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 11:49:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Feb 22, 2019 at 3:43 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> We are certainly not supposed to go DROP SCHEMA on the temp namespaces, ...\n\n> Actually, I think that's supposed to work.\n\nIf it's in active use by any session (including your own), that's not going\nto have nice consequences; the owning session will have the OID in\nstatic storage, and it will be unhappy when that OID becomes dangling.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 12:54:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 12:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Feb 22, 2019 at 3:43 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >> We are certainly not supposed to go DROP SCHEMA on the temp namespaces, ...\n>\n> > Actually, I think that's supposed to work.\n>\n> If it's in active use by any session (including your own), that's not going\n> to have nice consequences; the owning session will have the OID in\n> static storage, and it will be unhappy when that OID becomes dangling.\n\nMaybe that's something we should fix?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 13:03:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Feb 22, 2019 at 12:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Fri, Feb 22, 2019 at 3:43 AM Magnus Hagander <magnus@hagander.net> wrote:\n>>>> We are certainly not supposed to go DROP SCHEMA on the temp namespaces, ...\n\n>>> Actually, I think that's supposed to work.\n\n>> If it's in active use by any session (including your own), that's not going\n>> to have nice consequences; the owning session will have the OID in\n>> static storage, and it will be unhappy when that OID becomes dangling.\n\n> Maybe that's something we should fix?\n\nWhy? It would likely be a significant amount of effort and added overhead,\nto accomplish no obviously-useful goal.\n\nNote that all the temp schemas are made as owned by the bootstrap\nsuperuser, so there is no real argument to be made that people might\nbe expecting they should be able to delete them.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 13:14:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Why? It would likely be a significant amount of effort and added overhead,\n> to accomplish no obviously-useful goal.\n>\n> Note that all the temp schemas are made as owned by the bootstrap\n> superuser, so there is no real argument to be made that people might\n> be expecting they should be able to delete them.\n\nHmm, well maybe you're right. Just seems like an odd wart.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 13:15:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 09:45:42AM -0500, Tom Lane wrote:\n> +1 ... maybe \"(dropped)\", because we tend to use parens for this sort\n> of thing, I think.\n\n+1. Using \"dropped\" sounds good to me in this context. Perhaps we\ncould have something more fancy like what's used for dropped columns?\nIt would be nice to get a reference to a schema, like say \"dropped\ntemporary schema\".\n--\nMichael",
"msg_date": "Sat, 23 Feb 2019 08:29:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 7:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Feb 22, 2019 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Why? It would likely be a significant amount of effort and added\n> overhead,\n> > to accomplish no obviously-useful goal.\n> >\n> > Note that all the temp schemas are made as owned by the bootstrap\n> > superuser, so there is no real argument to be made that people might\n> > be expecting they should be able to delete them.\n>\n> Hmm, well maybe you're right. Just seems like an odd wart.\n>\n\nWell, the way it works now is you can drop them. But if you then create\nanother temp table in the same session, it will get an oid of the already\ndropped schema in the relnamespace column.\n\nThat just seems plain broken.\n\nI think we need to either prevent dropping of temp namespaces *or* we need\nto create a new entry in pg_namespace in this particular case.\n\nI wonder if other \"fun\" things could happen if you go rename the namespace,\nhaven't tried that yet...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 23 Feb 2019 14:48:58 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 02:48:58PM +0100, Magnus Hagander wrote:\n> I think we need to either prevent dropping of temp namespaces *or* we need\n> to create a new entry in pg_namespace in this particular case.\n\nPerhaps I am missing something, but it would be just more simple to\nnow allow users to restrict that?\n\n> I wonder if other \"fun\" things could happen if you go rename the namespace,\n> haven't tried that yet...\n\nIn this case the OID remains the same, still there are some cases\nwhere we rely on the namespace name, and one is CLUSTER.\nobjectaddress.c uses as well get_namespace_name_or_temp(), which would\nbe messed up, so it would be better to prevent a temp namespace to be\nrenamed. Could ALTER SCHEMA OWNER TO also be a problem?\n--\nMichael",
"msg_date": "Sun, 24 Feb 2019 00:18:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 12:29 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Fri, Feb 22, 2019 at 09:45:42AM -0500, Tom Lane wrote:\n> > +1 ... maybe \"(dropped)\", because we tend to use parens for this sort\n> > of thing, I think.\n>\n> +1. Using \"dropped\" sounds good to me in this context. Perhaps we\n> could have something more fancy like what's used for dropped columns?\n> It would be nice to get a reference to a schema, like say \"dropped\n> temporary schema\".\n>\n\nI think that's unnecessary given the context:\n\n2019-02-23 15:43:56.375 CET [17250] LOG: autovacuum: dropping orphan temp\ntable \"postgres.(dropped).bar\"\n\nThat said, it just moves the crash. Turns out the problem goes a lot deeper\nthan just the logging, it's basically the cleanup of orphaned temp tables\nthat's completely broken if you drop the namespace.\n\nTRAP: FailedAssertion(\"!(relation->rd_backend != (-1))\", File:\n\"relcache.c\", Line: 1085)\n2019-02-23 15:43:56.378 CET [17146] LOG: server process (PID 17250) was\nterminated by signal 6: Aborted\n\n#2 0x0000563e0fe4bc83 in ExceptionalCondition\n(conditionName=conditionName@entry=0x563e10035fb0 \"!(relation->rd_backend\n!= (-1))\",\n errorType=errorType@entry=0x563e0fe9bf9d \"FailedAssertion\",\nfileName=fileName@entry=0x563e100357dc \"relcache.c\",\n lineNumber=lineNumber@entry=1085) at assert.c:54\n#3 0x0000563e0fe40e18 in RelationBuildDesc (targetRelId=24580,\ninsertIt=insertIt@entry=true) at relcache.c:1085\n#4 0x0000563e0fe41a86 in RelationIdGetRelation (relationId=<optimized\nout>, relationId@entry=24580) at relcache.c:1894\n#5 0x0000563e0fa24b4c in relation_open (relationId=relationId@entry=24580,\nlockmode=lockmode@entry=8) at relation.c:59\n#6 0x0000563e0fadcfea in heap_drop_with_catalog (relid=24580) at\nheap.c:1856\n#7 0x0000563e0fad9145 in doDeletion (flags=21, object=<optimized out>) at\ndependency.c:1329\n#8 deleteOneObject (flags=21, depRel=0x7ffd80db4808, object=<optimized\nout>) at 
dependency.c:1231\n#9 deleteObjectsInList (targetObjects=targetObjects@entry=0x563e10640110,\ndepRel=depRel@entry=0x7ffd80db4808, flags=flags@entry=21)\n at dependency.c:271\n#10 0x0000563e0fad91f0 in performDeletion (object=object@entry=0x7ffd80db4944,\nbehavior=behavior@entry=DROP_CASCADE,\n flags=flags@entry=21) at dependency.c:352\n#11 0x0000563e0fa13532 in do_autovacuum () at autovacuum.c:2269\n\n\nSo basically I think it's the wrong approach to try to fix this error\nmessage. We need to fix the underlying problem, which is the ability to\ndrop the temp table schemas and then create temp tables which have no\nschemas.\n\nWe could try to recreate the namespace if dropped. But a quick fix around\nthat just moved coredumps around to a lot of other dependent places.\n\nI think we're better off just preventing the explicit drop of a temp schema.\nSee attached?\n\nWe'd also want to block things like:\nALTER SCHEMA pg_temp_3 RENAME TO foobar;\nDROP SCHEMA foobar;\n\nAre there any more things beyond RENAME we need to block?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 23 Feb 2019 16:28:15 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 4:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Feb 23, 2019 at 02:48:58PM +0100, Magnus Hagander wrote:\n> > I think we need to either prevent dropping of temp namespaces *or* we\n> need\n> > to create a new entry in pg_namespace in this particular case.\n>\n> Perhaps I am missing something, but it would be just more simple to\n> now allow users to restrict that?\n>\n\nI can't parse what you are saying here. Now allow users to restrict what?\n\n\n> I wonder if other \"fun\" things could happen if you go rename the\n> namespace,\n> > haven't tried that yet...\n>\n> In this case the OID remains the same, still there are some cases\n> where we rely on the namespace name, and one is CLUSTER.\n> objectaddress.c uses as well get_namespace_name_or_temp(), which would\n> be messed up, so it would be better to prevent a temp namespace to be\n> renamed. Could ALTER SCHEMA OWNER TO also be a problem?\n>\n\nOr possibly altering permissions on it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 23 Feb 2019 16:29:24 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 4:28 PM Magnus Hagander <magnus@hagander.net> wrote:\n\n>\n>\n> On Sat, Feb 23, 2019 at 12:29 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Fri, Feb 22, 2019 at 09:45:42AM -0500, Tom Lane wrote:\n>> > +1 ... maybe \"(dropped)\", because we tend to use parens for this sort\n>> > of thing, I think.\n>>\n>> +1. Using \"dropped\" sounds good to me in this context. Perhaps we\n>> could have something more fancy like what's used for dropped columns?\n>> It would be nice to get a reference to a schema, like say \"dropped\n>> temporary schema\".\n>>\n>\n> I think that's unnecessary given the context:\n>\n> 2019-02-23 15:43:56.375 CET [17250] LOG: autovacuum: dropping orphan temp\n> table \"postgres.(dropped).bar\"\n>\n> That said, it just moves the crash. Turns out the problem goes a lot\n> deeper than just the logging, it's basically the cleanup of orphaned temp\n> tables that's completely broken if you drop the namespace.\n>\n> TRAP: FailedAssertion(\"!(relation->rd_backend != (-1))\", File:\n> \"relcache.c\", Line: 1085)\n> 2019-02-23 15:43:56.378 CET [17146] LOG: server process (PID 17250) was\n> terminated by signal 6: Aborted\n>\n> #2 0x0000563e0fe4bc83 in ExceptionalCondition\n> (conditionName=conditionName@entry=0x563e10035fb0 \"!(relation->rd_backend\n> != (-1))\",\n> errorType=errorType@entry=0x563e0fe9bf9d \"FailedAssertion\",\n> fileName=fileName@entry=0x563e100357dc \"relcache.c\",\n> lineNumber=lineNumber@entry=1085) at assert.c:54\n> #3 0x0000563e0fe40e18 in RelationBuildDesc (targetRelId=24580,\n> insertIt=insertIt@entry=true) at relcache.c:1085\n> #4 0x0000563e0fe41a86 in RelationIdGetRelation (relationId=<optimized\n> out>, relationId@entry=24580) at relcache.c:1894\n> #5 0x0000563e0fa24b4c in relation_open (relationId=relationId@entry=24580,\n> lockmode=lockmode@entry=8) at relation.c:59\n> #6 0x0000563e0fadcfea in heap_drop_with_catalog (relid=24580) at\n> heap.c:1856\n> #7 0x0000563e0fad9145 in doDeletion (flags=21, object=<optimized out>) at\n> dependency.c:1329\n> #8 deleteOneObject (flags=21, depRel=0x7ffd80db4808, object=<optimized\n> out>) at dependency.c:1231\n> #9 deleteObjectsInList (targetObjects=targetObjects@entry=0x563e10640110,\n> depRel=depRel@entry=0x7ffd80db4808, flags=flags@entry=21)\n> at dependency.c:271\n> #10 0x0000563e0fad91f0 in performDeletion (object=object@entry=0x7ffd80db4944,\n> behavior=behavior@entry=DROP_CASCADE,\n> flags=flags@entry=21) at dependency.c:352\n> #11 0x0000563e0fa13532 in do_autovacuum () at autovacuum.c:2269\n>\n>\n> So basically I think it's the wrong approach to try to fix this error\n> message. We need to fix the underlying problem. Which is the ability to\n> drop the temp table schemas and then create temp tables which ahve no\n> schemas.\n>\n> We could try to recreate the namespace if dropped. But a quick fix around\n> that just moved coredumps around to a lot of other dependent places.\n>\n> I think we're better off just peventing the explicit drop of a temp\n> schema. See attached?\n>\n> We'd also want to block things like:\n> ALTER SCHEMA pg_temp_3 RENAME TO foobar;\n> DROP SCHEMA foobar;\n>\n> Are there any more things beyond RENAME we need to block?\n>\n>\nOoh, RENAME has problems entirely unrelated to that:\n\nALTER SCHEMA pg_catalog RENAME TO foobar;\n\nYeah, that breaks things...\n\nRenaming schemas only check that the *target* name is not reserved. Surely\nit should also check that the *source* name is not reserved? (Unless\nallowSystemTableMods)?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 23 Feb 2019 16:37:46 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Feb 22, 2019 at 7:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Fri, Feb 22, 2019 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Note that all the temp schemas are made as owned by the bootstrap\n>>> superuser, so there is no real argument to be made that people might\n>>> be expecting they should be able to delete them.\n\n>> Hmm, well maybe you're right. Just seems like an odd wart.\n\n> Well, the way it works now is you can drop them. But if you then create\n> another temp table in the same session, it will get an oid of the already\n> dropped schema in the relnamespace column.\n\nOnly if you're superuser.\n\n> That just seems plain broken.\n\nThere are a *lot* of ways that a superuser can break things. I'm not\nreal sure that this one is special enough that we need a defense\nagainst it.\n\nHowever, if someone held a gun to my head and said fix it, I'd be inclined\nto do so by having temp-namespace creation insert a \"pin\" dependency into\npg_depend. Arguably, the only reason we don't create all the temp\nnamespaces during bootstrap is because we aren't sure how many we'd need\n--- but if we did do that, they'd presumably end up pinned.\n\n> I wonder if other \"fun\" things could happen if you go rename the namespace,\n> haven't tried that yet...\n\nI put that one on exactly a par with renaming all the \"=\" operators.\nYes, the system will let a superuser do it, and no, it's not a good idea.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Feb 2019 10:50:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> I think we're better off just peventing the explicit drop of a temp schema.\n> See attached?\n\nI think this is a poor implementation of a bad idea. Would you like a\nlist of the ways a superuser can break the system? We could start with\n\"DELETE FROM pg_proc;\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Feb 2019 11:00:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 5:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > I think we're better off just peventing the explicit drop of a temp\n> schema.\n> > See attached?\n>\n> I think this is a poor implementation of a bad idea. Would you like a\n> list of the ways a superuser can break the system? We could start with\n> \"DELETE FROM pg_proc;\".\n>\n\nYeah, true.\n\nThat said, I'm not sure there is much point in fixing the original problem\neither. It comes down to a \"don't do that\", as the system just keeps\ncrashing even if we fix that one. Trying to fix every possible place that\nbreaks if there are tables with invalid data in pg_class like that is not\nlikely to work either.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 23 Feb 2019 17:02:42 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 04:29:24PM +0100, Magnus Hagander wrote:\n> On Sat, Feb 23, 2019 at 4:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Perhaps I am missing something, but it would be just more simple to\n>> now allow users to restrict that?\n>>\n> \n> I can't parse what you are saying here. Now allow users to restrict\n> what?\n\nSecond try after some sleep:\n\"Perhaps I am missing something, but it would be just more simple to\nnow allow users to drop it?\"\n--\nMichael",
"msg_date": "Sun, 24 Feb 2019 08:13:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On 2019-Feb-23, Tom Lane wrote:\n\n> However, if someone held a gun to my head and said fix it, I'd be inclined\n> to do so by having temp-namespace creation insert a \"pin\" dependency into\n> pg_depend. Arguably, the only reason we don't create all the temp\n> namespaces during bootstrap is because we aren't sure how many we'd need\n> --- but if we did do that, they'd presumably end up pinned.\n\nIs there a problem if we start with very high max_backends and this pins\na few thousands schemas that are later no longer needed? There's no\ndecent way to drop them ... (I'm not sure it matters all that much,\nexcept for bloat in pg_namespace.)\n\nHow about hardcoding a pin for any schema that's within the current\nmax_backends?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 26 Feb 2019 20:26:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Feb-23, Tom Lane wrote:\n>> However, if someone held a gun to my head and said fix it, I'd be inclined\n>> to do so by having temp-namespace creation insert a \"pin\" dependency into\n>> pg_depend. Arguably, the only reason we don't create all the temp\n>> namespaces during bootstrap is because we aren't sure how many we'd need\n>> --- but if we did do that, they'd presumably end up pinned.\n\n> Is there a problem if we start with very high max_backends and this pins\n> a few thousands schemas that are later no longer needed? There's no\n> decent way to drop them ... (I'm not sure it matters all that much,\n> except for bloat in pg_namespace.)\n\n> How about hardcoding a pin for any schema that's within the current\n> max_backends?\n\nI remain skeptical that there's a problem here that so badly needs\nfixed as to justify half-baked hacks in the dependency system. We'd\nbe more likely to create problems than fix them.\n\nThe existing state of affairs is that a superuser who really needs to drop\na temp schema can do so, if she's careful that it's not active. Pinning\nthings would break that, or at least add an additional roadblock. If it's\nsome sort of virtual pin rather than a regular pg_depend entry, then it\n*would* be impossible to get around (mumble ... DELETE FROM pg_namespace\n... mumble). As against that, what problem are we fixing by preventing\nsuperusers from doing that? A careless superuser can screw things up\narbitrarily badly in any case, so I'm not that fussed about the hazard\nthat the namespace isn't idle.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Feb 2019 19:21:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 07:21:40PM -0500, Tom Lane wrote:\n> The existing state of affairs is that a superuser who really needs to drop\n> a temp schema can do so, if she's careful that it's not active. Pinning\n> things would break that, or at least add an additional roadblock. If it's\n> some sort of virtual pin rather than a regular pg_depend entry, then it\n> *would* be impossible to get around (mumble ... DELETE FROM pg_namespace\n> ... mumble). As against that, what problem are we fixing by preventing\n> superusers from doing that? A careless superuser can screw things up\n> arbitrarily badly in any case, so I'm not that fussed about the hazard\n> that the namespace isn't idle.\n\nAnd when you try to do surgery on a corrupted cluster, it can be on\nthe contrary very useful to be able to work with objects and\nmanipulate them more freely as a superuser.\n--\nMichael",
"msg_date": "Wed, 27 Feb 2019 15:39:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Autovaccuum vs temp tables crash"
}
]
[
{
"msg_contents": "So I started looking into the bug noted in [1], but before getting to\nmulti-row inserts, I concluded that the current single-row behaviour\nisn't spec-compliant.\n\nIn particular, Syntax Rule 11b of section 14.11 says that an INSERT\nstatement on a GENERATED ALWAYS identity column must specify an\noverriding clause, but it doesn't place any restriction on the type of\noverriding clause allowed. In other words it should be possible to use\neither OVERRIDING SYSTEM VALUE or OVERRIDING USER VALUE, but we\ncurrently throw an error unless it's the former.\n\nIt's useful to allow OVERRIDING USER VALUE for precisely the example\nuse-case given in the INSERT docs:\n\n This clause is useful for example when copying values between tables.\n Writing <literal>INSERT INTO tbl2 OVERRIDING USER VALUE SELECT * FROM\n tbl1</literal> will copy from <literal>tbl1</literal> all columns that\n are not identity columns in <literal>tbl2</literal> while values for\n the identity columns in <literal>tbl2</literal> will be generated by\n the sequences associated with <literal>tbl2</literal>.\n\nwhich currently only works for a GENERATED BY DEFAULT identity column,\nbut should work equally well for a GENERATED ALWAYS identity column.\n\nSo I propose the attached patch.\n\nRegards,\nDean\n\n\n[1] https://postgr.es/m/CAEZATCUmSp3-8nLOpgGcPkpUEXK9TJGM%3DiA6q4E2Sn%3D%2BbwkKNA%40mail.gmail.com",
"msg_date": "Fri, 22 Feb 2019 14:12:55 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity columns"
},
{
"msg_contents": "On 2019-02-22 15:12, Dean Rasheed wrote:\n> In particular, Syntax Rule 11b of section 14.11 says that an INSERT\n> statement on a GENERATED ALWAYS identity column must specify an\n> overriding clause, but it doesn't place any restriction on the type of\n> overriding clause allowed. In other words it should be possible to use\n> either OVERRIDING SYSTEM VALUE or OVERRIDING USER VALUE, but we\n> currently throw an error unless it's the former.\n\nIt appears you are right.\n\n> - and the column in new rows will automatically have values from the\n> - sequence assigned to it.\n> + and new rows in the column will automatically have values from the\n> + sequence assigned to them.\n\nThe \"it\" refers to \"the column\", so I think it's correct.\n\n> specifies <literal>OVERRIDING SYSTEM VALUE</literal>. If\n<literal>BY\n> DEFAULT</literal> is specified, then the user-specified value takes\n> - precedence. See <xref linkend=\"sql-insert\"/> for details. (In\n> + precedence, unless the <command>INSERT</command> statement\nspecifies\n> + <literal>OVERRIDING USER VALUE</literal>.\n> + See <xref linkend=\"sql-insert\"/> for details. (In\n\nIsn't your change that it now applies to both ALWAYS and BY DEFAULT? So\nwhy attach this phrase to the BY DEFAULT explanation?\n\n> <para>\n> + Additionally, if <literal>ALWAYS</literal> is specified, any\nattempt to\n> + update the value of the column using an <command>UPDATE</command>\n> + statement specifying any value other than\n<literal>DEFAULT</literal>\n> + will be rejected. If <literal>BY DEFAULT</literal> is\nspecified, the\n> + system will allow values in the column to be updated.\n> + </para>\n\nThis is already documented on the INSERT reference page.\n\n> -\t\t\t\t\t\t\t errhint(\"Use OVERRIDING SYSTEM VALUE to override.\")));\n> +\t\t\t\t\t\t\t errhint(\"You must specify either OVERRIDING SYSTEM VALUE or\nOVERRIDING USER VALUE.\")));\n\nIs this a good hint? 
If the user wanted to insert something, then\nspecifying OVERRIDING USER VALUE won't really accomplish that.\nOVERRIDING USER VALUE is only useful in the specific situations that the\ndocumentation discussed. Can we detect those?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Feb 2019 13:47:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
},
{
"msg_contents": "On Mon, 25 Feb 2019 at 12:47, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> > - and the column in new rows will automatically have values from the\n> > - sequence assigned to it.\n> > + and new rows in the column will automatically have values from the\n> > + sequence assigned to them.\n>\n> The \"it\" refers to \"the column\", so I think it's correct.\n>\n\nAh, OK. I failed to parse the original wording.\n\n\n> > specifies <literal>OVERRIDING SYSTEM VALUE</literal>. If\n> <literal>BY\n> > DEFAULT</literal> is specified, then the user-specified value takes\n> > - precedence. See <xref linkend=\"sql-insert\"/> for details. (In\n> > + precedence, unless the <command>INSERT</command> statement\n> specifies\n> > + <literal>OVERRIDING USER VALUE</literal>.\n> > + See <xref linkend=\"sql-insert\"/> for details. (In\n>\n> Isn't your change that it now applies to both ALWAYS and BY DEFAULT? So\n> why attach this phrase to the BY DEFAULT explanation?\n>\n\nThe last couple of sentences of that paragraph are describing the\ncircumstances under which the user-specified value will be applied. So\nfor the ALWAYS case, it's only if OVERRIDING SYSTEM VALUE is\nspecified, and for the BY DEFAULT case, it's only if OVERRIDING USER\nVALUE isn't specified. Without that additional text, the original\nwording could be taken to mean that for a BY DEFAULT column, the\nuser-specified value always gets applied.\n\n\n> > <para>\n> > + Additionally, if <literal>ALWAYS</literal> is specified, any\n> attempt to\n> > + update the value of the column using an <command>UPDATE</command>\n> > + statement specifying any value other than\n> <literal>DEFAULT</literal>\n> > + will be rejected. 
If <literal>BY DEFAULT</literal> is\n> specified, the\n> > + system will allow values in the column to be updated.\n> > + </para>\n>\n> This is already documented on the INSERT reference page.\n>\n\nI can't see anywhere where we document how UPDATE behaves with identity columns.\n\n\n> > - errhint(\"Use OVERRIDING SYSTEM VALUE to override.\")));\n> > + errhint(\"You must specify either OVERRIDING SYSTEM VALUE or\n> OVERRIDING USER VALUE.\")));\n>\n> Is this a good hint? If the user wanted to insert something, then\n> specifying OVERRIDING USER VALUE won't really accomplish that.\n> OVERRIDING USER VALUE is only useful in the specific situations that the\n> documentation discussed. Can we detect those?\n>\n\nHmm, I'm not sure that we reliably guess what the user intended. What\nexactly did you have in mind?\n\nRegards,\nDean\n\n",
"msg_date": "Mon, 25 Feb 2019 14:36:30 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
},
{
"msg_contents": "We appear to have lost track of this. I have re-read everything and \nexpanded your patch a bit with additional documentation and comments in \nthe tests.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Mar 2020 12:29:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
},
{
"msg_contents": "On Fri, 27 Mar 2020 at 11:29, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> We appear to have lost track of this.\n\nAh yes, indeed!\n\n> I have re-read everything and\n> expanded your patch a bit with additional documentation and comments in\n> the tests.\n\nI looked that over, and it all looks good to me.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 27 Mar 2020 16:33:34 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
},
{
"msg_contents": "On 3/27/20 9:33 AM, Dean Rasheed wrote:\n> On Fri, 27 Mar 2020 at 11:29, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> I have re-read everything and\n>> expanded your patch a bit with additional documentation and comments in\n>> the tests.\n> \n> I looked that over, and it all looks good to me.\n\nI concur. And it matches my reading of the standard (apart from the\nintentional deviation).\n-- \nVik Fearing\n\n\n",
"msg_date": "Fri, 27 Mar 2020 17:58:04 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
},
{
"msg_contents": "On 2020-03-27 17:58, Vik Fearing wrote:\n> On 3/27/20 9:33 AM, Dean Rasheed wrote:\n>> On Fri, 27 Mar 2020 at 11:29, Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>>\n>>> I have re-read everything and\n>>> expanded your patch a bit with additional documentation and comments in\n>>> the tests.\n>>\n>> I looked that over, and it all looks good to me.\n> \n> I concur. And it matches my reading of the standard (apart from the\n> intentional derivation).\n\nCommitted, thanks!\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Mar 2020 08:51:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: INSERT ... OVERRIDING USER VALUE vs GENERATED ALWAYS identity\n columns"
}
]
[
{
"msg_contents": "I've set up 2 instances of PostgreSQL 11. On instance A, I created a table\nwith 2 local partitions and 2 partitions on instance B using foreign data\nwrappers, following https://pgdash.io/blog/postgres-11-sharding.html.\nInserting rows into this table works as expected, with rows ending up in\nthe appropriate partition. However, updating those rows only moves them\nacross partitions in some of the situations:\n\n - From local partition to local partition\n - From local partition to foreign partition\n\nRows are not moved\n\n - From foreign partition to local partition\n - From foreign partition to foreign partition\n\nIs this the expected behavior? Am I missing something or configured\nsomething incorrectly?\n\nThanks,\nDerek",
"msg_date": "Fri, 22 Feb 2019 09:44:05 -0500",
"msg_from": "Derek Hans <derek.hans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Hi all,\nThis behavior makes the new data sharding functionality in v11 only\nmarginally useful as you can't shard across database instances.\nConsidering data sharding appeared to be one of the key improvements in\nv11, I'm confused - am I misunderstanding the expected functionality?\n\nThanks!\n\nOn Fri, Feb 22, 2019 at 9:44 AM Derek Hans <derek.hans@gmail.com> wrote:\n\n> I've set up 2 instances of PostgreSQL 11. On instance A, I created a table\n> with 2 local partitions and 2 partitions on instance B using foreign data\n> wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.\n> Inserting rows into this table works as expected, with rows ending up in\n> the appropriate partition. However, updating those rows only moves them\n> across partitions in some of the situations:\n>\n> - From local partition to local partition\n> - From local partition to foreign partition\n>\n> Rows are not moved\n>\n> - From foreign partition to local partition\n> - From foreign partition to foreign partition\n>\n> Is this the expected behavior? Am I missing something or configured\n> something incorrectly?\n>\n> Thanks,\n> Derek\n>\n\n\n-- \n*Derek*\n+1 (415) 754-0519 | derek.hans@gmail.com | Skype: derek.hans",
"msg_date": "Wed, 27 Feb 2019 13:53:48 -0500",
"msg_from": "Derek Hans <derek.hans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019-Feb-22, Derek Hans wrote:\n\n> I've set up 2 instances of PostgreSQL 11. On instance A, I created a table\n> with 2 local partitions and 2 partitions on instance B using foreign data\n> wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.\n> Inserting rows into this table works as expected, with rows ending up in\n> the appropriate partition. However, updating those rows only moves them\n> across partitions in some of the situations:\n> \n> - From local partition to local partition\n> - From local partition to foreign partition\n> \n> Rows are not moved\n> \n> - From foreign partition to local partition\n> - From foreign partition to foreign partition\n> \n> Is this the expected behavior? Am I missing something or configured\n> something incorrectly?\n\nSounds like a bug to me.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 16:31:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Based on a reply to reporting this as a bug, moving rows out of foreign\npartitions is not yet implemented so this is behaving as expected. There's\na mention of this limitation in the Notes section of the Update docs.\n\nOn Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Feb-22, Derek Hans wrote:\n>\n> > I've set up 2 instances of PostgreSQL 11. On instance A, I created a\n> table\n> > with 2 local partitions and 2 partitions on instance B using foreign data\n> > wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.\n> > Inserting rows into this table works as expected, with rows ending up in\n> > the appropriate partition. However, updating those rows only moves them\n> > across partitions in some of the situations:\n> >\n> > - From local partition to local partition\n> > - From local partition to foreign partition\n> >\n> > Rows are not moved\n> >\n> > - From foreign partition to local partition\n> > - From foreign partition to foreign partition\n> >\n> > Is this the expected behavior? Am I missing something or configured\n> > something incorrectly?\n>\n> Sounds like a bug to me.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \n*Derek*\n+1 (415) 754-0519 | derek.hans@gmail.com | Skype: derek.hans",
"msg_date": "Mon, 4 Mar 2019 09:00:50 -0500",
"msg_from": "Derek Hans <derek.hans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Tue, 5 Mar 2019 at 03:01, Derek Hans <derek.hans@gmail.com> wrote:\n> Based on a reply to reporting this as a bug, moving rows out of foreign partitions is not yet implemented so this is behaving as expected. There's a mention of this limitation in the Notes section of the Update docs.\n\n(Moving this discussion to -Hackers)\n\nIn [1], Derek reports that once a row is inserted into a foreign\npartition that an UPDATE does not correctly route it back out into the\ncorrect partition.\n\nI didn't really follow the foreign partition code when it went in, but\ndo recall being involved in the documentation about the limitations of\npartitioned tables in table 5.10.2.3 in [2]. Unfortunately, table\n5.10.2.3 does not seem to mention this limitation at all. As Derek\nmentions, there is a brief mention in [3] in the form of:\n\n\"Currently, rows cannot be moved from a partition that is a foreign\ntable to some other partition, but they can be moved into a foreign\ntable if the foreign data wrapper supports it.\"\n\nI don't quite understand what a \"foreign table to some other\npartition\" is meant to mean. Partitions don't have foreign tables,\nthey can only be one themselves.\n\nI've tried to put all this right again in the attached. However, I was\na bit unsure of what \"but they can be moved into a foreign table if\nthe foreign data wrapper supports it.\" is referring to. Copying Robert\nand Etsuro as this was all added in 3d956d9562aa. Hopefully, they can\nconfirm what is meant by this.\n\n[1] https://www.postgresql.org/message-id/CAGrP7a3Xc1Qy_B2WJcgAD8uQTS_NDcJn06O5mtS_Ne1nYhBsyw@mail.gmail.com\n[2] https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-LIMITATIONS\n[3] https://www.postgresql.org/docs/devel/sql-update.html\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 6 Mar 2019 15:06:32 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Hi David,\n\nOn 2019/03/06 11:06, David Rowley wrote:\n> On Tue, 5 Mar 2019 at 03:01, Derek Hans <derek.hans@gmail.com> wrote:\n>> Based on a reply to reporting this as a bug, moving rows out of foreign partitions is not yet implemented so this is behaving as expected. There's a mention of this limitation in the Notes section of the Update docs.\n> \n> (Moving this discussion to -Hackers)\n> \n> In [1], Derek reports that once a row is inserted into a foreign\n> partition that an UPDATE does not correctly route it back out into the\n> correct partition.\n> \n> I didn't really follow the foreign partition code when it went in, but\n> do recall being involved in the documentation about the limitations of\n> partitioned tables in table 5.10.2.3 in [2]. Unfortunately, table\n> 5.10.2.3 does not seem to mention this limitation at all. As Derek\n> mentions, there is a brief mention in [3] in the form of:\n> \n> \"Currently, rows cannot be moved from a partition that is a foreign\n> table to some other partition, but they can be moved into a foreign\n> table if the foreign data wrapper supports it.\"\n> \n> I don't quite understand what a \"foreign table to some other\n> partition\" is meant to mean. Partitions don't have foreign tables,\n> they can only be one themselves.\n> \n> I've tried to put all this right again in the attached. However, I was\n> a bit unsure of what \"but they can be moved into a foreign table if\n> the foreign data wrapper supports it.\" is referring to. Copying Robert\n> and Etsuro as this was all added in 3d956d9562aa. Hopefully, they can\n> confirm what is meant by this.\n\nDid you miss my reply on that thread?\n\nhttps://www.postgresql.org/message-id/CA%2BHiwqF3gma5HfCJb4_cOk0_%2BLEpVc57EHdBfz_EKt%2BNu0hNYg%40mail.gmail.com\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 6 Mar 2019 11:26:43 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Wed, 6 Mar 2019 at 15:26, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> > I've tried to put all this right again in the attached. However, I was\n> > a bit unsure of what \"but they can be moved into a foreign table if\n> > the foreign data wrapper supports it.\" is referring to. Copying Robert\n> > and Etsuro as this was all added in 3d956d9562aa. Hopefully, they can\n> > confirm what is meant by this.\n>\n> Did you miss my reply on that thread?\n>\n> https://www.postgresql.org/message-id/CA%2BHiwqF3gma5HfCJb4_cOk0_%2BLEpVc57EHdBfz_EKt%2BNu0hNYg%40mail.gmail.com\n\nYes. I wasn't aware that there were two threads for this.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 15:29:44 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019/03/06 11:29, David Rowley wrote:\n> On Wed, 6 Mar 2019 at 15:26, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>\n>>> I've tried to put all this right again in the attached. However, I was\n>>> a bit unsure of what \"but they can be moved into a foreign table if\n>>> the foreign data wrapper supports it.\" is referring to. Copying Robert\n>>> and Etsuro as this was all added in 3d956d9562aa. Hopefully, they can\n>>> confirm what is meant by this.\n>>\n>> Did you miss my reply on that thread?\n>>\n>> https://www.postgresql.org/message-id/CA%2BHiwqF3gma5HfCJb4_cOk0_%2BLEpVc57EHdBfz_EKt%2BNu0hNYg%40mail.gmail.com\n> \n> Yes. I wasn't aware that there were two threads for this.\n\nAh, indeed. In the documentation fix patch I'd posted, I also made\nchanges to release-11.sgml to link to the limitations section. (I'm\nattaching it here for your reference.)\n\nThanks,\nAmit",
"msg_date": "Wed, 6 Mar 2019 11:34:27 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/06 11:06), David Rowley wrote:\n> On Tue, 5 Mar 2019 at 03:01, Derek Hans<derek.hans@gmail.com> wrote:\n>> Based on a reply to reporting this as a bug, moving rows out of foreign partitions is not yet implemented so this is behaving as expected. There's a mention of this limitation in the Notes section of the Update docs.\n>\n> (Moving this discussion to -Hackers)\n>\n> In [1], Derek reports that once a row is inserted into a foreign\n> partition that an UPDATE does not correctly route it back out into the\n> correct partition.\n>\n> I didn't really follow the foreign partition code when it went in, but\n> do recall being involved in the documentation about the limitations of\n> partitioned tables in table 5.10.2.3 in [2]. Unfortunately, table\n> 5.10.2.3 does not seem to mention this limitation at all. As Derek\n> mentions, there is a brief mention in [3] in the form of:\n>\n> \"Currently, rows cannot be moved from a partition that is a foreign\n> table to some other partition, but they can be moved into a foreign\n> table if the foreign data wrapper supports it.\"\n>\n> I don't quite understand what a \"foreign table to some other\n> partition\" is meant to mean. Partitions don't have foreign tables,\n> they can only be one themselves.\n\nI think \"foreign table\" is describing \"partition\" in front of that; \"a \npartition that is a foreign table\".\n\n> I've tried to put all this right again in the attached. However, I was\n> a bit unsure of what \"but they can be moved into a foreign table if\n> the foreign data wrapper supports it.\" is referring to. Copying Robert\n> and Etsuro as this was all added in 3d956d9562aa. 
Hopefully, they can\n> confirm what is meant by this.\n\nThat means that rows can be moved from a local partition to a foreign \npartition if the FDW supports it.\n\nIMO, I think the existing mention in [3] is good, so I would vote for \nputting the same mention in table 5.10.2.3 in [2] as well.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 06 Mar 2019 12:29:24 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Wed, 6 Mar 2019 at 16:29, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n>\n> (2019/03/06 11:06), David Rowley wrote:\n> > I don't quite understand what a \"foreign table to some other\n> > partition\" is meant to mean. Partitions don't have foreign tables,\n> > they can only be one themselves.\n>\n> I think \"foreign table\" is describing \"partition\" in front of that; \"a\n> partition that is a foreign table\".\n\nI think I was reading this wrong:\n\n- Currently, rows cannot be moved from a partition that is a\n- foreign table to some other partition, but they can be moved into a foreign\n- table if the foreign data wrapper supports it.\n\nI parsed it as \"cannot be moved from a partition, that is a foreign\ntable to some other partition\"\n\nand subsequently struggled with what \"a foreign table to some other\npartition\" is.\n\nbut now looking at it, I think it's meant to mean:\n\n\"cannot be moved from a foreign table partition to another partition\"\n\n> > I've tried to put all this right again in the attached. However, I was\n> > a bit unsure of what \"but they can be moved into a foreign table if\n> > the foreign data wrapper supports it.\" is referring to. Copying Robert\n> > and Etsuro as this was all added in 3d956d9562aa. Hopefully, they can\n> > confirm what is meant by this.\n>\n> That means that rows can be moved from a local partition to a foreign\n> partition if the FDW supports it.\n\nIt seems a bit light on detail to me. If I was a user I'd want to know\nwhat exactly the FDW needed to support this. Does it need a special\npartition move function? 
Looking at ExecFindPartition(), this check\nseems to be done in CheckValidResultRel() and is basically:\n\ncase RELKIND_FOREIGN_TABLE:\n/* Okay only if the FDW supports it */\nfdwroutine = resultRelInfo->ri_FdwRoutine;\nswitch (operation)\n{\ncase CMD_INSERT:\nif (fdwroutine->ExecForeignInsert == NULL)\nereport(ERROR,\n(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\nerrmsg(\"cannot insert into foreign table \\\"%s\\\"\",\nRelationGetRelationName(resultRel))));\n\nAlternatively, we could just remove the mention about \"if the FDW\nsupports it\", since it's probably unlikely for an FDW not to support\nINSERT.\n\n> IMO, I think the existing mention in [3] is good, so I would vote for\n> putting the same mention in table 5.10.2.3 in [2] as well.\n\nI think the sentence is unclear, at least I struggled to parse it the\nfirst time. Happy for Amit to choose some better words and include in\nhis patch. I think it should be done in the same commit.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 16:47:47 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/06 11:34), Amit Langote wrote:\n> Ah, indeed. In the documentation fix patch I'd posted, I also made\n> changes to release-11.sgml to link to the limitations section. (I'm\n> attaching it here for your reference.)\n\nI'm not sure it's a good idea to make changes to the release notes like \nthat, because 1) that would make the release notes verbose, and 2) it \nmight end up doing the same thing to items that have some limitations in \nthe existing/future release notes (eg, FOR EACH ROW triggers on \npartitioned tables added to V11 has the limitation listed on the \nlimitation section, so the same link would be needed.), for consistency.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 06 Mar 2019 13:04:28 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Fujita-san,\n\nOn 2019/03/06 13:04, Etsuro Fujita wrote:\n> (2019/03/06 11:34), Amit Langote wrote:\n>> Ah, indeed. In the documentation fix patch I'd posted, I also made\n>> changes to release-11.sgml to link to the limitations section. (I'm\n>> attaching it here for your reference.)\n> \n> I'm not sure it's a good idea to make changes to the release notes like\n> that, because 1) that would make the release notes verbose, and 2) it\n> might end up doing the same thing to items that have some limitations in\n> the existing/future release notes (eg, FOR EACH ROW triggers on\n> partitioned tables added to V11 has the limitation listed on the\n> limitation section, so the same link would be needed.), for consistency.\n\nOK, sure. It just seemed to me that the original complainer found it\nquite a bit surprising that such a limitation is not mentioned in the\nrelease notes, but maybe that's fine. It seems we don't normally list\nfeature limitations in the release notes, which as you rightly say, would\nmake them verbose.\n\nThe main problem here is indeed that the limitation is not listed under\nthe partitioning limitations in ddl.sgml, where it's easier to notice than\nin the UPDATE's page. I've updated my patch to remove the release-11.sgml\nchanges.\n\nThanks,\nAmit",
"msg_date": "Wed, 6 Mar 2019 13:18:00 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019/03/06 12:47, David Rowley wrote:\n> On Wed, 6 Mar 2019 at 16:29, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> That means that rows can be moved from a local partition to a foreign\n>> partition if the FDW supports it.\n> \n> It seems a bit light on detail to me. If I was a user I'd want to know\n> what exactly the FDW needed to support this. Does it need a special\n> partition move function? Looking at ExecFindPartition(), this check\n> seems to be done in CheckValidResultRel() and is basically:\n> \n> case RELKIND_FOREIGN_TABLE:\n> /* Okay only if the FDW supports it */\n> fdwroutine = resultRelInfo->ri_FdwRoutine;\n> switch (operation)\n> {\n> case CMD_INSERT:\n> if (fdwroutine->ExecForeignInsert == NULL)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"cannot insert into foreign table \\\"%s\\\"\",\n> RelationGetRelationName(resultRel))));\n> \n> Alternatively, we could just remove the mention about \"if the FDW\n> supports it\", since it's probably unlikely for an FDW not to support\n> INSERT.\n\nAFAIK, there's no special support in FDWs for \"tuple moving\" as such. The\n\"if the FDW supports it\" refers to the FDW's ability to handle tuple\nrouting. Note that moving/re-routing involves calling\nExecPrepareTupleRouting followed by ExecInsert on the new tupls after the\nold tuple is deleted. If an FDW doesn't support tuple routing, then a\ntuple cannot be moved into it. That's what that text is talking about.\n\nMaybe, we should reword it as \"if the FDW supports tuple routing\", so that\na reader doesn't go looking around for \"tuple moving support\" in FDWs.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 6 Mar 2019 13:20:17 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Wed, 6 Mar 2019 at 17:20, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/03/06 12:47, David Rowley wrote:\n> > It seems a bit light on detail to me. If I was a user I'd want to know\n> > what exactly the FDW needed to support this. Does it need a special\n> > partition move function? Looking at ExecFindPartition(), this check\n> > seems to be done in CheckValidResultRel() and is basically:\n> >\n> > case RELKIND_FOREIGN_TABLE:\n> > /* Okay only if the FDW supports it */\n> > fdwroutine = resultRelInfo->ri_FdwRoutine;\n> > switch (operation)\n> > {\n> > case CMD_INSERT:\n> > if (fdwroutine->ExecForeignInsert == NULL)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"cannot insert into foreign table \\\"%s\\\"\",\n> > RelationGetRelationName(resultRel))));\n> >\n> > Alternatively, we could just remove the mention about \"if the FDW\n> > supports it\", since it's probably unlikely for an FDW not to support\n> > INSERT.\n>\n> AFAIK, there's no special support in FDWs for \"tuple moving\" as such. The\n> \"if the FDW supports it\" refers to the FDW's ability to handle tuple\n> routing. Note that moving/re-routing involves calling\n> ExecPrepareTupleRouting followed by ExecInsert on the new tupls after the\n> old tuple is deleted. If an FDW doesn't support tuple routing, then a\n> tuple cannot be moved into it. That's what that text is talking about.\n>\n> Maybe, we should reword it as \"if the FDW supports tuple routing\", so that\n> a reader doesn't go looking around for \"tuple moving support\" in FDWs.\n\nI think you missed my point. If there's no special support for \"tuple\nmoving\", as you say, then what help is it to tell the user \"if the FDW\nsupports tuple routing\"? The answer is, it's not any help. How would\nthe user check such a fact?\n\nAs far as I can tell, this is just the requirements as defined in\nCheckValidResultRel() for CMD_INSERT. 
Fragments of which I pasted\nabove.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 6 Mar 2019 17:30:25 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019/03/06 13:30, David Rowley wrote:\n> On Wed, 6 Mar 2019 at 17:20, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>>\n>> On 2019/03/06 12:47, David Rowley wrote:\n>>> It seems a bit light on detail to me. If I was a user I'd want to know\n>>> what exactly the FDW needed to support this. Does it need a special\n>>> partition move function? Looking at ExecFindPartition(), this check\n>>> seems to be done in CheckValidResultRel() and is basically:\n>>>\n>>> case RELKIND_FOREIGN_TABLE:\n>>> /* Okay only if the FDW supports it */\n>>> fdwroutine = resultRelInfo->ri_FdwRoutine;\n>>> switch (operation)\n>>> {\n>>> case CMD_INSERT:\n>>> if (fdwroutine->ExecForeignInsert == NULL)\n>>> ereport(ERROR,\n>>> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>>> errmsg(\"cannot insert into foreign table \\\"%s\\\"\",\n>>> RelationGetRelationName(resultRel))));\n>>>\n>>> Alternatively, we could just remove the mention about \"if the FDW\n>>> supports it\", since it's probably unlikely for an FDW not to support\n>>> INSERT.\n>>\n>> AFAIK, there's no special support in FDWs for \"tuple moving\" as such. The\n>> \"if the FDW supports it\" refers to the FDW's ability to handle tuple\n>> routing. Note that moving/re-routing involves calling\n>> ExecPrepareTupleRouting followed by ExecInsert on the new tupls after the\n>> old tuple is deleted. If an FDW doesn't support tuple routing, then a\n>> tuple cannot be moved into it. That's what that text is talking about.\n>>\n>> Maybe, we should reword it as \"if the FDW supports tuple routing\", so that\n>> a reader doesn't go looking around for \"tuple moving support\" in FDWs.\n> \n> I think you missed my point. If there's no special support for \"tuple\n> moving\", as you say, then what help is it to tell the user \"if the FDW\n> supports tuple routing\"? The answer is, it's not any help. 
How would\n> the user check such a fact?\n\nHmm, maybe getting the following error, like one would get in PG 10 when\nusing postgres_fdw-managed partitions:\n\nERROR: cannot route inserted tuples to a foreign table\n\nGetting the above error is perhaps not the best way for a user to learn of\nthis fact, but maybe we (and hopefully other FDW authors) mention this in\nthe documentation?\n\n> As far as I can tell, this is just the requirements as defined in\n> CheckValidResultRel() for CMD_INSERT. Fragments of which I pasted\n> above.\n\nOnly supporting INSERT doesn't suffice though. An FDW which intends to\nsupport tuple routing and hence 1-way tuple moving needs to be updated like\npostgres_fdw was in PG 11.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 6 Mar 2019 13:53:12 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/06 13:18), Amit Langote wrote:\n> The main problem here is indeed that the limitation is not listed under\n> the partitioning limitations in ddl.sgml, where it's easier to notice than\n> in the UPDATE's page.\n\nAgreed.\n\n> I've updated my patch to remove the release-11.sgml\n> changes.\n\nThanks for the updated patch!\n\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3376,6 +3376,13 @@ ALTER TABLE measurement ATTACH PARTITION \nmeasurement_y2008m02\n </para>\n </listitem>\n\n+ <listitem>\n+ <para>\n+ <command>UPDATE</command> row movement is not supported in the cases\n+ where the old row is contained in a foreign table partition.\n+ </para>\n+ </listitem>\n\nISTM that it's also a limitation that rows can be moved from a local \npartition to a foreign partition *if the FDW support tuple routing*, so \nI would vote for mentioning that as well here.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 06 Mar 2019 15:10:38 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/06 13:53), Amit Langote wrote:\n> On 2019/03/06 13:30, David Rowley wrote:\n\n>> I think you missed my point. If there's no special support for \"tuple\n>> moving\", as you say, then what help is it to tell the user \"if the FDW\n>> supports tuple routing\"? The answer is, it's not any help. How would\n>> the user check such a fact?\n>\n> Hmm, maybe getting the following error, like one would get in PG 10 when\n> using postgres_fdw-managed partitions:\n>\n> ERROR: cannot route inserted tuples to a foreign table\n>\n> Getting the above error is perhaps not the best way for a user to learn of\n> this fact, but maybe we (and hopefully other FDW authors) mention this in\n> the documentation?\n\n+1\n\n>> As far as I can tell, this is just the requirements as defined in\n>> CheckValidResultRel() for CMD_INSERT. Fragments of which I pasted\n>> above.\n>\n> Only supporting INSERT doesn't suffice though. An FDW which intends to\n> support tuple routing and hence 1-way tuple moving needs to updated like\n> postgres_fdw was in PG 11.\n\nThat's right; the \"if the FDW supports it\" in the documentation refers \nto the FDW's support for the callback functions BeginForeignInsert() and \nEndForeignInsert() described in 57.2.4. FDW Routines For Updating \nForeign Tables [1] in addition to ExecForeignInsert(), as stated there:\n\n\"Tuples inserted into a partitioned table by INSERT or COPY FROM are \nrouted to partitions. If an FDW supports routable foreign-table \npartitions, it should also provide the following callback functions.\"\n\nBest regards,\nEtsuro Fujita\n\n[1] \nhttps://www.postgresql.org/docs/current/fdw-callbacks.html#FDW-CALLBACKS-UPDATE\n\n\n",
"msg_date": "Wed, 06 Mar 2019 15:16:25 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Fujita-san,\n\nOn 2019/03/06 15:10, Etsuro Fujita wrote:\n> --- a/doc/src/sgml/ddl.sgml\n> +++ b/doc/src/sgml/ddl.sgml\n> @@ -3376,6 +3376,13 @@ ALTER TABLE measurement ATTACH PARTITION\n> measurement_y2008m02\n> </para>\n> </listitem>\n> \n> + <listitem>\n> + <para>\n> + <command>UPDATE</command> row movement is not supported in the cases\n> + where the old row is contained in a foreign table partition.\n> + </para>\n> + </listitem>\n> \n> ISTM that it's also a limitation that rows can be moved from a local\n> partition to a foreign partition *if the FDW support tuple routing*, so I\n> would vote for mentioning that as well here.\n\nThanks for checking.\n\nI have updated the patch to include a line about this in the same\nparagraph, because maybe we don't need to make a new <listitem> for it.\n\nThanks,\nAmit",
"msg_date": "Wed, 6 Mar 2019 15:34:12 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/06 15:34), Amit Langote wrote:\n> On 2019/03/06 15:10, Etsuro Fujita wrote:\n>> --- a/doc/src/sgml/ddl.sgml\n>> +++ b/doc/src/sgml/ddl.sgml\n>> @@ -3376,6 +3376,13 @@ ALTER TABLE measurement ATTACH PARTITION\n>> measurement_y2008m02\n>> </para>\n>> </listitem>\n>>\n>> +<listitem>\n>> +<para>\n>> +<command>UPDATE</command> row movement is not supported in the cases\n>> + where the old row is contained in a foreign table partition.\n>> +</para>\n>> +</listitem>\n>>\n>> ISTM that it's also a limitation that rows can be moved from a local\n>> partition to a foreign partition *if the FDW support tuple routing*, so I\n>> would vote for mentioning that as well here.\n>\n> Thanks for checking.\n>\n> I have updated the patch to include a line about this in the same\n> paragraph, because maybe we don't need to make a new<listitem> for it.\n\nThanks for the patch!\n\nThe patch looks good to me, but one thing I'm wondering is: as suggested \nby David, it would be better to rephrase this mention in the UPDATE \nreference page, in a single commit:\n\n\"Currently, rows cannot be moved from a partition that is a foreign \ntable to some other partition, but they can be moved into a foreign \ntable if the foreign data wrapper supports it.\"\n\nI don't think it needs to be completely rephrased; it's enough for me to \nrewrite it to something like this:\n\n\"Currently, rows cannot be moved from a foreign-table partition to some \nother partition, but they can be moved into a foreign-table partition if \nthe foreign data wrapper supports tuple routing.\"\n\nAnd to make maintenance work easy, I think it might be better to just \nput this on the limitations section of 5.10. Table Partitioning. What \ndo you think about that?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 07 Mar 2019 21:35:03 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Thu, Mar 7, 2019 at 7:35 AM Etsuro Fujita\n<fujita.etsuro@lab.ntt.co.jp> wrote:\n> Thanks for the patch!\n>\n> The patch looks good to me, but one thing I'm wondering is: as suggested\n> by David, it would be better to rephrase this mention in the UPDATE\n> reference page, in a single commit:\n>\n> \"Currently, rows cannot be moved from a partition that is a foreign\n> table to some other partition, but they can be moved into a foreign\n> table if the foreign data wrapper supports it.\"\n>\n> I don't think it needs to be completely rephrased; it's enough for me to\n> rewrite it to something like this:\n>\n> \"Currently, rows cannot be moved from a foreign-table partition to some\n> other partition, but they can be moved into a foreign-table partition if\n> the foreign data wrapper supports tuple routing.\"\n\nI prefer David's wording.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Thu, 7 Mar 2019 08:54:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019/03/07 22:54, Robert Haas wrote:\n> On Thu, Mar 7, 2019 at 7:35 AM Etsuro Fujita\n> <fujita.etsuro@lab.ntt.co.jp> wrote:\n>> Thanks for the patch!\n>>\n>> The patch looks good to me, but one thing I'm wondering is: as suggested\n>> by David, it would be better to rephrase this mention in the UPDATE\n>> reference page, in a single commit:\n>>\n>> \"Currently, rows cannot be moved from a partition that is a foreign\n>> table to some other partition, but they can be moved into a foreign\n>> table if the foreign data wrapper supports it.\"\n>>\n>> I don't think it needs to be completely rephrased; it's enough for me to\n>> rewrite it to something like this:\n>>\n>> \"Currently, rows cannot be moved from a foreign-table partition to some\n>> other partition, but they can be moved into a foreign-table partition if\n>> the foreign data wrapper supports tuple routing.\"\n> \n> I prefer David's wording.\n\nIIUC, David's suggestion [1] is to change the existing wording, which he\nfound hard to parse, to something like Fujita-san is suggesting.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/CAKJS1f-SauQJftjcaQ7C_tzHh_be5C8shT-E9qYnVp%2Bjh4-Fww%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 8 Mar 2019 09:36:07 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "Thanks for the review.\n\nOn 2019/03/07 21:35, Etsuro Fujita wrote:\n> The patch looks good to me, but one thing I'm wondering is: as suggested\n> by David, it would be better to rephrase this mention in the UPDATE\n> reference page, in a single commit:\n> \n> \"Currently, rows cannot be moved from a partition that is a foreign table\n> to some other partition, but they can be moved into a foreign table if the\n> foreign data wrapper supports it.\"\n> \n> I don't think it needs to be completely rephrased; it's enough for me to\n> rewrite it to something like this:\n> \n> \"Currently, rows cannot be moved from a foreign-table partition to some\n> other partition, but they can be moved into a foreign-table partition if\n> the foreign data wrapper supports tuple routing.\"\n> \n> And to make maintenance work easy, I think it might be better to just put\n> this on the limitations section of 5.10. Table Partitioning. What do you\n> think about that?\n\nI agree, so updated the patch this way.\n\nDavid, can you confirm if the rewritten text reads unambiguous or perhaps\nsuggest a better wording?\n\nThanks,\nAmit",
"msg_date": "Fri, 8 Mar 2019 11:06:55 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Fri, 8 Mar 2019 at 15:07, Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> David, can you confirm if the rewritten text reads unambiguous or perhaps\n> suggest a better wording?\n\nSo this is the text:\n\n+ Currently, rows cannot be moved from a foreign-table partition to some\n+ other partition, but they can be moved into a foreign-table partition if\n+ the foreign data wrapper supports tuple routing.\n\nI read this to mean that rows cannot normally be moved out of a\nforeign-table partition unless the new partition is a foreign one that\nuses an FDW that supports tuple routing.\n\nSo let's test that:\n\ncreate extension postgres_fdw ;\ndo $$ begin execute 'create server loopback foreign data wrapper\npostgres_fdw options (dbname ''' || current_database() || ''');'; end;\n$$;\ncreate user mapping for current_user server loopback;\n\ncreate table listp (a int) partition by list (a);\ncreate table listp1 (a int, check (a = 1));\ncreate table listp2 (a int, check (a = 2));\n\ncreate foreign table listpf1 partition of listp for values in (1)\nserver loopback options (table_name 'listp1');\ncreate foreign table listpf2 partition of listp for values in (2)\nserver loopback options (table_name 'listp2');\n\ninsert into listp values (1);\n\nupdate listp set a = 2 where a = 1;\nERROR: new row for relation \"listp1\" violates check constraint \"listp1_a_check\"\nDETAIL: Failing row contains (2).\nCONTEXT: remote SQL command: UPDATE public.listp1 SET a = 2 WHERE ((a = 1))\n\nI'd be filing a bug report for that as I'm moving a row into a foreign\ntable with an FDW that supports tuple routing.\n\nWhere I think you're going wrong is, in one part of the sentence\nyou're talking about UPDATE, then in the next part you seem to\nmagically jump to talking about INSERT. 
Since the entire paragraph is\ntalking about UPDATE, why is it relevant to talk about INSERT?\n\nI thought my doc_confirm_foreign_partition_limitations.patch had this\npretty clear with:\n\n+ <listitem>\n+ <para>\n+ Currently, an <command>UPDATE</command> of a partitioned table cannot\n+ move rows out of a foreign partition into another partition.\n+ </para>\n+ </listitem>\n\nOr is my understanding of this incorrect? I also think the new\nparagraph is a good move as it's a pretty restrictive limitation for\nanyone that wants to set up a partition hierarchy with foreign\npartitions.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 23:29:30 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "(2019/03/08 19:29), David Rowley wrote:\n> On Fri, 8 Mar 2019 at 15:07, Amit Langote<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> David, can you confirm if the rewritten text reads unambiguous or perhaps\n>> suggest a better wording?\n>\n> So this is the text:\n>\n> + Currently, rows cannot be moved from a foreign-table partition to some\n> + other partition, but they can be moved into a foreign-table partition if\n> + the foreign data wrapper supports tuple routing.\n>\n> I read this to mean that rows cannot normally be moved out of a\n> foreign-table partition unless the new partition is a foreign one that\n> uses an FDW that supports tuple routing.\n>\n> So let's test that:\n>\n> create extension postgres_fdw ;\n> do $$ begin execute 'create server loopback foreign data wrapper\n> postgres_fdw options (dbname ''' || current_database() || ''');'; end;\n> $$;\n> create user mapping for current_user server loopback;\n>\n> create table listp (a int) partition by list (a);\n> create table listp1 (a int, check (a = 1));\n> create table listp2 (a int, check (a = 2));\n>\n> create foreign table listpf1 partition of listp for values in (1)\n> server loopback options (table_name 'listp1');\n> create foreign table listpf2 partition of listp for values in (2)\n> server loopback options (table_name 'listp2');\n>\n> insert into listp values (1);\n>\n> update listp set a = 2 where a = 1;\n> ERROR: new row for relation \"listp1\" violates check constraint \"listp1_a_check\"\n> DETAIL: Failing row contains (2).\n> CONTEXT: remote SQL command: UPDATE public.listp1 SET a = 2 WHERE ((a = 1))\n>\n> I'd be filing a bug report for that as I'm moving a row into a foreign\n> table with an FDW that supports tuple routing.\n\nFair enough.\n\n> Where I think you're going wrong is, in one part of the sentence\n> you're talking about UPDATE, then in the next part you seem to\n> magically jump to talking about INSERT. 
Since the entire paragraph is\n> talking about UPDATE, why is it relevant to talk about INSERT?\n>\n> I thought my doc_confirm_foreign_partition_limitations.patch had this\n> pretty clear with:\n>\n> +<listitem>\n> +<para>\n> + Currently, an<command>UPDATE</command> of a partitioned table cannot\n> + move rows out of a foreign partition into another partition.\n> +</para>\n> +</listitem>\n>\n> Or is my understanding of this incorrect?\n\nIMO I think it's better that we also mention that the UPDATE can move \nrows into a foreign partition if the FDW supports it. No?\n\n> I also think the new\n> paragraph is a good move as it's a pretty restrictive limitation for\n> anyone that wants to set up a partition hierarchy with foreign\n> partitions.\n\n+1\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 08 Mar 2019 20:09:23 +0900",
"msg_from": "Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Sat, 9 Mar 2019 at 00:09, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n> IMO I think it's better that we also mention that the UPDATE can move\n> rows into a foreign partition if the FDW supports it. No?\n\nIn my opinion, there's not much need to talk about what the\nlimitations are not when you're mentioning what the limitations are.\nMaybe it would be worth it if the text was slightly unclear on what's\naffected, but I thought my version was fairly clear.\n\nIf you think that it's still unclear, then I wouldn't object to adding\n\" There is no such restriction on <command>UPDATE</command> row\nmovements out of native partitions into foreign ones.\". Obviously,\nit's got to be clear for everyone, not just the person who wrote it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sat, 9 Mar 2019 00:21:35 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Fri, Mar 8, 2019 at 8:21 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Sat, 9 Mar 2019 at 00:09, Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp> wrote:\n> > IMO I think it's better that we also mention that the UPDATE can move\n> > rows into a foreign partition if the FDW supports it. No?\n>\n> In my opinion, there's not much need to talk about what the\n> limitations are not when you're mentioning what the limitations are.\n\nIn this case, I think it might be OK to contrast the two cases so that\nit's clear exactly which side doesn't work and which works. We also\nhave the following text in limitations:\n\n <listitem>\n <para>\n While primary keys are supported on partitioned tables, foreign\n keys referencing partitioned tables are not supported. (Foreign key\n references from a partitioned table to some other table are supported.)\n </para>\n </listitem>\n\n> Maybe it would be worth it if the text was slightly unclear on what's\n> affected, but I thought my version was fairly clear.\n>\n> If you think that it's still unclear, then I wouldn't object to adding\n> \" There is no such restriction on <command>UPDATE</command> row\n> movements out of native partitions into foreign ones.\". 
Obviously,\n> it's got to be clear for everyone, not just the person who wrote it.\n\nOK, how about this:\n\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3877,6 +3877,15 @@ ALTER TABLE measurement ATTACH PARTITION\nmeasurement_y2008m02\n </para>\n </listitem>\n\n+ <listitem>\n+ <para>\n+ While rows can be moved from local partitions to a foreign-table\n+ partition (provided the foreign data wrapper supports tuple routing),\n+ they cannot be moved from a foreign-table partition to some\n+ other partition.\n+ </para>\n+ </listitem>\n+\n <listitem>\n <para>\n <literal>BEFORE ROW</literal> triggers, if necessary, must be defined\n\ndiff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml\nindex 77430a586c..f5cf8eab85 100644\n--- a/doc/src/sgml/ref/update.sgml\n+++ b/doc/src/sgml/ref/update.sgml\n@@ -291,9 +291,9 @@ UPDATE <replaceable class=\"parameter\">count</replaceable>\n concurrent <command>UPDATE</command> or <command>DELETE</command> on the\n same row may miss this row. For details see the section\n <xref linkend=\"ddl-partitioning-declarative-limitations\"/>.\n- Currently, rows cannot be moved from a partition that is a\n- foreign table to some other partition, but they can be moved into a foreign\n- table if the foreign data wrapper supports it.\n+ While rows can be moved from local partitions to a foreign-table partition\n+ partition (provided the foreign data wrapper supports tuple routing), they\n+ cannot be moved from a foreign-table partition to some other partition.\n </para>\n </refsect1>\n\nAt least in update.sgml, we describe that row movement is implemented\nas DELETE+INSERT, so I think maybe it's OK to mention tuple routing\nwhen mentioning why that 1-way movement works. 
If someone's using an\nFDW that doesn't support tuple routing to begin with, they'll be get\nan error when trying to move rows from a local partition to a foreign\ntable partition using that FDW, which is this:\n\nERROR: cannot route inserted tuples to a foreign table\n\nThen maybe they will come back to the read limitations and see why the\ntuple movement didn't work.\n\nThanks,\nAmit",
"msg_date": "Fri, 8 Mar 2019 22:52:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019-Mar-08, Amit Langote wrote:\n\n> diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml\n> index 77430a586c..f5cf8eab85 100644\n> --- a/doc/src/sgml/ref/update.sgml\n> +++ b/doc/src/sgml/ref/update.sgml\n> @@ -291,9 +291,9 @@ UPDATE <replaceable class=\"parameter\">count</replaceable>\n> concurrent <command>UPDATE</command> or <command>DELETE</command> on the\n> same row may miss this row. For details see the section\n> <xref linkend=\"ddl-partitioning-declarative-limitations\"/>.\n> - Currently, rows cannot be moved from a partition that is a\n> - foreign table to some other partition, but they can be moved into a foreign\n> - table if the foreign data wrapper supports it.\n> + While rows can be moved from local partitions to a foreign-table partition\n> + partition (provided the foreign data wrapper supports tuple routing), they\n> + cannot be moved from a foreign-table partition to some other partition.\n> </para>\n> </refsect1>\n\nLGTM. Maybe I'd change \"some other\" to \"another\", but maybe on a\ndifferent phase of the moon I'd leave it alone.\n\nI'm not sure about copying the same to ddl.sgml. Why is that needed?\nUpdate is not DDL. ddl.sgml does say this: \"Partitions can also be\nforeign tables, although they have some limitations that normal tables\ndo not; see CREATE FOREIGN TABLE for more information.\" which suggests\nthat the limitation might need to be added to create_foreign_table.sgml.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 11:09:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Fri, Mar 8, 2019 at 11:09 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-08, Amit Langote wrote:\n>\n> > diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml\n> > index 77430a586c..f5cf8eab85 100644\n> > --- a/doc/src/sgml/ref/update.sgml\n> > +++ b/doc/src/sgml/ref/update.sgml\n> > @@ -291,9 +291,9 @@ UPDATE <replaceable class=\"parameter\">count</replaceable>\n> > concurrent <command>UPDATE</command> or <command>DELETE</command> on the\n> > same row may miss this row. For details see the section\n> > <xref linkend=\"ddl-partitioning-declarative-limitations\"/>.\n> > - Currently, rows cannot be moved from a partition that is a\n> > - foreign table to some other partition, but they can be moved into a foreign\n> > - table if the foreign data wrapper supports it.\n> > + While rows can be moved from local partitions to a foreign-table partition\n> > + partition (provided the foreign data wrapper supports tuple routing), they\n> > + cannot be moved from a foreign-table partition to some other partition.\n> > </para>\n> > </refsect1>\n>\n> LGTM. Maybe I'd change \"some other\" to \"another\", but maybe on a\n> different phase of the moon I'd leave it alone.\n\nDone.\n\n> I'm not sure about copying the same to ddl.sgml. Why is that needed?\n> Update is not DDL.\n\nHmm, maybe because there's already a huge block of text describing\ncertain limitations of UPDATE row movement under concurrency?\n\nActually, I remember commenting *against* having that text in\nddl.sgml, but it got in there anyway.\n\n> ddl.sgml does say this: \"Partitions can also be\n> foreign tables, although they have some limitations that normal tables\n> do not; see CREATE FOREIGN TABLE for more information.\" which suggests\n> that the limitation might need to be added to create_foreign_table.sgml.\n\nActually, that \"more information\" never got added to\ncreate_foreign_table.sgml. 
There should've been some text about the\nlack for tuple routing at least in PG 10's docs, but I guess that\nnever happened. Should we start now by listing this UPDATE row\nmovement limitation?\n\nAnyway, I've only attached the patch for update.sgml this time.\n\nThanks,\nAmit",
"msg_date": "Fri, 8 Mar 2019 23:43:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019-Mar-08, Amit Langote wrote:\n\n> On Fri, Mar 8, 2019 at 11:09 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> > I'm not sure about copying the same to ddl.sgml. Why is that needed?\n> > Update is not DDL.\n> \n> Hmm, maybe because there's already a huge block of text describing\n> certain limitations of UPDATE row movement under concurrency?\n\nUh, you're right, there is. That seems misplaced :-( I'm not sure it\neven counts as a \"limitation\"; it seems to belong to the NOTES section\nof UPDATE rather than where it is now.\n\n> Actually, I remember commenting *against* having that text in\n> ddl.sgml, but it got in there anyway.\n\nWe can move it now ...\n\n> > ddl.sgml does say this: \"Partitions can also be\n> > foreign tables, although they have some limitations that normal tables\n> > do not; see CREATE FOREIGN TABLE for more information.\" which suggests\n> > that the limitation might need to be added to create_foreign_table.sgml.\n> \n> Actually, that \"more information\" never got added to\n> create_foreign_table.sgml. There should've been some text about the\n> lack for tuple routing at least in PG 10's docs, but I guess that\n> never happened.\n\nSigh.\n\nSince version 10 is going to be supported for a few years still, maybe\nwe should add it there.\n\n> Should we start now by listing this UPDATE row movement limitation?\n\nI think we should, yes.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 8 Mar 2019 12:03:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On Sat, Mar 9, 2019 at 12:03 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-08, Amit Langote wrote:\n>\n> > On Fri, Mar 8, 2019 at 11:09 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > > I'm not sure about copying the same to ddl.sgml. Why is that needed?\n> > > Update is not DDL.\n> >\n> > Hmm, maybe because there's already a huge block of text describing\n> > certain limitations of UPDATE row movement under concurrency?\n>\n> Uh, you're right, there is. That seems misplaced :-( I'm not sure it\n> even counts as a \"limitation\"; it seems to belong to the NOTES section\n> of UPDATE rather than where it is now.\n>\n> > Actually, I remember commenting *against* having that text in\n> > ddl.sgml, but it got in there anyway.\n>\n> We can move it now ...\n>\n> > > ddl.sgml does say this: \"Partitions can also be\n> > > foreign tables, although they have some limitations that normal tables\n> > > do not; see CREATE FOREIGN TABLE for more information.\" which suggests\n> > > that the limitation might need to be added to create_foreign_table.sgml.\n> >\n> > Actually, that \"more information\" never got added to\n> > create_foreign_table.sgml. There should've been some text about the\n> > lack for tuple routing at least in PG 10's docs, but I guess that\n> > never happened.\n>\n> Sigh.\n>\n> Since version 10 is going to be supported for a few years still, maybe\n> we should add it there.\n>\n> > Should we start now by listing this UPDATE row movement limitation?\n>\n> I think we should, yes.\n\nAttached find 3 patches -- for PG 10, 11, and HEAD. I also realizes\nthat a description of PARTITION OF clause was also missing in the\nParameters section of CREATE FOREIGN TABLE, which is fixed too.\n\nThanks,\nAmit",
"msg_date": "Sat, 9 Mar 2019 01:13:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019-Mar-09, Amit Langote wrote:\n\n> Attached find 3 patches -- for PG 10, 11, and HEAD. I also realizes\n> that a description of PARTITION OF clause was also missing in the\n> Parameters section of CREATE FOREIGN TABLE, which is fixed too.\n\nThanks! Applied all three -- I appreciate your help in getting this\nsorted out. (There were a number of xml/sgml errors, plus one typo,\nthough.)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 12 Mar 2019 13:04:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "As the original reporter, thanks a ton for all the hard work you're putting\ninto the documentation!\n\nOn Tue, Mar 12, 2019, 12:04 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Mar-09, Amit Langote wrote:\n>\n> > Attached find 3 patches -- for PG 10, 11, and HEAD. I also realizes\n> > that a description of PARTITION OF clause was also missing in the\n> > Parameters section of CREATE FOREIGN TABLE, which is fixed too.\n>\n> Thanks! Applied all three -- I appreciate your help in getting this\n> sorted out. (There were a number of xml/sgml errors, plus one typo,\n> though.)\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nAs the original reporter, thanks a ton for all the hard work you're putting into the documentation!On Tue, Mar 12, 2019, 12:04 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Mar-09, Amit Langote wrote:\n\n> Attached find 3 patches -- for PG 10, 11, and HEAD. I also realizes\n> that a description of PARTITION OF clause was also missing in the\n> Parameters section of CREATE FOREIGN TABLE, which is fixed too.\n\nThanks! Applied all three -- I appreciate your help in getting this\nsorted out. (There were a number of xml/sgml errors, plus one typo,\nthough.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 12 Mar 2019 18:36:45 -0400",
"msg_from": "Derek Hans <derek.hans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
},
{
"msg_contents": "On 2019/03/13 1:04, Alvaro Herrera wrote:\n> On 2019-Mar-09, Amit Langote wrote:\n> \n>> Attached find 3 patches -- for PG 10, 11, and HEAD. I also realizes\n>> that a description of PARTITION OF clause was also missing in the\n>> Parameters section of CREATE FOREIGN TABLE, which is fixed too.\n> \n> Thanks! Applied all three -- I appreciate your help in getting this\n> sorted out. (There were a number of xml/sgml errors, plus one typo,\n> though.)\n\nAh, thanks for fixing and committing.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 13 Mar 2019 09:19:17 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Update does not move row across foreign partitions in v11"
}
] |
[
{
"msg_contents": "ri_triggers.c is endlessly long and repetitive. I want to clean it up a\nbit (more).\n\nI looked into all these switch cases for the unimplemented MATCH PARTIAL\noption. I toyed around with how a MATCH PARTIAL implementation would\nactually look like, and it likely wouldn't use the existing code\nstructure anyway, so let's just simplify this for now.\n\nFirst, have ri_FetchConstraintInfo() check that riinfo->confmatchtype\nis valid. Then we don't have to repeat that everywhere.\n\nIn the various referential action functions, we don't need to pay\nattention to the match type at all right now, so remove all that code.\nA future MATCH PARTIAL implementation would probably have some\nconditions added to the present code, but it won't need an entirely\nseparate switch branch in each case.\n\nIn RI_FKey_fk_upd_check_required(), reorganize the code to make it\nmuch simpler.\n\nSeparately, the comment style is also very generous and wasteful with\nvertical space. That can be shrunk a bit.\n\nAttached are some patches.\n\nFinal score:\n\nbranch wc -l\nREL9_6_STABLE 3671\nREL_10_STABLE 3668\nREL_11_STABLE 3179\nmaster 3034\npatch 2695\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 22 Feb 2019 17:05:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "some ri_triggers.c cleanup"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 11:05 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> ri_triggers.c is endlessly long and repetitive. I want to clean it up a\n> bit (more).\n>\n\nHaving just been down this road, I agree that a lot of cleanup is needed\nand possible.\n\n\n> I looked into all these switch cases for the unimplemented MATCH PARTIAL\n> option. I toyed around with how a MATCH PARTIAL implementation would\n> actually look like, and it likely wouldn't use the existing code\n> structure anyway, so let's just simplify this for now.\n>\n\n+1\n\n\n\n> Attached are some patches.\n\n\nI intend to look this over in much greater detail, but I did skim the code\nand it seems like you left the SET DEFAULT and SET NULL paths separate. In\nmy attempt at statement level triggers I realized that they only differed\nby the one literal value, and parameterized the function.\n\nOn Fri, Feb 22, 2019 at 11:05 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:ri_triggers.c is endlessly long and repetitive. I want to clean it up a\nbit (more).Having just been down this road, I agree that a lot of cleanup is needed and possible. \nI looked into all these switch cases for the unimplemented MATCH PARTIAL\noption. I toyed around with how a MATCH PARTIAL implementation would\nactually look like, and it likely wouldn't use the existing code\nstructure anyway, so let's just simplify this for now.+1 Attached are some patches.I intend to look this over in much greater detail, but I did skim the code and it seems like you left the SET DEFAULT and SET NULL paths separate. In my attempt at statement level triggers I realized that they only differed by the one literal value, and parameterized the function.",
"msg_date": "Fri, 22 Feb 2019 13:12:42 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: some ri_triggers.c cleanup"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 1:12 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n> On Fri, Feb 22, 2019 at 11:05 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>\n>> ri_triggers.c is endlessly long and repetitive. I want to clean it up a\n>> bit (more).\n>>\n>\n> Having just been down this road, I agree that a lot of cleanup is needed\n> and possible.\n>\n>\n>> I looked into all these switch cases for the unimplemented MATCH PARTIAL\n>> option. I toyed around with how a MATCH PARTIAL implementation would\n>> actually look like, and it likely wouldn't use the existing code\n>> structure anyway, so let's just simplify this for now.\n>>\n>\n> +1\n>\n>\n>\n>> Attached are some patches.\n>\n>\n> I intend to look this over in much greater detail, but I did skim the code\n> and it seems like you left the SET DEFAULT and SET NULL paths separate. In\n> my attempt at statement level triggers I realized that they only differed\n> by the one literal value, and parameterized the function.\n>\n>\n\nI've looked it over more closely now and I think that it's a nice\nimprovement.\n\nAs I suspected, the code for SET NULL and SET DEFAULT are highly similar\n(see .diff), the major difference being two constants, the order of some\nvariable declarations, and the recheck in the set-default case.\n\nThe changes were so simple that I felt remiss not adding the patch for you\n(see .patch).\n\nPasses make check.",
"msg_date": "Sat, 23 Feb 2019 18:34:32 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: some ri_triggers.c cleanup"
},
{
"msg_contents": "On 2019-02-24 00:34, Corey Huinker wrote:\n> As I suspected, the code for SET NULL and SET DEFAULT are highly similar\n> (see .diff), the major difference being two constants, the order of some\n> variable declarations, and the recheck in the set-default case.\n> \n> The changes were so simple that I felt remiss not adding the patch for\n> you (see .patch).\n\nRight, this makes a lot of sense, similar to how ri_restrict() combines\nRESTRICT and NO ACTION.\n\nI'll take a closer look at your patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Feb 2019 09:33:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: some ri_triggers.c cleanup"
},
{
"msg_contents": ">\n> Right, this makes a lot of sense, similar to how ri_restrict() combines\n> RESTRICT and NO ACTION.\n>\n\nI'm pretty sure that's where I got the idea, yes.\n\nRight, this makes a lot of sense, similar to how ri_restrict() combines\nRESTRICT and NO ACTION.I'm pretty sure that's where I got the idea, yes.",
"msg_date": "Mon, 25 Feb 2019 11:17:05 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: some ri_triggers.c cleanup"
},
{
"msg_contents": "On 2019-02-25 17:17, Corey Huinker wrote:\n> Right, this makes a lot of sense, similar to how ri_restrict() combines\n> RESTRICT and NO ACTION.\n> \n> \n> I'm pretty sure that's where I got the idea, yes. \n\nCommitted, including your patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 28 Feb 2019 20:45:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: some ri_triggers.c cleanup"
}
] |
[
{
"msg_contents": "Fix plan created for inherited UPDATE/DELETE with all tables excluded.\n\nIn the case where inheritance_planner() finds that every table has\nbeen excluded by constraints, it thought it could get away with\nmaking a plan consisting of just a dummy Result node. While certainly\nthere's no updating or deleting to be done, this had two user-visible\nproblems: the plan did not report the correct set of output columns\nwhen a RETURNING clause was present, and if there were any\nstatement-level triggers that should be fired, it didn't fire them.\n\nHence, rather than only generating the dummy Result, we need to\nstick a valid ModifyTable node on top, which requires a tad more\neffort here.\n\nIt's been broken this way for as long as inheritance_planner() has\nknown about deleting excluded subplans at all (cf commit 635d42e9c),\nso back-patch to all supported branches.\n\nAmit Langote and Tom Lane, per a report from Petr Fedorov.\n\nDiscussion: https://postgr.es/m/5da6f0f0-1364-1876-6978-907678f89a3e@phystech.edu\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ab5fcf2b04f9cc4ecccb1832faabadb047087d23\n\nModified Files\n--------------\nsrc/backend/optimizer/plan/planner.c | 68 +++++++++++++++++++++++-----------\nsrc/test/regress/expected/inherit.out | 41 ++++++++++++++++++++\nsrc/test/regress/expected/triggers.out | 34 +++++++++++++++++\nsrc/test/regress/sql/inherit.sql | 15 ++++++++\nsrc/test/regress/sql/triggers.sql | 27 ++++++++++++++\n5 files changed, 163 insertions(+), 22 deletions(-)\n\n",
"msg_date": "Fri, 22 Feb 2019 17:23:47 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix plan created for inherited UPDATE/DELETE with all tables\n exc"
},
{
"msg_contents": "On 2019/02/23 2:23, Tom Lane wrote:\n> Fix plan created for inherited UPDATE/DELETE with all tables excluded.\n> \n> In the case where inheritance_planner() finds that every table has\n> been excluded by constraints, it thought it could get away with\n> making a plan consisting of just a dummy Result node. While certainly\n> there's no updating or deleting to be done, this had two user-visible\n> problems: the plan did not report the correct set of output columns\n> when a RETURNING clause was present, and if there were any\n> statement-level triggers that should be fired, it didn't fire them.\n> \n> Hence, rather than only generating the dummy Result, we need to\n> stick a valid ModifyTable node on top, which requires a tad more\n> effort here.\n> \n> It's been broken this way for as long as inheritance_planner() has\n> known about deleting excluded subplans at all (cf commit 635d42e9c),\n> so back-patch to all supported branches.\n\nI noticed that we may have forgotten to fix one more thing in this commit\n-- nominalRelation may refer to a range table entry that's not referenced\nelsewhere in the finished plan:\n\ncreate table parent (a int);\ncreate table child () inherits (parent);\nexplain verbose update parent set a = a where false;\n QUERY PLAN\n───────────────────────────────────────────────────────────\n Update on public.parent (cost=0.00..0.00 rows=0 width=0)\n Update on public.parent\n -> Result (cost=0.00..0.00 rows=0 width=0)\n Output: a, ctid\n One-Time Filter: false\n(5 rows)\n\nI think the \"Update on public.parent\" shown in the 2nd row is unnecessary,\nbecause it won't really be updated. It's shown because nominalRelation\ndoesn't match resultRelation, which prompts explain.c to to print the\nchild target relations separately per this code in show_modifytable_info():\n\n /* Should we explicitly label target relations? 
*/\n labeltargets = (mtstate->mt_nplans > 1 ||\n (mtstate->mt_nplans == 1 &&\n mtstate->resultRelInfo->ri_RangeTableIndex !=\nnode->nominalRelation));\n\nAttached a patch to adjust nominalRelation in this case so that \"parent\"\nwon't be shown a second time, as follows:\n\nexplain verbose update parent set a = a where false;\n QUERY PLAN\n───────────────────────────────────────────────────────────\n Update on public.parent (cost=0.00..0.00 rows=0 width=0)\n -> Result (cost=0.00..0.00 rows=0 width=0)\n Output: parent.a, parent.ctid\n One-Time Filter: false\n(4 rows)\n\nAs you may notice, Result node's targetlist is shown differently than\nbefore, that is, columns are shown prefixed with table name. In the old\noutput, the prefix or \"refname\" ends up NULL, because the Result node's\ntargetlist uses resultRelation (1) as varno, which is not referenced\nanywhere in the plan tree (for ModifyTable, explain.c prefers to use\nnominalRelation instead of resultRelation). With patch applied,\nnominalRelation == resultRelation, so it is referenced and hence its\n\"refname\" non-NULL. Maybe this change is fine though?\n\nThanks,\nAmit",
"msg_date": "Thu, 18 Apr 2019 11:42:49 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix plan created for inherited UPDATE/DELETE with all\n tables exc"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/02/23 2:23, Tom Lane wrote:\n>> Fix plan created for inherited UPDATE/DELETE with all tables excluded.\n\n> I noticed that we may have forgotten to fix one more thing in this commit\n> -- nominalRelation may refer to a range table entry that's not referenced\n> elsewhere in the finished plan:\n\n> create table parent (a int);\n> create table child () inherits (parent);\n> explain verbose update parent set a = a where false;\n> QUERY PLAN\n> ───────────────────────────────────────────────────────────\n> Update on public.parent (cost=0.00..0.00 rows=0 width=0)\n> Update on public.parent\n> -> Result (cost=0.00..0.00 rows=0 width=0)\n> Output: a, ctid\n> One-Time Filter: false\n> (5 rows)\n\n> I think the \"Update on public.parent\" shown in the 2nd row is unnecessary,\n> because it won't really be updated.\n\nWell, perhaps, but nonetheless that's what the plan tree shows.\nMoreover, even though it will receive no row changes, we're going to fire\nstatement-level triggers on it. So I'm not entirely convinced that\nsuppressing it is a step forward ...\n\n> As you may notice, Result node's targetlist is shown differently than\n> before, that is, columns are shown prefixed with table name.\n\n... and that definitely isn't one.\n\nI also think that your patch largely falsifies the discussion at 1543ff:\n\n * Set the nominal target relation of the ModifyTable node if not\n * already done. If the target is a partitioned table, we already set\n * nominalRelation to refer to the partition root, above. 
For\n * non-partitioned inheritance cases, we'll use the first child\n * relation (even if it's excluded) as the nominal target relation.\n * Because of the way expand_inherited_rtentry works, that should be\n * the RTE representing the parent table in its role as a simple\n * member of the inheritance set.\n *\n * It would be logically cleaner to *always* use the inheritance\n * parent RTE as the nominal relation; but that RTE is not otherwise\n * referenced in the plan in the non-partitioned inheritance case.\n * Instead the duplicate child RTE created by expand_inherited_rtentry\n * is used elsewhere in the plan, so using the original parent RTE\n * would give rise to confusing use of multiple aliases in EXPLAIN\n * output for what the user will think is the \"same\" table. OTOH,\n * it's not a problem in the partitioned inheritance case, because\n * there is no duplicate RTE for the parent.\n\nWe'd have to rethink exactly what the goals are if we want to change\nthe definition of nominalRelation like this.\n\nOne idea, perhaps, is to teach explain.c to not list partitioned tables\nas subsidiary targets, independently of nominalRelation. But I'm not\nconvinced that we need to do anything at all here. The existing output\nfor this case is exactly like it was in v10 and v11.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Apr 2019 15:41:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix plan created for inherited UPDATE/DELETE with all\n tables exc"
},
{
"msg_contents": "On 2019/04/19 4:41, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/02/23 2:23, Tom Lane wrote:\n>>> Fix plan created for inherited UPDATE/DELETE with all tables excluded.\n> \n>> I noticed that we may have forgotten to fix one more thing in this commit\n>> -- nominalRelation may refer to a range table entry that's not referenced\n>> elsewhere in the finished plan:\n> \n>> create table parent (a int);\n>> create table child () inherits (parent);\n>> explain verbose update parent set a = a where false;\n>> QUERY PLAN\n>> ───────────────────────────────────────────────────────────\n>> Update on public.parent (cost=0.00..0.00 rows=0 width=0)\n>> Update on public.parent\n>> -> Result (cost=0.00..0.00 rows=0 width=0)\n>> Output: a, ctid\n>> One-Time Filter: false\n>> (5 rows)\n> \n>> I think the \"Update on public.parent\" shown in the 2nd row is unnecessary,\n>> because it won't really be updated.\n> \n> Well, perhaps, but nonetheless that's what the plan tree shows.\n> Moreover, even though it will receive no row changes, we're going to fire\n> statement-level triggers on it.\n\nDoesn't \"Update on public.parent\" in the first line serve that purpose\nthough? It identifies exactly the table whose statement triggers will be\nfired. IOW, we are still listing the only table that the execution is\ngoing to do something meaningful with as part of this otherwise empty command.\n\n>> As you may notice, Result node's targetlist is shown differently than\n>> before, that is, columns are shown prefixed with table name.\n> \n> ... 
and that definitely isn't one.\n\nHmm, I'm thinking that showing the table name prefix would be better at\nleast in this case, because otherwise it's unclear whose columns the\nResult node is showing; with the prefix, it's clear they are parent's.\n\nHowever, it seems that for the same case but with a partitioned table\ntarget, the prefix is not shown even with the patch:\n\ncreate table p (a int, b text) partition by list (a);\ncreate table p1 partition of p for values in (1);\nexplain verbose update p set a = 1 where false returning *;\n QUERY PLAN\n──────────────────────────────────────────────────────\n Update on public.p (cost=0.00..0.00 rows=0 width=0)\n Output: a, b\n -> Result (cost=0.00..0.00 rows=0 width=0)\n Output: 1, b, ctid\n One-Time Filter: false\n(5 rows)\n\n...but for a totally unrelated reason. At least in HEAD, there's only one\nentry in the range table -- the original target table -- and no entries\nfor partitions due to our recent surgery of inheritance planning.\nexplain.c: show_plan_tlist() has the following rule to determine whether\nto show the prefix:\n\n useprefix = list_length(es->rtable) > 1;\n\nSame rule applies to the case where the target relation is a regular table.\n\nI guess such a discrepancy is not good.\n\n> I also think that your patch largely falsifies the discussion at 1543ff:\n> \n> * Set the nominal target relation of the ModifyTable node if not\n> * already done. If the target is a partitioned table, we already set\n> * nominalRelation to refer to the partition root, above. 
For\n> * non-partitioned inheritance cases, we'll use the first child\n> * relation (even if it's excluded) as the nominal target relation.\n> * Because of the way expand_inherited_rtentry works, that should be\n> * the RTE representing the parent table in its role as a simple\n> * member of the inheritance set.\n> *\n> * It would be logically cleaner to *always* use the inheritance\n> * parent RTE as the nominal relation; but that RTE is not otherwise\n> * referenced in the plan in the non-partitioned inheritance case.\n> * Instead the duplicate child RTE created by expand_inherited_rtentry\n> * is used elsewhere in the plan, so using the original parent RTE\n> * would give rise to confusing use of multiple aliases in EXPLAIN\n> * output for what the user will think is the \"same\" table. OTOH,\n> * it's not a problem in the partitioned inheritance case, because\n> * there is no duplicate RTE for the parent.\n> \n> We'd have to rethink exactly what the goals are if we want to change\n> the definition of nominalRelation like this.\n\nYeah, I missed updating this and maybe some other comments about\nnominalRelation.\n\n> One idea, perhaps, is to teach explain.c to not list partitioned tables\n> as subsidiary targets, independently of nominalRelation.\n\nWe don't list partitioned tables as subsidiary targets to begin with,\nbecause they're not added to resultRelations list, where the subsidiary\ntargets that are listed come from.\n\n> But I'm not\n> convinced that we need to do anything at all here. The existing output\n> for this case is exactly like it was in v10 and v11.\n\nNote that I'm only talking about the case in which all result relations\nhave been excluded, *irrespective* of whether the result relation is a\nregular inherited table or a partitioned table, which the commit at the\nhead of this discussion thread addressed in all branches. 
The only\ndifference between the the regular inheritance target relation case and\npartitioned table target relation case is that nominalRelation refers to\nthe first child member relation for the former, whereas it refers to the\noriginal target relation for the latter. Given that difference, the\nproblem I'm addressing with the proposed patch doesn't occur when the\ntarget relation is a partitioned table.\n\nIn a way, what we implicitly aimed for with the previously committed fix\nfor the empty UPDATE case is that the resulting plan and execution\nbehavior is same for all 3 types of target relations: regular table\n(handled totally in grouping_planner()), regular inherited table, and\npartitioned table (the latter two handled in inheritance_planner()). Per\nmy complaint here, that is not totally the case, which is simply a result\nof differences in the underlying implementation of those three cases, some\nof which I've mentioned above.\n\n-- on HEAD (and tips of other branches)\ncreate table foo (a int);\ncreate table foo1 () inherits (foo);\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1);\n\n-- only this case shows subsidiary target relation\nexplain verbose update foo set a = 1 where false returning *;\n QUERY PLAN\n────────────────────────────────────────────────────────\n Update on public.foo (cost=0.00..0.00 rows=0 width=0)\n Output: a\n Update on public.foo\n -> Result (cost=0.00..0.00 rows=0 width=0)\n Output: 1, ctid\n One-Time Filter: false\n(6 rows)\n\nexplain verbose update foo1 set a = 1 where false returning *;\n QUERY PLAN\n─────────────────────────────────────────────────────────\n Update on public.foo1 (cost=0.00..0.00 rows=0 width=0)\n Output: a\n -> Result (cost=0.00..0.00 rows=0 width=10)\n Output: 1, ctid\n One-Time Filter: false\n(5 rows)\n\nexplain verbose update p set a = 1 where false returning *;\n QUERY PLAN\n──────────────────────────────────────────────────────\n Update on public.p 
(cost=0.00..0.00 rows=0 width=0)\n Output: a\n -> Result (cost=0.00..0.00 rows=0 width=0)\n Output: 1, ctid\n One-Time Filter: false\n(5 rows)\n\nNow, there's a showstopper to my proposed patch (targetlists being\ndisplayed differently in different cases as illustrated above), but I do\nthink we should try to make the output uniform in some way.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 19 Apr 2019 16:07:17 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix plan created for inherited UPDATE/DELETE with all\n tables exc"
}
]
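The show_modifytable_info() test quoted in this thread boils down to a small predicate. A condensed, standalone sketch follows — the function form and parameter names here are hypothetical; in explain.c the values come from the ModifyTableState and the ModifyTable plan node:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Condensed restatement of the labeltargets decision quoted above from
 * explain.c's show_modifytable_info().  Subsidiary "Update on ..." lines
 * are printed only when this returns true: more than one subplan, or a
 * single subplan whose result relation index differs from the node's
 * nominalRelation -- which is exactly why the all-children-excluded case
 * prints the parent table a second time.
 */
static bool
label_targets(int nplans, int first_result_rti, int nominal_relation)
{
    return nplans > 1 ||
           (nplans == 1 && first_result_rti != nominal_relation);
}
```

With one surviving subplan whose RTI differs from nominalRelation, the predicate is true (the doubled "Update on public.parent" output); making nominalRelation match the result relation, as the proposed patch does, turns it false.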
[
{
"msg_contents": "The CTE change in PostgreSQL 12 broke several of PostGIS regression tests\nbecause many of our tests are negative tests that test to confirm we get\nwarnings in certain cases. In the past, these would output 1 notice because\nthe CTE was materialized, now they output 1 for each column.\n\nAn example is as follows:\n\nWITH data AS ( SELECT '#2911' l, ST_Metadata(ST_Rescale( ST_AddBand(\nST_MakeEmptyRaster(10, 10, 0, 0, 1, -1, 0, 0, 0), 1, '8BUI', 0, 0 ),\n2.0, -2.0 )) m ) SELECT l, (m).* FROM data;\n\nIn prior versions this raster test would return one notice:\n\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\n\nNow it returns 10 notices because the call is being done 10 times (1 for\neach column)\n\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\nNOTICE: Raster has default geotransform. Adjusting metadata for use of GDAL\nWarp API\n\nThe regression errors are easy enough to fix with OFFSET or subquery. What\nI'm more concerned about is that I expect we'll have performance\ndegradation.\n\nHistorically PostGIS functions haven't been costed right and can't be\nbecause they rely on INLINING of sql functions which gets broken when too\nhigh of cost is put on functions. 
We have a ton of functions like these\nthat return composite objects and this above function is particularly\nexpensive so to have it call that 10 times is almost guaranteed to be a\nperformance killer.\n\nI know there is a new MATERIALIZED keyword to get the old behavior, but\npeople are not going to be able to change their apps to introduce new\nkeywords, especially ones meant to be deployed by many versions of\nPostgreSQL.\n\nThat said IS THERE or can there be a GUC like \n\nset cte_materialized = on;\n\nto get the old behavior?\n\nThanks,\nRegina\nPostGIS PSC member\n\n\n\n",
"msg_date": "Fri, 22 Feb 2019 15:33:08 -0500",
"msg_from": "\"Regina Obe\" <lr@pcorp.us>",
"msg_from_op": true,
"msg_subject": "CTE Changes in PostgreSQL 12, can we have a GUC to get old behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-22 15:33:08 -0500, Regina Obe wrote:\n> The CTE change in PostgreSQL 12 broke several of PostGIS regression tests\n> because many of our tests are negative tests that test to confirm we get\n> warnings in certain cases. In the past, these would output 1 notice because\n> the CTE was materialized, now they output 1 for each column.\n> \n> An example is as follows:\n> \n> WITH data AS ( SELECT '#2911' l, ST_Metadata(ST_Rescale( ST_AddBand(\n> ST_MakeEmptyRaster(10, 10, 0, 0, 1, -1, 0, 0, 0), 1, '8BUI', 0, 0 ),\n> 2.0, -2.0 )) m ) SELECT l, (m).* FROM data;\n\n> The regression errors are easy enough to fix with OFFSET or subquery. What\n> I'm more concerned about is that I expect we'll have performance\n> degradation.\n> \n> Historically PostGIS functions haven't been costed right and can't be\n> because they rely on INLINING of sql functions which gets broken when too\n> high of cost is put on functions. We have a ton of functions like these\n> that return composite objects and this above function is particularly\n> expensive so to have it call that 10 times is almost guaranteed to be a\n> performance killer.\n\nI think there's a fair argument that we shouldn't inline in a way that\nincreases the number of function calls due to (foo).*. In fact, I'm\nmildly surprised that we do that?\n\n\n> That said IS THERE or can there be a GUC like \n> \n> set cte_materialized = on;\n> \n> to get the old behavior?\n\n-incredibly many. That'll just make it harder to understand what SQL\nmeans.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Feb 2019 12:39:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CTE Changes in PostgreSQL 12, can we have a GUC to get old\n behavior"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 3:33 PM Regina Obe <lr@pcorp.us> wrote:\n> Historically PostGIS functions haven't been costed right and can't be\n> because they rely on INLINING of sql functions which gets broken when too\n> high of cost is put on functions. We have a ton of functions like these\n> that return composite objects and this above function is particularly\n> expensive so to have it call that 10 times is almost guaranteed to be a\n> performance killer.\n\nThis is good evidence that using the cost to decide on inlining is a\nterrible idea and should be changed.\n\n> I know there is a new MATERIALIZED keyword to get the old behavior, but\n> people are not going to be able to change their apps to introduce new\n> keywords, especially ones meant to be deployed by many versions of\n> PostgreSQL.\n>\n> That said IS THERE or can there be a GUC like\n>\n> set cte_materialized = on;\n>\n> to get the old behavior?\n\nBehavior changing GUCs *really* suck. If we add such a GUC, it will\naffect not only PostGIS but everything run on the server -- and we\nmade this change because we believe it's going to improve things\noverall. I'm really reluctant to believe that it's right to encourage\npeople to go back in the opposite direction, especially because it\nmeans there will be no consistency from one PostgreSQL system to the\nnext.\n\nI think there are probably other ways of fixing this query that won't\nhave such dramatic effects; it doesn't really seem to need to use\nWITH, and I bet you could also tweak the WITH query to prevent\ninlining. I also think Andres's question about why this gets inlined\nin the first place is a good one; the (m).* seems like it ought to be\ncounted as a multiple reference.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 16:08:14 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CTE Changes in PostgreSQL 12,\n can we have a GUC to get old behavior"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-02-22 15:33:08 -0500, Regina Obe wrote:\n>> The CTE change in PostgreSQL 12 broke several of PostGIS regression tests\n>> because many of our tests are negative tests that test to confirm we get\n>> warnings in certain cases. In the past, these would output 1 notice because\n>> the CTE was materialized, now they output 1 for each column.\n>> An example is as follows:\n>> WITH data AS ( SELECT '#2911' l, ST_Metadata(ST_Rescale( ST_AddBand(\n>> ST_MakeEmptyRaster(10, 10, 0, 0, 1, -1, 0, 0, 0), 1, '8BUI', 0, 0 ),\n>> 2.0, -2.0 )) m ) SELECT l, (m).* FROM data;\n\n> I think there's a fair argument that we shouldn't inline in a way that\n> increases the number of function calls due to (foo).*. In fact, I'm\n> mildly surprised that we do that?\n\nWell, is (foo).* all that much different from multiple written-out calls\nto the function? But yeah, I'd rather try to address this complaint by\ntweaking the inlining rules. It would certainly help if the function\nhad a realistic (high) cost estimate, because if it pretends to be cheap,\nwe really *should* be willing to inline it, one would think.\n\n>> That said IS THERE or can there be a GUC like \n>> set cte_materialized = on;\n>> to get the old behavior?\n\n> -incredibly many. That'll just make it harder to understand what SQL\n> means.\n\nAgreed. Now that we bit that bullet, I don't want to make things even\nmessier with a GUC.\n\nThe thing to do if you want to force backwards-compatible behavior without\nusing new-in-v12 syntax is to insert good ol' OFFSET 0 in the CTE query.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 16:14:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CTE Changes in PostgreSQL 12,\n can we have a GUC to get old behavior"
},
{
"msg_contents": "> I think there are probably other ways of fixing this query that won't have\n> such dramatic effects; it doesn't really seem to need to use WITH, and I bet\n> you could also tweak the WITH query to prevent inlining. \n\nYes I know I can change THIS QUERY. I've changed other ones to work around this.\nNormally I just use a LATERAL for this.\n\nMy point is lots of people use CTEs intentionally for this kind of thing because they know they are materialized.\n\nIt's going to make a lot of people hesitant to upgrade if they think they need to revisit every CTE (that they intentionally wrote cause they thought it would be materialized) to throw in a MATERIALIZED keyword.\n\n> I also think\n> Andres's question about why this gets inlined in the first place is a good one;\n> the (m).* seems like it ought to be counted as a multiple reference.\n> \n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL\n> Company\nWell if we can at least prevent the multiple reference thing from inlining that might be good enough to solve most performance regression issues that arise.\n\nThanks,\nRegina\n\n\n\n\n\n",
"msg_date": "Fri, 22 Feb 2019 16:27:28 -0500",
"msg_from": "\"Regina Obe\" <lr@pcorp.us>",
"msg_from_op": true,
"msg_subject": "RE: CTE Changes in PostgreSQL 12,\n can we have a GUC to get old behavior"
},
{
"msg_contents": "On Fri, Feb 22, 2019 at 4:27 PM Regina Obe <lr@pcorp.us> wrote:\n> It's going to make a lot of people hesitant to upgrade if they think they need to revisit every CTE (that they intentionally wrote cause they thought it would be materialized) to throw in a MATERIALIZED keyword.\n\nYou might be right, but I hope you're wrong, because if you're right,\nthen it means that the recent change was ill-considered and ought to\nbe reverted. And I think there are a lot of people who are REALLY\nlooking forward to that behavior change.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Feb 2019 16:30:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CTE Changes in PostgreSQL 12,\n can we have a GUC to get old behavior"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-22 16:27:28 -0500, Regina Obe wrote:\n> > I think there are probably other ways of fixing this query that won't have\n> > such dramatic effects; it doesn't really seem to need to use WITH, and I bet\n> > you could also tweak the WITH query to prevent inlining. \n> \n> Yes I know I can change THIS QUERY. I've changed other ones to work around this.\n> Normally I just use a LATERAL for this.\n> \n> My point is lots of people use CTEs intentionally for this kind of thing because they know they are materialized.\n> \n> It's going to make a lot of people hesitant to upgrade if they think they need to revisit every CTE (that they intentionally wrote cause they thought it would be materialized) to throw in a MATERIALIZED keyword.\n\nThis was extensively discussed, in several threads about inlining\nCTEs. But realistically, it doesn't actually solve the problem to offer\na GUC, because we'd not be able to remove it anytime soon. I see\nbenefit in discussing how we can address regressions like your example\nquery here, but I don't feel there's any benefit in re-opening the whole\ndiscussion about GUCs, defaults, and whatnot.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Feb 2019 13:32:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: CTE Changes in PostgreSQL 12, can we have a GUC to get old\n behavior"
}
]
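The one-call-versus-N-calls distinction driving this thread can be illustrated outside SQL. A minimal C sketch, where st_metadata() is a hypothetical stand-in for an expensive composite-returning function such as PostGIS's ST_Metadata:

```c
#include <assert.h>

/* Counts how often the "expensive" function actually runs. */
static int calls = 0;

typedef struct Metadata
{
    int width;
    int height;
    int srid;
} Metadata;

static Metadata
st_metadata(void)
{
    calls++;
    return (Metadata) {10, 10, 0};
}

/*
 * Materialized CTE, as in v11 and earlier (or v12 with the MATERIALIZED
 * keyword): the function runs once and its result row is reused for
 * every projected column.
 */
static int
query_materialized(void)
{
    Metadata m = st_metadata();
    return m.width + m.height + m.srid;
}

/*
 * Inlined CTE: expanding (m).* duplicates the call, one copy per
 * projected column -- the mechanism behind Regina's ten NOTICEs for a
 * ten-column composite.
 */
static int
query_inlined(void)
{
    return st_metadata().width + st_metadata().height + st_metadata().srid;
}
```

The materialized form evaluates the function once for three columns; the inlined form evaluates it three times, which is why an expensive but low-costed function regresses badly once the CTE is no longer an optimization fence.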
[
{
"msg_contents": "I noticed that ALTER ROLE/USER succeeds even when called without any\noptions:\n\npostgres=# alter user foo;\nALTER ROLE\npostgres=# alter role foo;\nALTER ROLE\npostgres=# alter group foo;\nERROR: syntax error at or near \";\"\nLINE 1: alter group foo;\n\nThat seems odd, does nothing useful, and is inconsistent with, for\nexample, ALTER GROUP as shown above.\n\nProposed patch attached.\n\nComments/thoughts?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 22 Feb 2019 16:13:24 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "oddity with ALTER ROLE/USER"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I noticed that ALTER ROLE/USER succeeds even when called without any\n> options:\n\n> postgres=# alter user foo;\n> ALTER ROLE\n> postgres=# alter role foo;\n> ALTER ROLE\n> postgres=# alter group foo;\n> ERROR: syntax error at or near \";\"\n> LINE 1: alter group foo;\n\n> That seems odd, does nothing useful, and is inconsistent with, for\n> example, ALTER GROUP as shown above.\n\n> Proposed patch attached.\n\nIf you want to make it act like alter group, why not make it act\nlike alter group? That is, the way to fix this is to change the\ngrammar so that AlterOptRoleList doesn't permit an expansion with\nzero list elements.\n\nHaving said that, I can't get excited about changing this at all.\nNobody will thank us for it, and someone might complain.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Feb 2019 16:19:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: oddity with ALTER ROLE/USER"
},
{
"msg_contents": "On 2/22/19 4:19 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> I noticed that ALTER ROLE/USER succeeds even when called without any\n>> options:\n> \n>> postgres=# alter user foo;\n>> ALTER ROLE\n>> postgres=# alter role foo;\n>> ALTER ROLE\n>> postgres=# alter group foo;\n>> ERROR: syntax error at or near \";\"\n>> LINE 1: alter group foo;\n> \n>> That seems odd, does nothing useful, and is inconsistent with, for\n>> example, ALTER GROUP as shown above.\n> \n>> Proposed patch attached.\n> \n> If you want to make it act like alter group, why not make it act\n> like alter group? That is, the way to fix this is to change the\n> grammar so that AlterOptRoleList doesn't permit an expansion with\n> zero list elements.\n\n\nI considered that but liked the more specific error message.\n\n> Having said that, I can't get excited about changing this at all.\n> Nobody will thank us for it, and someone might complain.\n\n\nThe other route is change the documentation to reflect reality I guess.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 22 Feb 2019 17:22:09 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: oddity with ALTER ROLE/USER"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > I noticed that ALTER ROLE/USER succeeds even when called without any\n> > options:\n> \n> > postgres=# alter user foo;\n> > ALTER ROLE\n> > postgres=# alter role foo;\n> > ALTER ROLE\n> > postgres=# alter group foo;\n> > ERROR: syntax error at or near \";\"\n> > LINE 1: alter group foo;\n> \n> > That seems odd, does nothing useful, and is inconsistent with, for\n> > example, ALTER GROUP as shown above.\n> \n> > Proposed patch attached.\n> \n> If you want to make it act like alter group, why not make it act\n> like alter group? That is, the way to fix this is to change the\n> grammar so that AlterOptRoleList doesn't permit an expansion with\n> zero list elements.\n> \n> Having said that, I can't get excited about changing this at all.\n> Nobody will thank us for it, and someone might complain.\n\nIs there no chance that someone's issueing such an ALTER ROLE foo;\nstatement and thinking it's actually doing something? I suppose it's\npretty unlikely, but we do complain in some cases where we've been asked\nto do something and we end up not actually doing anything for various\nreasons (\"no privileges were granted...\"), though that's only a warning.\nMaybe that'd be useful to have here though? At least then we're making\nit clear to the user that nothing is happening and we don't break\nexisting things.\n\nThanks!\n\nStephen",
"msg_date": "Sat, 23 Feb 2019 09:06:42 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: oddity with ALTER ROLE/USER"
}
]
[
{
"msg_contents": "This seems to be the cleanest way to get a JIT build working on FreeBSD\non the armv7 platform. Without this, it fails because the ABI functions\n__aeabi_ldivmod and so on are not found by the symbol lookup for JITted\ncode.\n\nIt's completely unclear who (llvm, freebsd, or us) is at fault here for\nit not working before; some of the LLVM code contains special-case\nsymbol lookups for the ARM __aeabi funcs, but only enabled when\ncompiling on Android; the response I got from asking about it on\n#bsdmips (which despite its name is the main freebsd/arm hangout) was\n\"this is an ongoing mess that isn't really getting worked on\". And\nhonestly I suspect the set of people actually trying to use LLVM for JIT\non freebsd/arm is negligible; I only did it because testing weird edge\ncases is often useful.\n\nThis patch should be harmless on any 32-bit arm platform where JIT\nalready works, because all it's doing is forcibly preloading symbols.\n\nTest platform details:\nFreeBSD 12.0-STABLE r344243 arm\nCPU: ARM Cortex-A7 (Raspberry Pi 2B)\nLLVM 7.0.1 (known to fail on LLVM 6 unless you patch LLVM itself)\n\n./configure --prefix=/data/small/andrew/pgsql \\\n --with-includes=/usr/local/include \\\n --with-libs=/usr/local/lib \\\n --with-openssl \\\n --enable-debug \\\n --enable-depend \\\n --enable-cassert \\\n --with-llvm \\\n LLVM_CONFIG=llvm-config70 \\\n CC=\"ccache clang70\" \\\n CLANG=\"ccache clang70\" \\\n CXX=\"ccache clang++70\" \\\n CFLAGS=\"-O2 -mcpu=cortex-a7\"\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Sat, 23 Feb 2019 02:12:29 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "JIT on FreeBSD ARMv7"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> This seems to be the cleanest way to get a JIT build working on\n Andrew> FreeBSD on the armv7 platform. Without this, it fails because\n Andrew> the ABI functions __aeabi_ldivmod and so on are not found by\n Andrew> the symbol lookup for JITted code.\n\nTurns out it needs more symbols, as per the attached.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Tue, 26 Feb 2019 02:39:42 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: JIT on FreeBSD ARMv7"
}
]
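For readers unfamiliar with the ARM EABI helpers being preloaded here: __aeabi_ldivmod and friends are compiler runtime functions, not anything PostgreSQL calls directly. A minimal sketch of the kind of C that forces the compiler to emit them on armv7 (the wrapper functions are illustrative; the symbol names in the comments are the real EABI runtime helpers):

```c
#include <assert.h>

/*
 * 32-bit ARM has no 64-bit divide instruction, so under the ARM EABI the
 * compiler lowers 64-bit / and % into calls to runtime helper functions:
 * __aeabi_ldivmod for signed and __aeabi_uldivmod for unsigned operands
 * (each returns quotient and remainder together).  JIT-compiled code
 * containing expressions like these therefore needs the JIT's symbol
 * lookup to resolve those helpers at run time -- the lookup failure that
 * forcibly preloading the symbols works around.  On 64-bit targets the
 * same C compiles to a plain divide instruction, so the problem never
 * shows up there.
 */
static long long
div64(long long n, long long d)
{
    return n / d;   /* -> bl __aeabi_ldivmod on armv7 */
}

static long long
mod64(long long n, long long d)
{
    return n % d;   /* same helper; it yields quotient and remainder */
}
```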
[
{
"msg_contents": "For quite some years now there's been dissatisfaction with our List\ndata structure implementation. Because it separately palloc's each\nlist cell, it chews up lots of memory, and it's none too cache-friendly\nbecause the cells aren't necessarily adjacent. Moreover, our typical\nusage is to just build a list by repeated lappend's and never modify it,\nso that the flexibility of having separately insertable/removable list\ncells is usually wasted.\n\nEvery time this has come up, I've opined that the right fix is to jack\nup the List API and drive a new implementation underneath, as we did\nonce before (cf commit d0b4399d81). I thought maybe it was about time\nto provide some evidence for that position, so attached is a POC patch\nthat changes Lists into expansible arrays, while preserving most of\ntheir existing API.\n\nThe big-picture performance change is that this makes list_nth()\na cheap O(1) operation, while lappend() is still pretty cheap;\non the downside, lcons() becomes O(N), as does insertion or deletion\nin the middle of a list. But we don't use lcons() very much\n(and maybe a lot of the existing usages aren't really necessary?),\nwhile insertion/deletion in long lists is a vanishingly infrequent\noperation. Meanwhile, making list_nth() cheap is a *huge* win.\n\nThe most critical thing that we lose by doing this is that when a\nList is modified, all of its cells may need to move, which breaks\na lot of code that assumes it can insert or delete a cell while\nhanging onto a pointer to a nearby cell. In almost every case,\nthis takes the form of doing list insertions or deletions inside\na foreach() loop, and the pointer that's invalidated is the loop's\ncurrent-cell pointer. Fortunately, with list_nth() now being so cheap,\nwe can replace these uses of foreach() with loops using an integer\nindex variable and fetching the next list element directly with\nlist_nth(). 
Most of these places were loops around list_delete_cell\ncalls, which I replaced with a new function list_delete_nth_cell\nto go along with the emphasis on the integer index.\n\nI don't claim to have found every case where that could happen,\nalthough I added debug support in list.c to force list contents\nto move on every list modification, and this patch does pass\ncheck-world with that support turned on. I fear that some such\nbugs remain, though.\n\nThere is one big way in which I failed to preserve the old API\nsyntactically: lnext() now requires a pointer to the List as\nwell as the current ListCell, so that it can figure out where\nthe end of the cell array is. That requires touching something\nlike 150 places that otherwise wouldn't have had to be touched,\nwhich is annoying, even though most of those changes are trivial.\n\nI thought about avoiding that by requiring Lists to keep a \"sentinel\"\nvalue in the cell after the end of the active array, so that lnext()\ncould look for the sentinel to detect list end. However, that idea\ndoesn't really work, because if the list array has been moved, the\nspot where the sentinel had been could have been reallocated and\nfilled with something else. So this'd offer no defense against the\npossibility of a stale ListCell pointer, which is something that\nwe absolutely need defenses for. As the patch stands we can have\nquite a strong defense, because we can check whether the presented\nListCell pointer actually points into the list's current data array.\n\nAnother annoying consequence of lnext() needing a List pointer is that\nthe List arguments of foreach() and related macros are now evaluated\neach time through the loop. I could only find half a dozen places\nwhere that was actually unsafe (all fixed in the draft patch), but\nit's still bothersome. 
I experimented with ways to avoid that, but\nthe only way I could get it to work was to define foreach like this:\n\n#define foreach(cell, l) for (const List *cell##__foreach = foreach_setup(l, &cell); cell != NULL; cell = lnext(cell##__foreach, cell))\n\nstatic inline const List *\nforeach_setup(const List *l, ListCell **cell)\n{\n *cell = list_head(l);\n return l;\n}\n\nThat works, but there are two problems. The lesser one is that a\nnot-very-bright compiler might think that the \"cell\" variable has to\nbe forced into memory, because its address is taken. The bigger one is\nthat this coding forces the \"cell\" variable to be exactly \"ListCell *\";\nyou can't add const or volatile qualifiers to it without getting\ncompiler warnings. There are actually more places that'd be affected\nby that than by the need to avoid multiple evaluations. I don't think\nthe const markings would be a big deal to lose, and the two places in\ndo_autovacuum that need \"volatile\" (because of a nearby PG_TRY) could\nbe rewritten to not use foreach. So I'm tempted to do that, but it's\nnot very pretty. Maybe somebody can think of a better solution?\n\nThere's a lot of potential follow-on work that I've not touched yet:\n\n1. This still involves at least two palloc's for every nonempty List,\nbecause I kept the header and the data array separate. Perhaps it's\nworth allocating those in one palloc. However, right now there's an\nassumption that the header of a nonempty List doesn't move when you\nchange its contents; that's embedded in the API of the lappend_cell\nfunctions, and more than likely there's code that depends on that\nsilently because it doesn't bother to store the results of other\nList functions back into the appropriate pointer. So if we do that\nat all I think it'd be better tackled in a separate patch; and I'm\nnot really convinced it's worth the trouble and extra risk of bugs.\n\n2. list_qsort() is now absolutely stupidly defined. 
It should just\nqsort the list's data array in-place. But that requires an API\nbreak for the caller-supplied comparator, since there'd be one less\nlevel of indirection. I think we should change this, but again it'd\nbe better done as an independent patch to make it more visible in the\ngit history.\n\n3. There are a number of places where we've built flat arrays\nparalleling Lists, such as the planner's simple_rte_array. That's\npointless now and could be undone, buying some code simplicity.\nVarious other micro-optimizations could doubtless be done too;\nI've not looked hard.\n\nI haven't attempted any performance measurements on this, but at\nleast in principle it should speed things up a bit, especially\nfor complex test cases involving longer Lists. I don't have any\nvery suitable test cases at hand, anyway.\n\nI think this is too late for v12, both because of the size of the\npatch and because of the likelihood that it introduces a few bugs.\nI'd like to consider pushing it early in the v13 cycle, though.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 23 Feb 2019 21:24:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "POC: converting Lists into arrays"
},
{
"msg_contents": "Hello Tom,\n\n> For quite some years now there's been dissatisfaction with our List\n> data structure implementation. Because it separately palloc's each\n> list cell, it chews up lots of memory, and it's none too cache-friendly\n> because the cells aren't necessarily adjacent. Moreover, our typical\n> usage is to just build a list by repeated lappend's and never modify it,\n> so that the flexibility of having separately insertable/removable list\n> cells is usually wasted.\n>\n> Every time this has come up, I've opined that the right fix is to jack\n> up the List API and drive a new implementation underneath, as we did\n> once before (cf commit d0b4399d81). I thought maybe it was about time\n> to provide some evidence for that position, so attached is a POC patch\n> that changes Lists into expansible arrays, while preserving most of\n> their existing API.\n\nMy 0.02€ about this discussion (I assume that it is what you want): I had \nthe same issue in the past on a research project. I used a similar but \nslightly different approach:\n\nI did not touch the existing linked list implementation but provided \nanother data structure: a linked list of buckets (small arrays) used as a \nstack from the head, with buckets allocated on demand but not freed \nuntil the final deallocation. If pop was used extensively, a linked list \nof freed buckets was kept, so that they could be reused. Some expensive \nlist-like functions were not provided, so the data structure could replace \nlists in some but not all instances, which was fine. The dev then had to \nchoose which data structure was best for each use case, and critical \nperformance cases could be replaced.\n\nNote that a \"foreach\" can be done reasonably cleanly with a \nstack-allocated iterator & C99 struct initialization syntax, which is now \nallowed in pg AFAICR, something like:\n\n typedef struct { ... 
} stack_iterator;\n\n #define foreach_stack(i, s) \\\n for (stack_iterator i = SITER_INIT(s); SITER_GO_ON(&i); SITER_NEXT(&i))\n\nUsed with a simple pattern:\n\n foreach_stack(i, s)\n {\n item_type = GET_ITEM(i);\n ...\n }\n\nThis approach is not as transparent as your approach, but the changes are \nsomewhat less extensive, and it provides choices instead of forcing one \nsolution to fit all use cases. Also, it allows revisiting the \npointer-versus-reference choices on some functions with limited impact.\nIn particular the data structure is used for a \"string buffer\" \nimplementation (like the PQExpBuffer stuff).\n\n\n-- \nFabien.",
"msg_date": "Mon, 25 Feb 2019 08:43:16 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sat, Feb 23, 2019 at 9:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Every time this has come up, I've opined that the right fix is to jack\n> up the List API and drive a new implementation underneath, as we did\n> once before (cf commit d0b4399d81). I thought maybe it was about time\n> to provide some evidence for that position, so attached is a POC patch\n> that changes Lists into expansible arrays, while preserving most of\n> their existing API.\n\nI'm not really convinced that this is the way to go. The thing is,\nany third-party code people have that uses a List may simply break.\nIf you kept the existing List and changed a bunch of existing code to\nuse a new Vector implementation, or Thomas's SimpleVector stuff, then\nthat wouldn't happen. The reason why people - or at least me - have\nbeen reluctant to accept that you can just jack up the API and drive a\nnew implementation underneath is that the new implementation will\ninvolve breaking guarantees on which existing code relies; indeed,\nyour email makes it pretty clear that this is the case. If you could\nreplace the existing implementation without breaking any code, that\nwould be a no-brainer but there's no real way to do that and get the\nperformance benefits you're seeking to obtain.\n\nIt is also perhaps worth mentioning that reimplementing a List as an\narray means that it is... not a list. That is a pretty odd state of\naffairs, and to me is another sign that we want to leave the existing\nthing alone and convert some/most/all core code to use a new thing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Feb 2019 13:02:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Feb 23, 2019 at 9:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Every time this has come up, I've opined that the right fix is to jack\n>> up the List API and drive a new implementation underneath, as we did\n>> once before (cf commit d0b4399d81).\n\n> I'm not really convinced that this is the way to go. The thing is,\n> any third-party code people have that uses a List may simply break.\n> If you kept the existing List and changed a bunch of existing code to\n> use a new Vector implementation, or Thomas's SimpleVector stuff, then\n> that wouldn't happen.\n\nI'm not following your point here. If we change key data structures\n(i.e. parsetrees, plan trees, execution trees) to use some other list-ish\nAPI, that *in itself* breaks everything that accesses those data\nstructures. The approach I propose here isn't zero-breakage, but it\nrequires far fewer places to be touched than a complete API replacement\nwould do.\n\nJust as with the dlist/slist stuff, inventing a new list API might be\nacceptable for all-new data structures that didn't exist before, but\nit isn't going to really help for code and data structures that've been\naround for decades.\n\n> If you could\n> replace the existing implementation without breaking any code, that\n> would be a no-brainer but there's no real way to do that and get the\n> performance benefits you're seeking to obtain.\n\nYup. So are you saying that we'll never redesign parsetrees again?\nWe break things regularly, as long as the cost/benefit justifies it.\n\n> It is also perhaps worth mentioning that reimplementing a List as an\n> array means that it is... not a list. That is a pretty odd state of\n> affairs, and to me is another sign that we want to leave the existing\n> thing alone and convert some/most/all core code to use a new thing.\n\nI completely disagree. 
Your proposal is probably an order of magnitude\nmore painful than the approach I suggest here, while not really offering\nany additional performance benefit (or if you think there would be some,\nyou haven't explained how). Strictly on cost/benefit grounds, it isn't\never going to happen that way.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 13:17:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not following your point here. If we change key data structures\n> (i.e. parsetrees, plan trees, execution trees) to use some other list-ish\n> API, that *in itself* breaks everything that accesses those data\n> structures. The approach I propose here isn't zero-breakage, but it\n> requires far fewer places to be touched than a complete API replacement\n> would do.\n\nSure, but if you have third-party code that touches those things,\nit'll fail to compile. With your proposed approach, there seems to be\na risk that it will compile but not work.\n\n> Yup. So are you saying that we'll never redesign parsetrees again?\n> We break things regularly, as long as the cost/benefit justifies it.\n\nI'm mostly objecting to the degree that the breakage is *silent*.\n\n> I completely disagree. Your proposal is probably an order of magnitude\n> more painful than the approach I suggest here, while not really offering\n> any additional performance benefit (or if you think there would be some,\n> you haven't explained how). Strictly on cost/benefit grounds, it isn't\n> ever going to happen that way.\n\nWhy would it be ten times more painful, exactly?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Feb 2019 13:30:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Feb 25, 2019 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not following your point here. If we change key data structures\n>> (i.e. parsetrees, plan trees, execution trees) to use some other list-ish\n>> API, that *in itself* breaks everything that accesses those data\n>> structures. The approach I propose here isn't zero-breakage, but it\n>> requires far fewer places to be touched than a complete API replacement\n>> would do.\n\n> Sure, but if you have third-party code that touches those things,\n> it'll fail to compile. With your proposed approach, there seems to be\n> a risk that it will compile but not work.\n\nFailing to compile isn't really a benefit IMO. Now, if we could avoid\nthe *semantic* differences (like whether it's safe to hold onto a pointer\ninto a List while doing FOO on the list), then we'd have something.\nThe biggest problem with what I'm proposing is that it doesn't always\nmanage to do that --- but any other implementation is going to break\nsuch assumptions too. I do not think that forcing cosmetic changes\non people is going to do much to help them revisit possibly-hidden\nassumptions like those. What will help is to provide debugging aids to\nflush out such assumptions, which I've endeavored to do in this patch.\nAnd I would say that any competing proposal is going to be a failure\nunless it provides at-least-as-effective support for flushing out bugs\nin naive updates of existing List-using code.\n\n>> I completely disagree. Your proposal is probably an order of magnitude\n>> more painful than the approach I suggest here, while not really offering\n>> any additional performance benefit (or if you think there would be some,\n>> you haven't explained how). 
Strictly on cost/benefit grounds, it isn't\n>> ever going to happen that way.\n\n> Why would it be ten times more painful, exactly?\n\nBecause it involves touching ten times more code (and that's a very\nconservative estimate). Excluding changes in pg_list.h + list.c,\nwhat I posted touches approximately 600 lines of code (520 insertions,\n644 deletions to be exact). For comparison's sake, there are about\n1800 uses of foreach in the tree, each of which would require at least\n3 changes to replace (the foreach itself, the ListCell variable\ndeclaration, and at least one lfirst() reference in the loop body).\nSo we've already blown past 5000 lines worth of changes if we want to\ndo it another way ... and that's just *one* component of the List API.\nNor is there any reason to think the changes would be any more mechanical\nthan what I had to do here. (No fair saying that I already found the\ntrouble spots, either. A different implementation would likely break\nassumptions in different ways.)\n\nIf I said your proposal involved two orders of magnitude more work,\nI might not be far off the mark.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 13:59:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Because it involves touching ten times more code (and that's a very\n> conservative estimate). Excluding changes in pg_list.h + list.c,\n> what I posted touches approximately 600 lines of code (520 insertions,\n> 644 deletions to be exact). For comparison's sake, there are about\n> 1800 uses of foreach in the tree, each of which would require at least\n> 3 changes to replace (the foreach itself, the ListCell variable\n> declaration, and at least one lfirst() reference in the loop body).\n\nIf we knew that the list code was the bottleneck in a handful of\ncases, then I'd come down on Robert's side here. It would then be\npossible to update the relevant bottlenecked code in an isolated\nfashion, while getting the lion's share of the benefit. However, I\ndon't think that that's actually possible. The costs of using Lists\neverywhere are real and measurable, but it's also highly distributed.\nAt least, that's my recollection from previous discussion from several\nyears back. I remember talking about this with Andres in early 2016.\n\n> So we've already blown past 5000 lines worth of changes if we want to\n> do it another way ... and that's just *one* component of the List API.\n\nIf you want to stop third party code from compiling, you can find a\nway to do that without really changing your approach. Nothing stops\nyou from changing some symbol names minimally, and then making\ncorresponding mechanical changes to all of the client code within the\ntree. Third party code authors would then follow this example, with\nthe expectation that it's probably going to be a totally mechanical\nprocess.\n\nI'm not necessarily advocating that approach. I'm simply pointing out\nthat a compromise is quite possible.\n\n> Nor is there any reason to think the changes would be any more mechanical\n> than what I had to do here. (No fair saying that I already found the\n> trouble spots, either. 
A different implementation would likely break\n> assumptions in different ways.)\n\nThe idea of making a new vector/array implementation that is a more or\nless drop in replacement for List seems okay to me. C++ has both a\nstd::list and a std::vector, and they support almost the same\ninterface. Obviously the situation is different here, since you're\nretrofitting a new implementation with different performance\ncharacteristics, rather than implementing both in a green field\nsituation. But it's not that different.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 25 Feb 2019 11:59:06 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 13:02:03 -0500, Robert Haas wrote:\n> On Sat, Feb 23, 2019 at 9:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Every time this has come up, I've opined that the right fix is to jack\n> > up the List API and drive a new implementation underneath, as we did\n> > once before (cf commit d0b4399d81). I thought maybe it was about time\n> > to provide some evidence for that position, so attached is a POC patch\n> > that changes Lists into expansible arrays, while preserving most of\n> > their existing API.\n> \n> I'm not really convinced that this is the way to go. The thing is,\n> any third-party code people have that uses a List may simply break.\n> If you kept the existing List and changed a bunch of existing code to\n> use a new Vector implementation, or Thomas's SimpleVector stuff, then\n> that wouldn't happen. The reason why people - or at least me - have\n> been reluctant to accept that you can just jack up the API and drive a\n> new implementation underneath is that the new implementation will\n> involve breaking guarantees on which existing code relies; indeed,\n> your email makes it pretty clear that this is the case. If you could\n> replace the existing implementation without breaking any code, that\n> would be a no-brainer but there's no real way to do that and get the\n> performance benefits you're seeking to obtain.\n\nYea, it'd be more convincing. I'm not convinced it'd be a no-brainer\nthough. Unless you've been hacking PG for a fair bit, the pg_list.h APIs\nare very hard to understand / remember. Given this change essentially\nrequires auditing all code that uses List, ISTM we'd be much better off\nalso changing the API at the same time. Yes that'll mean there'll be\nvestigial uses nobody bothered to convert in extension etc, but that's\nnot that bad.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 12:51:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> If we knew that the list code was the bottleneck in a handful of\n> cases, then I'd come down on Robert's side here. It would then be\n> possible to update the relevant bottlenecked code in an isolated\n> fashion, while getting the lion's share of the benefit. However, I\n> don't think that that's actually possible. The costs of using Lists\n> everywhere are real and measurable, but it's also highly distributed.\n> At least, that's my recollection from previous discussion from several\n> years back. I remember talking about this with Andres in early 2016.\n\nYeah, that's exactly the point. If we could replace some small number\nof places with a vector-ish data structure and get all/most of the win,\nthen that would be the way to go about it. But I'm pretty sure that\nwe aren't going to make much of an improvement without wholesale\nchanges. Nor is it really all that attractive to have some pieces of\nthe parse/plan/execution tree data structures using one kind of list\nwhile other places use another. If we're to attack this at all,\nI think we should go for a wholesale replacement.\n\nAnother way of looking at this is that if we expected that extensions\nhad a lot of private Lists, unrelated to these core data structures,\nit might be worth preserving the List implementation so as not to cause\nproblems for such usage. But I doubt that that's the case; or that\nany such private lists are more likely to be broken by these API changes\nthan the core usage is (600 changes in however many lines we've got is\nnot a lot); or that people would really want to deal with two independent\nlist implementations with different behaviors just to avoid revisiting\nsome portions of their code while they're being forced to revisit others\nanyway.\n\n> If you want to stop third party code from compiling, you can find a\n> way to do that without really changing your approach. 
Nothing stops\n> you from changing some symbol names minimally, and then making\n> corresponding mechanical changes to all of the client code within the\n> tree. Third party code authors would then follow this example, with\n> the expectation that it's probably going to be a totally mechanical\n> process.\n\nYeah, if we expected that only mechanical changes would be needed, and\nforcing certain syntax changes would be a good guide to what had to be\ndone, then this'd be a helpful way to proceed. The lnext changes in\nmy proposed patch do indeed work like that. But the part that's actually\nhard is finding/fixing the places where you can't safely use lnext\nanymore, and there's nothing very mechanical about that. (Unless you want\nto just forbid lnext altogether, which maybe is a reasonable thing to\ncontemplate, but I judged it overkill.)\n\nBTW, I neglected to respond to Robert's earlier point that\n\n>>> It is also perhaps worth mentioning that reimplementing a List as an\n>>> array means that it is... not a list. That is a pretty odd state of\n>>> affairs\n\nI think the reason we have Lisp-style lists all over the place has little\nto do with whether those are ideal data structures, and a lot to do with\nthe fact that chunks of Postgres were originally written in Lisp, and\nin that language using lists for everything is just How It's Done.\nI don't have any problem with regarding that nomenclature as being mostly\na legacy thing, which is how I documented it in the proposed revision\nto pg_list.h's header comment.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 15:55:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 13:59:36 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Feb 25, 2019 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'm not following your point here. If we change key data structures\n> >> (i.e. parsetrees, plan trees, execution trees) to use some other list-ish\n> >> API, that *in itself* breaks everything that accesses those data\n> >> structures. The approach I propose here isn't zero-breakage, but it\n> >> requires far fewer places to be touched than a complete API replacement\n> >> would do.\n> \n> > Sure, but if you have third-party code that touches those things,\n> > it'll fail to compile. With your proposed approach, there seems to be\n> > a risk that it will compile but not work.\n> \n> Failing to compile isn't really a benefit IMO.\n\nIt's a huge benefit. It's a lot of effort to look through all source\ncode for potential breakages. Especially if all list usages, rather than\nsome planner detail that comparatively few extensions touch, needs to be\naudited.\n\n\n> >> I completely disagree. Your proposal is probably an order of magnitude\n> >> more painful than the approach I suggest here, while not really offering\n> >> any additional performance benefit (or if you think there would be some,\n> >> you haven't explained how). Strictly on cost/benefit grounds, it isn't\n> >> ever going to happen that way.\n> \n> > Why would it be ten times more painful, exactly?\n> \n> Because it involves touching ten times more code (and that's a very\n> conservative estimate). Excluding changes in pg_list.h + list.c,\n> what I posted touches approximately 600 lines of code (520 insertions,\n> 644 deletions to be exact). 
For comparison's sake, there are about\n> 1800 uses of foreach in the tree, each of which would require at least\n> 3 changes to replace (the foreach itself, the ListCell variable\n> declaration, and at least one lfirst() reference in the loop body).\n> So we've already blown past 5000 lines worth of changes if we want to\n> do it another way ... and that's just *one* component of the List API.\n> Nor is there any reason to think the changes would be any more mechanical\n> than what I had to do here. (No fair saying that I already found the\n> trouble spots, either. A different implementation would likely break\n> assumptions in different ways.)\n\nFWIW, rewrites of this kind can be quite nicely automated using\ncoccinelle [1]. One sometimes needs to do a bit of mop-up with variable\nnames, but otherwise it should be mostly complete.\n\nGreetings,\n\nAndres Freund\n\n[1] http://coccinelle.lip6.fr/\n\n",
"msg_date": "Mon, 25 Feb 2019 12:55:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea, it'd be more convincing. I'm not convinced it'd be a no-brainer\n> though. Unless you've been hacking PG for a fair bit, the pg_list.h APIs\n> are very hard to understand / remember. Given this change essentially\n> requires auditing all code that uses List, ISTM we'd be much better off\n> also changing the API at the same time. Yes that'll mean there'll be\n> vestigial uses nobody bothered to convert in extension etc, but that's\n> not that bad.\n\nThe pain factor for back-patching is alone a strong reason for not just\nrandomly replacing the List API with different spellings.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 15:57:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> FWIW, rewrites of this kind can be quite nicely automated using\n> coccinelle [1]. One sometimes needs to do a bit of mop-up with variable\n> names, but otherwise it should be mostly complete.\n\nI'm getting slightly annoyed by arguments that reject a live, workable\npatch in favor of pie-in-the-sky proposals. Both you and Robert seem\nto be advocating solutions that don't exist and would take a very large\namount of work to create. If you think differently, let's see a patch.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 16:03:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 11:59:06 -0800, Peter Geoghegan wrote:\n> On Mon, Feb 25, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Because it involves touching ten times more code (and that's a very\n> > conservative estimate). Excluding changes in pg_list.h + list.c,\n> > what I posted touches approximately 600 lines of code (520 insertions,\n> > 644 deletions to be exact). For comparison's sake, there are about\n> > 1800 uses of foreach in the tree, each of which would require at least\n> > 3 changes to replace (the foreach itself, the ListCell variable\n> > declaration, and at least one lfirst() reference in the loop body).\n> \n> If we knew that the list code was the bottleneck in a handful of\n> cases, then I'd come down on Robert's side here. It would then be\n> possible to update the relevant bottlenecked code in an isolated\n> fashion, while getting the lion's share of the benefit. However, I\n> don't think that that's actually possible. The costs of using Lists\n> everywhere are real and measurable, but it's also highly distributed.\n> At least, that's my recollection from previous discussion from several\n> years back. I remember talking about this with Andres in early 2016.\n\nIt's distributed, but not *that* distributed. The largest source of\n\"cost\" at execution time used to be all-over expression evaluation, but\nthat's gone now. That was a lot of places, but it's not outside of reach\nof a targeted change. Now it's targetlist handling, which'd have to\nchange together with plan time, where it's a large issue.\n\n\n> > So we've already blown past 5000 lines worth of changes if we want to\n> > do it another way ... and that's just *one* component of the List API.\n> \n> If you want to stop third party code from compiling, you can find a\n> way to do that without really changing your approach. 
Nothing stops\n> you from changing some symbol names minimally, and then making\n> corresponding mechanical changes to all of the client code within the\n> tree. Third party code authors would then follow this example, with\n> the expectation that it's probably going to be a totally mechanical\n> process.\n> \n> I'm not necessarily advocating that approach. I'm simply pointing out\n> that a compromise is quite possible.\n\nThat breaks extension code using lists unnecessarily though. And given\nthat there's semantic change, I don't htink it's an entirely mechanical\nprocess.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 13:03:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 16:03:43 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > FWIW, rewrites of this kind can be quite nicely automated using\n> > coccinelle [1]. One sometimes needs to do a bit of mop-up with variable\n> > names, but otherwise it should be mostly complete.\n> \n> I'm getting slightly annoyed by arguments that reject a live, workable\n> patch in favor of pie-in-the-sky proposals. Both you and Robert seem\n> to be advocating solutions that don't exist and would take a very large\n> amount of work to create. If you think differently, let's see a patch.\n\nUhm, we're talking about an invasive proposal from two weekend days\nago. It seems far from crazy to voice our concerns with the silent\nbreakage you propose. Nor, even if we were obligated to work on an\nalternative approach, which we aren't, would it be realistic for us to\nhave written an alternative implementation within the last few hours,\nwhile also working on our own priorities.\n\nI'm actually quite interested in this topic, both in the sense that it's\ngreat to see work, and in the sense that I'm willing to help with the\neffort.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 13:21:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm getting slightly annoyed by arguments that reject a live, workable\n> patch in favor of pie-in-the-sky proposals. Both you and Robert seem\n> to be advocating solutions that don't exist and would take a very large\n> amount of work to create. If you think differently, let's see a patch.\n\nISTM that we should separate the question of whether or not the List\nAPI needs to continue to work without needing to change code in third\nparty extensions from the question of whether or not the List API\nneeds to be replaced whole cloth. These are not exactly independent\nquestions, but they don't necessarily need to be discussed all at\nonce.\n\nAndres said that he doesn't like the pg_list.h API. It's not pretty,\nbut is it really that bad?\n\nThe List implementation claims to be generic, but it's not actually\nthat generic. It has to work as a Node. It's not quite fair to say\nthat it's confusing without acknowledging that pg_list.h is special to\nquery processing.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 25 Feb 2019 13:21:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 13:21:30 -0800, Peter Geoghegan wrote:\n> ISTM that we should separate the question of whether or not the List\n> API needs to continue to work without needing to change code in third\n> party extensions from the question of whether or not the List API\n> needs to be replaced whole cloth. These are not exactly independent\n> questions, but they don't necessarily need to be discussed all at\n> once.\n\nI'm not convinced by that - if we are happy with the list API, not\nduplicating code would be a stronger argument than if we actually are\nunhappy. It makes no sense to go around and replace the same code twice\nin a row if we also think other changes should be made (at the same\ntime, we obviously ought not to do too much at once, otherwise we'll\nnever get anywhere).\n\n\n> Andres said that he doesn't like the pg_list.h API. It's not pretty,\n> but is it really that bad?\n\nYes. The function names alone confound anybody new to postgres, we tend\nto forget that after a few years. A lot of the function return types are\nbasically unpredictable without reading the code, the number of builtin\ntypes is pretty restrictive, and there's no typesafety around the choice\nof actually stored.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 13:31:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "\n\nOn 2/25/19 10:03 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-02-25 11:59:06 -0800, Peter Geoghegan wrote:\n>> On Mon, Feb 25, 2019 at 10:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Because it involves touching ten times more code (and that's a very\n>>> conservative estimate). Excluding changes in pg_list.h + list.c,\n>>> what I posted touches approximately 600 lines of code (520 insertions,\n>>> 644 deletions to be exact). For comparison's sake, there are about\n>>> 1800 uses of foreach in the tree, each of which would require at least\n>>> 3 changes to replace (the foreach itself, the ListCell variable\n>>> declaration, and at least one lfirst() reference in the loop body).\n>>\n>> If we knew that the list code was the bottleneck in a handful of\n>> cases, then I'd come down on Robert's side here. It would then be\n>> possible to update the relevant bottlenecked code in an isolated\n>> fashion, while getting the lion's share of the benefit. However, I\n>> don't think that that's actually possible. The costs of using Lists\n>> everywhere are real and measurable, but it's also highly distributed.\n>> At least, that's my recollection from previous discussion from several\n>> years back. I remember talking about this with Andres in early 2016.\n> \n> It's distributed, but not *that* distributed. The largest source of\n> \"cost\" at execution time used to be all-over expression evaluation, but\n> that's gone now. That was a lot of places, but it's not outside of reach\n> of a targeted change. Now it's targetlist handling, which'd have to\n> change together with plan time, where it's a large issue.\n> \n\nSo let's say we want to measure the improvement this patch gives us.\nWhat would be some reasonable (and corner) cases to benchmark? 
I do have\nsome ideas, but as you've been looking at this in the past, perhaps you\nhave something better.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Feb 2019 22:35:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > Andres said that he doesn't like the pg_list.h API. It's not pretty,\n> > but is it really that bad?\n>\n> Yes. The function names alone confound anybody new to postgres, we tend\n> to forget that after a few years. A lot of the function return types are\n> basically unpredictable without reading the code, the number of builtin\n> types is pretty restrictive, and there's no typesafety around the choice\n> of actually stored.\n\nBut a lot of those restrictions are a consequence of needing what\namount to support functions in places as distant from pg_list.h as\npg_stat_statements.c, or the parser, or outfuncs.c. I'm not saying\nthat we couldn't do better here, but the design is constrained by\nthis. If you add a support for a new datatype, where does that leave\nstored rules? Seems ticklish to me, at the very least.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 25 Feb 2019 13:41:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 22:35:37 +0100, Tomas Vondra wrote:\n> So let's say we want to measure the improvement this patch gives us.\n> What would be some reasonable (and corner) cases to benchmark? I do have\n> some ideas, but as you've been looking at this in the past, perhaps you\n> have something better.\n\nI think queries over tables with a fair number of columns very easily\nstress test the list overhead around targetlists - I don't have a\nprofile lying around, but the overhead of targetlist processing\n(ExecTypeFromTL etc) at execution time clearly shows up. Larger\nindividual expressions can easily show up via eval_const_expressions()\netc and ExecInitExpr(). Both probably can be separated into benchmarks\nwith prepared statements (ExecTypeFromTl() and ExecInitExpr() will show\nup, but planner work obviously not), and non-prepared benchmarks will\nstress the planner more. I suspect there's also a few planner benefits\nwith large numbers of paths, but I don't quite remember the profiles\nwell enough to construct a benchmark from memory.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 13:43:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 13:41:48 -0800, Peter Geoghegan wrote:\n> On Mon, Feb 25, 2019 at 1:31 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Andres said that he doesn't like the pg_list.h API. It's not pretty,\n> > > but is it really that bad?\n> >\n> > Yes. The function names alone confound anybody new to postgres, we tend\n> > to forget that after a few years. A lot of the function return types are\n> > basically unpredictable without reading the code, the number of builtin\n> > types is pretty restrictive, and there's no typesafety around the choice\n> > of actually stored.\n> \n> But a lot of those restrictions are a consequence of needing what\n> amount to support functions in places as distant from pg_list.h as\n> pg_stat_statements.c, or the parser, or outfuncs.c.\n\nThose could trivially support distinguisiong at least between lists\ncontaining pointer, int, oid, or node. But even optionally doing more\nthan that would be fairly easy. It's not those modules don't currently\nknow the types of elements they're dealing with?\n\n\n> If you add a support for a new datatype, where does that leave\n> stored rules?\n\nWe don't maintain stored rules across major versions (they break due to\na lot of changes), so I don't quite understand that problem.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 13:51:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 1:51 PM Andres Freund <andres@anarazel.de> wrote:\n> Those could trivially support distinguisiong at least between lists\n> containing pointer, int, oid, or node. But even optionally doing more\n> than that would be fairly easy. It's not those modules don't currently\n> know the types of elements they're dealing with?\n>\n>\n> > If you add a support for a new datatype, where does that leave\n> > stored rules?\n>\n> We don't maintain stored rules across major versions (they break due to\n> a lot of changes), so I don't quite understand that problem.\n\nThe point is that the implicit need to have support for serializing\nand deserializing everything is something that constrains the design,\nand must also constrain the design of any successor data structure.\nThe contents of pg_list.[ch] are not why it's a PITA to add support\nfor a new datatype. Also, most of the time the Lists are lists of\nnodes, which is essentially an abstract base type for heterogeneous\ntypes anyway. I don't really get what you mean about type safety,\nbecause you haven't spelled it out in a way that acknowledges all of\nthis.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 25 Feb 2019 14:07:40 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-23 21:24:40 -0500, Tom Lane wrote:\n> For quite some years now there's been dissatisfaction with our List\n> data structure implementation. Because it separately palloc's each\n> list cell, it chews up lots of memory, and it's none too cache-friendly\n> because the cells aren't necessarily adjacent.\n\nIndeed. Might be worthwhile to note that linked list, even if stored in\nadjacent memory, are *still* not very friendly for out-of-order CPUs, as\nthey introduce a dependency between fetching the pointer to the next\nelement, and processing the next element. Whereas for arrays etc CPUs\nstart executing instructions for the next element, before finishing the\nlast one.\n\n\n> Every time this has come up, I've opined that the right fix is to jack\n> up the List API and drive a new implementation underneath, as we did\n> once before (cf commit d0b4399d81).\n\nBtw, should we remove the ENABLE_LIST_COMPAT stuff independent of this\ndiscussion? Seems like we could even just do that for 12.\n\n\n> The big-picture performance change is that this makes list_nth()\n> a cheap O(1) operation, while lappend() is still pretty cheap;\n> on the downside, lcons() becomes O(N), as does insertion or deletion\n> in the middle of a list. But we don't use lcons() very much\n> (and maybe a lot of the existing usages aren't really necessary?),\n> while insertion/deletion in long lists is a vanishingly infrequent\n> operation. Meanwhile, making list_nth() cheap is a *huge* win.\n\nRight.\n\n\n> The most critical thing that we lose by doing this is that when a\n> List is modified, all of its cells may need to move, which breaks\n> a lot of code that assumes it can insert or delete a cell while\n> hanging onto a pointer to a nearby cell.\n\nWe could probably \"fix\" both this, and the cost of making such\nmodifications, by having more of an list-of-arrays style\nrepresentation. 
When adding/removing middle-of-the-\"list\" elements, we\ncould chop that array into two (by modifying metadata, not freeing), and\nshove the new data into a new array inbetween. But I don't think that'd\noverall be a win, even if it'd get us out of the silent API breakage\nbusiness.\n\n\n> Another annoying consequence of lnext() needing a List pointer is that\n> the List arguments of foreach() and related macros are now evaluated\n> each time through the loop. I could only find half a dozen places\n> where that was actually unsafe (all fixed in the draft patch), but\n> it's still bothersome. I experimented with ways to avoid that, but\n> the only way I could get it to work was to define foreach like this:\n\nYea, that problem is why the ilist stuff has the iterator\ndatastructure. That was before we allowed variable defs in for\nthough...\n\n\n> #define foreach(cell, l) for (const List *cell##__foreach = foreach_setup(l, &cell); cell != NULL; cell = lnext(cell##__foreach, cell))\n> \n> static inline const List *\n> foreach_setup(const List *l, ListCell **cell)\n> {\n> *cell = list_head(l);\n> return l;\n> }\n> \n> That works, but there are two problems. The lesser one is that a\n> not-very-bright compiler might think that the \"cell\" variable has to\n> be forced into memory, because its address is taken.\n\nI don't think that's a huge problem. I don't think there are any\nplatforms we really care about with such compilers? And I can't imagine\nthat being the only performance problem on such a platform.\n\n\n> The bigger one is\n> that this coding forces the \"cell\" variable to be exactly \"ListCell *\";\n> you can't add const or volatile qualifiers to it without getting\n> compiler warnings. There are actually more places that'd be affected\n> by that than by the need to avoid multiple evaluations. 
I don't think\n> the const markings would be a big deal to lose, and the two places in\n> do_autovacuum that need \"volatile\" (because of a nearby PG_TRY) could\n> be rewritten to not use foreach. So I'm tempted to do that, but it's\n> not very pretty.\n\nHm, that's a bit ugly, indeed.\n\n\n> Maybe somebody can think of a better solution?\n\nWe could cast away const & volatile on most compilers, and do better on\ngcc & clang, I guess. We could use typeof() and similar games to add the\nrelevant qualifiers. Or alternatively, also optionally of course, use\nC11 _Generic trickery for defining the type. But that seems\nunsatisfying (but safe, I think).\n\n\n\n> There's a lot of potential follow-on work that I've not touched yet:\n> \n> 1. This still involves at least two palloc's for every nonempty List,\n> because I kept the header and the data array separate. Perhaps it's\n> worth allocating those in one palloc. However, right now there's an\n> assumption that the header of a nonempty List doesn't move when you\n> change its contents; that's embedded in the API of the lappend_cell\n> functions, and more than likely there's code that depends on that\n> silently because it doesn't bother to store the results of other\n> List functions back into the appropriate pointer. So if we do that\n> at all I think it'd be better tackled in a separate patch; and I'm\n> not really convinced it's worth the trouble and extra risk of bugs.\n\nHm, I think if we force external code to audit their code, we better\nalso do this. This is a significant number of allocations, and I don't\nthink it'd be good to spread this out over two releases.\n\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 15:00:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Btw, should we remove the ENABLE_LIST_COMPAT stuff independent of this\n> discussion? Seems like we could even just do that for 12.\n\n+1. I took it out in the POC patch, but I see no very good reason\nnot to do it sooner than that.\n\n>> The most critical thing that we lose by doing this is that when a\n>> List is modified, all of its cells may need to move, which breaks\n>> a lot of code that assumes it can insert or delete a cell while\n>> hanging onto a pointer to a nearby cell.\n\n> We could probably \"fix\" both this, and the cost of making such\n> modifications, by having more of an list-of-arrays style\n> representation. When adding/removing middle-of-the-\"list\" elements, we\n> could chop that array into two (by modifying metadata, not freeing), and\n> shove the new data into a new array inbetween. But I don't think that'd\n> overall be a win, even if it'd get us out of the silent API breakage\n> business.\n\nYeah, I'm afraid that would still leave us with pretty expensive\nprimitives.\n\n>> 1. This still involves at least two palloc's for every nonempty List,\n>> because I kept the header and the data array separate. Perhaps it's\n>> worth allocating those in one palloc.\n\n> Hm, I think if we force external code to audit their code, we better\n> also do this. This is a significant number of allocations, and I don't\n> think it'd be good to spread this out over two releases.\n\nIf we choose to do it, I'd agree with doing both in the same major release\ncycle, so that extensions see just one breakage. But I think it'd still\nbest be developed as a follow-on patch.\n\n\nI had an idea that perhaps is worth considering --- upthread I rejected\nthe idea of deleting lnext() entirely, but what if we did so? 
We could\nredefine foreach() to not use it:\n\n#define foreach(cell, l) \\\n for (int cell##__index = 0; \\\n (cell = list_nth_cell(l, cell##__index)) != NULL; \\\n cell##__index++)\n\nWe'd need to fix list_nth_cell() to return NULL not choke on an index\nequal to (or past?) the array end, but that's easy.\n\nI think this would go a very long way towards eliminating the hazards\nassociated with iterating around a list-modification operation.\nOn the downside, it's hard to see how to merge it with the other idea\nfor evaluating the List reference only once, so we'd still have the\nhazard that the list ref had better be a stable expression. But that's\nlikely to be much easier to audit for than what the POC patch asks\npeople to do (maybe there's a way to check it mechanically, even?).\n\nAlso, any code that does contain explicit use of lnext() is likely\nin need of rethinking anyhow, so taking it away would help answer\nthe objection about making problems easy to identify.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 18:19:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>>> 1. This still involves at least two palloc's for every nonempty List,\n>>> because I kept the header and the data array separate. Perhaps it's\n>>> worth allocating those in one palloc.\n\n>> Hm, I think if we force external code to audit their code, we better\n>> also do this. This is a significant number of allocations, and I don't\n>> think it'd be good to spread this out over two releases.\n\n> If we choose to do it, I'd agree with doing both in the same major release\n> cycle, so that extensions see just one breakage. But I think it'd still\n> best be developed as a follow-on patch.\n\nBy the by ... this idea actively breaks the mechanism I'd proposed for\npreserving foreach's behavior of evaluating the List reference only once.\nIf we save a hidden copy of whatever the user says the List reference\nis, and then he assigns a new value to it mid-loop, we're screwed if\nthe list header can move.\n\nNow do you see why I'm a bit afraid of this? Perhaps it's worth doing,\nbut it's going to introduce a whole new set of code breakages that are\ngoing to be just as hard to audit for as anything else discussed in\nthis thread. (Example: outer function creates a nonempty list, and\npasses it down to some child function that appends to the list, and\nthere's no provision for returning the possibly-modified list header\npointer back up.) I'm not really convinced that saving one more palloc\nper List is worth it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Feb 2019 18:41:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 18:41:17 -0500, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >>> 1. This still involves at least two palloc's for every nonempty List,\n> >>> because I kept the header and the data array separate. Perhaps it's\n> >>> worth allocating those in one palloc.\n> \n> >> Hm, I think if we force external code to audit their code, we better\n> >> also do this. This is a significant number of allocations, and I don't\n> >> think it'd be good to spread this out over two releases.\n> \n> > If we choose to do it, I'd agree with doing both in the same major release\n> > cycle, so that extensions see just one breakage. But I think it'd still\n> > best be developed as a follow-on patch.\n> \n> By the by ... this idea actively breaks the mechanism I'd proposed for\n> preserving foreach's behavior of evaluating the List reference only once.\n> If we save a hidden copy of whatever the user says the List reference\n> is, and then he assigns a new value to it mid-loop, we're screwed if\n> the list header can move.\n\nHm, I wonder if that's necessary / whether we can just work around user\nvisible breakage at a small cost. I think I'm mostly concerned with two\nallocations for the very common case of small (1-3 entries) lists. We\ncould just allocate the first array together with the header, and not\nfree that if the list grows beyond that point. That'd mean we'd only do\nseparate allocations once they actually amortize over a number of\nallocations.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 17:55:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-02-25 17:55:46 -0800, Andres Freund wrote:\n> Hm, I wonder if that's necessary / whether we can just work around user\n> visible breakage at a small cost. I think I'm mostly concerned with two\n> allocations for the very common case of small (1-3 entries) lists. We\n> could just allocate the first array together with the header, and not\n> free that if the list grows beyond that point. That'd mean we'd only do\n> separate allocations once they actually amortize over a number of\n> allocations.\n\nBtw, if we actually were going to go for always allocating header + data\ntogether (and thus incuring the problems you mention upthread), we ought\nto store the members as a FLEXIBLE_ARRAY_MEMBER together with the\nlist. Probably not worth it, but reducing the number of pointer\nindirections for \"list\" accesses would be quite neat.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Feb 2019 18:01:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sun, 24 Feb 2019 at 15:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I haven't attempted any performance measurements on this, but at\n> least in principle it should speed things up a bit, especially\n> for complex test cases involving longer Lists. I don't have any\n> very suitable test cases at hand, anyway.\n\nI've not yet looked at the code, but I thought I'd give this a quick benchmark.\n\nUsing the attached patch (as text file so as not to upset the CFbot),\nwhich basically just measures and logs the time taken to run\npg_plan_query. Using this, I ran make installcheck 3 times unpatched\nand same again with the patch. I pulled the results of each run into a\nspreadsheet and took the average of each of the 3 runs then took the\nsum of the total average planning time over the 20334 individual\nresults.\n\nResults patched atop of 067786cea:\n\nTotal average time unpatched: 0.739808667 seconds\nTotal average time patched: 0.748144333 seconds.\n\nSurprisingly it took 1.13% longer. I did these tests on an AWS\nmd5.large instance.\n\nIf required, I can send the entire spreadsheet. It's about 750 KB.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 26 Feb 2019 15:53:26 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Using the attached patch (as text file so as not to upset the CFbot),\n> which basically just measures and logs the time taken to run\n> pg_plan_query. ...\n> Surprisingly it took 1.13% longer. I did these tests on an AWS\n> md5.large instance.\n\nInteresting. Seems to suggest that maybe the cases I discounted\nas being infrequent aren't so infrequent? Another possibility\nis that the new coding adds more cycles to foreach() loops than\nI'd hoped for.\n\nAnyway, it's just a POC; the main point at this stage is to be\nable to make such comparisons at all. If it turns out that we\n*can't* make this into a win, then all that bellyaching about\nhow inefficient Lists are was misinformed ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Feb 2019 00:34:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> I had an idea that perhaps is worth considering --- upthread I rejected\n> the idea of deleting lnext() entirely, but what if we did so? We could\n> redefine foreach() to not use it:\n\n> #define foreach(cell, l) \\\n> for (int cell##__index = 0; \\\n> (cell = list_nth_cell(l, cell##__index)) != NULL; \\\n> cell##__index++)\n\n> I think this would go a very long way towards eliminating the hazards\n> associated with iterating around a list-modification operation.\n\nI spent some time trying to modify my patch to work that way, but\nabandoned the idea before completing it, because it became pretty\nclear that it is a bad idea. There are at least two issues:\n\n1. In many places, the patch as I had it only required adding an\nadditional parameter to lnext() calls. Removing lnext() altogether is\nfar more invasive, requiring restructuring loop logic in many places\nwhere we otherwise wouldn't need to. Since most of the point of this\nproposal is to not have a larger patch footprint than we have to, that\nseemed like a step in the wrong direction.\n\n2. While making foreach() work this way might possibly help in avoiding\nwriting bad code in the first place, a loop of this form is really\njust about as vulnerable to being broken by list insertions/deletions\nas what I had before. If you don't make suitable adjustments to the\ninteger index after an insertion/deletion then you're liable to skip\nover, or double-visit, some list entries; and there's nothing here to\nhelp you notice that you need to do that. Worse, doing things like\nthis utterly destroys our chance of detecting such errors, because\nthere's no state being kept that's in any way checkable.\n\nI was driven to realize point 2 by noticing, while trying to get rid\nof some lnext calls, that I'd mostly failed in the v1 patch to fix\nloops that contain embedded list_delete() calls other than\nlist_delete_cell(). 
This is because the debug support I'd added failed\nto force relocation of lists after a deletion (unlike the insertion\ncases). It won't take much to add such relocation, and I'll go do that;\nbut with an integer-index-based loop implementation we've got no chance\nof having debug support that could catch failure to update the loop index.\n\nSo I think that idea is a failure, and going forward with the v1\napproach has better chances.\n\nI did find a number of places where getting rid of explicit lnext()\ncalls led to just plain cleaner code. Most of these were places that\ncould be using forboth() or forthree() and just weren't. There's\nalso several places that are crying out for a forfour() macro, so\nI'm not sure why we've stubbornly refused to provide it. I'm a bit\ninclined to just fix those things in the name of code readability,\nindependent of this patch.\n\nI also noticed that there's quite a lot of places that are testing\nlnext(cell) for being NULL or not. What that really means is whether\nthis cell is last in the list or not, so maybe readability would be\nimproved by defining a macro along the lines of list_cell_is_last().\nAny thoughts about that?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Feb 2019 18:51:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> I did find a number of places where getting rid of explicit lnext()\n> calls led to just plain cleaner code. Most of these were places that\n> could be using forboth() or forthree() and just weren't. There's\n> also several places that are crying out for a forfour() macro, so\n> I'm not sure why we've stubbornly refused to provide it. I'm a bit\n> inclined to just fix those things in the name of code readability,\n> independent of this patch.\n\n0001 below does this. I found a couple of places that could use\nforfive(), as well. I think this is a clear legibility and\nerror-proofing win, and we should just push it.\n\n> I also noticed that there's quite a lot of places that are testing\n> lnext(cell) for being NULL or not. What that really means is whether\n> this cell is last in the list or not, so maybe readability would be\n> improved by defining a macro along the lines of list_cell_is_last().\n> Any thoughts about that?\n\n0002 below does this. I'm having a hard time deciding whether this\npart is a good idea or just code churn. It might be more readable\n(especially to newbies) but I can't evaluate that very objectively.\nI'm particularly unsure about whether we need two macros; though the\nway I initially tried it with just list_cell_is_last() seemed kind of\ndouble-negatively confusing in the places where the test needs to be\nnot-last. Also, are these macro names too long, and if so what would\nbe better?\n\nAlso: if we accept either or both of these, should we back-patch the\nmacro additions, so that these new macros will be available for use\nin back-patched code? I'm not sure that forfour/forfive have enough\nuse-cases to worry about that; but the is-last macros would have a\nbetter case for that, I think.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 27 Feb 2019 15:26:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
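The macro Tom floats above is easy to picture against the array-based List layout. Below is a minimal standalone sketch, not the actual pg_list.h code: the struct layout and names only mimic the patch, and it uses the two-argument `list_cell_is_last(list, cell)` form he later suggests for back-patchability (on an array representation the list argument really is needed).

```c
#include <stddef.h>

/* Toy model of the proposed array-based List: cells live in one
 * contiguous array, so "is this cell last?" is pointer arithmetic
 * rather than a lnext(cell) == NULL test. */
typedef struct ListCell
{
    int int_value;
} ListCell;

typedef struct List
{
    int       length;
    ListCell *elements;
} List;

/* NULL for an empty list, mirroring list_head() returning NIL's head */
#define list_head(l) \
    ((l)->length > 0 ? &(l)->elements[0] : NULL)

/* Only meaningful for a cell actually inside a non-empty list */
#define list_cell_is_last(l, c) \
    ((c) == &(l)->elements[(l)->length - 1])
```

With this shape, `!list_cell_is_last(l, lc)` replaces the old `lnext(lc) != NULL` idiom without any pointer chasing.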
{
"msg_contents": "On Wed, Feb 27, 2019 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 0001 below does this. I found a couple of places that could use\n> forfive(), as well. I think this is a clear legibility and\n> error-proofing win, and we should just push it.\n\nIt sounds like some of these places might need a bigger restructuring\n- i.e. to iterate over a list/vector of structs with 5 members instead\nof iterating over five lists in parallel.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 27 Feb 2019 15:32:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 27, 2019 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 0001 below does this. I found a couple of places that could use\n>> forfive(), as well. I think this is a clear legibility and\n>> error-proofing win, and we should just push it.\n\n> It sounds like some of these places might need a bigger restructuring\n> - i.e. to iterate over a list/vector of structs with 5 members instead\n> of iterating over five lists in parallel.\n\nMeh. Most of them are iterating over parsetree substructure, eg the\ncomponents of a RowCompareExpr. So we could not change them without\npretty extensive infrastructure changes including a catversion bump.\nAlso, while the separated substructure is indeed a pain in the rear\nin some places, it's actually better for other uses. Two examples\nrelated to RowCompareExpr:\n\n* match_rowcompare_to_indexcol can check whether all the left-hand\nor right-hand expressions are nonvolatile with one easy call to\ncontain_volatile_functions on the respective list. To do the\nsame with a single list of sub-structs, it'd need bespoke code\nfor each case to run through the list and consider only the correct\nsubexpression of each sub-struct.\n\n* expand_indexqual_rowcompare can deal with commuted-clause cases just\nby swapping the list pointers at the start, it doesn't have to think\nabout it over again for each pair of elements.\n\nSo I'm not that excited about refactoring the data representation\nfor these. I'm content (for now) with getting these places in line\nwith the coding convention we use elsewhere for similar cases.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 15:47:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 2019-Feb-27, Tom Lane wrote:\n\n> I'm particularly unsure about whether we need two macros; though the\n> way I initially tried it with just list_cell_is_last() seemed kind of\n> double-negatively confusing in the places where the test needs to be\n> not-last. Also, are these macro names too long, and if so what would\n> be better?\n\nI think \"!list_cell_is_last()\" is just as readable, if not more, than\nthe \"is_not_last\" locution:\n\n\t\tappendStringInfoChar(&buf, '\\'');\n\t\tif (!list_cell_is_last(l))\n\t\t\tappendStringInfoString(&buf, \", \");\n\nI'd go with a single macro.\n\n\n+1 for backpatching the new macros, too. I suspect extension authors\nare going to need to provide compatibility versions anyway, to be\ncompilable against older minors.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 27 Feb 2019 18:41:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Thu, 28 Feb 2019 at 09:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I did find a number of places where getting rid of explicit lnext()\n> > calls led to just plain cleaner code. Most of these were places that\n> > could be using forboth() or forthree() and just weren't. There's\n> > also several places that are crying out for a forfour() macro, so\n> > I'm not sure why we've stubbornly refused to provide it. I'm a bit\n> > inclined to just fix those things in the name of code readability,\n> > independent of this patch.\n>\n> 0001 below does this. I found a couple of places that could use\n> forfive(), as well. I think this is a clear legibility and\n> error-proofing win, and we should just push it.\n\nI've looked over this and I agree that it's a good idea. Reducing the\nnumber of lnext() usages seems like a good idea in order to reduce the\nfootprint of the main patch.\n\nThe only thing of interest that I saw during the review was the fact\nthat you've chosen to assign colexpr and coldefexpr before the\ncontinue in get_tablefunc(). We may not end up using those values if\nwe find an ordinality column. I'm pretty sure it's not worth breaking\nthe mould for that case though, but just noting it anyway.\n\n> > I also noticed that there's quite a lot of places that are testing\n> > lnext(cell) for being NULL or not. What that really means is whether\n> > this cell is last in the list or not, so maybe readability would be\n> > improved by defining a macro along the lines of list_cell_is_last().\n> > Any thoughts about that?\n>\n> 0002 below does this. I'm having a hard time deciding whether this\n> part is a good idea or just code churn. 
It might be more readable\n> (especially to newbies) but I can't evaluate that very objectively.\n> I'm particularly unsure about whether we need two macros; though the\n> way I initially tried it with just list_cell_is_last() seemed kind of\n> double-negatively confusing in the places where the test needs to be\n> not-last. Also, are these macro names too long, and if so what would\n> be better?\n\nI'm less decided on this. Having this now means you'll need to break\nthe signature of the macro the same way as you'll need to break\nlnext(). It's perhaps easier to explain in the release notes about\nlnext() having changed so that extension authors can go fix their code\n(probably they'll know already from compile failures, but ....). On\nthe other hand, if the list_cell_is_last() is new, then there will be\nno calls to that in extensions anyway. Maybe it's better to do it at\nthe same time as the List reimplementation to ensure nobody needs to\nchange anything twice?\n\n> Also: if we accept either or both of these, should we back-patch the\n> macro additions, so that these new macros will be available for use\n> in back-patched code? I'm not sure that forfour/forfive have enough\n> use-cases to worry about that; but the is-last macros would have a\n> better case for that, I think.\n\nI see no reason not to put forfour() and forfive() in the back branches.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Thu, 28 Feb 2019 17:08:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Thu, 28 Feb 2019 at 09:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 0002 below does this. I'm having a hard time deciding whether this\n>> part is a good idea or just code churn. It might be more readable\n>> (especially to newbies) but I can't evaluate that very objectively.\n\n> I'm less decided on this. Having this now means you'll need to break\n> the signature of the macro the same way as you'll need to break\n> lnext(). It's perhaps easier to explain in the release notes about\n> lnext() having changed so that extension authors can go fix their code\n> (probably they'll know already from compile failures, but ....). On\n> the other hand, if the list_cell_is_last() is new, then there will be\n> no calls to that in extensions anyway. Maybe it's better to do it at\n> the same time as the List reimplementation to ensure nobody needs to\n> change anything twice?\n\nYeah, I was considering the idea of setting up the macro as\n\"list_cell_is_last(list, cell)\" from the get-go, with the first\nargument just going unused for the moment. That would be a good\nway to back-patch it if we go through with this. On the other hand,\nif we end up not pulling the trigger on the main patch, that'd\nlook pretty silly ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Feb 2019 23:23:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, 26 Feb 2019 at 18:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > Using the attached patch (as text file so as not to upset the CFbot),\n> > which basically just measures and logs the time taken to run\n> > pg_plan_query. ...\n> > Surprisingly it took 1.13% longer. I did these tests on an AWS\n> > md5.large instance.\n>\n> Interesting. Seems to suggest that maybe the cases I discounted\n> as being infrequent aren't so infrequent? Another possibility\n> is that the new coding adds more cycles to foreach() loops than\n> I'd hoped for.\n\nI went and had a few adventures with this patch to see if I could\nfigure out why the small ~1% regression exists. Profiling did not\nprove very useful as I saw none of the list functions come up. I had\nsuspected it was the lcons() calls being expensive because then need\nto push the elements up one place each time, not something that'll\nscale well with larger lists. After changing things so that a new\n\"first\" element index in the List would allow new_head_cell() to just\nmove everything to the end of the list and mark the start of the\nactual data... I discovered that slowed things down further... Likely\ndue to all the additional arithmetic work required to find the first\nelement.\n\nI then tried hacking at the foreach() macro after wondering if the\nlnext() call was somehow making things difficult for the compiler to\npredict what cell would come next. I experimented with the following\nmonstrosity:\n\nfor ((cell) = list_head(l); ((cell) && (cell) < &((List *)\nl)->elements[((List *) l)->first + ((List *) l)->length]) || (cell =\nNULL) != NULL; cell++)\n\nit made things worse again... It ended up much more ugly than I\nthought it would have as I had to account for an empty list being NIL\nand the fact that we need to make cell NULL after the loop is over.\n\nI tried a few other things... I didn't agree with your memmove() in\nlist_concat(). 
I think memcpy() is fine, even when the list pointers\nare the same since we never overwrite any live cell values. Strangely\nI found memcpy slower than memmove... ?\n\nThe only thing that I did to manage to speed the patch up was to ditch\nthe additional NULL test in lnext(). I don't see why that's required\nsince lnext(NULL) would have crashed with the old implementation.\nRemoving this changed the 1.13% regression into a ~0.8% regression,\nwhich at least does show that the foreach() implementation can have an\neffect on performance.\n\n> Anyway, it's just a POC; the main point at this stage is to be\n> able to make such comparisons at all. If it turns out that we\n> *can't* make this into a win, then all that bellyaching about\n> how inefficient Lists are was misinformed ...\n\nMy primary concern is how much we bend over backwards because\nlist_nth() performance is not O(1). I know from my work on\npartitioning that ExecInitRangeTable()'s building of\nes_range_table_array has a huge impact for PREPAREd plans for simple\nPK lookup SELECT queries to partitioned tables with a large number of\npartitions, where only 1 of which will survive run-time pruning. I\ncould get the execution speed of such a query with 300 partitions to\nwithin 94% of the non-partitioned version if the rtable could be\nlooked up O(1) in the executor natively, (that some changes to\nExecCheckRTPerms() to have it skip rtable entries that don't require\npermission checks.).\n\nPerhaps if we're not going to see gains from the patch alone then\nwe'll need to tag on some of the additional stuff that will take\nadvantage of list_nth() being fast and test the performance of it all\nagain.\n\nAttached is the (mostly worthless) series of hacks I made to your\npatch. It might save someone some time if they happened to wonder the\nsame thing as I did.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 28 Feb 2019 17:41:39 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
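David's foreach() experiments are easier to follow with a toy version of the array layout in hand. This is a standalone sketch whose struct and macro names only imitate the patch; unlike the real foreach(), it does not accept a NIL (NULL) list pointer and does not set the cell variable to NULL after the loop, which is exactly the kind of extra bookkeeping being weighed above.

```c
/* Toy array-based List: length plus a contiguous cell array */
typedef struct ListCell
{
    int int_value;
} ListCell;

typedef struct List
{
    int       length;
    ListCell *elements;
} List;

/* Iteration reduces to a plain pointer walk over the array: no
 * per-iteration lnext() call and no NULL test on the current cell. */
#define foreach_arr(cell, l) \
    for ((cell) = (l)->elements; \
         (cell) < (l)->elements + (l)->length; \
         (cell)++)

static int
list_sum(List *l)
{
    ListCell   *lc;
    int         sum = 0;

    foreach_arr(lc, l)
        sum += lc->int_value;
    return sum;
}
```

An empty list (length 0) simply makes the loop condition false immediately, so no separate NIL check is needed for that case.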
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I went and had a few adventures with this patch to see if I could\n> figure out why the small ~1% regression exists.\n\nThanks for poking around!\n\n> ... I had\n> suspected it was the lcons() calls being expensive because then need\n> to push the elements up one place each time, not something that'll\n> scale well with larger lists.\n\nI just did some looking around at lcons() calls, and it's hard to identify\nany that seem like they would be performance-critical. I did notice a\nnumber of places that think that lcons'ing a item onto a list, and later\nstripping it off with list_delete_first, is efficient. With the new\nimplementation it's far cheaper to lappend and then list_truncate instead,\nat least if the list is long. If the list order matters then that's not\nan option, but I found some places where it doesn't matter so we could get\nan easy win. Still, it wasn't obvious that this would move the needle at\nall.\n\n> I then tried hacking at the foreach() macro after wondering if the\n> lnext() call was somehow making things difficult for the compiler to\n> predict what cell would come next.\n\nYeah, my gut feeling right now is that foreach() is producing crummy\ncode, though it's not obvious why it would need to. Maybe some\nmicro-optimization is called for. But I've not had time to pursue it.\n\n> The only thing that I did to manage to speed the patch up was to ditch\n> the additional NULL test in lnext(). 
I don't see why that's required\n> since lnext(NULL) would have crashed with the old implementation.\n\nHmmm ...\n\n> Perhaps if we're not going to see gains from the patch alone then\n> we'll need to tag on some of the additional stuff that will take\n> advantage of list_nth() being fast and test the performance of it all\n> again.\n\nYeah, evaluating this patch in complete isolation is a bit unfair.\nStill, it'd be nice to hold the line in advance of any follow-on\nimprovements.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 00:40:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
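The lcons()/list_delete_first() versus lappend()/list_truncate() trade-off Tom describes can be sketched with a plain fixed-capacity int array (a simplified stand-in, not the real List code): under an array representation, touching the front shifts every element, while the tail operations are O(1).

```c
#include <string.h>

#define CAP 8

typedef struct
{
    int length;
    int elems[CAP];
} IntList;

/* O(n): the cost lcons() pays on an array -- every element shifts right.
 * Caller must ensure length < CAP. */
static void
push_front(IntList *l, int v)
{
    memmove(&l->elems[1], &l->elems[0], l->length * sizeof(int));
    l->elems[0] = v;
    l->length++;
}

/* O(1): append at the end, as lappend() does.  Caller checks capacity. */
static void
push_back(IntList *l, int v)
{
    l->elems[l->length++] = v;
}

/* O(1): drop trailing elements, as list_truncate() does */
static void
truncate_to(IntList *l, int n)
{
    if (l->length > n)
        l->length = n;
}
```

So where the stacked item's position doesn't matter, push-at-end plus truncate replaces push-at-front plus delete-first and avoids the repeated shifting.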
{
"msg_contents": ">>>>> \"David\" == David Rowley <david.rowley@2ndquadrant.com> writes:\n\n David> I went and had a few adventures with this patch to see if I\n David> could figure out why the small ~1% regression exists.\n\nJust changing the number of instructions (even in a completely unrelated\nplace that's not called during the test) can generate performance\nvariations of this size, even when there's no real difference.\n\nTo get a reliable measurement of timing changes less than around 3%,\nwhat you have to do is this: pick some irrelevant function and add\nsomething like an asm directive that inserts a variable number of NOPs,\nand do a series of test runs with different values.\n\nSee http://tinyurl.com/op9qg8a for an example of the kind of variation\nthat one can get; this plot records timing runs where each different\npadding size was tested 3 times (non-consecutively, so you can see how\nrepeatable the test result is for each size), each timing is actually\nthe average of the last 10 of 11 consecutive runs of the test.\n\nTo establish a 1% performance benefit or regression you need to show\nthat there's still a difference _AFTER_ taking this kind of\nspooky-action-at-a-distance into account. For example, in the test shown\nat the link, if a substantive change to the code moved the upper and\nlower bounds of the output from (6091,6289) to (6030,6236) then one\nwould be justified in claiming it as a 1% improvement.\n\nSuch is the reality of modern CPUs.\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Thu, 28 Feb 2019 05:42:13 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
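Andrew's NOP-padding technique can be sketched as follows for GCC/Clang with the GNU assembler. The macro name, function name, and build flag here are illustrative, not from the thread; the idea is to rebuild with varying `-DPAD_NOPS=...` values and re-time the real benchmark at each size, so that layout-induced jitter can be separated from genuine changes. (`nop` is a valid instruction on both x86 and AArch64.)

```c
#ifndef PAD_NOPS
#define PAD_NOPS 0              /* override at build time, e.g. -DPAD_NOPS=16 */
#endif

#define NOPS_STR_(x) #x
#define NOPS_STR(x)  NOPS_STR_(x)

/* An otherwise-unrelated function whose size varies with PAD_NOPS.
 * Changing its size shifts the placement of everything linked after it,
 * exercising the alignment/cacheline effects described above. */
int
padding_anchor(int x)
{
    /* .rept/.endr emits PAD_NOPS one-byte (or one-word) nop instructions */
    __asm__ volatile (".rept " NOPS_STR(PAD_NOPS) "\n\tnop\n\t.endr");
    return x + 1;
}
```

A timing difference only counts as real if it survives across the whole range of padding sizes, as in the plot Andrew links.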
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> To get a reliable measurement of timing changes less than around 3%,\n> what you have to do is this: pick some irrelevant function and add\n> something like an asm directive that inserts a variable number of NOPs,\n> and do a series of test runs with different values.\n\nGood point. If you're looking at a microbenchmark that only exercises\na small amount of code, it can be way worse than that. I was reminded\nof this the other day while fooling with the problem discussed in\nhttps://www.postgresql.org/message-id/flat/6970.1545327857@sss.pgh.pa.us\nin which we were spending huge amounts of time in a tight loop in\nmatch_eclasses_to_foreign_key_col. I normally run with --enable-cassert\nunless I'm trying to collect performance data; so I rebuilt with\n--disable-cassert, and was bemused to find out that that test case ran\ncirca 20% *slower* in the non-debug build. This is silly on its face,\nand even more so when you notice that match_eclasses_to_foreign_key_col\nitself contains no Asserts and so its machine code is unchanged by the\nswitch. (I went to the extent of comparing .s files to verify this.)\nSo that had to have been down to alignment/cacheline issues triggered\nby moving said function around. I doubt the case would be exactly\nreproducible on different hardware or toolchain, but another platform\nwould likely show similar issues on some case or other.\n\ntl;dr: even a 20% difference might be nothing more than an artifact.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 01:27:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": ">\n> Hi,\n\n I have tested the three issues fixed in patch 001. Array Indexes\nissue is still there.Running the following query returns ERROR: more than\none value returned by column XPath expression\n\nSELECT xmltable.*\nFROM (SELECT data FROM xmldata) x,\nLATERAL XMLTABLE('/ROWS/ROW'\nPASSING data\nCOLUMNS\ncountry_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\nsize_text float PATH 'SIZE/text()',\nsize_text_1 float PATH 'SIZE/text()[1]',\nsize_text_2 float PATH 'SIZE/text()[2]',\n\"SIZE\" float, size_xml xml PATH 'SIZE')\n\nThe other two issues are resolved by this patch.\n\n\n-- \nCheers\nRam 4.0\n\nHi, I have tested the three issues fixed in patch 001. Array Indexes issue is still there.Running the following query returns ERROR: more than one value returned by column XPath expressionSELECT xmltable.* FROM (SELECT data FROM xmldata) x, LATERAL XMLTABLE('/ROWS/ROW' PASSING data COLUMNS country_name text PATH 'COUNTRY_NAME/text()' NOT NULL, size_text float PATH 'SIZE/text()', size_text_1 float PATH 'SIZE/text()[1]', size_text_2 float PATH 'SIZE/text()[2]', \"SIZE\" float, size_xml xml PATH 'SIZE') The other two issues are resolved by this patch.-- CheersRam 4.0",
"msg_date": "Thu, 28 Feb 2019 14:28:30 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:\n\n> Hi,\n>\n> I have tested the three issues fixed in patch 001. Array Indexes\n> issue is still there.Running the following query returns ERROR: more than\n> one value returned by column XPath expression\n>\n> SELECT xmltable.*\n> FROM (SELECT data FROM xmldata) x,\n> LATERAL XMLTABLE('/ROWS/ROW'\n> PASSING data\n> COLUMNS\n> country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\n> size_text float PATH 'SIZE/text()',\n> size_text_1 float PATH 'SIZE/text()[1]',\n> size_text_2 float PATH 'SIZE/text()[2]',\n> \"SIZE\" float, size_xml xml PATH 'SIZE')\n>\n> The other two issues are resolved by this patch.\n>\n\nwhat patches you are used?\n\nRegards\n\nPavel\n\n\n> --\n> Cheers\n> Ram 4.0\n>\n\nčt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:Hi, I have tested the three issues fixed in patch 001. Array Indexes issue is still there.Running the following query returns ERROR: more than one value returned by column XPath expressionSELECT xmltable.* FROM (SELECT data FROM xmldata) x, LATERAL XMLTABLE('/ROWS/ROW' PASSING data COLUMNS country_name text PATH 'COUNTRY_NAME/text()' NOT NULL, size_text float PATH 'SIZE/text()', size_text_1 float PATH 'SIZE/text()[1]', size_text_2 float PATH 'SIZE/text()[2]', \"SIZE\" float, size_xml xml PATH 'SIZE') The other two issues are resolved by this patch.what patches you are used? RegardsPavel-- CheersRam 4.0",
"msg_date": "Thu, 28 Feb 2019 10:31:06 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "Hi,\nI applied the following patches\n\n0001-XML-XPath-comments-processing-instructions-array-ind.patch\n<https://www.postgresql.org/message-id/attachment/63467/0001-XML-XPath-comments-processing-instructions-array-ind.patch>\n\n0002-XML-avoid-xmlStrdup-if-possible.patch\n<https://www.postgresql.org/message-id/attachment/63468/0002-XML-avoid-xmlStrdup-if-possible.patch>\n\n\nCan you let me know what fix is done in patch 002. I will test that as well?\n\nRegards,\nRam.\n\nOn Thu, 28 Feb 2019 at 15:01, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com>\n> napsal:\n>\n>> Hi,\n>>\n>> I have tested the three issues fixed in patch 001. Array Indexes\n>> issue is still there.Running the following query returns ERROR: more\n>> than one value returned by column XPath expression\n>>\n>> SELECT xmltable.*\n>> FROM (SELECT data FROM xmldata) x,\n>> LATERAL XMLTABLE('/ROWS/ROW'\n>> PASSING data\n>> COLUMNS\n>> country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\n>> size_text float PATH 'SIZE/text()',\n>> size_text_1 float PATH 'SIZE/text()[1]',\n>> size_text_2 float PATH 'SIZE/text()[2]',\n>> \"SIZE\" float, size_xml xml PATH 'SIZE')\n>>\n>> The other two issues are resolved by this patch.\n>>\n>\n> what patches you are used?\n>\n> Regards\n>\n> Pavel\n>\n>\n>> --\n>> Cheers\n>> Ram 4.0\n>>\n>\n\n-- \nCheers\nRam 4.0\n\nHi,I applied the following patches0001-XML-XPath-comments-processing-instructions-array-ind.patch 0002-XML-avoid-xmlStrdup-if-possible.patch Can you let me know what fix is done in patch 002. I will test that as well?Regards,Ram.On Thu, 28 Feb 2019 at 15:01, Pavel Stehule <pavel.stehule@gmail.com> wrote:čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:Hi, I have tested the three issues fixed in patch 001. 
Array Indexes issue is still there.Running the following query returns ERROR: more than one value returned by column XPath expressionSELECT xmltable.* FROM (SELECT data FROM xmldata) x, LATERAL XMLTABLE('/ROWS/ROW' PASSING data COLUMNS country_name text PATH 'COUNTRY_NAME/text()' NOT NULL, size_text float PATH 'SIZE/text()', size_text_1 float PATH 'SIZE/text()[1]', size_text_2 float PATH 'SIZE/text()[2]', \"SIZE\" float, size_xml xml PATH 'SIZE') The other two issues are resolved by this patch.what patches you are used? RegardsPavel-- CheersRam 4.0\n\n-- CheersRam 4.0",
"msg_date": "Thu, 28 Feb 2019 15:19:31 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "čt 28. 2. 2019 v 10:49 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:\n\n> Hi,\n> I applied the following patches\n>\n> 0001-XML-XPath-comments-processing-instructions-array-ind.patch\n> <https://www.postgresql.org/message-id/attachment/63467/0001-XML-XPath-comments-processing-instructions-array-ind.patch>\n>\n> 0002-XML-avoid-xmlStrdup-if-possible.patch\n> <https://www.postgresql.org/message-id/attachment/63468/0002-XML-avoid-xmlStrdup-if-possible.patch>\n>\n>\n> Can you let me know what fix is done in patch 002. I will test that as\n> well?\n>\n\nI afraid so this patch set was not finished, and is not in current\ncommitfest\n\nplease, check this set https://commitfest.postgresql.org/22/1872/\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> Ram.\n>\n> On Thu, 28 Feb 2019 at 15:01, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com>\n>> napsal:\n>>\n>>> Hi,\n>>>\n>>> I have tested the three issues fixed in patch 001. Array Indexes\n>>> issue is still there.Running the following query returns ERROR: more\n>>> than one value returned by column XPath expression\n>>>\n>>> SELECT xmltable.*\n>>> FROM (SELECT data FROM xmldata) x,\n>>> LATERAL XMLTABLE('/ROWS/ROW'\n>>> PASSING data\n>>> COLUMNS\n>>> country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\n>>> size_text float PATH 'SIZE/text()',\n>>> size_text_1 float PATH 'SIZE/text()[1]',\n>>> size_text_2 float PATH 'SIZE/text()[2]',\n>>> \"SIZE\" float, size_xml xml PATH 'SIZE')\n>>>\n>>> The other two issues are resolved by this patch.\n>>>\n>>\n>> what patches you are used?\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> --\n>>> Cheers\n>>> Ram 4.0\n>>>\n>>\n>\n> --\n> Cheers\n> Ram 4.0\n>\n\nčt 28. 2. 
2019 v 10:49 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:Hi,I applied the following patches0001-XML-XPath-comments-processing-instructions-array-ind.patch 0002-XML-avoid-xmlStrdup-if-possible.patch Can you let me know what fix is done in patch 002. I will test that as well?I afraid so this patch set was not finished, and is not in current commitfestplease, check this set https://commitfest.postgresql.org/22/1872/RegardsPavel Regards,Ram.On Thu, 28 Feb 2019 at 15:01, Pavel Stehule <pavel.stehule@gmail.com> wrote:čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:Hi, I have tested the three issues fixed in patch 001. Array Indexes issue is still there.Running the following query returns ERROR: more than one value returned by column XPath expressionSELECT xmltable.* FROM (SELECT data FROM xmldata) x, LATERAL XMLTABLE('/ROWS/ROW' PASSING data COLUMNS country_name text PATH 'COUNTRY_NAME/text()' NOT NULL, size_text float PATH 'SIZE/text()', size_text_1 float PATH 'SIZE/text()[1]', size_text_2 float PATH 'SIZE/text()[2]', \"SIZE\" float, size_xml xml PATH 'SIZE') The other two issues are resolved by this patch.what patches you are used? RegardsPavel-- CheersRam 4.0\n\n-- CheersRam 4.0",
"msg_date": "Thu, 28 Feb 2019 11:04:50 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "čt 28. 2. 2019 v 10:31 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\nnapsal:\r\n\r\n>\r\n>\r\n> čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com>\r\n> napsal:\r\n>\r\n>> Hi,\r\n>>\r\n>> I have tested the three issues fixed in patch 001. Array Indexes\r\n>> issue is still there.Running the following query returns ERROR: more\r\n>> than one value returned by column XPath expression\r\n>>\r\n>> SELECT xmltable.*\r\n>> FROM (SELECT data FROM xmldata) x,\r\n>> LATERAL XMLTABLE('/ROWS/ROW'\r\n>> PASSING data\r\n>> COLUMNS\r\n>> country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\r\n>> size_text float PATH 'SIZE/text()',\r\n>> size_text_1 float PATH 'SIZE/text()[1]',\r\n>> size_text_2 float PATH 'SIZE/text()[2]',\r\n>> \"SIZE\" float, size_xml xml PATH 'SIZE')\r\n>>\r\n>> The other two issues are resolved by this patch.\r\n>>\r\n>\r\nI tested xmltable-xpath-result-processing-bugfix-6.patch\r\n\r\nand it is working\r\n\r\npostgres=# SELECT xmltable.*\r\npostgres-# FROM (SELECT data FROM xmldata) x,\r\npostgres-# LATERAL XMLTABLE('/ROWS/ROW'\r\npostgres(# PASSING data\r\npostgres(# COLUMNS\r\npostgres(# country_name text PATH\r\n'COUNTRY_NAME/text()' NOT NULL,\r\npostgres(# size_text float PATH\r\n'SIZE/text()',\r\npostgres(# size_text_1 float PATH\r\n'SIZE/text()[1]',\r\npostgres(# size_text_2 float PATH\r\n'SIZE/text()[2]',\r\npostgres(# \"SIZE\" float, size_xml xml\r\nPATH 'SIZE') ;\r\n┌──────────────┬───────────┬─────────────┬─────────────┬──────┬────────────────────────────┐\r\n\r\n│ country_name │ size_text │ size_text_1 │ size_text_2 │ SIZE │\r\nsize_xml │\r\n╞══════════════╪═══════════╪═════════════╪═════════════╪══════╪════════════════════════════╡\r\n\r\n│ Australia │ ∅ │ ∅ │ ∅ │ ∅ │\r\n∅ │\r\n│ China │ ∅ │ ∅ │ ∅ │ ∅ │\r\n∅ │\r\n│ HongKong │ ∅ │ ∅ │ ∅ │ ∅ │\r\n∅ │\r\n│ India │ ∅ │ ∅ │ ∅ │ ∅ │\r\n∅ │\r\n│ Japan │ ∅ │ ∅ │ ∅ │ ∅ │\r\n∅ │\r\n│ Singapore │ 791 │ 791 │ ∅ │ 791 │ <SIZE\r\nunit=\"km\">791</SIZE> 
│\r\n└──────────────┴───────────┴─────────────┴─────────────┴──────┴────────────────────────────┘\r\n\r\n(6 rows)\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n>\r\n> what patches you are used?\r\n>\r\n> Regards\r\n>\r\n> Pavel\r\n>\r\n>\r\n>> --\r\n>> Cheers\r\n>> Ram 4.0\r\n>>\r\n>\r\n\nčt 28. 2. 2019 v 10:31 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com> napsal:Hi, I have tested the three issues fixed in patch 001. Array Indexes issue is still there.Running the following query returns ERROR: more than one value returned by column XPath expressionSELECT xmltable.* FROM (SELECT data FROM xmldata) x, LATERAL XMLTABLE('/ROWS/ROW' PASSING data COLUMNS country_name text PATH 'COUNTRY_NAME/text()' NOT NULL, size_text float PATH 'SIZE/text()', size_text_1 float PATH 'SIZE/text()[1]', size_text_2 float PATH 'SIZE/text()[2]', \"SIZE\" float, size_xml xml PATH 'SIZE') The other two issues are resolved by this patch.I tested xmltable-xpath-result-processing-bugfix-6.patchand it is workingpostgres=# SELECT xmltable.*\r\npostgres-# FROM (SELECT data FROM xmldata) x,\r\npostgres-# LATERAL XMLTABLE('/ROWS/ROW'\r\npostgres(# PASSING data\r\npostgres(# COLUMNS\r\npostgres(# country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\r\npostgres(# size_text float PATH 'SIZE/text()',\r\npostgres(# size_text_1 float PATH 'SIZE/text()[1]',\r\npostgres(# size_text_2 float PATH 'SIZE/text()[2]',\r\npostgres(# \"SIZE\" float, size_xml xml PATH 'SIZE') ;\r\n┌──────────────┬───────────┬─────────────┬─────────────┬──────┬────────────────────────────┐\r\n│ country_name │ size_text │ size_text_1 │ size_text_2 │ SIZE │ size_xml │\r\n╞══════════════╪═══════════╪═════════════╪═════════════╪══════╪════════════════════════════╡\r\n│ Australia │ ∅ │ ∅ │ ∅ │ ∅ │ ∅ │\r\n│ China │ ∅ │ ∅ │ ∅ │ ∅ │ ∅ │\r\n│ HongKong │ ∅ │ ∅ │ ∅ │ ∅ │ ∅ │\r\n│ India │ ∅ │ ∅ │ ∅ │ ∅ │ ∅ │\r\n│ Japan │ ∅ │ ∅ │ ∅ │ ∅ │ ∅ │\r\n│ Singapore │ 791 │ 791 │ ∅ │ 
791 │ <SIZE unit=\"km\">791</SIZE> │\r\n└──────────────┴───────────┴─────────────┴─────────────┴──────┴────────────────────────────┘\r\n(6 rows)\r\nRegardsPavel what patches you are used? RegardsPavel-- CheersRam 4.0",
"msg_date": "Thu, 28 Feb 2019 13:24:53 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Thu, 28 Feb 2019 at 09:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 0001 below does this. I found a couple of places that could use\n>> forfive(), as well. I think this is a clear legibility and\n>> error-proofing win, and we should just push it.\n\n> I've looked over this and I agree that it's a good idea. Reducing the\n> number of lnext() usages seems like a good idea in order to reduce the\n> footprint of the main patch.\n\nI've pushed that; thanks for reviewing!\n\n>> 0002 below does this. I'm having a hard time deciding whether this\n>> part is a good idea or just code churn. It might be more readable\n>> (especially to newbies) but I can't evaluate that very objectively.\n\n> I'm less decided on this.\n\nYeah, I think I'm just going to drop that idea. People didn't seem\nvery sold on list_cell_is_last() being a readability improvement,\nand it certainly does nothing to reduce the footprint of the main\npatch.\n\nI now need to rebase the main patch over what I pushed; off to do\nthat next.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 28 Feb 2019 14:28:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Here's a rebased version of the main patch.\n\nDavid Rowley <david.rowley@2ndquadrant.com> writes:\n> The only thing that I did to manage to speed the patch up was to ditch\n> the additional NULL test in lnext(). I don't see why that's required\n> since lnext(NULL) would have crashed with the old implementation.\n\nI adopted this idea. I think at one point where I was fooling with\ndifferent implementations for foreach(), it was necessary that lnext()\nbe cool with a NULL input; but as things stand now, it's not.\n\nI haven't done anything else in the performance direction, but am\nplanning to play with that next.\n\nI did run through all the list_delete_foo callers and fix the ones\nthat were still busted. I also changed things so that with\nDEBUG_LIST_MEMORY_USAGE enabled, list deletions would move the data\narrays around, in hopes of catching more stale-pointer problems.\nDepressingly, check-world still passed with that added, even before\nI'd fixed the bugs I found by inspection. This does not speak well\nfor the coverage of our regression tests.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 28 Feb 2019 16:49:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\r\nThe below statement needs to be executed before running the query to\r\nreplicate the issue\r\n\r\nupdate xmldata set data = regexp_replace(data::text, '791',\r\n'<!--ah-->7<!--oh-->9<!--uh-->1')::xml;\r\n\r\nOn Thu, 28 Feb 2019 at 17:55, Pavel Stehule <pavel.stehule@gmail.com> wrote:\r\n\r\n>\r\n>\r\n> čt 28. 2. 2019 v 10:31 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\r\n> napsal:\r\n>\r\n>>\r\n>>\r\n>> čt 28. 2. 2019 v 9:58 odesílatel Ramanarayana <raam.soft@gmail.com>\r\n>> napsal:\r\n>>\r\n>>> Hi,\r\n>>>\r\n>>> I have tested the three issues fixed in patch 001. Array Indexes\r\n>>> issue is still there.Running the following query returns ERROR: more\r\n>>> than one value returned by column XPath expression\r\n>>>\r\n>>> SELECT xmltable.*\r\n>>> FROM (SELECT data FROM xmldata) x,\r\n>>> LATERAL XMLTABLE('/ROWS/ROW'\r\n>>> PASSING data\r\n>>> COLUMNS\r\n>>> country_name text PATH 'COUNTRY_NAME/text()' NOT NULL,\r\n>>> size_text float PATH 'SIZE/text()',\r\n>>> size_text_1 float PATH 'SIZE/text()[1]',\r\n>>> size_text_2 float PATH 'SIZE/text()[2]',\r\n>>> \"SIZE\" float, size_xml xml PATH 'SIZE')\r\n>>>\r\n>>> The other two issues are resolved by this patch.\r\n>>>\r\n>>\r\n> I tested xmltable-xpath-result-processing-bugfix-6.patch\r\n>\r\n> and it is working\r\n>\r\n> postgres=# SELECT xmltable.*\r\n> postgres-# FROM (SELECT data FROM xmldata) x,\r\n> postgres-# LATERAL XMLTABLE('/ROWS/ROW'\r\n> postgres(# PASSING data\r\n> postgres(# COLUMNS\r\n> postgres(# country_name text PATH\r\n> 'COUNTRY_NAME/text()' NOT NULL,\r\n> postgres(# size_text float PATH\r\n> 'SIZE/text()',\r\n> postgres(# size_text_1 float PATH\r\n> 'SIZE/text()[1]',\r\n> postgres(# size_text_2 float PATH\r\n> 'SIZE/text()[2]',\r\n> postgres(# \"SIZE\" float, size_xml xml\r\n> PATH 'SIZE') ;\r\n> ┌──────────────┬───────────┬─────────────┬─────────────┬──────┬────────────────────────────┐\r\n>\r\n> │ country_name │ size_text │ size_text_1 │ size_text_2 │ SIZE │\r\n> 
size_xml │\r\n> ╞══════════════╪═══════════╪═════════════╪═════════════╪══════╪════════════════════════════╡\r\n>\r\n> │ Australia │ ∅ │ ∅ │ ∅ │ ∅ │\r\n> ∅ │\r\n> │ China │ ∅ │ ∅ │ ∅ │ ∅ │\r\n> ∅ │\r\n> │ HongKong │ ∅ │ ∅ │ ∅ │ ∅ │\r\n> ∅ │\r\n> │ India │ ∅ │ ∅ │ ∅ │ ∅ │\r\n> ∅ │\r\n> │ Japan │ ∅ │ ∅ │ ∅ │ ∅ │\r\n> ∅ │\r\n> │ Singapore │ 791 │ 791 │ ∅ │ 791 │ <SIZE\r\n> unit=\"km\">791</SIZE> │\r\n> └──────────────┴───────────┴─────────────┴─────────────┴──────┴────────────────────────────┘\r\n>\r\n> (6 rows)\r\n>\r\n> Regards\r\n>\r\n> Pavel\r\n>\r\n>\r\n>>\r\n>> what patches you are used?\r\n>>\r\n>> Regards\r\n>>\r\n>> Pavel\r\n>>\r\n>>\r\n>>> --\r\n>>> Cheers\r\n>>> Ram 4.0\r\n>>>\r\n>>\r\n\r\n-- \r\nCheers\r\nRam 4.0\r\n",
"msg_date": "Fri, 1 Mar 2019 06:06:17 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "Hi, thanks for checking the patches!\n\nOn 02/28/19 19:36, Ramanarayana wrote:\n> The below statement needs to be executed before running the query to\n> replicate the issue\n> \n> update xmldata set data = regexp_replace(data::text, '791',\n> '<!--ah-->7<!--oh-->9<!--uh-->1')::xml;\n\nIf you are applying that update (and there is a SIZE element originally\n791), and then receiving a \"more than one value returned by column XPath\nexpression\" error, I believe you are seeing documented, correct behavior.\n\nYour update changes the content of that SIZE element to have three\ncomment nodes and three text nodes.\n\nThe query then contains this column spec:\n\nsize_text float PATH 'SIZE/text()'\n\nwhere the target SQL column type is 'float' and the path expression will\nreturn an XML result consisting of the three text nodes.\n\nAs documented, \"An XML result assigned to a column of any other type may\nnot have more than one node, or an error is raised.\"\n\nSo I think this behavior is correct.\n\nIf you do any more testing (thank you for taking the interest, by the way!),\ncould you please add your comments, not to this email thread, but to [1]?\n\n[1]\nhttps://www.postgresql.org/message-id/3e8eab9e-7289-6c23-5e2c-153cccea2257%40anastigmatix.net\n\nThat's the one that is registered to the commitfest entry, so comments made\non this thread might be overlooked.\n\nThanks!\n-Chap\n\n",
"msg_date": "Thu, 28 Feb 2019 20:31:04 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: XML/XPath issues: text/CDATA in XMLTABLE, XPath evaluated with\n wrong context"
},
{
"msg_contents": "Here's a v3 incorporating Andres' idea of trying to avoid a separate\npalloc for the list cell array. In a 64-bit machine we can have up\nto five ListCells in the initial allocation without actually increasing\nspace consumption at all compared to the old code. So only when a List\ngrows larger than that do we need more than one palloc.\n\nI'm still having considerable difficulty convincing myself that this\nis enough of a win to justify the bug hazards we'll introduce, though.\nOn test cases like \"pg_bench -S\" it seems to be pretty much within the\nnoise level of being the same speed as HEAD. I did see a nice improvement\nin the test case described in\nhttps://www.postgresql.org/message-id/6970.1545327857@sss.pgh.pa.us\nbut considering that that's still mostly a tight loop in\nmatch_eclasses_to_foreign_key_col, it doesn't seem very interesting\nas an overall figure of merit.\n\nI wonder what test cases Andres has been looking at that convince\nhim that we need a reimplementation of Lists.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 02 Mar 2019 18:11:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-02 18:11:43 -0500, Tom Lane wrote:\n> I wonder what test cases Andres has been looking at that convince\n> him that we need a reimplementation of Lists.\n\nMy main observation was from when the expression evaluation was using\nlists all over. List iteration overhead was very substantial there. But\nthat's not a problem anymore, because all of those are gone now due to\nthe expression rewrite. I personally wasn't actually advocating for a\nnew list implementation, I was/am advocating that we should move some\ntasks over to a more optimized representation.\n\nI still regularly see list overhead matter in production workloads. A\nlot of it being memory allocator overhead, which is why I'm concerned\nwith a rewrite that doesn't reduce the number of memory allocations. And\na lot of it is stuff that you won't see in pgbench - e.g. there's a lot\nof production queries that join a bunch of tables with a few dozen\ncolumns, where e.g. all the targetlists are much longer than what you'd\nsee in pgbench -S.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Sat, 2 Mar 2019 20:34:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-02 18:11:43 -0500, Tom Lane wrote:\n>> I wonder what test cases Andres has been looking at that convince\n>> him that we need a reimplementation of Lists.\n\n> My main observation was from when the expression evaluation was using\n> lists all over. List iteration overhead was very substantial there. But\n> that's not a problem anymore, because all of those are gone now due to\n> the expression rewrite. I personally wasn't actually advocating for a\n> new list implementation, I was/am advocating that we should move some\n> tasks over to a more optimized representation.\n\nI doubt that you'll get far with that; if this experiment is anything\nto go by, it's going to be really hard to make the case that twiddling\nthe representation of widely-known data structures is worth the work\nand bug hazards.\n\n> I still regularly see list overhead matter in production workloads. A\n> lot of it being memory allocator overhead, which is why I'm concerned\n> with a rewrite that doesn't reduce the number of memory allocations.\n\nWell, I did that in the v3 patch, and it still hasn't moved the needle\nnoticeably in any test case I've tried. At this point I'm really\nstruggling to see a reason why we shouldn't just mark this patch rejected\nand move on. If you have test cases that suggest differently, please\nshow them don't just handwave.\n\nThe cases I've been looking at suggest to me that we'd make far\nmore progress on the excessive-palloc'ing front if we could redesign\nthings to reduce unnecessary copying of parsetrees. Right now the\nplanner does an awful lot of copying because of fear of unwanted\nmodifications of multiply-linked subtrees. I suspect that we could\nreduce that overhead with some consistently enforced rules about\nnot scribbling on input data structures; but it'd take a lot of work\nto get there, and I'm afraid it'd be easily broken :-(\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 03 Mar 2019 13:29:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, 4 Mar 2019 at 07:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > I still regularly see list overhead matter in production workloads. A\n> > lot of it being memory allocator overhead, which is why I'm concerned\n> > with a rewrite that doesn't reduce the number of memory allocations.\n>\n> Well, I did that in the v3 patch, and it still hasn't moved the needle\n> noticeably in any test case I've tried. At this point I'm really\n> struggling to see a reason why we shouldn't just mark this patch rejected\n> and move on. If you have test cases that suggest differently, please\n> show them don't just handwave.\n\nI think we discussed this before, but... if this patch is not a win by\nitself (and we've already seen it's not really causing much in the way\nof regression, if any), then we need to judge it on what else we can\ndo to exploit the new performance characteristics of List. For\nexample list_nth() is now deadly fast.\n\nMy primary interest here is getting rid of a few places where we build\nan array version of some List so that we can access the Nth element\nmore quickly. What goes on in ExecInitRangeTable() is not particularly\ngreat for queries to partitioned tables with a large number of\npartitions where only one survives run-time pruning. 
I've hacked\ntogether a patch to show you what wins we can have with the new list\nimplementation.\n\nUsing the attached, (renamed to .txt to not upset CFbot) I get:\n\nsetup:\n\ncreate table hashp (a int, b int) partition by hash (a);\nselect 'create table hashp'||x||' partition of hashp for values with\n(modulus 10000, remainder '||x||');' from generate_Series(0,9999) x;\n\\gexec\nalter table hashp add constraint hashp_pkey PRIMARY KEY (a);\n\npostgresql.conf\nplan_cache_mode = force_generic_plan\nmax_parallel_workers_per_gather=0\nmax_locks_per_transaction=256\n\nbench.sql\n\n\\set p random(1,10000)\nselect * from hashp where a = :p;\n\nmaster:\n\ntps = 189.499654 (excluding connections establishing)\ntps = 195.102743 (excluding connections establishing)\ntps = 194.338813 (excluding connections establishing)\n\nyour List reimplementation v3 + attached\n\ntps = 12852.003735 (excluding connections establishing)\ntps = 12791.834617 (excluding connections establishing)\ntps = 12691.515641 (excluding connections establishing)\n\nThe attached does include [1], but even with just that the performance\nis not as good as with the arraylist plus the follow-on exploits I\nadded. Now that we have a much faster bms_next_member() some form of\nwhat in there might be okay.\n\nA profile shows that in this workload we're still spending 42% of the\n12k TPS in hash_seq_search(). That's due to LockReleaseAll() having a\nhard time of it due to the bloated lock table from having to build the\ngeneric plan with 10k partitions. 
[2] aims to fix that, so likely\nwe'll be closer to 18k TPS, or about 100x faster.\n\nIn fact, I should test that...\n\ntps = 18763.977940 (excluding connections establishing)\ntps = 18589.531558 (excluding connections establishing)\ntps = 19011.295770 (excluding connections establishing)\n\nYip, about 100x.\n\nI think these are worthy goals to aspire to.\n\n[1] https://commitfest.postgresql.org/22/1897/\n[2] https://commitfest.postgresql.org/22/1993/\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 4 Mar 2019 14:18:38 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sun, Mar 3, 2019 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > My main observation was from when the expression evaluation was using\n> > lists all over. List iteration overhead was very substantial there. But\n> > that's not a problem anymore, because all of those are gone now due to\n> > the expression rewrite. I personally wasn't actually advocating for a\n> > new list implementation, I was/am advocating that we should move some\n> > tasks over to a more optimized representation.\n>\n> I doubt that you'll get far with that; if this experiment is anything\n> to go by, it's going to be really hard to make the case that twiddling\n> the representation of widely-known data structures is worth the work\n> and bug hazards.\n\nI'm befuddled by this comment. Andres is arguing that we shouldn't go\ndo a blind search-and-replace, but rather change certain things, and\nyou're saying that's going to be really hard because twiddling the\nrepresentation of widely-known data structures is really hard. But if\nwe only change certain things, we don't *need* to twiddle the\nrepresentation of a widely-known data structure. We just add a new\none and convert the things that benefit from it, like I proposed\nupthread (and promptly got told I was wrong).\n\nI think the reason why you're not seeing a performance benefit is\nbecause the problem is not that lists are generically a more expensive\ndata structure than arrays, but that there are cases when they are\nmore expensive than arrays. If you only ever push/pop at the front,\nof course a list is going to be better. If you often look up elements\nby index, of course an array is going to be better. 
If you change\nevery case where the code currently uses a list to use something else\ninstead, then you're changing both the winning and losing cases.\nYeah, changing things individually is more work, but that's how you\nget the wins without incurring the losses.\n\nI think David's results go in this direction, too. Code that was\nwritten on the assumption that list_nth() is slow is going to avoid\nusing it as much as possible, and therefore no benefit is to be\nexpected from making it fast. If the author had written the same code\nassuming that the underlying data structure was an array rather than a\nlist, they might have picked a different algorithm which, as David's\nresults show, could be a lot faster in some cases. But it's not\ngoing to come from just changing the way lists work internally; it's\ngoing to come from redesigning the algorithms that are using lists to\ndo something better instead, as Andres's example of linearized\nexpression evaluation also shows.\n\n> The cases I've been looking at suggest to me that we'd make far\n> more progress on the excessive-palloc'ing front if we could redesign\n> things to reduce unnecessary copying of parsetrees. Right now the\n> planner does an awful lot of copying because of fear of unwanted\n> modifications of multiply-linked subtrees. I suspect that we could\n> reduce that overhead with some consistently enforced rules about\n> not scribbling on input data structures; but it'd take a lot of work\n> to get there, and I'm afraid it'd be easily broken :-(\n\nI think that's a separate but also promising thing to attack, and I\nagree that it'd take a lot of work to get there. I don't think that\nthe problem with either parse-tree-copying or list usage is that no\nperformance benefits are to be had; I think it's that the amount of\nwork required to get those benefits is pretty large. 
If it were\notherwise, somebody likely would have done it before now.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 4 Mar 2019 12:44:41 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think the reason why you're not seeing a performance benefit is\n> because the problem is not that lists are generically a more expensive\n> data structure than arrays, but that there are cases when they are\n> more expensive than arrays. If you only ever push/pop at the front,\n> of course a list is going to be better. If you often look up elements\n> by index, of course an array is going to be better. If you change\n> every case where the code currently uses a list to use something else\n> instead, then you're changing both the winning and losing cases.\n\nI don't think this argument is especially on-point, because what I'm\nactually seeing is just that there aren't any list operations that\nare expensive enough to make much of an overall difference in\ntypical queries. To the extent that an array reimplementation\nreduces the palloc traffic, it'd take some load off that subsystem,\nbut apparently you need not-typical queries to really notice.\n(And, if the real motivation is aggregate palloc savings, then yes you\nreally do want to replace everything...)\n\n> Yeah, changing things individually is more work, but that's how you\n> get the wins without incurring the losses.\n\nThe concern I have is mostly about the use of lists as core infrastructure\nin parsetree, plantree, etc data structures. I think any idea that we'd\nreplace those piecemeal is borderline insane: it's simply not worth it\nfrom a notational and bug-risk standpoint to glue together some parts of\nthose structures differently from the way other parts are glued together.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 04 Mar 2019 13:11:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-02 18:11:43 -0500, Tom Lane wrote:\n> On test cases like \"pg_bench -S\" it seems to be pretty much within the\n> noise level of being the same speed as HEAD.\n\nI think that might be because it's bottleneck is just elsewhere\n(e.g. very context switch heavy, very few lists of any length).\n\nFWIW, even just taking context switches out of the equation leads to\na ~5-6 %benefit in a simple statement:\n\nDO $f$BEGIN FOR i IN 1..500000 LOOP EXECUTE $s$SELECT aid, bid, abalance, filler FROM pgbench_accounts WHERE aid = 2045530;$s$;END LOOP;END;$f$;\n\nmaster:\n+ 6.05% postgres postgres [.] AllocSetAlloc\n+ 5.52% postgres postgres [.] base_yyparse\n+ 2.51% postgres postgres [.] palloc\n+ 1.82% postgres postgres [.] hash_search_with_hash_value\n+ 1.61% postgres postgres [.] core_yylex\n+ 1.57% postgres postgres [.] SearchCatCache1\n+ 1.43% postgres postgres [.] expression_tree_walker.part.4\n+ 1.09% postgres postgres [.] check_stack_depth\n+ 1.08% postgres postgres [.] MemoryContextAllocZeroAligned\n\npatch v3:\n+ 5.77% postgres postgres [.] base_yyparse\n+ 4.88% postgres postgres [.] AllocSetAlloc\n+ 1.95% postgres postgres [.] hash_search_with_hash_value\n+ 1.89% postgres postgres [.] core_yylex\n+ 1.64% postgres postgres [.] SearchCatCache1\n+ 1.46% postgres postgres [.] expression_tree_walker.part.0\n+ 1.45% postgres postgres [.] palloc\n+ 1.18% postgres postgres [.] check_stack_depth\n+ 1.13% postgres postgres [.] MemoryContextAllocZeroAligned\n+ 1.04% postgres libc-2.28.so [.] _int_malloc\n+ 1.01% postgres postgres [.] 
nocachegetattr\n\nAnd even just pgbenching the EXECUTEd statement above gives me a\nreproducible ~3.5% gain when using -M simple, and ~3% when using -M\nprepared.\n\nNote than when not using prepared statement (a pretty important\nworkload, especially as long as we don't have a pooling solution that\nactually allows using prepared statement across connections), even after\nthe patch most of the allocator overhead is still from list allocations,\nbut it's near exclusively just the \"create a new list\" case:\n\n+ 5.77% postgres postgres [.] base_yyparse\n- 4.88% postgres postgres [.] AllocSetAlloc\n - 80.67% AllocSetAlloc\n - 68.85% AllocSetAlloc\n - 57.65% palloc\n - 50.30% new_list (inlined)\n - 37.34% lappend\n + 12.66% pull_var_clause_walker\n + 8.83% build_index_tlist (inlined)\n + 8.80% make_pathtarget_from_tlist\n + 8.73% get_quals_from_indexclauses (inlined)\n + 8.73% distribute_restrictinfo_to_rels\n + 8.68% RewriteQuery\n + 8.56% transformTargetList\n + 8.46% make_rel_from_joinlist\n + 4.36% pg_plan_queries\n + 4.30% add_rte_to_flat_rtable (inlined)\n + 4.29% build_index_paths\n + 4.23% match_clause_to_index (inlined)\n + 4.22% expression_tree_mutator\n + 4.14% transformFromClause\n + 1.02% get_index_paths\n + 17.35% list_make1_impl\n + 16.56% list_make1_impl (inlined)\n + 15.87% lcons\n + 11.31% list_copy (inlined)\n + 1.58% lappend_oid\n + 12.90% expression_tree_mutator\n + 9.73% get_relation_info\n + 4.71% bms_copy (inlined)\n + 2.44% downcase_identifier\n + 2.43% heap_tuple_untoast_attr\n + 2.37% add_rte_to_flat_rtable (inlined)\n + 1.69% btbeginscan\n + 1.65% CreateTemplateTupleDesc\n + 1.61% core_yyalloc (inlined)\n + 1.59% heap_copytuple\n + 1.54% text_to_cstring (inlined)\n + 0.84% ExprEvalPushStep (inlined)\n + 0.84% ExecInitRangeTable\n + 0.84% scanner_init\n + 0.83% ExecInitRangeTable\n + 0.81% CreateQueryDesc\n + 0.81% _bt_search\n + 0.77% ExecIndexBuildScanKeys\n + 0.66% RelationGetIndexScan\n + 0.65% make_pathtarget_from_tlist\n\n\nGiven how 
hard it is to improve performance with as flatly distributed\ncosts as the above profiles, I actually think these are quite promising\nresults.\n\nI'm not even convinced that it makes all that much sense to measure\nend-to-end performance here, it might be worthwhile to measure with a\ndebugging function that allows to exercise parsing, parse-analysis,\nrewrite etc at configurable loop counts. Given the relatively evenly\ndistributed profiles were going to have to make a few different\nimprovements to make headway, and it's hard to see benefits of\nindividual ones if you look at the overall numbers.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 4 Mar 2019 11:01:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-03 13:29:04 -0500, Tom Lane wrote:\n> The cases I've been looking at suggest to me that we'd make far\n> more progress on the excessive-palloc'ing front if we could redesign\n> things to reduce unnecessary copying of parsetrees. Right now the\n> planner does an awful lot of copying because of fear of unwanted\n> modifications of multiply-linked subtrees. I suspect that we could\n> reduce that overhead with some consistently enforced rules about\n> not scribbling on input data structures; but it'd take a lot of work\n> to get there, and I'm afraid it'd be easily broken :-(\n\nGiven the difficulty of this tasks, isn't your patch actually a *good*\nattack on the problem? It makes copying lists considerably cheaper. As\nyou say, a more principled answer to this problem is hard, so attacking\nit from the \"make the constant factors smaller\" side doesn't seem crazy?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 4 Mar 2019 11:03:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-04 13:11:35 -0500, Tom Lane wrote:\n> The concern I have is mostly about the use of lists as core infrastructure\n> in parsetree, plantree, etc data structures. I think any idea that we'd\n> replace those piecemeal is borderline insane: it's simply not worth it\n> from a notational and bug-risk standpoint to glue together some parts of\n> those structures differently from the way other parts are glued together.\n\nI don't buy this. I think e.g. redesigning the way we represent\ntargetlists would be good (it's e.g. insane that we recompute\ndescriptors out of them all the time), and would reduce their allocator\ncosts.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 4 Mar 2019 11:06:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I don't buy this. I think e.g. redesigning the way we represent\n> targetlists would be good (it's e.g. insane that we recompute\n> descriptors out of them all the time), and would reduce their allocator\n> costs.\n\nMaybe we're not on the same page here, but it seems to me that that'd be\naddressable with pretty localized changes (eg, adding more fields to\nTargetEntry, or keeping a pre-instantiated output tupdesc in each Plan\nnode). But if the concern is about the amount of palloc bandwidth going\ninto List cells, we're not going to be able to improve that with localized\ndata structure changes; it'll take something like the patch I've proposed.\n\nI *have* actually done some tests of the sort you proposed, driving\njust the planner and not any of the rest of the system, but I still\ndidn't find much evidence of big wins. I find it interesting that\nyou get different results.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 04 Mar 2019 16:28:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 12:44:41PM -0500, Robert Haas wrote:\n> I think that's a separate but also promising thing to attack, and I\n> agree that it'd take a lot of work to get there. I don't think that\n> the problem with either parse-tree-copying or list usage is that no\n> performance benefits are to be had; I think it's that the amount of\n> work required to get those benefits is pretty large. If it were\n> otherwise, somebody likely would have done it before now.\n\nStupid question, but do we use any kind of reference counter to know if\ntwo subsystems look at a structure, and a copy is required?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 4 Mar 2019 17:03:54 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 01:11:35PM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think the reason why you're not seeing a performance benefit is\n> > because the problem is not that lists are generically a more expensive\n> > data structure than arrays, but that there are cases when they are\n> > more expensive than arrays. If you only ever push/pop at the front,\n> > of course a list is going to be better. If you often look up elements\n> > by index, of course an array is going to be better. If you change\n> > every case where the code currently uses a list to use something else\n> > instead, then you're changing both the winning and losing cases.\n> \n> I don't think this argument is especially on-point, because what I'm\n> actually seeing is just that there aren't any list operations that\n> are expensive enough to make much of an overall difference in\n> typical queries. To the extent that an array reimplementation\n> reduces the palloc traffic, it'd take some load off that subsystem,\n> but apparently you need not-typical queries to really notice.\n> (And, if the real motivation is aggregate palloc savings, then yes you\n> really do want to replace everything...)\n\nCould it be that allocating List* structures near the structure it\npoints to is enough of a benefit in terms of cache hits that it is a\nloss when moving to a List* array?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Mon, 4 Mar 2019 17:08:26 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, Mar 4, 2019 at 2:04 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Mon, Mar 4, 2019 at 12:44:41PM -0500, Robert Haas wrote:\n> > I think that's a separate but also promising thing to attack, and I\n> > agree that it'd take a lot of work to get there. I don't think that\n> > the problem with either parse-tree-copying or list usage is that no\n> > performance benefits are to be had; I think it's that the amount of\n> > work required to get those benefits is pretty large. If it were\n> > otherwise, somebody likely would have done it before now.\n>\n> Stupid question, but do we use any kind of reference counter to know if\n> two subsystems look at a structure, and a copy is required?\n\nNo, but I wonder if we could use Valgrind to enforce rules about who\nhas the right to scribble on what, when. That could make it a lot\neasier to impose a new rule.\n\n-- \nPeter Geoghegan\n\n",
"msg_date": "Mon, 4 Mar 2019 14:08:36 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-04 16:28:40 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I don't buy this. I think e.g. redesigning the way we represent\n> > targetlists would be good (it's e.g. insane that we recompute\n> > descriptors out of them all the time), and would reduce their allocator\n> > costs.\n> \n> Maybe we're not on the same page here, but it seems to me that that'd be\n> addressable with pretty localized changes (eg, adding more fields to\n> TargetEntry, or keeping a pre-instantiated output tupdesc in each Plan\n> node). But if the concern is about the amount of palloc bandwidth going\n> into List cells, we're not going to be able to improve that with localized\n> data structure changes; it'll take something like the patch I've proposed.\n\nWhat I'm saying is that it'd be reasonable to replace the use of list\nfor targetlists with 'list2' without a wholesale replacement of all the\nlist code, and it'd give us benefits.\n\n\n> I find it interesting that you get different results.\n\nWhat I reported weren't vanilla pgbench -S results, so there's that\ndifference. If you measure the DO loop based test I posted, do you see a\ndifference?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 4 Mar 2019 14:11:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, 5 Mar 2019 at 11:11, Andres Freund <andres@anarazel.de> wrote:\n> On 2019-03-04 16:28:40 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I don't buy this. I think e.g. redisgning the way we represent\n> > > targetlists would be good (it's e.g. insane that we recompute\n> > > descriptors out of them all the time), and would reduce their allocator\n> > > costs.\n> >\n> > Maybe we're not on the same page here, but it seems to me that that'd be\n> > addressable with pretty localized changes (eg, adding more fields to\n> > TargetEntry, or keeping a pre-instantiated output tupdesc in each Plan\n> > node). But if the concern is about the amount of palloc bandwidth going\n> > into List cells, we're not going to be able to improve that with localized\n> > data structure changes; it'll take something like the patch I've proposed.\n>\n> What I'm saying is that it'd be reasonable to replace the use of list\n> for targetlists with 'list2' without a wholesale replacement of all the\n> list code, and it'd give us benefits.\n\nSo you think targetlists are the only case to benefit from an array\nbased list? (Ignoring the fact that I already showed another case)\nWhen we discover the next thing to benefit, then the replacement will\nbe piecemeal, just the way Tom would rather not do it. I personally\ndon't want to be up against huge resistance when I discover that\nturning a single usage of a List into List2 is better. We'll need to\nconsider backpatching pain / API breakage *every single time*.\n\nA while ago I did have a go at changing some List implementations for\nmy then proposed ArrayList and it was beyond a nightmare, as each time\nI changed one I realised I needed to change another. In the end, I\njust gave up. Think of all the places we have forboth() and\nforthree(), we'll need to either provide a set of macros that take\nvarious combinations of List and List2 or do some conversion\nbeforehand. 
With respect, if you don't believe me, please take my\nArrayList patch [1] and have a go at changing targetlists to use\nArrayLists all the way from the parser through to the executor. I'll\nbe interested in the diff stat once you're done.\n\nIt's true that linked lists are certainly better for some stuff;\nlist_concat() is going to get slower, lcons() too, but likely we can\nhave a bonus lcons() elimination round at some point. I see quite a\nfew of them that look like they could be changed to lappend(). I also\njust feel that if we insist on more here then we'll get about nothing.\nI'm also blocked on my partition performance improvement goals on\nlist_nth() being O(N), so I'm keen to see progress here and do what I\ncan to help with that. With list_concat() I find that pretty scary\nanyway. Using it means we can have a valid list that does not get its\nlength updated when someone appends a new item. Most users of that do\nlist_copy() to sidestep that and other issues... which likely is\nsomething we'd want to rip out with Tom's patch.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_2SnXhPVa6eWjzy2O9A=ocwgd0Cj-LQeWpGtrWqbUSDA@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 12:42:47 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-05 12:42:47 +1300, David Rowley wrote:\n> So you think targetlists are the only case to benefit from an array\n> based list? (Ignoring the fact that I already showed another case)\n\nNo, that's not what I'm trying to say at all. I think there's plenty\ncases where it'd be beneficial. In this subthread we're just arguing\nwhether it's somewhat feasible to not change everything, and I'm still\nfairly convinced that's possible; but I'm not arguing that that's the\nbest way.\n\n\n> It's true that linked lists are certainly better for some stuff;\n> list_concat() is going to get slower, lcons() too, but likely we can\n> have a bonus lcons() elimination round at some point. I see quite a\n> few of them that look like they could be changed to lappend(). I also\n> just feel that if we insist on more here then we'll get about nothing.\n> I'm also blocked on my partition performance improvement goals on\n> list_nth() being O(N), so I'm keen to see progress here and do what I\n> can to help with that. With list_concat() I find that pretty scary\n> anyway. Using it means we can have a valid list that does not get it's\n> length updated when someone appends a new item. Most users of that do\n> list_copy() to sidestep that and other issues... which likely is\n> something we'd want to rip out with Tom's patch.\n\nYes, I think you have a point that progress here would be good and that\nit's worth some pain. But the names will make even less sense if we just\nshunt in an array based approach under the already obscure list\nAPI. Obviously the individual pain of that is fairly small, but over the\nyears and everybody reading PG code, it's also substantial. So I'm\ntorn.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 4 Mar 2019 15:54:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, 5 Mar 2019 at 12:54, Andres Freund <andres@anarazel.de> wrote:\n> Yes, I think you have a point that progress here would be good and that\n> it's worth some pain. But the names will make even less sense if we just\n> shunt in an array based approach under the already obscure list\n> API.\n\nIf we feel strongly about fixing that then probably it would be as\nsimple as renaming the functions and adding some macros with the old\nnames and insisting that all new or changed code use the functions and\nnot the macro wrappers. That could be followed up by a final sweep in\nN years time when the numbers have dwindled to a low enough level. All\nthat code mustn't be getting modified anyway, so not much chance\nbackpatching pain.\n\nI see length() finally died in a similar way in Tom's patch. Perhaps\ndoing this would have people consider lcons more carefully before they\nuse it over lappend.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Tue, 5 Mar 2019 13:16:42 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> ... With list_concat() I find that pretty scary\n> anyway. Using it means we can have a valid list that does not get it's\n> length updated when someone appends a new item. Most users of that do\n> list_copy() to sidestep that and other issues... which likely is\n> something we'd want to rip out with Tom's patch.\n\nYeah, it's a bit OT for this patch, but I'd noticed the prevalence of\nlocutions like list_concat(list_copy(list1), list2), and been thinking\nof proposing that we add some new primitives with, er, less ad-hoc\nbehavior. The patch at hand already changes the semantics of list_concat\nin a somewhat saner direction, but I think there is room for a version\nof list_concat that treats both its inputs as const Lists.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 04 Mar 2019 19:32:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Tue, 5 Mar 2019 at 12:54, Andres Freund <andres@anarazel.de> wrote:\n>> Yes, I think you have a point that progress here would be good and that\n>> it's worth some pain. But the names will make even less sense if we just\n>> shunt in an array based approach under the already obscure list\n>> API.\n\n> If we feel strongly about fixing that then probably it would be as\n> simple as renaming the functions and adding some macros with the old\n> names and insisting that all new or changed code use the functions and\n> not the macro wrappers.\n\nMeh ... Neil Conway already did a round of that back in 2004 or whenever,\nand I'm not especially excited about another round. I'm not really\nfollowing Andres' aversion to the list API --- it's not any more obscure\nthan a whole lot of things in Postgres. (Admittedly, as somebody who\ndabbled in Lisp decades ago, I might be more used to it than some folks.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 04 Mar 2019 19:36:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Here's a new version of the Lists-as-arrays patch. It's rebased up to\nHEAD, and I also realized that I could fix the problem with multiple\nevaluation of the List arguments of foreach etc. by using structure\nassignment. So that gets rid of a large chunk of the semantic gotchas\nthat were in the previous patch. You still have to be careful about\ncode that deletes list entries within a foreach() over the list ---\nbut nearly all such code is using list_delete_cell, which means\nyou'll have to touch it anyway because of the API change for that\nfunction.\n\nPreviously, the typical logic for deletion-within-a-loop involved\neither advancing or not advancing a \"prev\" pointer that was used\nwith list_delete_cell. The way I've recoded that here changes those\nloops to use an integer list index that gets incremented or not.\n\nNow, it turns out that the new formulation of foreach() is really\nstrictly equivalent to\n\n\tfor (int pos = 0; pos < list_length(list); pos++)\n\t{\n\t\twhatever-type item = list_nth(list, pos);\n\t\t...\n\t}\n\nwhich means that it could cope fine with deletion of the current\nlist element if we were to provide some supported way of not\nincrementing the list index counter. That is, instead of\ncode that looks more or less like this:\n\n\tfor (int pos = 0; pos < list_length(list); pos++)\n\t{\n\t\twhatever-type item = list_nth(list, pos);\n\t\t...\n\t\tif (delete_cur)\n\t\t{\n\t\t\tlist = list_delete_nth_cell(list, pos);\n\t\t\tpos--; /* keep loop in sync with deletion */\n\t\t}\n\t}\n\nwe could write, say:\n\n\tforeach(lc, list)\n\t{\n\t\twhatever-type item = lfirst(lc);\n\t\t...\n\t\tif (delete_cur)\n\t\t{\n\t\t\tlist = list_delete_cell(list, lc);\n\t\t\tforeach_backup(lc); /* keep loop in sync with deletion */\n\t\t}\n\t}\n\nwhich is the same thing under the hood. I'm not quite sure if that way\nis better or not. 
It's more magical than explicitly manipulating a list\nindex, but it's also shorter and therefore less subject to typos.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 24 May 2019 20:53:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sat, 25 May 2019 at 12:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now, it turns out that the new formulation of foreach() is really\n> strictly equivalent to\n>\n> for (int pos = 0; pos < list_length(list); pos++)\n> {\n> whatever-type item = list_nth(list, pos);\n> ...\n> }\n>\n> which means that it could cope fine with deletion of the current\n> list element if we were to provide some supported way of not\n> incrementing the list index counter. That is, instead of\n> code that looks more or less like this:\n>\n> for (int pos = 0; pos < list_length(list); pos++)\n> {\n> whatever-type item = list_nth(list, pos);\n> ...\n> if (delete_cur)\n> {\n> list = list_delete_nth_cell(list, pos);\n> pos--; /* keep loop in sync with deletion */\n> }\n> }\n>\n> we could write, say:\n>\n> foreach(lc, list)\n> {\n> whatever-type item = lfirst(lc);\n> ...\n> if (delete_cur)\n> {\n> list = list_delete_cell(list, lc);\n> foreach_backup(lc); /* keep loop in sync with deletion */\n> }\n> }\n>\n> which is the same thing under the hood. I'm not quite sure if that way\n> is better or not. 
It's more magical than explicitly manipulating a list\n> index, but it's also shorter and therefore less subject to typos.\n\nIf we're doing an API break for this, wouldn't it be better to come up\nwith something that didn't have to shuffle list elements around every\ntime one is deleted?\n\nFor example, we could have a foreach_delete() that instead of taking a\npointer to a ListCell, it took a ListDeleteIterator which contained a\nListCell pointer and a Bitmapset, then just have a macro that marks a\nlist item as deleted (list_delete_current(di)) and have a final\ncleanup at the end of the loop.\n\nThe cleanup operation can still use memmove, but just only move up\nuntil the next bms_next_member on the deleted set, something like\n(handwritten and untested):\n\nvoid\nlist_finalize_delete(List *list, ListDeleteIterator *di)\n{\n int srcpos, curr, tarpos;\n\n /* Zero the source and target list position markers */\n srcpos = tarpos = 0;\n curr = -1;\n while ((curr = bms_next_member(di->deleted, curr)) >= 0)\n {\n int n = curr - srcpos;\n if (n > 0)\n {\n memmove(&list->elements[tarpos], &list->elements[srcpos],\n n * sizeof(ListCell));\n tarpos += n;\n }\n srcpos = curr + 1;\n }\n /* move up any survivors after the last deleted member */\n if (srcpos < list->length)\n {\n int n = list->length - srcpos;\n memmove(&list->elements[tarpos], &list->elements[srcpos],\n n * sizeof(ListCell));\n tarpos += n;\n }\n list->length = tarpos;\n}\n\nOr maybe we should worry about having the list in an inconsistent\nstate during the loop? e.g. if the list is getting passed into a\nfunction call to do something.\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 25 May 2019 15:26:33 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> Here's a new version of the Lists-as-arrays patch.\n\nThe cfbot noticed a set-but-not-used variable that my compiler hadn't\nwhined about. Here's a v5 to pacify it. No other changes.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 25 May 2019 11:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> If we're doing an API break for this, wouldn't it be better to come up\n> with something that didn't have to shuffle list elements around every\n> time one is deleted?\n\nAgreed that as long as there's an API break anyway, we could consider\nmore extensive changes for this use-case. But ...\n\n> For example, we could have a foreach_delete() that instead of taking a\n> pointer to a ListCell, it took a ListDeleteIterator which contained a\n> ListCell pointer and a Bitmapset, then just have a macro that marks a\n> list item as deleted (list_delete_current(di)) and have a final\n> cleanup at the end of the loop.\n\n... I'm not quite sold on this particular idea. The amount of added\nbitmapset manipulation overhead seems rather substantial in comparison\nto the memmove work saved. It might win for cases involving very\nlong lists with many entries being deleted in one operation, but\nI don't think that's a common scenario for us. It's definitely a\nloss when there's just one item to be deleted, which I think is a\ncommon case. (Of course, callers expecting that could just not\nuse this multi-delete API.)\n\n> Or maybe we should worry about having the list in an inconsistent\n> state during the loop? e.g if the list is getting passed into a\n> function call to do something.\n\nNot following that? If I understand your idea correctly, the list\ndoesn't actually get changed until the cleanup step. If we pass it to\nanother operation that independently deletes some members meanwhile,\nthat's trouble; but it'd be trouble for the existing code, and for my\nversion of the patch too.\n\nFWIW, I don't really see a need to integrate this idea into the\nloop logic as such. You could just define it as \"make a bitmap\nof the list indexes to delete, then call\nlist = list_delete_multi(list, bitmapset)\". 
It would be\nhelpful perhaps if we provided official access to the current\nlist index that the foreach macro is maintaining internally.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 12:46:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sat, May 25, 2019 at 11:48:47AM -0400, Tom Lane wrote:\n> I wrote:\n> > Here's a new version of the Lists-as-arrays patch.\n> \n> The cfbot noticed a set-but-not-used variable that my compiler hadn't\n> whined about. Here's a v5 to pacify it. No other changes.\n\nHave you tested the performance impact?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 21:54:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 13:54, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, May 25, 2019 at 11:48:47AM -0400, Tom Lane wrote:\n> > I wrote:\n> > > Here's a new version of the Lists-as-arrays patch.\n> >\n> > The cfbot noticed a set-but-not-used variable that my compiler hadn't\n> > whined about. Here's a v5 to pacify it. No other changes.\n>\n> Have you tested the performance impact?\n\nI did some and posted earlier in the thread:\nhttps://postgr.es/m/CAKJS1f8h2vs8M0cgFsgfivfkjvudU5-MZO1gJB2uf0m8_9VCpQ@mail.gmail.com\n\nIt came out only slightly slower over the whole regression test run,\nwhich I now think is surprisingly good considering how much we've\ntuned the code over the years with the assumption that List is a\nsingly linked list. We'll be able to get rid of things like\nPlannerInfo's simple_rte_array and append_rel_array along with\nEState's es_range_table_array.\n\nI'm particularly happy about getting rid of es_range_table_array since\ninitialising a plan with many partitions ends up costing quite a bit\njust to build that array. Run-time pruning might end up pruning all\nbut one of those, so getting rid of something that's done per\npartition is pretty good. (There's also the locking, but that's\nanother problem).\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 14:05:19 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Fri, 14 Jun 2019 at 13:54, Bruce Momjian <bruce@momjian.us> wrote:\n>> Have you tested the performance impact?\n\n> I did some and posted earlier in the thread:\n> https://postgr.es/m/CAKJS1f8h2vs8M0cgFsgfivfkjvudU5-MZO1gJB2uf0m8_9VCpQ@mail.gmail.com\n\n> It came out only slightly slower over the whole regression test run,\n> which I now think is surprisingly good considering how much we've\n> tuned the code over the years with the assumption that List is a\n> singly linked list. We'll be able to get rid of things like\n> PlannerInfo's simple_rte_array and append_rel_array along with\n> EState's es_range_table_array.\n\nYeah. I have not made any attempt at all in the current patch to\nre-tune the code, or clean up places that are maintaining parallel\nLists and arrays (such as the ones David mentions). So it's not\nentirely fair to take the current state of the patch as representative\nof where performance would settle once we've bought into the new\nmethod.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 22:32:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 5/25/19 11:48 AM, Tom Lane wrote:\n> The cfbot noticed a set-but-not-used variable that my compiler hadn't\n> whined about. Here's a v5 to pacify it. No other changes.\n> \n\nThis needs a rebase. After that check-world passes w/ and w/o \n-DDEBUG_LIST_MEMORY_USAGE.\n\nThere is some unneeded MemoryContext stuff in async.c's \npg_listening_channels() which should be cleaned up.\n\nThanks for working on this, as the API is more explicit now about what \nis going on.\n\nBest regards,\n Jesper\n\n\n",
"msg_date": "Mon, 1 Jul 2019 12:58:07 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Jesper Pedersen <jesper.pedersen@redhat.com> writes:\n> This needs a rebase. After that check-world passes w/ and w/o \n> -DDEBUG_LIST_MEMORY_USAGE.\n\nYup, here's a rebase against HEAD (and I also find that check-world shows\nno problems). This is pretty much of a pain to maintain, since it changes\nthe API for lnext() which is, um, a bit invasive. I'd like to make a\ndecision pretty quickly on whether we're going to do this, and either\ncommit this patch or abandon it.\n\n> There is some unneeded MemoryContext stuff in async.c's \n> pg_listening_channels() which should be cleaned up.\n\nYeah, there's a fair amount of follow-on cleanup that could be undertaken\nafterwards, but I've wanted to keep the patch's footprint as small as\npossible for the moment. Assuming we pull the trigger, I'd then go look\nat removing the planner's duplicative lists+arrays for RTEs and such as\nthe first cleanup step. But thanks for the pointer to async.c, I'll\ncheck that too.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 01 Jul 2019 14:44:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 7/1/19 2:44 PM, Tom Lane wrote:\n> Yup, here's a rebase against HEAD (and I also find that check-world shows\n> no problems).\n\nThanks - no further comments.\n\n> This is pretty much of a pain to maintain, since it changes\n> the API for lnext() which is, um, a bit invasive. I'd like to make a\n> decision pretty quickly on whether we're going to do this, and either\n> commit this patch or abandon it.\n> \n\nIMHO it is an improvement over the existing API.\n\n>> There is some unneeded MemoryContext stuff in async.c's\n>> pg_listening_channels() which should be cleaned up.\n> \n> Yeah, there's a fair amount of follow-on cleanup that could be undertaken\n> afterwards, but I've wanted to keep the patch's footprint as small as\n> possible for the moment. Assuming we pull the trigger, I'd then go look\n> at removing the planner's duplicative lists+arrays for RTEs and such as\n> the first cleanup step. But thanks for the pointer to async.c, I'll\n> check that too.\n> \n\nYeah, I only called out the async.c change, as that function isn't \nlikely to change in any of the follow up patches.\n\nBest regards,\n Jesper\n\n\n",
"msg_date": "Mon, 1 Jul 2019 15:18:02 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I spent some time experimenting with the idea mentioned upthread of\nadding a macro to support deletion of a foreach loop's current element\n(adjusting the loop's state behind the scenes). This turns out to work\nreally well: it reduces the complexity of fixing existing loops around\nelement deletions quite a bit. Whereas in existing code you have to not\nuse foreach() at all, and you have to track both the next list element and\nthe previous undeleted element, now you can use foreach() and you don't\nhave to mess with extra variables at all.\n\nA good example appears in the trgm_regexp.c changes below. Typically\nwe've coded such loops with a handmade expansion of foreach, like\n\n\tprev = NULL;\n\tcell = list_head(state->enterKeys);\n\twhile (cell)\n\t{\n\t\tTrgmStateKey *existingKey = (TrgmStateKey *) lfirst(cell);\n\n\t\tnext = lnext(cell);\n\t\tif (need to delete)\n\t\t\tstate->enterKeys = list_delete_cell(state->enterKeys,\n\t\t\t\t\t\t\tcell, prev);\n\t\telse\n\t\t\tprev = cell;\n\t\tcell = next;\n\t}\n\nMy previous patch would have had you replace this with a loop using\nan integer list-position index. You can still do that if you like,\nbut it's less change to convert the loop to a foreach(), drop the\nprev/next variables, and replace the list_delete_cell call with\nforeach_delete_current:\n\n\tforeach(cell, state->enterKeys)\n\t{\n\t\tTrgmStateKey *existingKey = (TrgmStateKey *) lfirst(cell);\n\n\t\tif (need to delete)\n\t\t\tstate->enterKeys = foreach_delete_current(state->enterKeys,\n\t\t\t\t\t\t\t\tcell);\n\t}\n\nSo I think this is a win, and attached is v7.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 01 Jul 2019 19:27:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 1:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> So I think this is a win, and attached is v7.\n>\n\nNot related to the diff v6..v7, but shouldn't we throw additionally a\nmemset() with '\\0' before calling pfree():\n\n+ ListCell *newelements;\n+\n+ newelements = (ListCell *)\n+ MemoryContextAlloc(GetMemoryChunkContext(list),\n+ new_max_len * sizeof(ListCell));\n+ memcpy(newelements, list->elements,\n+ list->length * sizeof(ListCell));\n+ pfree(list->elements);\n+ list->elements = newelements;\n\nOr is this somehow ensured by debug pfree() implementation or does it work\ndifferently together with Valgrind?\n\nOtherwise it seems that the calling code can still be hanging onto a list\nelement from a freed chunk and (rather) happily accessing it, as opposed to\nalmost ensured crash if that is zeroed before returning from enlarge_list().\n\nCheers,\n--\nAlex\n\n",
"msg_date": "Tue, 2 Jul 2019 09:35:18 +0200",
"msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de> writes:\n> Not related to the diff v6..v7, but shouldn't we throw additionally a\n> memset() with '\\0' before calling pfree():\n\nI don't see the point of that. In debug builds CLOBBER_FREED_MEMORY will\ntake care of it, and in non-debug builds I don't see why we'd expend\nthe cycles.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 11:12:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, 2 Jul 2019 at 11:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My previous patch would have had you replace this with a loop using\n> an integer list-position index. You can still do that if you like,\n> but it's less change to convert the loop to a foreach(), drop the\n> prev/next variables, and replace the list_delete_cell call with\n> foreach_delete_current:\n>\n> foreach(cell, state->enterKeys)\n> {\n> TrgmStateKey *existingKey = (TrgmStateKey *) lfirst(cell);\n>\n> if (need to delete)\n> state->enterKeys = foreach_delete_current(state->enterKeys,\n> cell);\n> }\n>\n> So I think this is a win, and attached is v7.\n\nIt's pretty nice to get rid of those. I also like how you've handled the\nchanges in SRFs.\n\nI've now read over the entire patch and have noted down the following:\n\n1. MergeAttributes() could do with a comment mentioning why you're not\nusing foreach() on the outer loop. I almost missed the\nlist_delete_nth_cell() call that's a few branches deep in the outer\nloop.\n\n2. In expandTupleDesc(), couldn't you just change the following:\n\nint i;\nfor (i = 0; i < offset; i++)\n{\nif (aliascell)\naliascell = lnext(eref->colnames, aliascell);\n}\n\nto:\n\naliascell = offset < list_length(eref->colnames) ?\nlist_nth_cell(eref->colnames, offset) : NULL;\n\n3. Worth Assert(list != NIL); in new_head_cell() and new_tail_cell() ?\n\n4. Do you think it would be a good idea to document that the 'pos' arg\nin list_insert_nth and co must be <= list_length(). I know you've\nmentioned that in insert_new_cell, but that's static and\nlist_insert_nth is not. I think it would be better not to have to\nchase down comments of static functions to find out how to use an\nexternal function.\n\n5. Why does enlarge_list() return List *? Can't it just return void?\nI noticed this after looking at add_new_cell_after() and reading your\ncautionary comment and then looking at lappend_cell(). 
At first, it\nseemed that lappend_cell() could end up reallocating List to make way\nfor the new cell, but from looking at enlarge_list() it seems we\nalways maintain the original allocation of the header. So why bother\nreturning List * in that function?\n\n6. Is there a reason to use memmove() in list_concat() rather than\njust memcpy()? I don't quite believe the comment you've written. As\nfar as I can see you're not overwriting any useful memory so the order\nof the copy should not matter.\n\n7. The last part of the following comment might not age well.\n\n/*\n* Note: unlike the individual-list-cell deletion functions, we never make\n* any effort to move the surviving list cells to new storage. This is\n* because none of them can move in this operation, so it's the same as\n* the old implementation in terms of what callers may assume.\n*/\n\nThe old comment about extending the list seems more fitting.\n\n9. I see your XXX comment in list_qsort(), but wouldn't it be better\nto just invent list_qsort_internal() as a static function and just\nhave it qsort the list in-place, then make list_qsort just return\nlist_qsort_internal(list_copy(list)); and keep the XXX comment so that\nthe fixup would just remove the list_copy()? That way, when it comes\nto fixing that inefficiency we can just cut and paste the internal\nimplementation into list_qsort(). It'll be much less churn, especially\nif you put the internal version directly below the external one.\n\n10. I wonder if we can reduce a bit of pain for extension authors by\nback patching a macro that wraps up a lnext() call adding a dummy\nargument for the List. That way they don't have to deal with their\nown pre-processor version dependent code. Downsides are we'd need to\nkeep the macro into the future, however, it's just 1 line of code...\n\n\nI also did some more benchmarking of the patch. Basically, I patched\nwith the attached file (converted to .txt not to upset the CFbot) then\nran make installcheck. 
This was done on an AWS m5d.large instance.\nThe patch runs the planner 10 times then LOGs the average time of\nthose 10 runs. Taking the sum of those averages I see:\n\nMaster: 0.482479 seconds\nPatched: 0.471949 seconds\n\nWhich makes the patched version 2.2% faster than master on that run.\nI've resisted attaching the spreadsheet since there are almost 22k\ndata points per run.\n\nApart from the 10 points above, I think the patch is good to go.\n\nI also agree with keeping the further improvements like getting rid of\nthe needless list_copy() in list_concat() calls as a followup patch. I\nalso agree with Tom about moving quickly with this one. Reviewing it\nin detail took me a long time, I'd really rather not do it again after\nleaving it to rot for a while.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 3 Jul 2019 18:56:25 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
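The review above leans on the new foreach_delete_current() idiom quoted from Tom's mail. As a minimal sketch of why that idiom works on an array-backed list, the hypothetical stand-in below (plain int array, not the real pg_list.h API) deletes during iteration: once the current slot is removed, its successor has shifted into it, so the loop must not advance the index on a delete.

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical mimic of in-loop deletion over an array-based list.
 * After deleting slot i, the next element now lives at slot i, so we
 * only advance the index when we keep an element.
 */
static int
delete_matching(int *elems, int len, int target)
{
	int			i = 0;

	while (i < len)
	{
		if (elems[i] == target)
		{
			/* close the gap and stay on the same index */
			memmove(&elems[i], &elems[i + 1],
					(len - i - 1) * sizeof(int));
			len--;
		}
		else
			i++;
	}
	return len;
}
```

With a cons-cell list the equivalent needed prev/next bookkeeping; over an array the "stay put after delete" rule is the whole trick.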
{
"msg_contents": "On Tue, Jul 2, 2019 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Oleksandr Shulgin <oleksandr.shulgin@zalando.de> writes:\n> > Not related to the diff v6..v7, but shouldn't we throw additionally a\n> > memset() with '\\0' before calling pfree():\n>\n> I don't see the point of that. In debug builds CLOBBER_FREED_MEMORY will\n> take care of it, and in non-debug builds I don't see why we'd expend\n> the cycles.\n>\n\nThis is what I was wondering about, thanks for providing a reference.\n\n--\nAlex",
"msg_date": "Wed, 3 Jul 2019 09:29:32 +0200",
"msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I've now read over the entire patch and have noted down the following:\n\nThanks for the review! Attached is a v8 responding to most of your\ncomments. Anything not quoted below I just accepted.\n\n> 1. MergeAttributes() could do with a comment mention why you're not\n> using foreach() on the outer loop.\n\nCheck. I also got rid of the Assert added by f7e954ad1, as it seems\nnot to add any clarity in view of this comment.\n\n> 2. In expandTupleDesc(), couldn't you just change the following:\n\nDone, although this seems like the sort of follow-on optimization\nthat I wanted to leave for later.\n\n> 3. Worth Assert(list != NIL); in new_head_cell() and new_tail_cell() ?\n\nI don't think so. They're internal functions, and anyway they'll\ncrash very handily on a NIL pointer.\n\n> 5. Why does enlarge_list() return List *? Can't it just return void?\n\nAlso done. I had had some idea of maintaining flexibility, but\nconsidering that we still need the property that a List's header never\nmoves (as long as it stays nonempty), there's no circumstance where\nenlarge_list could validly move the header.\n\n> 9. I see your XXX comment in list_qsort(), but wouldn't it be better\n> to just invent list_qsort_internal() as a static function and just\n> have it qsort the list in-place, then make list_qsort just return\n> list_qsort_internal(list_copy(list)); and keep the XXX comment so that\n> the fixup would just remove the list_copy()?\n\nI don't really see the point of doing more than the minimum possible\nwork on list_qsort in this patch. The big churn from changing it\nis going to be in adjusting the callers' comparator functions for one\nless level of indirection, and I'd just as soon rewrite list_qsort\nin that patch not this one.\n\n> 10. 
I wonder if we can reduce a bit of pain for extension authors by\n> back patching a macro that wraps up a lnext() call adding a dummy\n> argument for the List.\n\nI was wondering about a variant of that yesterday; specifically,\nI thought of naming the new 2-argument function list_next() not lnext().\nThen we could add \"#define list_next(l,c) lnext(c)\" in the back branches.\nThis would simplify back-patching code that used the new definition, and\nit might save some effort for extension authors who are trying to maintain\ncross-branch code. On the other hand, it's more keystrokes forevermore,\nand I'm not entirely convinced that code that's using lnext() isn't likely\nto need other adjustments anyway. So I didn't pull the trigger on that,\nbut if people like the idea I'd be okay with doing it like that.\n\n> I also agree with keeping the further improvements like getting rid of\n> the needless list_copy() in list_concat() calls as a followup patch. I\n> also agree with Tom about moving quickly with this one. Reviewing it\n> in detail took me a long time, I'd really rather not do it again after\n> leaving it to rot for a while.\n\nIndeed. I don't want to expend a lot of effort keeping it in sync\nwith master over a long period, either. Opinions?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 03 Jul 2019 14:15:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 2019-Jul-03, Tom Lane wrote:\n\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > 10. I wonder if we can reduce a bit of pain for extension authors by\n> > back patching a macro that wraps up a lnext() call adding a dummy\n> > argument for the List.\n> \n> I was wondering about a variant of that yesterday; specifically,\n> I thought of naming the new 2-argument function list_next() not lnext().\n> Then we could add \"#define list_next(l,c) lnext(c)\" in the back branches.\n> This would simplify back-patching code that used the new definition, and\n> it might save some effort for extension authors who are trying to maintain\n> cross-branch code. On the other hand, it's more keystrokes forevermore,\n> and I'm not entirely convinced that code that's using lnext() isn't likely\n> to need other adjustments anyway. So I didn't pull the trigger on that,\n> but if people like the idea I'd be okay with doing it like that.\n\nI was thinking about this issue too earlier today, and my conclusion is\nthat the way you have it in v7 is fine, because lnext() callsites are\nnot that numerous, so the cost to third-party code authors is not that\nhigh; the other arguments trump this consideration IMO. I say this as\nsomeone who curses every time he has to backpatch things across the\nheap_open / table_open change -- but there are a lot more calls of that.\n\n> Indeed. I don't want to expend a lot of effort keeping it in sync\n> with master over a long period, either. Opinions?\n\nYeah, let's get it done soon.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 3 Jul 2019 14:29:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I also did some more benchmarking of the patch. ...\n> Which makes the patched version 2.2% faster than master on that run.\n\nBTW, further on the subject of performance --- I'm aware of at least\nthese topics for follow-on patches:\n\n* Fix places that are maintaining arrays parallel to Lists for\naccess-speed reasons (at least simple_rte_array, append_rel_array,\nes_range_table_array).\n\n* Look at places using lcons/list_delete_first to maintain FIFO lists.\nThe patch makes these O(N^2) for long lists. If we can reverse the list\norder and use lappend/list_truncate instead, it'd be better. Possibly in\nsome places the list ordering is critical enough to make this impractical,\nbut I suspect it's an easy win in most.\n\n* Rationalize places that are using combinations of list_copy and\nlist_concat, probably by inventing an additional list-concatenation\nprimitive that modifies neither input.\n\n* Adjust API for list_qsort(), as discussed, to save indirections and\navoid constructing an intermediate pointer array. I also seem to recall\none place in the planner that's avoiding using list_qsort by manually\nflattening a list into an array, qsort'ing, and rebuilding the list :-(\n\nI don't think that any one of these fixes would move the needle very\nmuch on \"typical\" simple workloads, but it's reasonable to hope that in\naggregate they'd make for a noticeable improvement. In the meantime,\nI'm gratified that the initial patch at least doesn't seem to have lost\nany ground.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jul 2019 15:20:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
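Tom's second follow-on item — lcons/list_delete_first queues turning O(N^2) over arrays — can be illustrated with a toy array-backed list. This is a hypothetical mimic, not PostgreSQL's actual pg_list.h: deleting the first element must shift every survivor down, while truncating the tail is constant time, which is why reversing a queue's order and using lappend/list_truncate wins.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy array-backed integer list (hypothetical stand-in for List) */
typedef struct IntList
{
	int		   *elems;
	int			length;
	int			capacity;
} IntList;

static IntList *
intlist_append(IntList *list, int value)
{
	if (list == NULL)
	{
		list = malloc(sizeof(IntList));
		list->capacity = 8;
		list->elems = malloc(list->capacity * sizeof(int));
		list->length = 0;
	}
	else if (list->length == list->capacity)
	{
		list->capacity *= 2;
		list->elems = realloc(list->elems,
							  list->capacity * sizeof(int));
	}
	list->elems[list->length++] = value;
	return list;
}

/* O(n) per call: every surviving element moves down one slot */
static IntList *
intlist_delete_first(IntList *list)
{
	memmove(&list->elems[0], &list->elems[1],
			(list->length - 1) * sizeof(int));
	list->length--;
	return list;
}

/* O(1) per call: just forget the trailing elements */
static IntList *
intlist_truncate(IntList *list, int new_length)
{
	if (new_length < list->length)
		list->length = new_length;
	return list;
}
```

Popping N items from the front costs N memmoves of shrinking size (quadratic overall); popping them from the back is N constant-time truncations.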
{
"msg_contents": "On Thu, 4 Jul 2019 at 06:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I've now read over the entire patch and have noted down the following:\n>\n> Thanks for the review! Attached is a v8 responding to most of your\n> comments. Anything not quoted below I just accepted.\n\nThanks for the speedy turnaround. I've looked at v8, as far as a diff\nbetween the two patches and I'm happy.\n\nI've marked as ready for committer.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 10:25:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Thanks for the speedy turnaround. I've looked at v8, as far as a diff\n> between the two patches and I'm happy.\n> I've marked as ready for committer.\n\nSo ... last chance for objections?\n\nI see from the cfbot that v8 is already broken (new call of lnext\nto be fixed). Don't really want to keep chasing a moving target,\nso unless I hear objections I'm going to adjust the additional\nspot(s) and commit this pretty soon, like tomorrow or Monday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jul 2019 12:32:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "> On 13 Jul 2019, at 18:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I see from the cfbot that v8 is already broken (new call of lnext\n> to be fixed). Don't really want to keep chasing a moving target,\n> so unless I hear objections I'm going to adjust the additional\n> spot(s) and commit this pretty soon, like tomorrow or Monday.\n\nI just confirmed that fixing the recently introduced callsite not handled in\nthe patch still passes tests etc. +1 on this.\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 15 Jul 2019 12:12:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 13 Jul 2019, at 18:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I see from the cfbot that v8 is already broken (new call of lnext\n>> to be fixed). Don't really want to keep chasing a moving target,\n>> so unless I hear objections I'm going to adjust the additional\n>> spot(s) and commit this pretty soon, like tomorrow or Monday.\n\n> I just confirmed that fixing the recently introduced callsite not handled in\n> the patch still passes tests etc. +1 on this.\n\nThanks for checking! I've now pushed this, with a bit of additional\ncleanup and comment-improvement in pg_list.h and list.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 13:44:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> BTW, further on the subject of performance --- I'm aware of at least\n> these topics for follow-on patches:\n> ...\n> * Adjust API for list_qsort(), as discussed, to save indirections and\n> avoid constructing an intermediate pointer array. I also seem to recall\n> one place in the planner that's avoiding using list_qsort by manually\n> flattening a list into an array, qsort'ing, and rebuilding the list :-(\n\nHere's a proposed patch for that. There are only two existing calls\nof list_qsort(), it turns out. I didn't find the described spot in\nthe planner (I believe I was thinking of choose_bitmap_and(), but its\nusage isn't quite as described). However, I found about four other\nplaces that were doing pretty much exactly that, so the attached\nalso simplifies those places to use list_qsort().\n\n(There are some other places that could perhaps be changed also,\nbut it would require more invasive hacking than I wanted to do here,\nwith less-clear benefits.)\n\nA possibly controversial point is that I made list_qsort() sort the\ngiven list in-place, rather than building a new list as it has\nhistorically. For every single one of the existing and new callers,\ncopying the input list is wasteful, because they were just leaking\nthe input list anyway. But perhaps somebody feels that we should\npreserve the \"const input\" property? I thought that changing the\nfunction to return void, as done here, might be a good idea to\nensure that callers notice its API changed --- otherwise they'd\nonly get a warning about incompatible signature of the passed\nfunction pointer, which they might not notice; in fact I'm not\ntotally sure all compilers would even give such a warning.\n\nIf there's not complaints about that, I'm just going to go ahead\nand push this --- it seems simple enough to not need much review.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 15 Jul 2019 15:49:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
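The in-place sort proposal above can be sketched as follows. This is a hypothetical mimic over a plain pointer array, not the real patched list_qsort(): because the elements are now stored contiguously, qsort() can run directly on the element array, the comparator loses a level of indirection, and nothing needs to be returned since the list header never moves.

```c
#include <assert.h>
#include <stdlib.h>

/* Comparator sees pointers to the array slots (int ** here) */
static int
cmp_int_ptrs(const void *a, const void *b)
{
	int			av = **(int *const *) a;
	int			bv = **(int *const *) b;

	return (av > bv) - (av < bv);
}

/*
 * Hypothetical in-place sort of a pointer array, mirroring the idea of
 * sorting a List's element array directly and returning void.
 */
static void
ptrlist_sort(void **elems, int nelems,
			 int (*cmp) (const void *, const void *))
{
	qsort(elems, nelems, sizeof(void *), cmp);
}
```

The void return is the deliberate API break discussed above: callers are forced to notice the change rather than silently pass an incompatible comparator.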
{
"msg_contents": "I wrote:\n> [ list_qsort-API-change.patch ]\n\nAlso, here's a follow-on patch that cleans up some crufty code in\nheap.c and relcache.c to use list_qsort, rather than manually doing\ninsertions into a list that's kept ordered. The existing comments\nargue that that's faster than qsort for small N, but I think that's\na bit questionable. And anyway, that code is definitely O(N^2) if\nN isn't so small, while this replacement logic is O(N log N).\n\nThis incidentally removes the only two calls of lappend_cell_oid.\nThere are no callers of lappend_cell_int, and only one of lappend_cell,\nand that one would be noticeably cleaner if it were rewritten to use\nlist_insert_nth instead. So I'm a bit tempted to do so and then nuke\nall three of those functions, which would at least make some tiny dent\nin Andres' unhappiness with the messiness of the List API. Thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 15 Jul 2019 18:10:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, 16 Jul 2019 at 07:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> A possibly controversial point is that I made list_qsort() sort the\n> given list in-place, rather than building a new list as it has\n> historically. For every single one of the existing and new callers,\n> copying the input list is wasteful, because they were just leaking\n> the input list anyway. But perhaps somebody feels that we should\n> preserve the \"const input\" property? I thought that changing the\n> function to return void, as done here, might be a good idea to\n> ensure that callers notice its API changed --- otherwise they'd\n> only get a warning about incompatible signature of the passed\n> function pointer, which they might not notice; in fact I'm not\n> totally sure all compilers would even give such a warning.\n>\n> If there's not complaints about that, I'm just going to go ahead\n> and push this --- it seems simple enough to not need much review.\n\nThe only thoughts I have so far here are that it's a shame that the\nfunction got called list_qsort() and not just list_sort(). I don't\nsee why callers need to know anything about the sort algorithm that's\nbeing used.\n\nIf we're going to break compatibility for this, should we rename the\nfunction too?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jul 2019 14:56:46 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> The only thoughts I have so far here are that it's a shame that the\n> function got called list_qsort() and not just list_sort(). I don't\n> see why callers need to know anything about the sort algorithm that's\n> being used.\n\nMeh. list_qsort() is quicksort only to the extent that qsort()\nis quicksort, which in our current implementation is a bit of a\nlie already --- and, I believe, it's much more of a lie in some\nversions of libc. I don't really think of either name as promising\nanything about the underlying sort algorithm. What they do share\nis an API based on a callback comparison function, and if you are\nlooking for uses of those, it's a lot easier to grep for \"qsort\"\nthan some more-generic term.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 23:07:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 7/15/19 11:07 PM, Tom Lane wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n>> The only thoughts I have so far here are that it's a shame that the\n>> function got called list_qsort() and not just list_sort(). I don't\n>> see why callers need to know anything about the sort algorithm that's\n>> being used.\n> \n> Meh. list_qsort() is quicksort only to the extent that qsort()\n> is quicksort, which in our current implementation is a bit of a\n> lie already --- and, I believe, it's much more of a lie in some\n> versions of libc. I don't really think of either name as promising\n> anything about the underlying sort algorithm. What they do share\n> is an API based on a callback comparison function, and if you are\n> looking for uses of those, it's a lot easier to grep for \"qsort\"\n> than some more-generic term.\n\nI agree with David -- list_sort() is better. I don't think \"sort\" is \nsuch a common stem that searching is a big issue, especially with modern \ncode indexing tools.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 15 Jul 2019 23:33:48 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 7/15/19 11:07 PM, Tom Lane wrote:\n>> David Rowley <david.rowley@2ndquadrant.com> writes:\n>>> The only thoughts I have so far here are that it's a shame that the\n>>> function got called list_qsort() and not just list_sort().\n\n> I agree with David -- list_sort() is better. I don't think \"sort\" is \n> such a common stem that searching is a big issue, especially with modern \n> code indexing tools.\n\nOK, I'm outvoted, will do it that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 10:44:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Steele <david@pgmasters.net> writes:\n> > On 7/15/19 11:07 PM, Tom Lane wrote:\n> >> David Rowley <david.rowley@2ndquadrant.com> writes:\n> >>> The only thoughts I have so far here are that it's a shame that the\n> >>> function got called list_qsort() and not just list_sort().\n>\n> > I agree with David -- list_sort() is better. I don't think \"sort\" is\n> > such a common stem that searching is a big issue, especially with modern\n> > code indexing tools.\n>\n> OK, I'm outvoted, will do it that way.\n\nI cast my vote in the other direction i.e. for sticking with qsort.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Jul 2019 12:01:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 9:01 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I cast my vote in the other direction i.e. for sticking with qsort.\n\nI do too.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 16 Jul 2019 09:07:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 16, 2019 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OK, I'm outvoted, will do it that way.\n\n> I cast my vote in the other direction i.e. for sticking with qsort.\n\nDidn't see this until after pushing a commit that uses \"list_sort\".\n\nWhile composing that commit message another argument occurred to me,\nwhich is that renaming makes it absolutely sure that any external\ncallers will notice they have an API change to deal with, no matter\nhow forgiving their compiler is. Also, if somebody really really\ndoesn't want to cope with the change, they can now make their own\nversion of list_qsort (stealing it out of 1cff1b95a) and the core\ncode won't pose a conflict.\n\nSo I'm good with \"list_sort\" at this point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 12:08:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> * Rationalize places that are using combinations of list_copy and\n> list_concat, probably by inventing an additional list-concatenation\n> primitive that modifies neither input.\n\nI poked around to see what we have in this department. There seem to\nbe several identifiable use-cases:\n\n* Concat two Lists that are freshly built, or at least not otherwise\nreferenced. In the old code, list_concat serves fine, leaking the\nsecond List's header but not any of its cons cells. As of 1cff1b95a,\nthe second List's storage is leaked completely. We could imagine\ninventing a list_concat variant that list_free's its second input,\nbut I'm unconvinced that that's worth the trouble. Few if any\ncallers can't stand to leak any storage, and if there are any where\nit seems worth the trouble, adding an explicit list_free seems about\nas good as calling a variant of list_concat. (If we do want such a\nvariant, we need a name for it. list_join, maybe, by analogy to\nbms_join?)\n\n* Concat two lists where there exist other pointers to the second list,\nbut it's okay if the lists share cons cells afterwards. As of the\nnew code, they don't actually share any storage, which seems strictly\nbetter. I don't think we need to do anything here, except go around\nand adjust the comments that explain that that's what's happening.\n\n* Concat two lists where there exist other pointers to the second list,\nand it's not okay to share storage. This is currently implemented by\nlist_copy'ing the second argument, but we can just drop that (and\nadjust comments where needed).\n\n* Concat two lists where we mustn't modify either input list.\nThis is currently implemented by list_copy'ing both arguments.\nI'm inclined to replace this pattern with a function like\n\"list_concat_copy(const List *, const List *)\", although settling\non a suitable name might be difficult.\n\n* There's a small number of places that list_copy the first argument\nbut not the second. 
I believe that all of these are either of the form\n\"x = list_concat(list_copy(y), x)\", ie replacing the only reference to\nthe second argument, or are relying on the \"it's okay to share storage\"\nassumption to not copy a second argument that has other references.\nI think we can just replace these with list_concat_copy. We'll leak\nthe second argument's storage in the cases where another list is being\nprepended onto a working list, but I doubt it's worth fussing over.\n(But, if this is repeated a lot of times, maybe it is worth fussing\nover? Conceivably you could leak O(N^2) storage while building a\nlong working list, if you prepend many shorter lists onto it.)\n\n* Note that some places are applying copyObject() not list_copy().\nIn these places the idea is to make duplicates of pointed-to structures\nnot just the list proper. These should be left alone, I think.\nWhen the copyObject is applied to the second argument, we're leaking\nthe top-level List in the copy result, but again it's not worth\nfussing over.\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 14:52:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
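The "concat two lists where we mustn't modify either input" case above maps to the proposed list_concat_copy(const List *, const List *). A minimal sketch of those semantics on plain int arrays (hypothetical; the eventual primitive would operate on Lists, not bare arrays): build a fresh result and leave both inputs untouched.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical concat-without-modify: neither input is written to,
 * which the const qualifiers advertise; the caller owns the result.
 */
static int *
concat_copy(const int *a, int alen, const int *b, int blen)
{
	int		   *result = malloc((alen + blen) * sizeof(int));

	memcpy(result, a, alen * sizeof(int));
	memcpy(result + alen, b, blen * sizeof(int));
	return result;
}
```

This replaces the current double list_copy() pattern with a single allocation and two copies, and makes the no-sharing guarantee explicit in the signature.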
{
"msg_contents": "I wrote:\n> * Look at places using lcons/list_delete_first to maintain FIFO lists.\n> The patch makes these O(N^2) for long lists. If we can reverse the list\n> order and use lappend/list_truncate instead, it'd be better. Possibly in\n> some places the list ordering is critical enough to make this impractical,\n> but I suspect it's an easy win in most.\n\nAttached are two patches that touch all the places where it seemed like\nan easy win to stop using lcons and/or list_delete_first.\n\n0001 adds list_delete_last() as a mirror image to list_delete_first(),\nand changes all the places where it seemed 100% safe to do so (ie,\nthere's no behavioral change because the list order is demonstrably\nimmaterial).\n\n0002 changes some additional places where it's maybe a bit less safe,\nie there's a potential for user-visible behavioral change because\nprocessing will occur in a different order. In particular, the proposed\nchange in execExpr.c causes aggregates and window functions that are in\nthe same plan node to be executed in a different order than before ---\nbut it seems to me that this order is saner. (Note the change in the\nexpected regression results, in a test that's intentionally sensitive to\nthe execution order.) And anyway when did we guarantee anything about\nthat?\n\nI refrained from changing lcons to lappend in get_relation_info, because\nthat demonstrably causes the planner to change its choices when two\nindexes look equally attractive, and probably people would complain\nabout that. I think that the other changes proposed in 0002 are pretty\nharmless --- for example, in get_tables_to_cluster the order depends\ninitially on the results of a seqscan of pg_index, so anybody who's\nexpecting stability is in for rude surprises anyhow. 
Also, the proposed\nchanges in plancat.c, parse_agg.c, selfuncs.c almost certainly have no\nuser-visible effect, but maybe there could be changes at the\nroundoff-error level due to processing estimates in a different order?\n\nThere are a bunch of places that are using list_delete_first to remove\nthe next-to-process entry from a List used as a queue. In principle,\nwe could invert the order of those queues and then use list_delete_last,\nbut I thought this would probably be too confusing: it's natural to\nthink of the front of the list as being the head of the queue. I doubt\nthat any of those queues get long enough for it to be a serious\nperformance problem to leave them as-is.\n\n(Actually, I doubt that any of these changes will really move the\nperformance needle in the real world. It's more a case of wanting\nthe code to present good examples not bad ones.)\n\nThoughts? Anybody want to object to any of the changes in 0002?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 16 Jul 2019 19:06:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Wed, 17 Jul 2019 at 11:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 0002 changes some additional places where it's maybe a bit less safe,\n> ie there's a potential for user-visible behavioral change because\n> processing will occur in a different order. In particular, the proposed\n> change in execExpr.c causes aggregates and window functions that are in\n> the same plan node to be executed in a different order than before ---\n> but it seems to me that this order is saner. (Note the change in the\n> expected regression results, in a test that's intentionally sensitive to\n> the execution order.) And anyway when did we guarantee anything about\n> that?\n\nI've only looked at 0002. Here are my thoughts:\n\nget_tables_to_cluster:\nLooks fine. It's a heap scan. Any previous order was accidental, so if\nit causes issues then we might need to think of using a more\nwell-defined order for CLUSTER;\n\nget_rels_with_domain:\nThis is a static function. Changing the order of the list seems to\nonly really affect the error message that a failed domain constraint\nvalidation could emit. Perhaps this might break someone else's tests,\nbut they should just be able to update their expected results.\n\nExecInitExprRec:\nAs you mention, the order of aggregate evaluation is reversed. I agree\nthat the new order is saner. I can't think why we'd be doing it in\nbackwards order beforehand.\n\nget_relation_statistics:\nRelationGetStatExtList does not seem to pay much attention to the\norder it returns its results, so I don't think the order we apply\nextended statistics was that well defined before. We always attempt to\nuse the stats with the most matching columns in\nchoose_best_statistics(), so I think\nfor people to be affected they'd either multiple stats with the same\nsets of columns or a complex clause that equally well matches two sets\nof stats, and in that case the other columns would be matched to the\nother stats later... I'd better check that... erm... 
actually that's\nnot true. I see statext_mcv_clauselist_selectivity() makes no attempt\nto match the clause list to another set of stats after finding the\nfirst best match. I think it likely should do that.\nestimate_multivariate_ndistinct() seems to have an XXX comment\nmentioning thoughts about the stability of which stats are used, but\nnothing is done.\n\nparseCheckAggregates:\nI can't see any user-visible change to this one. Not even in error messages.\n\nestimate_num_groups:\nSimilar to get_relation_statistics(), I see that\nestimate_multivariate_ndistinct() is only called once and we don't\nattempt to match up the remaining clauses with more stats. I can't\nimagine swapping lcons for lappend here will upset anyone. The\nbehaviour does not look well defined already. I think we should likely\nchange the \"if (estimate_multivariate_ndistinct(root, rel,\n&relvarinfos,\" to \"while ...\", then drop the else. Not for this patch\nthough...\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:02:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "> On 17 Jul 2019, at 01:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> There are a bunch of places that are using list_delete_first to remove\n> the next-to-process entry from a List used as a queue. In principle,\n> we could invert the order of those queues and then use list_delete_last,\n> but I thought this would probably be too confusing: it's natural to\n> think of the front of the list as being the head of the queue. I doubt\n> that any of those queues get long enough for it to be a serious\n> performance problem to leave them as-is.\n\nFor cases where an Oid list is copied and then head elements immediately\nremoved, as in fetch_search_path, couldn’t we instead use a counter and\nlist_copy_tail to avoid repeated list_delete_first calls? Something like the\nattached poc.\n\n> +List *\n> +list_delete_last(List *list)\n> +{\n> +\tcheck_list_invariants(list);\n> +\n> +\tif (list == NIL)\n> +\t\treturn NIL;\t\t\t\t/* would an error be better? */\n\nSince we’ve allowed list_delete_first on NIL for a long time, it seems\nreasonable to do the same for list_delete_last even though it’s hard to come up\nwith a good usecase for deleting the last element without inspecting the list\n(a stack growing from the bottom perhaps?). It reads better to check for NIL\nbefore calling check_list_invariants though IMO.\n\nLooking mainly at 0001 for now, I agree that the order is insignificant.\n\ncheers ./daniel",
"msg_date": "Wed, 17 Jul 2019 16:12:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
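The pattern in Daniel's PoC, reduced to a standalone sketch with a hypothetical array-backed IntList (not the real pg_list API): each delete-first shifts the whole remainder left, so repeated calls are quadratic, whereas counting the consumed head elements and copying the tail once does the work in a single pass.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical array-backed list; stands in for PostgreSQL's List. */
typedef struct IntList
{
    int  n;
    int *elems;
} IntList;

/* O(n) per call: shift everything left by one, as list_delete_first must. */
static void
int_list_delete_first(IntList *l)
{
    assert(l->n > 0);
    memmove(l->elems, l->elems + 1, (l->n - 1) * sizeof(int));
    l->n--;
}

/* One-shot alternative: copy only the tail starting at 'first_kept'. */
static IntList
int_list_copy_tail(const IntList *l, int first_kept)
{
    IntList r;

    r.n = l->n - first_kept;
    r.elems = malloc(r.n * sizeof(int));
    memcpy(r.elems, l->elems + first_kept, r.n * sizeof(int));
    return r;
}
```

The copy-tail form also leaves the source list untouched, which matches the fetch_search_path case where the list is copied before the head elements are stripped.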
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I've only looked at 0002. Here are my thoughts:\n\nThanks for looking!\n\n> get_tables_to_cluster:\n> Looks fine. It's a heap scan. Any previous order was accidental, so if\n> it causes issues then we might need to think of using a more\n> well-defined order for CLUSTER;\n\nCheck.\n\n> get_rels_with_domain:\n> This is a static function. Changing the order of the list seems to\n> only really affect the error message that a failed domain constraint\n> validation could emit. Perhaps this might break someone else's tests,\n> but they should just be able to update their expected results.\n\nAlso, this is already dependent on the order of pg_depend entries,\nso it's not terribly stable anyhow.\n\n> get_relation_statistics:\n> RelationGetStatExtList does not seem to pay much attention to the\n> order it returns its results, so I don't think the order we apply\n> extended statistics was that well defined before. We always attempt to\n> use the stats with the most matching columns in\n> choose_best_statistics(), so I think\n> for people to be affected they'd either multiple stats with the same\n> sets of columns or a complex clause that equally well matches two sets\n> of stats, and in that case the other columns would be matched to the\n> other stats later... I'd better check that... erm... actually that's\n> not true. I see statext_mcv_clauselist_selectivity() makes no attempt\n> to match the clause list to another set of stats after finding the\n> first best match. 
I think it likely should do that.\n> estimate_multivariate_ndistinct() seems to have an XXX comment\n> mentioning thoughts about the stability of which stats are used, but\n> nothing is done.\n\nI figured that (a) this hasn't been around so long that anybody's\nexpectations are frozen, and (b) if there is a meaningful difference in\nresults then it's probably incumbent on the extstats code to do better.\nThat seems to match your conclusions. But I don't see any regression\ntest changes from making this change, so at least in simple cases it\ndoesn't matter.\n\n(As you say, any extstats changes that we conclude are needed should\nbe a separate patch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 10:16:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> For cases where an Oid list is copied and then head elements immediately\n> removed, as in fetch_search_path, couldn’t we instead use a counter and\n> list_copy_tail to avoid repeated list_delete_first calls?\n\nPerhaps, but I'm having a hard time getting excited about it.\nI don't think there's any evidence that fetch_search_path is a\nperformance issue. Also, this coding requires that the *only*\nchanges be deletion of head elements, whereas as it stands,\nonce we've copied the list we can do what we like.\n\n> Since we’ve allowed list_delete_first on NIL for a long time, it seems\n> reasonable to do the same for list_delete_last even though it’s hard to come up\n> with a good usecase for deleting the last element without inspecting the list\n> (a stack growing from the bottom perhaps?\n\nYeah, I intentionally made the edge cases the same. There's room to argue\nthat both functions should error out on NIL, instead. I've not looked\ninto that though, and would consider it material for a separate patch.\n\n> Looking mainly at 0001 for now, I agree that the order is insignificant.\n\nThanks for looking!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 10:53:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Wed, 17 Jul 2019 at 11:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (Actually, I doubt that any of these changes will really move the\n> performance needle in the real world. It's more a case of wanting\n> the code to present good examples not bad ones.)\n\nIn spirit with the above, I'd quite like to fix a small bad example\nthat I ended up with in nodeAppend.c and nodeMergeappend.c for\nrun-time partition pruning.\n\nThe code in question performs a loop over a list and checks\nbms_is_member() on each element and only performs an action if the\nmember is present in the Bitmapset.\n\nIt would seem much more efficient just to perform a bms_next_member()\ntype loop then just fetch the list item with list_nth(), at least this\nis certainly the case when only a small number of the list items are\nindexed by the Bitmapset. With these two loops in particular, when a\nlarge number of list items are in the set the cost of the work goes up\ngreatly, so it does not seem unreasonable to optimise the case for\nwhen just a few match.\n\nA quick test shows that it's hardly groundbreaking performance-wise,\nbut test 1 does seem measurable above the noise.\n\n-- Setup\nplan_cache_mode = force_generic_plan\nmax_locks_per_transaction = 256\n\ncreate table ht (a int primary key, b int, c int) partition by hash (a);\nselect 'create table ht' || x::text || ' partition of ht for values\nwith (modulus 8192, remainder ' || (x)::text || ');' from\ngenerate_series(0,8191) x;\n\\gexec\n\n-- Test 1: Just one member in the Bitmapset.\n\ntest1.sql:\n\\set p 1\nselect * from ht where a = :p\n\nMaster:\n\n$ pgbench -n -f test1.sql -T 60 -M prepared postgres\ntps = 297.267191 (excluding connections establishing)\ntps = 298.276797 (excluding connections establishing)\ntps = 296.264459 (excluding connections establishing)\ntps = 298.968037 (excluding connections establishing)\ntps = 298.575684 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f test1.sql -T 60 -M 
prepared postgres\ntps = 300.924254 (excluding connections establishing)\ntps = 299.360196 (excluding connections establishing)\ntps = 300.197024 (excluding connections establishing)\ntps = 299.741215 (excluding connections establishing)\ntps = 299.748088 (excluding connections establishing)\n\n0.71% faster\n\n-- Test 2: when all list items are found in the Bitmapset.\n\ntest2.sql:\nselect * from ht;\n\nMaster:\n\n$ pgbench -n -f test2.sql -T 60 -M prepared postgres\ntps = 12.526578 (excluding connections establishing)\ntps = 12.528046 (excluding connections establishing)\ntps = 12.491347 (excluding connections establishing)\ntps = 12.538292 (excluding connections establishing)\ntps = 12.528959 (excluding connections establishing)\n\nPatched:\n\n$ pgbench -n -f test2.sql -T 60 -M prepared postgres\ntps = 12.503670 (excluding connections establishing)\ntps = 12.516133 (excluding connections establishing)\ntps = 12.404925 (excluding connections establishing)\ntps = 12.514567 (excluding connections establishing)\ntps = 12.541484 (excluding connections establishing)\n\n0.21% slower\n\nWith that removed the slowness of test 1 is almost entirely in\nAcquireExecutorLocks() and ExecCheckRTPerms(). We'd be up close to\nabout 30k tps instead of 300 tps if there was some solution to those\nproblems. I think it makes sense to remove the inefficient loops and\nleave the just final two bottlenecks, in the meantime.\n\nPatch attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 22 Jul 2019 01:32:00 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Wed, 17 Jul 2019 at 11:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (Actually, I doubt that any of these changes will really move the\n>> performance needle in the real world. It's more a case of wanting\n>> the code to present good examples not bad ones.)\n\n> In spirit with the above, I'd quite like to fix a small bad example\n> that I ended up with in nodeAppend.c and nodeMergeappend.c for\n> run-time partition pruning.\n\nI didn't test the patch, but just by eyeball it looks sane,\nand I concur it should win if the bitmap is sparse.\n\nOne small question is whether it loses if most of the subplans\nare present in the bitmap. I imagine that would be close enough\nto break-even, but it might be worth trying to test to be sure.\n(I'd think about breaking out just the loops in question and\ntesting them stand-alone, or else putting in an outer loop to\nrepeat them, since as you say the surrounding work probably\ndominates.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jul 2019 10:45:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n>> * Rationalize places that are using combinations of list_copy and\n>> list_concat, probably by inventing an additional list-concatenation\n>> primitive that modifies neither input.\n\n> I poked around to see what we have in this department. There seem to\n> be several identifiable use-cases:\n> [ ... analysis ... ]\n\nHere's a proposed patch based on that. I added list_concat_copy()\nand then simplified callers as appropriate.\n\nIt turns out there are a *lot* of places where list_concat() callers\nare now leaking the second input list (where before they just leaked\nthat list's header). So I've got mixed emotions about the choice not\nto add a variant function that list_free's the second input. On the\nother hand, the leakage probably amounts to nothing significant in\nall or nearly all of these places, and I'm concerned about the\nreadability/understandability loss of having an extra version of\nlist_concat. Anybody have an opinion about that?\n\nOther than that point, I think this is pretty much good to go.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 Jul 2019 16:01:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
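For readers weighing the naming question, the semantics under discussion reduced to a standalone sketch with a hypothetical array-backed IntList (the real functions live in pg_list.h / list.c): list_concat() destructively appends list2's elements onto list1 and returns list1, while the proposed list_concat_copy() builds a fresh list and modifies neither input.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical array-backed list; stands in for PostgreSQL's List. */
typedef struct IntList
{
    int  n;
    int  cap;
    int *elems;
} IntList;

static void
int_list_reserve(IntList *l, int need)
{
    if (l->cap < need)
    {
        l->cap = need * 2;
        l->elems = realloc(l->elems, l->cap * sizeof(int));
    }
}

/* Like list_concat(): appends l2's data onto l1 in place; l2 untouched. */
static IntList *
int_list_concat(IntList *l1, const IntList *l2)
{
    int_list_reserve(l1, l1->n + l2->n);
    memcpy(l1->elems + l1->n, l2->elems, l2->n * sizeof(int));
    l1->n += l2->n;
    return l1;
}

/* Like list_concat_copy(): builds a new list; neither input is modified. */
static IntList *
int_list_concat_copy(const IntList *l1, const IntList *l2)
{
    IntList *r = calloc(1, sizeof(IntList));

    int_list_reserve(r, l1->n + l2->n);
    memcpy(r->elems, l1->elems, l1->n * sizeof(int));
    memcpy(r->elems + l1->n, l2->elems, l2->n * sizeof(int));
    r->n = l1->n + l2->n;
    return r;
}
```

The leak being discussed falls out of this shape: with the in-place form, a caller that no longer needs l2's storage must free it itself, since its data has been copied rather than linked.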
{
"msg_contents": "On Mon, 22 Jul 2019 at 02:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One small question is whether it loses if most of the subplans\n> are present in the bitmap. I imagine that would be close enough\n> to break-even, but it might be worth trying to test to be sure.\n> (I'd think about breaking out just the loops in question and\n> testing them stand-alone, or else putting in an outer loop to\n> repeat them, since as you say the surrounding work probably\n> dominates.)\n\nMy 2nd test was for when all subplans were present in the bitmap. It\ndid show a very slight slowdown for the case were all subplans were\npresent in the bitmapset. However, yeah, it seems like a good idea to\ntry it a million times to help show the true cost.\n\nI did:\n\nint x = 0;\n\n/* Patched version */\nfor (j = 0; j < 1000000; j++)\n{\n i = -1;\n while ((i = bms_next_member(validsubplans, i)) >= 0)\n {\n Plan *initNode = (Plan *) list_nth(node->appendplans, i);\n x++;\n }\n}\n\n/* Master version */\nfor (j = 0; j < 1000000; j++)\n{\n ListCell *lc;\n i = 0;\n foreach(lc, node->appendplans)\n {\n Plan *initNode;\n if (bms_is_member(i, validsubplans))\n {\n initNode = (Plan *)lfirst(lc);\n x++;\n }\n }\n}\n\nelog(DEBUG1, \"%d\", x); /* stop the compiler optimizing away the loops */\n\nI separately commented out each one of the outer loops away before\nperforming the test again.\n\nplan_cache_mode = force_generic_plan\n\n-- Test 1 (one matching subplan) --\n\nprepare q1(int) as select * from ht where a = $1;\nexecute q1(1);\n\nMaster version:\n\nTime: 14441.332 ms (00:14.441)\nTime: 13829.744 ms (00:13.830)\nTime: 13753.943 ms (00:13.754)\n\nPatched version:\n\nTime: 41.250 ms\nTime: 40.976 ms\nTime: 40.853 ms\n\n-- Test 2 (all matching subplans (8192 of them)) --\n\nprepare q2 as select * from ht;\nexecute q2;\n\nMaster version:\n\nTime: 14825.304 ms (00:14.825)\nTime: 14701.601 ms (00:14.702)\nTime: 14650.969 ms (00:14.651)\n\nPatched version:\n\nTime: 44551.811 ms 
(00:44.552)\nTime: 44357.915 ms (00:44.358)\nTime: 43454.958 ms (00:43.455)\n\nSo the bms_next_member() loop is slower when the bitmapset is fully\npopulated with all subplans, but way faster when there's just 1\nmember. In reality, the ExecInitNode() call drowns most of it out.\nPlus a plan with more subnodes is going to take longer to execute and\nthen shut down the plan afterwards too.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:15:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
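The difference between the two loop shapes, reduced to a toy bitmapset over bare uint64_t words (the real bms_next_member() is in src/backend/nodes/bitmapset.c; this sketch only mimics its contract): next-member iteration jumps from set bit to set bit, so its cost tracks the number of members rather than the length of the list being indexed.

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/*
 * Toy bms_next_member(): returns the smallest member greater than
 * 'prevbit', or -1 if there is none.  Pass -1 to start a scan.
 * __builtin_ctzll() is a GCC/Clang builtin (count trailing zeros).
 */
static int
toy_next_member(const uint64_t *words, int nwords, int prevbit)
{
    uint64_t mask;
    int      wordnum;

    prevbit++;
    mask = ~UINT64_C(0) << (prevbit % BITS_PER_WORD);
    for (wordnum = prevbit / BITS_PER_WORD; wordnum < nwords; wordnum++)
    {
        uint64_t w = words[wordnum] & mask;

        if (w != 0)
            return wordnum * BITS_PER_WORD + __builtin_ctzll(w);

        /* the mask only matters for the first word examined */
        mask = ~UINT64_C(0);
    }
    return -1;
}
```

The calling pattern then mirrors the patched loops: `for (i = toy_next_member(words, nwords, -1); i >= 0; i = toy_next_member(words, nwords, i))`, fetching the i'th subplan inside the body.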
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 22 Jul 2019 at 02:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One small question is whether it loses if most of the subplans\n>> are present in the bitmap. I imagine that would be close enough\n>> to break-even, but it might be worth trying to test to be sure.\n\n> ...\n> -- Test 2 (all matching subplans (8192 of them)) --\n\n> Master version:\n\n> Time: 14825.304 ms (00:14.825)\n> Time: 14701.601 ms (00:14.702)\n> Time: 14650.969 ms (00:14.651)\n\n> Patched version:\n\n> Time: 44551.811 ms (00:44.552)\n> Time: 44357.915 ms (00:44.358)\n> Time: 43454.958 ms (00:43.455)\n\n> So the bms_next_member() loop is slower when the bitmapset is fully\n> populated with all subplans, but way faster when there's just 1\n> member.\n\nInteresting. I wonder if bms_next_member() could be made any quicker?\nStill, I agree that this is negligible compared to the actual work\nneeded per live subplan, and the fact that the cost scales per live\nsubplan is a good thing. So no objection to this patch, but a mental\nnote to take another look at bms_next_member() someday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 00:37:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 16:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > So the bms_next_member() loop is slower when the bitmapset is fully\n> > populated with all subplans, but way faster when there's just 1\n> > member.\n>\n> Interesting. I wonder if bms_next_member() could be made any quicker?\n\nI had a quick look earlier and the only thing I saw was maybe to do\nthe first loop differently from subsequent ones. The \"w &= mask;\"\ndoes nothing useful once we're past the first bitmapword that the loop\ntouches. Not sure what the code would look like exactly yet, or how\nmuch it would help. I'll maybe experiment a bit later, but as separate\nwork from the other patch.\n\n> Still, I agree that this is negligible compared to the actual work\n> needed per live subplan, and the fact that the cost scales per live\n> subplan is a good thing. So no objection to this patch, but a mental\n> note to take another look at bms_next_member() someday.\n\nThanks for having a look. I'll have another look and will likely push\nthis a bit later on today if all is well.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 16:46:02 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Mon, 22 Jul 2019 at 16:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Interesting. I wonder if bms_next_member() could be made any quicker?\n\n> I had a quick look earlier and the only thing I saw was maybe to do\n> the first loop differently from subsequent ones. The \"w &= mask;\"\n> does nothing useful once we're past the first bitmapword that the loop\n> touches.\n\nGood thought, but it would only help when we're actually iterating to\nlater words, which happens just 1 out of 64 times in the fully-\npopulated-bitmap case.\n\nStill, I think it might be worth pursuing to make the sparse-bitmap\ncase faster.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 00:54:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Mon, 22 Jul 2019 at 08:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> >> * Rationalize places that are using combinations of list_copy and\n> >> list_concat, probably by inventing an additional list-concatenation\n> >> primitive that modifies neither input.\n>\n> > I poked around to see what we have in this department. There seem to\n> > be several identifiable use-cases:\n> > [ ... analysis ... ]\n>\n> Here's a proposed patch based on that. I added list_concat_copy()\n> and then simplified callers as appropriate.\n\nI looked over this and only noted down one thing:\n\nIn estimate_path_cost_size, can you explain why list_concat_copy() is\nneeded here? I don't see remote_param_join_conds being used after\nthis, so might it be better to just get rid of remote_param_join_conds\nand pass remote_conds to classifyConditions(), then just\nlist_concat()?\n\n/*\n* The complete list of remote conditions includes everything from\n* baserestrictinfo plus any extra join_conds relevant to this\n* particular path.\n*/\nremote_conds = list_concat_copy(remote_param_join_conds,\nfpinfo->remote_conds);\n\nclassifyConditions() seems to create new lists, so it does not appear\nthat you have to worry about modifying the existing lists.\n\n\n> It turns out there are a *lot* of places where list_concat() callers\n> are now leaking the second input list (where before they just leaked\n> that list's header). So I've got mixed emotions about the choice not\n> to add a variant function that list_free's the second input. On the\n> other hand, the leakage probably amounts to nothing significant in\n> all or nearly all of these places, and I'm concerned about the\n> readability/understandability loss of having an extra version of\n> list_concat. 
Anybody have an opinion about that?\n\nIn some of these places, for example, the calls to\ngenerate_join_implied_equalities_normal() and\ngenerate_join_implied_equalities_broken(), I wonder, since these are\nstatic functions if we could just change the function signature to\naccept a List to append to. This could save us from having to perform\nany additional pallocs at all, so there'd be no need to free anything\nor worry about any leaks. The performance of the code would be\nimproved too. There may be other cases where we can do similar, but\nI wouldn't vote we change signatures of non-static functions for that.\n\nIf we do end up with another function, it might be nice to stay away\nfrom using \"concat\" in the name. I think we might struggle if there\nare too many variations on concat and there's a risk we'll use the\nwrong one. If we need this then perhaps something like\nlist_append_all() might be a better choice... I'm struggling to build\na strong opinion on this though. (I know that because I've deleted\nthis paragraph 3 times and started again, each time with a different\nopinion.)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 20:34:13 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I looked over this and only noted down one thing:\n\n> In estimate_path_cost_size, can you explain why list_concat_copy() is\n> needed here? I don't see remote_param_join_conds being used after\n> this, so might it be better to just get rid of remote_param_join_conds\n> and pass remote_conds to classifyConditions(), then just\n> list_concat()?\n\nHm, you're right, remote_param_join_conds is not used after that,\nso we could just drop the existing list_copy() and make it\n\n remote_conds = list_concat(remote_param_join_conds,\n fpinfo->remote_conds);\n\nI'm disinclined to change the API of classifyConditions(),\nif that's what you were suggesting.\n\n>> It turns out there are a *lot* of places where list_concat() callers\n>> are now leaking the second input list (where before they just leaked\n>> that list's header). So I've got mixed emotions about the choice not\n>> to add a variant function that list_free's the second input.\n\n> In some of these places, for example, the calls to\n> generate_join_implied_equalities_normal() and\n> generate_join_implied_equalities_broken(), I wonder, since these are\n> static functions if we could just change the function signature to\n> accept a List to append to.\n\nI'm pretty disinclined to do that, too. Complicating function APIs\nfor marginal performance gains isn't something that leads to\nunderstandable or maintainable code.\n\n> If we do end up with another function, it might be nice to stay away\n> from using \"concat\" in the name. I think we might struggle if there\n> are too many variations on concat and there's a risk we'll use the\n> wrong one. If we need this then perhaps something like\n> list_append_all() might be a better choice... I'm struggling to build\n> a strong opinion on this though. 
(I know that because I've deleted\n> this paragraph 3 times and started again, each time with a different\n> opinion.)\n\nYeah, the name is really the sticking point here; if we could think\nof a name that was easy to understand then the whole thing would be\nmuch easier to accept. The best I've been able to come up with is\n\"list_join\", by analogy to bms_join for bitmapsets ... but that's\nnot great.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 10:50:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 2019-Jul-22, Tom Lane wrote:\n\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n\n> > If we do end up with another function, it might be nice to stay away\n> > from using \"concat\" in the name. I think we might struggle if there\n> > are too many variations on concat and there's a risk we'll use the\n> > wrong one. If we need this then perhaps something like\n> > list_append_all() might be a better choice... I'm struggling to build\n> > a strong opinion on this though. (I know that because I've deleted\n> > this paragraph 3 times and started again, each time with a different\n> > opinion.)\n> \n> Yeah, the name is really the sticking point here; if we could think\n> of a name that was easy to understand then the whole thing would be\n> much easier to accept. The best I've been able to come up with is\n> \"list_join\", by analogy to bms_join for bitmapsets ... but that's\n> not great.\n\nSo with this patch we end up with:\n\nlist_union (copies list1, appends list2 element not already in list1)\nlist_concat_unique (appends list2 elements not already in list)\nlist_concat (appends all list2 elements)\nlist_concat_copy (copies list1, appends all list2 elements)\n\nThis seems a little random -- for example we end up with \"union\" being\nthe same as \"concat_copy\" except for the copy; and the analogy between\nthose two seems to exactly correspond to that between \"concat_unique\"\nand \"concat\". I would propose to use the name list_union, with flags\nbeing \"unique\" (or \"uniquify\" if that's a word, or even just \"all\" which\nseems obvious to people with a SQL background), and something that\nsuggests \"copy_first\".\n\nMaybe we can offer a single name that does the four things, selecting\nthe exact semantics with boolean flags? (We can provide the old names\nas macros, to avoid unnecessarily breaking other code). 
Also, perhaps\nit would make sense to put them all closer in the source file.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:29:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> So with this patch we end up with:\n\n> list_union (copies list1, appends list2 element not already in list1)\n> list_concat_unique (appends list2 elements not already in list)\n> list_concat (appends all list2 elements)\n> list_concat_copy (copies list1, appends all list2 elements)\n\n> This seems a little random -- for example we end up with \"union\" being\n> the same as \"concat_copy\" except for the copy; and the analogy between\n> those two seems to exactly correspond to that between \"concat_unique\"\n> and \"concat\".\n\nYeah, list_concat_unique is kind of weird here. Its header comment\neven points out that it's much like list_union:\n\n * This is almost the same functionality as list_union(), but list1 is\n * modified in-place rather than being copied. However, callers of this\n * function may have strict ordering expectations -- i.e. that the relative\n * order of those list2 elements that are not duplicates is preserved.\n\nI think that last sentence is bogus --- does anybody really think\npeople have been careful not to assume anything about the ordering\nof list_union results?\n\n> I would propose to use the name list_union, with flags\n> being \"unique\" (or \"uniquify\" if that's a word, or even just \"all\" which\n> seems obvious to people with a SQL background), and something that\n> suggests \"copy_first\".\n\nI really dislike using \"union\" for something that doesn't have the\nsame semantics as SQL's UNION (ie guaranteed duplicate elimination);\nso I've never been that happy with \"list_union\" and \"list_difference\".\nPropagating that into things that aren't doing any dup-elimination\nat all seems very wrong.\n\nAlso, a big -1 for replacing these calls with something with\nextra parameter(s). 
That's going to be verbose, and not any\nmore readable, and probably slower because the called code\nwill have to figure out what to do.\n\nPerhaps there's an argument for doing something to change the behavior\nof list_union and list_difference and friends. Not sure --- it could\nbe a foot-gun for back-patching. I'm already worried about the risk\nof back-patching code that assumes the new semantics of list_concat.\n(Which might be a good argument for renaming it to something else?\nJust not list_union, please.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 13:46:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nI was just looking at the diff for a fix, which adds a \"ListCell *lc;\"\nto function scope, even though it's only needed in a pretty narrow\nscope.\n\nUnfortunately foreach(ListCell *lc, ...) doesn't work with the current\ndefinition, which I think isn't great, because the large scopes for loop\niteration variables IMO make the code harder to reason about.\n\nI wonder if we could either have a different version of foreach() that\nallows that, or find a way to make the above work. For the latter I\ndon't immediately have a good idea of how to accomplish that. For the\nformer it's easy enough if we either don't include the typename (thereby\nlooking more alien), or if we reference the name separately (making it\nmore complicated to use).\n\n\nI also wonder if a foreach version that includes the typical\n(Type *) var = (Type *) lfirst(lc);\nor\n(Type *) var = castNode(Type, lfirst(lc));\nor\nOpExpr\t *hclause = lfirst_node(OpExpr, lc);\n\nwould make it nicer to use lists.\n\nforeach_node_in(Type, name, list) could mean something like\n\nforeach(ListCell *name##_cell, list)\n{\n Type* name = lfirst_node(Type, name##_cell);\n}\n\n(using a hypothetical foreach that supports defining the ListCell in\nscope, just for display simplicity's sake).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 15:57:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
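A standalone sketch of both ideas, with hypothetical names and a toy cell type (not the real pg_list.h macros): a for-loop header can own the cell variable so its scope is exactly the loop body, and a second macro can fold in the typed element fetch via ## token pasting.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for ListCell. */
typedef struct Cell
{
    void        *ptr;
    struct Cell *next;
} Cell;

/* Cell variable is scoped to the loop, unlike the classic foreach(). */
#define foreach_scoped(cell, list) \
    for (Cell *cell = (list); cell != NULL; cell = cell->next)

/*
 * Also binds a typed element variable, hiding the cell fetch.  The inner
 * one-shot loop assumes no NULL elements, and note that a 'break' in the
 * body exits only the inner loop in this naive form -- one reason doing
 * this properly in the real foreach() is not entirely trivial.
 */
#define foreach_elem(type, var, list) \
    for (Cell *var##_cell = (list); var##_cell != NULL; \
         var##_cell = var##_cell->next) \
        for (type *var = (type *) var##_cell->ptr; var != NULL; var = NULL)
```

Usage then reads as the thread suggests: `foreach_elem(int, v, mylist) total += *v;` with neither the cell nor the element variable leaking into function scope.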
{
"msg_contents": "Hi,\n\nOn 2019-07-31 15:57:56 -0700, Andres Freund wrote:\n> I also wonder if a foreach version that includes the typical\n> (Type *) var = (Type *) lfirst(lc);\n> or\n> (Type *) var = castNode(Type, lfirst(lc));\n> or\n> OpExpr\t *hclause = lfirst_node(OpExpr, lc);\n> \n> would make it nicer to use lists.\n> \n> foreach_node_in(Type, name, list) could mean something like\n> \n> foreach(ListCell *name##_cell, list)\n> {\n> Type* name = lfirst_node(Type, name##_cell);\n> }\n\ns/lfirst/linitial/ of course. Was looking at code that also used\nlfirst...\n\nReminds me that one advantage of macros like the second one would also\nbe to reduce the use of the confusingly named linitial*(), helping newer\nhackers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 16:00:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 16:00:47 -0700, Andres Freund wrote:\n> On 2019-07-31 15:57:56 -0700, Andres Freund wrote:\n> > I also wonder if a foreach version that includes the typical\n> > (Type *) var = (Type *) lfirst(lc);\n> > or\n> > (Type *) var = castNode(Type, lfirst(lc));\n> > or\n> > OpExpr\t *hclause = lfirst_node(OpExpr, lc);\n> > \n> > would make it nicer to use lists.\n> > \n> > foreach_node_in(Type, name, list) could mean something like\n> > \n> > foreach(ListCell *name##_cell, list)\n> > {\n> > Type* name = lfirst_node(Type, name##_cell);\n> > }\n> \n> s/lfirst/linitial/ of course. Was looking at code that also used\n> lfirst...\n\nBullshit, of course.\n\n/me performs a tactical withdrawal into his brown paper bag.\n\n\n> Reminds me that one advantage of macros like the second one would also\n> be to reduce the use of the confusingly named linitial*(), helping newer\n> hackers.\n\nBut that point just had two consecutive embarrassing demonstrations...\n\n- Andres\n\n\n",
"msg_date": "Wed, 31 Jul 2019 16:04:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 01/08/2019 01:04, Andres Freund wrote:\n> Hi,\n> \n> On 2019-07-31 16:00:47 -0700, Andres Freund wrote:\n>> On 2019-07-31 15:57:56 -0700, Andres Freund wrote:\n>>> I also wonder if a foreach version that includes the typical\n>>> (Type *) var = (Type *) lfirst(lc);\n>>> or\n>>> (Type *) var = castNode(Type, lfirst(lc));\n>>> or\n>>> OpExpr\t *hclause = lfirst_node(OpExpr, lc);\n>>>\n>>> would make it nicer to use lists.\n>>>\n>>> foreach_node_in(Type, name, list) could mean something like\n>>>\n>>> foreach(ListCell *name##_cell, list)\n>>> {\n>>> Type* name = lfirst_node(Type, name##_cell);\n>>> }\n>>\n>> s/lfirst/linitial/ of course. Was looking at code that also used\n>> lfirst...\n> \n> Bullshit, of course.\n> \n> /me performs a tactical withdrawal into his brown paper bag.\n> \n> \n>> Reminds me that one advantage of macros like the second one would also\n>> be to reduce the use of the confusingly named linitial*(), helping newer\n>> hackers.\n> \n> But that point just had two consecutive embarassing demonstrations...\n> \n\nYeah, pg_list.h is one file I never close.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Thu, 1 Aug 2019 01:17:37 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Unfortunately foreach(ListCell *lc, ...) doesn't work with the current\n> definition. Which I think isn't great, because the large scopes for loop\n> iteration variables imo makes the code harder to reason about.\n\nYeah, I tried to make that possible when I redid those macros, but\ncouldn't find a way :-(. Even granting that we're willing to have\na different macro for this use-case, it doesn't seem easy, because\nyou can only put one <declaration> into the first element of a\nfor (;;).\n\nThat makes the other idea (of a foreach-ish macro declaring the\nlistcell value variable) problematic, too :-(.\n\nOne idea is that we could do something like\n\n foreach_variant(identifier, list_value)\n {\n type *v = (type *) lfirst_variant(identifier);\n ...\n }\n\nwhere the \"identifier\" isn't actually a variable name but just something\nwe use to construct the ForEachState variable's name. (The only reason\nwe need it is to avoid confusion in cases with nested foreach's.) The\nlfirst_variant macro would fetch the correct value just by looking\nat the ForEachState, so there's no separate ListCell* variable at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 19:40:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> One idea is that we could do something like\n> foreach_variant(identifier, list_value)\n> {\n> type *v = (type *) lfirst_variant(identifier);\n> ...\n> }\n> where the \"identifier\" isn't actually a variable name but just something\n> we use to construct the ForEachState variable's name. (The only reason\n> we need it is to avoid confusion in cases with nested foreach's.)\n\nOn second thought, there seems no strong reason why you should need\nto fetch the current value of a foreach-ish loop that's not the most\nclosely nested one. So forget the dummy identifier, and consider\nthis straw-man proposal:\n\n#define aforeach(list_value) ...\n\n(I'm thinking \"anonymous foreach\", but bikeshedding welcome.) This\nis just like the current version of foreach(), except it uses a\nfixed name for the ForEachState variable and doesn't attempt to\nassign to a \"cell\" variable.\n\n#define aforeach_current() ...\n\nRetrieves the current value of the most-closely-nested aforeach\nloop, based on knowing the fixed name of aforeach's loop variable.\nThis replaces \"lfirst(lc)\", and we'd also need aforeach_current_int()\nand so on for the other variants of lfirst().\n\nSo usage would look like, say,\n\n\taforeach(my_list)\n\t{\n\t\ttype *my_value = (type *) aforeach_current();\n\t\t...\n\t}\n\nWe'd also want aforeach_delete_current() and aforeach_current_index(),\nto provide functionality equivalent to foreach_delete_current() and\nforeach_current_index().\n\nThese names are a bit long, and maybe we should try to make them\nshorter, but more shortness might also mean less clarity.\n\nBTW, I think we could make equivalent macros in the old regime,\nwhich would be a good thing because then it would be possible to\nback-patch code using this notation.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 20:16:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> BTW, I think we could make equivalent macros in the old regime,\n> which would be a good thing because then it would be possible to\n> back-patch code using this notation.\n\nOh, wait-a-second. I was envisioning that\n\n\tfor (ListCell *anonymous__lc = ...)\n\nwould work for that, but of course that requires C99, so we could\nonly put it into v12.\n\nBut that might still be worth doing. It'd mean that the backpatchability\nof this notation is the same as that of \"for (int x = ...)\", which\nseems worth something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 20:25:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 19:40:09 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Unfortunately foreach(ListCell *lc, ...) doesn't work with the current\n> > definition. Which I think isn't great, because the large scopes for loop\n> > iteration variables imo makes the code harder to reason about.\n>\n> Yeah, I tried to make that possible when I redid those macros, but\n> couldn't find a way :-(. Even granting that we're willing to have\n> a different macro for this use-case, it doesn't seem easy, because\n> you can only put one <declaration> into the first element of a\n> for (;;).\n\nI remember hitting that at one point and being annoyed/confused as to where\nthat restriction came from. Probably some grammar difficulties. But still,\nodd.\n\n\n> That makes the other idea (of a foreach-ish macro declaring the\n> listcell value variable) problematic, too :-(.\n\nHm. One way partially around that would be using an anonymous struct\ninside the for(). Something like\n\n#define foreach_node(membertype, name, lst)\t\\\nfor (struct {membertype *node; ListCell *lc; const List *l; int i;} name = {...}; \\\n ...)\n\nwhich then would allow code like\n\nforeach_node(OpExpr, cur, list)\n{\n do_something_with_node(cur.node);\n\n foreach_delete_current(cur);\n}\n\n\nThat's quite similar to your:\n\n> One idea is that we could do something like\n>\n> foreach_variant(identifier, list_value)\n> {\n> type *v = (type *) lfirst_variant(identifier);\n> ...\n> }\n>\n> where the \"identifier\" isn't actually a variable name but just something\n> we use to construct the ForEachState variable's name. (The only reason\n> we need it is to avoid confusion in cases with nested foreach's.) The\n> lfirst_variant macro would fetch the correct value just by looking\n> at the ForEachState, so there's no separate ListCell* variable at all.\n\nbut would still allow avoiding the variable.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 18:15:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Thu, 1 Aug 2019 at 07:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > Unfortunately foreach(ListCell *lc, ...) doesn't work with the current\n> > definition. Which I think isn't great, because the large scopes for loop\n> > iteration variables imo makes the code harder to reason about.\n>\n>\nTotally agree.\n\n\n>\n> you can only put one <declaration> into the first element of a\n> for (;;).\n>\n\nUse an anonymous block outer scope? Or if not permitted even by C99 (which\nI think it is), a do {...} while (0); hack?\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n",
"msg_date": "Thu, 8 Aug 2019 11:36:44 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-08 11:36:44 +0800, Craig Ringer wrote:\n> > you can only put one <declaration> into the first element of a\n> > for (;;).\n> >\n> \n> Use an anonymous block outer scope? Or if not permitted even by C99 (which\n> I think it is), a do {...} while (0); hack?\n\nYou can't easily - the problem is that there's no real way to add the\nclosing }, because that's after the macro.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Aug 2019 21:18:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Thu, 8 Aug 2019 at 12:18, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-08-08 11:36:44 +0800, Craig Ringer wrote:\n> > > you can only put one <declaration> into the first element of a\n> > > for (;;).\n> > >\n> >\n> > Use an anonymous block outer scope? Or if not permitted even by C99\n> (which\n> > I think it is), a do {...} while (0); hack?\n>\n> You can't easily - the problem is that there's no real way to add the\n> closing }, because that's after the macro.\n\n\nAh, right. Hence our\n\nPG_TRY();\n{\n}\nPG_CATCH();\n{\n}\nPG_END_TRY();\n\nconstruct in all its beauty.\n\nI should've seen that.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n",
"msg_date": "Thu, 8 Aug 2019 14:10:44 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "[ returning to this topic now that the CF is over ]\n\nI wrote:\n> Perhaps there's an argument for doing something to change the behavior\n> of list_union and list_difference and friends. Not sure --- it could\n> be a foot-gun for back-patching. I'm already worried about the risk\n> of back-patching code that assumes the new semantics of list_concat.\n> (Which might be a good argument for renaming it to something else?\n> Just not list_union, please.)\n\nHas anyone got further thoughts about naming around list_concat\nand friends?\n\nIf not, I'm inclined to go ahead with the concat-improvement patch as\nproposed in [1], modulo the one improvement David spotted.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/6704.1563739305@sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 08 Aug 2019 12:24:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-31 19:40:09 -0400, Tom Lane wrote:\n>> That makes the other idea (of a foreach-ish macro declaring the\n>> listcell value variable) problematic, too :-(.\n\n> Hm. One way partially around that would be using an anonymous struct\n> inside the for(). Something like\n> #define foreach_node(membertype, name, lst)\t\\\n> for (struct {membertype *node; ListCell *lc; const List *l; int i;} name = {...}; \\\n> ...)\n> which then would allow code like\n\n> foreach_node(OpExpr, cur, list)\n> {\n> do_something_with_node(cur.node);\n> foreach_delete_current(cur);\n> }\n\nI'm hesitant to change the look of our loops quite that much, mainly\nbecause it'll be a pain for back-patching. If you write some code\nfor HEAD like this, and then have to back-patch it, you'll need to\ninsert/change significantly more code than if it's just a matter\nof whether there's a ListCell variable or not.\n\nI experimented with the \"aforeach\" idea I suggested upthread,\nto the extent of writing the macros and then converting\nparse_clause.c (a file chosen more or less at random) to use\naforeach instead of foreach. I was somewhat surprised to find\nthat every single foreach() did convert pleasantly. (There are\nseveral forboth's that I didn't try to do anything with, though.)\n\nIf we do go in this direction, I wouldn't suggest trying to\nactually do wholesale conversion of existing code like this;\nthat seems more likely to create back-patching land mines than\ndo anything helpful. I am slightly tempted to try to convert\neveryplace using foreach_delete_current, though, since those\nloops are different from v12 already.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 08 Aug 2019 14:39:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "I wrote:\n> BTW, further on the subject of performance --- I'm aware of at least\n> these topics for follow-on patches:\n\n> * Fix places that are maintaining arrays parallel to Lists for\n> access-speed reasons (at least simple_rte_array, append_rel_array,\n> es_range_table_array).\n\nAttached is a patch that removes simple_rte_array in favor of just\naccessing the query's rtable directly. I concluded that there was\nnot much point in messing with simple_rel_array or append_rel_array,\nbecause they are not in fact just mirrors of some List. There's no\nList at all of baserel RelOptInfos, and while we do have a list of\nAppendRelInfos, it's a compact, random-order list not one indexable\nby child relid.\n\nHaving done this, though, I'm a bit discouraged about whether to commit\nit. In light testing, it's not any faster than HEAD and in complex\nqueries seems to actually be a bit slower. I suspect the reason is\nthat we've effectively replaced\n\troot->simple_rte_array[i]\nwith\n\troot->parse->rtable->elements[i-1]\nand the two extra levels of indirection are taking their toll.\n\nIt'd be possible to get rid of one of those indirections by maintaining a\ncopy of root->parse->rtable directly in PlannerInfo; but that throws away\nmost of the intellectual appeal of not having two sources of truth to\nmaintain, and it won't completely close the performance gap.\n\nOther minor objections include:\n\n* Many RTE accesses now look randomly different from adjacent \nRelOptInfo accesses.\n\n* The inheritance-expansion code is a bit sloppy about how much it will\nexpand these arrays, which means it's possible in corner cases for\nlength(parse->rtable) to be less than root->simple_rel_array_size-1.\nThis resulted in a crash in add_other_rels_to_query, which was assuming\nit could fetch a possibly-null RTE pointer from indexes up to\nsimple_rel_array_size-1. 
While that wasn't hard to fix, I wonder\nwhether any third-party code has similar assumptions.\n\nSo on the whole, I'm inclined not to do this. There are some cosmetic\nbits of this patch that I do want to commit though: I found some\nout-of-date comments, and I realized that it's pretty dumb not to\njust merge setup_append_rel_array into setup_simple_rel_arrays.\nThe division of labor there is inconsistent with the fact that\nthere's no such division in expand_planner_arrays.\n\nI still have hopes for getting rid of es_range_table_array though,\nand will look at that tomorrow or so.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 08 Aug 2019 17:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Fri, 9 Aug 2019 at 04:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Perhaps there's an argument for doing something to change the behavior\n> > of list_union and list_difference and friends. Not sure --- it could\n> > be a foot-gun for back-patching. I'm already worried about the risk\n> > of back-patching code that assumes the new semantics of list_concat.\n> > (Which might be a good argument for renaming it to something else?\n> > Just not list_union, please.)\n>\n> Has anyone got further thoughts about naming around list_concat\n> and friends?\n>\n> If not, I'm inclined to go ahead with the concat-improvement patch as\n> proposed in [1], modulo the one improvement David spotted.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/6704.1563739305@sss.pgh.pa.us\n\nI'm okay with the patch once that one improvement is done.\n\nI think if we want to think about freeing the 2nd input List then we\ncan do that in another commit. Removing the redundant list_copy()\ncalls seems quite separate from that.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 9 Aug 2019 12:41:15 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Fri, 9 Aug 2019 at 09:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Attached is a patch that removes simple_rte_array in favor of just\n> accessing the query's rtable directly. I concluded that there was\n> not much point in messing with simple_rel_array or append_rel_array,\n> because they are not in fact just mirrors of some List. There's no\n> List at all of baserel RelOptInfos, and while we do have a list of\n> AppendRelInfos, it's a compact, random-order list not one indexable\n> by child relid.\n>\n> Having done this, though, I'm a bit discouraged about whether to commit\n> it. In light testing, it's not any faster than HEAD and in complex\n> queries seems to actually be a bit slower. I suspect the reason is\n> that we've effectively replaced\n> root->simple_rte_array[i]\n> with\n> root->parse->rtable->elements[i-1]\n> and the two extra levels of indirection are taking their toll.\n\nIf there are no performance gains from this then -1 from me. We're\nall pretty used to it the way it is\n\n> I realized that it's pretty dumb not to\n> just merge setup_append_rel_array into setup_simple_rel_arrays.\n> The division of labor there is inconsistent with the fact that\n> there's no such division in expand_planner_arrays.\n\nha, yeah I'd vote for merging those. It was coded that way originally\nuntil someone objected! :)\n\n> I still have hopes for getting rid of es_range_table_array though,\n> and will look at that tomorrow or so.\n\nYes, please. I've measured that to be quite an overhead with large\npartitioning setups. However, that was with some additional code which\ndidn't lock partitions until it was ... well .... too late... as it\nturned out. But it seems pretty good to remove code that could be a\nfuture bottleneck if we ever manage to do something else with the\nlocking of all partitions during UPDATE/DELETE.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 9 Aug 2019 12:52:15 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Fri, 9 Aug 2019 at 09:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I still have hopes for getting rid of es_range_table_array though,\n>> and will look at that tomorrow or so.\n\n> Yes, please. I've measured that to be quite an overhead with large\n> partitioning setups. However, that was with some additional code which\n> didn't lock partitions until it was ... well .... too late... as it\n> turned out. But it seems pretty good to remove code that could be a\n> future bottleneck if we ever manage to do something else with the\n> locking of all partitions during UPDATE/DELETE.\n\nI poked at this, and attached is a patch, but again I'm not seeing\nthat there's any real performance-based argument for it. So far\nas I can tell, if we've got a lot of RTEs in an executable plan,\nthe bulk of the startup time is going into lock (re) acquisition in\nAcquirePlannerLocks, and/or permissions scanning in ExecCheckRTPerms;\nboth of those have to do work for every RTE including ones that\nrun-time pruning drops later on. ExecInitRangeTable just isn't on\nthe radar.\n\nIf we wanted to try to improve things further, it seems like we'd\nhave to find a way to not lock unreferenced partitions at all,\nas you suggest above. But combining that with run-time pruning seems\nlike it'd be pretty horrid from a system structural standpoint: if we\nacquire locks only during execution, what happens if we find we must\ninvalidate the query plan?\n\nAnyway, the attached might be worth committing just on cleanliness\ngrounds, to avoid two-sources-of-truth issues in the executor.\nBut it seems like there's no additional performance win here\nafter all ... unless you've got a test case that shows differently?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 09 Aug 2019 17:03:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On 2019-Aug-09, Tom Lane wrote:\n\n> I poked at this, and attached is a patch, but again I'm not seeing\n> that there's any real performance-based argument for it. So far\n> as I can tell, if we've got a lot of RTEs in an executable plan,\n> the bulk of the startup time is going into lock (re) acquisition in\n> AcquirePlannerLocks, and/or permissions scanning in ExecCheckRTPerms;\n> both of those have to do work for every RTE including ones that\n> run-time pruning drops later on. ExecInitRangeTable just isn't on\n> the radar.\n\nI'm confused. I thought that the point of doing this wasn't that we\nwanted to improve performance, but rather that we're now able to remove\nthe array without *losing* performance. I mean, those arrays were there\nto improve performance for code that wanted fast access to specific list\nitems, but now we have fast access to the list items without it. So a\nmeasurement that finds no performance difference is good news, and we\ncan get rid of the now-pointless optimization ...\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 9 Aug 2019 18:35:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "On Sat, 10 Aug 2019 at 09:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > On Fri, 9 Aug 2019 at 09:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I still have hopes for getting rid of es_range_table_array though,\n> >> and will look at that tomorrow or so.\n>\n> > Yes, please. I've measured that to be quite an overhead with large\n> > partitioning setups. However, that was with some additional code which\n> > didn't lock partitions until it was ... well .... too late... as it\n> > turned out. But it seems pretty good to remove code that could be a\n> > future bottleneck if we ever manage to do something else with the\n> > locking of all partitions during UPDATE/DELETE.\n>\n> I poked at this, and attached is a patch, but again I'm not seeing\n> that there's any real performance-based argument for it. So far\n> as I can tell, if we've got a lot of RTEs in an executable plan,\n> the bulk of the startup time is going into lock (re) acquisition in\n> AcquirePlannerLocks, and/or permissions scanning in ExecCheckRTPerms;\n> both of those have to do work for every RTE including ones that\n> run-time pruning drops later on. ExecInitRangeTable just isn't on\n> the radar.\n\nIn the code I tested with locally I ended up with a Bitmapset that\nmarked which RTEs required permission checks so that\nExecCheckRTPerms() could quickly skip RTEs with requiredPerms == 0.\nThe Bitmapset was set in the planner. Note:\nexpand_single_inheritance_child sets childrte->requiredPerms = 0, so\nthere's nothing to do there for partitions, which is the most likely\nreason that the rtable list would be big. Sadly the locking is still\na big overhead even with that fixed. Robert threw around some ideas in\n[1], but that seems like a pretty big project.\n\nI don't think removing future bottlenecks is such a bad idea if it can\nbe done in such a way that the code remains clean. 
It may serve to\nincrease our motivation later to solve the remaining issues. We tend\nto go to greater lengths when there are more gains, and more gains are\nmore easily visible by removing more bottlenecks.\n\nAnother reason to remove the es_range_table_array is that the reason\nit was added in the first place is no longer valid. We'd never have\nadded it if we had array-based lists back then. (Reading below, it\nlooks like Alvaro agrees with this too)\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYbtm1uuDne3rRp_uNA2RFiBwXX1ngj3RSLxOfc3oS7cQ%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 10 Aug 2019 16:03:45 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Fri, 9 Aug 2019 at 04:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Has anyone got further thoughts about naming around list_concat\n>> and friends?\n>> If not, I'm inclined to go ahead with the concat-improvement patch as\n>> proposed in [1], modulo the one improvement David spotted.\n>> [1] https://www.postgresql.org/message-id/6704.1563739305@sss.pgh.pa.us\n\n> I'm okay with the patch once that one improvement is done.\n\nPushed with that fix.\n\n> I think if we want to think about freeing the 2nd input List then we\n> can do that in another commit. Removing the redundant list_copy()\n> calls seems quite separate from that.\n\nThe reason I was holding off is that this patch obscures the distinction\nbetween places that needed to preserve the second input (which were\ndoing list_copy on it) and those that didn't (and weren't). If somebody\nwants to rethink the free-second-input business they'll now have to do\na bit of software archaeology to determine which calls to change. But\nI don't think we're going to bother.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2019 11:25:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Aug-09, Tom Lane wrote:\n>> I poked at this, and attached is a patch, but again I'm not seeing\n>> that there's any real performance-based argument for it.\n\n> I'm confused. I thought that the point of doing this wasn't that we\n> wanted to improve performance, but rather that we're now able to remove\n> the array without *losing* performance. I mean, those arrays were there\n> to improve performance for code that wanted fast access to specific list\n> items, but now we have fast access to the list items without it. So a\n> measurement that finds no performance difference is good news, and we\n> can get rid of the now-pointless optimization ...\n\nYeah, fair enough, so pushed.\n\nIn principle, this change adds an extra indirection in exec_rt_fetch,\nso I went looking to see if there were any such calls in arguably\nperformance-critical paths. Unsurprisingly, most calls are in executor\ninitialization, and they tend to be adjacent to table_open() or other\nexpensive operations, so it's pretty hard to claim that there could\nbe any measurable hit. However, I did notice that trigger.c uses\nExecUpdateLockMode() and GetAllUpdatedColumns() in ExecBRUpdateTriggers\nwhich executes per-row, and so might be worth trying to optimize.\nexec_rt_fetch itself is not the main cost in either of those, but I wonder\nwhy we are doing those calculations over again for each row in the first\nplace. I'm not excited enough about the issue to do anything right now,\nbut the next time somebody whines about trigger-firing overhead, there\nmight be an easy win available there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2019 12:07:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: POC: converting Lists into arrays"
},
{
"msg_contents": "Hello, here are some macros for list_make; now we can use\r\nlist_make(...), not list_make1/2/3 ...\r\n\r\n#define MACRO_ARGS(...) __VA_ARGS__\r\n#define LIST_MAKE_1_(narg_, postfix_, ...) list_make ## narg_ ## postfix_(__VA_ARGS__)\r\n#define LIST_MAKE_2_(...) LIST_MAKE_1_(__VA_ARGS__)\r\n#define LIST_MAKE_3_(...) LIST_MAKE_2_(__VA_ARGS__)\r\n\r\n#define list_make(...) LIST_MAKE_3_(MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), /*empty*/, __VA_ARGS__)\r\n#define list_make_int(...) LIST_MAKE_3_(MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), _int, __VA_ARGS__)\r\n#define list_make_oid(...) LIST_MAKE_3_(MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), _oid, __VA_ARGS__)\r\n\r\nThe macro VA_ARGS_NARGS is defined in c.h\r\n\r\nHow it works:\r\nfor list_make_int(4,5,6)\r\nstep 1: LIST_MAKE_3_(MACRO_ARGS VA_ARGS_NARGS(4,5,6), _int, 4,5,6)\r\nstep 2: LIST_MAKE_2_(MACRO_ARGS (3), _int, 4,5,6)\r\nstep 3: LIST_MAKE_1_(3, _int, 4,5,6)\r\nstep 4: list_make3_int(4,5,6)\r\nstep 5: list_make3_impl(T_IntList, ((ListCell) {.int_value = (4)}), ((ListCell) {.int_value = (5)}), ((ListCell) {.int_value = (6)}))\r\n\r\nOr we can define some public macros, like this:\r\n#define MACRO_ARGS(...) __VA_ARGS__\r\n#define MACRO_COMBIN_1(prefix_, center_, postfix_, ...) prefix_ ## center_ ## postfix_(__VA_ARGS__)\r\n#define MACRO_COMBIN_2(...) MACRO_COMBIN_1(__VA_ARGS__)\r\n#define MACRO_COMBIN_3(...) MACRO_COMBIN_2(__VA_ARGS__)\r\n\r\n#define list_make(...) MACRO_COMBIN_3(list_make, MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), /*empty*/, __VA_ARGS__)\r\n#define list_make_int(...) MACRO_COMBIN_3(list_make, MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), _int, __VA_ARGS__)\r\n#define list_make_oid(...) 
MACRO_COMBIN_3(list_make, MACRO_ARGS VA_ARGS_NARGS(__VA_ARGS__), _oid, __VA_ARGS__)\r\n\r\n\r\n\r\nbucoo@sohu.com\r\n \r\nFrom: Tom Lane\r\nDate: 2019-02-24 10:24\r\nTo: pgsql-hackers\r\nSubject: POC: converting Lists into arrays\r\nFor quite some years now there's been dissatisfaction with our List\r\ndata structure implementation. Because it separately palloc's each\r\nlist cell, it chews up lots of memory, and it's none too cache-friendly\r\nbecause the cells aren't necessarily adjacent. Moreover, our typical\r\nusage is to just build a list by repeated lappend's and never modify it,\r\nso that the flexibility of having separately insertable/removable list\r\ncells is usually wasted.\r\n \r\nEvery time this has come up, I've opined that the right fix is to jack\r\nup the List API and drive a new implementation underneath, as we did\r\nonce before (cf commit d0b4399d81). I thought maybe it was about time\r\nto provide some evidence for that position, so attached is a POC patch\r\nthat changes Lists into expansible arrays, while preserving most of\r\ntheir existing API.\r\n \r\nThe big-picture performance change is that this makes list_nth()\r\na cheap O(1) operation, while lappend() is still pretty cheap;\r\non the downside, lcons() becomes O(N), as does insertion or deletion\r\nin the middle of a list. But we don't use lcons() very much\r\n(and maybe a lot of the existing usages aren't really necessary?),\r\nwhile insertion/deletion in long lists is a vanishingly infrequent\r\noperation. Meanwhile, making list_nth() cheap is a *huge* win.\r\n \r\nThe most critical thing that we lose by doing this is that when a\r\nList is modified, all of its cells may need to move, which breaks\r\na lot of code that assumes it can insert or delete a cell while\r\nhanging onto a pointer to a nearby cell. 
In almost every case,\r\nthis takes the form of doing list insertions or deletions inside\r\na foreach() loop, and the pointer that's invalidated is the loop's\r\ncurrent-cell pointer. Fortunately, with list_nth() now being so cheap,\r\nwe can replace these uses of foreach() with loops using an integer\r\nindex variable and fetching the next list element directly with\r\nlist_nth(). Most of these places were loops around list_delete_cell\r\ncalls, which I replaced with a new function list_delete_nth_cell\r\nto go along with the emphasis on the integer index.\r\n \r\nI don't claim to have found every case where that could happen,\r\nalthough I added debug support in list.c to force list contents\r\nto move on every list modification, and this patch does pass\r\ncheck-world with that support turned on. I fear that some such\r\nbugs remain, though.\r\n \r\nThere is one big way in which I failed to preserve the old API\r\nsyntactically: lnext() now requires a pointer to the List as\r\nwell as the current ListCell, so that it can figure out where\r\nthe end of the cell array is. That requires touching something\r\nlike 150 places that otherwise wouldn't have had to be touched,\r\nwhich is annoying, even though most of those changes are trivial.\r\n \r\nI thought about avoiding that by requiring Lists to keep a \"sentinel\"\r\nvalue in the cell after the end of the active array, so that lnext()\r\ncould look for the sentinel to detect list end. However, that idea\r\ndoesn't really work, because if the list array has been moved, the\r\nspot where the sentinel had been could have been reallocated and\r\nfilled with something else. So this'd offer no defense against the\r\npossibility of a stale ListCell pointer, which is something that\r\nwe absolutely need defenses for. 
As the patch stands we can have\r\nquite a strong defense, because we can check whether the presented\r\nListCell pointer actually points into the list's current data array.\r\n \r\nAnother annoying consequence of lnext() needing a List pointer is that\r\nthe List arguments of foreach() and related macros are now evaluated\r\neach time through the loop. I could only find half a dozen places\r\nwhere that was actually unsafe (all fixed in the draft patch), but\r\nit's still bothersome. I experimented with ways to avoid that, but\r\nthe only way I could get it to work was to define foreach like this:\r\n \r\n#define foreach(cell, l) for (const List *cell##__foreach = foreach_setup(l, &cell); cell != NULL; cell = lnext(cell##__foreach, cell))\r\n \r\nstatic inline const List *\r\nforeach_setup(const List *l, ListCell **cell)\r\n{\r\n *cell = list_head(l);\r\n return l;\r\n}\r\n \r\nThat works, but there are two problems. The lesser one is that a\r\nnot-very-bright compiler might think that the \"cell\" variable has to\r\nbe forced into memory, because its address is taken. The bigger one is\r\nthat this coding forces the \"cell\" variable to be exactly \"ListCell *\";\r\nyou can't add const or volatile qualifiers to it without getting\r\ncompiler warnings. There are actually more places that'd be affected\r\nby that than by the need to avoid multiple evaluations. I don't think\r\nthe const markings would be a big deal to lose, and the two places in\r\ndo_autovacuum that need \"volatile\" (because of a nearby PG_TRY) could\r\nbe rewritten to not use foreach. So I'm tempted to do that, but it's\r\nnot very pretty. Maybe somebody can think of a better solution?\r\n \r\nThere's a lot of potential follow-on work that I've not touched yet:\r\n \r\n1. This still involves at least two palloc's for every nonempty List,\r\nbecause I kept the header and the data array separate. Perhaps it's\r\nworth allocating those in one palloc. 
However, right now there's an\r\nassumption that the header of a nonempty List doesn't move when you\r\nchange its contents; that's embedded in the API of the lappend_cell\r\nfunctions, and more than likely there's code that depends on that\r\nsilently because it doesn't bother to store the results of other\r\nList functions back into the appropriate pointer. So if we do that\r\nat all I think it'd be better tackled in a separate patch; and I'm\r\nnot really convinced it's worth the trouble and extra risk of bugs.\r\n \r\n2. list_qsort() is now absolutely stupidly defined. It should just\r\nqsort the list's data array in-place. But that requires an API\r\nbreak for the caller-supplied comparator, since there'd be one less\r\nlevel of indirection. I think we should change this, but again it'd\r\nbe better done as an independent patch to make it more visible in the\r\ngit history.\r\n \r\n3. There are a number of places where we've built flat arrays\r\nparalleling Lists, such as the planner's simple_rte_array. That's\r\npointless now and could be undone, buying some code simplicity.\r\nVarious other micro-optimizations could doubtless be done too;\r\nI've not looked hard.\r\n \r\nI haven't attempted any performance measurements on this, but at\r\nleast in principle it should speed things up a bit, especially\r\nfor complex test cases involving longer Lists. I don't have any\r\nvery suitable test cases at hand, anyway.\r\n \r\nI think this is too late for v12, both because of the size of the\r\npatch and because of the likelihood that it introduces a few bugs.\r\nI'd like to consider pushing it early in the v13 cycle, though.\r\n \r\nregards, tom lane",
"msg_date": "Tue, 9 Mar 2021 11:51:03 +0800",
"msg_from": "\"bucoo@sohu.com\" <bucoo@sohu.com>",
"msg_from_op": false,
"msg_subject": "Re: POC: converting Lists into arrays"
}
]
[
{
"msg_contents": "Hello hackers,\n\n1. In a nearby thread, I misdiagnosed a problem reported[1] by Justin\nPryzby (though my misdiagnosis is probably still a thing to be fixed;\nsee next). I think I just spotted the real problem he saw: if you\nexecute a parallel query after a smart shutdown has been initiated,\nyou wait forever in gather_readnext()! Maybe parallel workers can't\nbe launched in this state, but we lack code to detect this case? I\nhaven't dug into the exact mechanism or figured out what to do about\nit yet, and I'm tied up with something else for a bit, but I will come\nback to this later if nobody beats me to it.\n\n2. Commit cfdf4dc4 on the master branch fixed up all known waits that\ndidn't respond to postmaster death, and added an assertion to that\neffect. One of the cases fixed was in gather_readnext(), and\ninitially I thought that's what Justin was telling us about (his\nreport was from 11.x), until I reread his message and saw that it was\nSIGTERM and not eg SIGKILL. I should probably go and back-patch a fix\nfor that case anyway... but now I'm wondering, was there a reason for\nthat omission, and likewise for mq_putmessage()?\n\n(Another case of missing PM death detection in the back-branches is\npostgres_fdw.)\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D0kMunPC0hhuT0VC-5dfMT3K-xsToJHkTznA6yrSARsPg%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Mon, 25 Feb 2019 14:13:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel query vs smart shutdown and Postmaster death"
},
{
"msg_contents": "On Mon, Feb 25, 2019 at 2:13 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 1. In a nearby thread, I misdiagnosed a problem reported[1] by Justin\n> Pryzby (though my misdiagnosis is probably still a thing to be fixed;\n> see next). I think I just spotted the real problem he saw: if you\n> execute a parallel query after a smart shutdown has been initiated,\n> you wait forever in gather_readnext()! Maybe parallel workers can't\n> be launched in this state, but we lack code to detect this case? I\n> haven't dug into the exact mechanism or figured out what to do about\n> it yet, and I'm tied up with something else for a bit, but I will come\n> back to this later if nobody beats me to it.\n\nGiven smart shutdown's stated goal, namely that it \"lets existing\nsessions end their work normally\", my questions are:\n\n1. Why does pmdie()'s SIGTERM case terminate parallel workers\nimmediately? That breaks aborts running parallel queries, so they\ndon't get to end their work normally.\n2. Why are new parallel workers not allowed to be started while in\nthis state? That hangs future parallel queries forever, so they don't\nget to end their work normally.\n3. Suppose we fix the above cases; should we do it for parallel\nworkers only (somehow), or for all bgworkers? It's hard to say since\nI don't know what all bgworkers do.\n\nIn the meantime, perhaps we should teach the postmaster to report this\ncase as a failure to launch in back-branches, so that at least\nparallel queries don't hang forever? Here's an initial sketch of a\npatch like that: it gives you \"ERROR: parallel worker failed to\ninitialize\" and \"HINT: More details may be available in the server\nlog.\" if you try to run a parallel query. The HINT is right, the\nserver logs say that a smart shutdown is in progress. 
If that seems a\nbit hostile, consider that any parallel queries that were running at\nthe moment the smart shutdown was requested have already been ordered\nto quit; why should new queries started after that get a better deal?\nThen perhaps we could do some more involved surgery on master that\nachieves smart shutdown's stated goal here, and lets parallel queries\nactually run? Better ideas welcome.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 27 Feb 2019 11:43:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel query vs smart shutdown and Postmaster death"
},
{
"msg_contents": "On Tue, Feb 26, 2019 at 5:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Then perhaps we could do some more involved surgery on master that\n> achieves smart shutdown's stated goal here, and lets parallel queries\n> actually run? Better ideas welcome.\n\nI have noticed before that the smart shutdown code does not disclose\nto the rest of the system that a shutdown is in progress: no signals\nare sent, and no shared memory state is updated. That makes it a bit\nchallenging for any other part of the system to respond to the smart\nshutdown in a way that is, well, smart. But I guess that's not really\nthe problem in this case.\n\nIt seems to me that we could fix pmdie() to skip parallel workers; I\nthink that the postmaster could notice that they are flagged as\nBGWORKER_CLASS_PARALLEL. But we'd also have to fix things so that new\nparallel workers could be launched during a smart shutdown, which\nlooks not quite so simple.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Wed, 27 Feb 2019 10:38:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query vs smart shutdown and Postmaster death"
},
{
"msg_contents": "Hi,\n\nThomas Munro <thomas.munro@gmail.com> writes:\n\n> 1. Why does pmdie()'s SIGTERM case terminate parallel workers\n> immediately? That breaks aborts running parallel queries, so they\n> don't get to end their work normally.\n> 2. Why are new parallel workers not allowed to be started while in\n> this state? That hangs future parallel queries forever, so they don't\n> get to end their work normally.\n> 3. Suppose we fix the above cases; should we do it for parallel\n> workers only (somehow), or for all bgworkers? It's hard to say since\n> I don't know what all bgworkers do.\n\nAttached patch fixes 1 and 2. As for the 3, the only other internal\nbgworkers currently are logical replication launcher and workers, which\narguably should be killed immediately.\n\n> Here's an initial sketch of a\n> patch like that: it gives you \"ERROR: parallel worker failed to\n> initialize\" and \"HINT: More details may be available in the server\n> log.\" if you try to run a parallel query.\n\nReporting bgworkers that postmaster is never going to start is probably\nworthwhile doing anyway to prevent potential further deadlocks like\nthis. However, I think there is a problem in your patch: we might be in\npost PM_RUN states due to FatalError, not because of shutdown. In this\ncase, we shouldn't refuse to run bgws in the future. I would also merge\nthe check into bgworker_should_start_now.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 17 Mar 2019 07:53:35 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query vs smart shutdown and Postmaster death"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 5:53 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>\n> > 1. Why does pmdie()'s SIGTERM case terminate parallel workers\n> > immediately? That breaks aborts running parallel queries, so they\n> > don't get to end their work normally.\n> > 2. Why are new parallel workers not allowed to be started while in\n> > this state? That hangs future parallel queries forever, so they don't\n> > get to end their work normally.\n> > 3. Suppose we fix the above cases; should we do it for parallel\n> > workers only (somehow), or for all bgworkers? It's hard to say since\n> > I don't know what all bgworkers do.\n>\n> Attached patch fixes 1 and 2. As for the 3, the only other internal\n> bgworkers currently are logical replication launcher and workers, which\n> arguably should be killed immediately.\n\nHi Arseny,\n\nThanks for working on this! Yes, it seems better to move forwards\nrather than backwards, and fix this properly as you have done in this\npatch.\n\nJust a thought: instead of the new hand-coded loop you added in\npmdie(), do you think it would make sense to have a new argument\n\"exclude_class_mask\" for SignalSomeChildren()? If we did that, I\nwould consider renaming the existing parameter \"target\" to\n\"include_type_mask\" to make it super clear what's going on.\n\n> > Here's an initial sketch of a\n> > patch like that: it gives you \"ERROR: parallel worker failed to\n> > initialize\" and \"HINT: More details may be available in the server\n> > log.\" if you try to run a parallel query.\n>\n> Reporting bgworkers that postmaster is never going to start is probably\n> worthwhile doing anyway to prevent potential further deadlocks like\n> this. However, I think there is a problem in your patch: we might be in\n> post PM_RUN states due to FatalError, not because of shutdown. In this\n> case, we shouldn't refuse to run bgws in the future. 
I would also merge\n> the check into bgworker_should_start_now.\n\nHmm, yeah, I haven't tested or double-checked in detail yet but I\nthink you're probably right about all of that.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Mon, 18 Mar 2019 16:55:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel query vs smart shutdown and Postmaster death"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> Just a thought: instead of the new hand-coded loop you added in\n> pmdie(), do you think it would make sense to have a new argument\n> \"exclude_class_mask\" for SignalSomeChildren()? If we did that, I\n> would consider renaming the existing parameter \"target\" to\n> \"include_type_mask\" to make it super clear what's going on.\n\nI thought this was a bit too complicated for a single use case, but if\nyou like it better, here is an updated version.\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 18 Mar 2019 11:25:01 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Parallel query vs smart shutdown and Postmaster death"
}
]